An Overview Of Class Imbalance Problem In Supervised Learning

Satuluri Naganjaneyulu *   Mrithyumjayarao Kuppa **
* Associate Professor, Lakireddy Bali Reddy College of Engineering, Mylavaram, India.
** Professor, Vaagdevi College of Engineering, Warangal, India.

Abstract

Data mining and knowledge discovery aim to uncover hidden, valuable knowledge from data sources. The traditional algorithms used for knowledge discovery are, however, bottlenecked by the wide range of data sources now available. Class imbalance is one of the problems that arises when a data source provides unequal class representation, i.e., examples of one class in a training data set vastly outnumber examples of the other class(es). This paper presents an updated literature survey of current class imbalance learning methods for inducing models that handle imbalanced datasets efficiently.

Keywords:

  

Introduction

Classification has long held a place of importance in the machine learning community and in data mining work; it is an important part of, and a research application field in, data mining [1]. With ever-growing volumes of operational data, many organizations have started to apply data-mining techniques to mine their data for novel, valuable information that can be used to support their decision making [2]. Organizations make extensive use of data mining techniques in order to define meaningful and predictable relationships between objects [3]. Decision tree learning is one of the most widely used and practical methods for inductive inference [4]. This paper presents an updated survey of class imbalance learning methods and describes their applicability to real-world data.

The rest of this paper is organized as follows. Section 1 presents the basics of data mining and classification. Section 2 presents the imbalanced data-sets problem, Section 3 presents data balancing techniques, and Section 4 presents the evaluation criteria used for class imbalance learning. Section 5 presents an updated survey of class imbalance learning methods. Finally, we make our concluding remarks.

1. Data Mining

Data Mining is the analysis of (often large) observational data sets to find unsuspected relationships and to summarize the data in novel ways that are both understandable and useful to the owner [5]. There are many different data mining functionalities. A brief definition of each of these functionalities is now presented. The definitions are directly collated from [6]. Data characterization is the summarization of the general characteristics or features of a target class of data. Data Discrimination, on the other hand, is a comparison of the general features of target class data objects with the general features of objects from one or a set of contrasting classes. Association analysis is the discovery of association rules showing attribute value conditions that occur frequently together in a given set of data.

Classification is an important application area for data mining. Classification is the process of finding a set of models (or functions) that describe and distinguish data classes or concepts, for the purpose of being able to use the model to predict the class of objects whose class label is unknown. The derived model can be represented in various forms, such as classification rules, decision trees, mathematical formulae, or neural networks. Unlike classification and prediction, which analyze class-labeled data objects, clustering analyzes data objects without consulting a known class label.

Outlier Analysis attempts to find outliers or anomalies in data. A detailed discussion of these various functionalities can be found in [6]. Even an overview of the representative algorithms developed for knowledge discovery is beyond the scope of this paper. The interested person is directed to the many books which amply cover this in detail [5], [6].

1.1 The Classification Task

Learning how to classify objects into one of a pre-specified set of categories or classes is a characteristic of intelligence that has been of keen interest to researchers in psychology and computer science. Identifying the common core characteristics of a set of objects that are representative of their class is of enormous use in focusing the attention of a person or computer program. For example, to determine whether an animal is a zebra, people know to look for stripes rather than examine its tail or ears. Thus, stripes figure strongly in our concept (generalization) of zebras. Of course, stripes alone are not sufficient to form a class description for zebras, as tigers have them also, but they are certainly one of the important characteristics. The ability to perform classification, and to learn to classify, gives people and computer programs the power to make decisions. The efficacy of these decisions is affected by performance on the classification task.

In machine learning, the classification task described above is commonly referred to as supervised learning. In supervised learning there is a specified set of classes, and example objects are labeled with the appropriate class (using the example above, the program is told what is a zebra and what is not). The goal is to generalize (form class descriptions) from the training objects in a way that enables novel objects to be identified as belonging to one of the classes. In contrast to supervised learning is unsupervised learning. In this case the program is not told which objects are zebras. Often the goal in unsupervised learning is to decide which objects should be grouped together—in other words, the learner forms the classes itself. Of course, the success of classification learning is heavily dependent on the quality of the data provided for training—a learner has only the input to learn from. If the data is inadequate or irrelevant, then the concept descriptions will reflect this and misclassification will result when they are applied to new data.

1.2 Decision Trees

A decision tree is a tree data structure with the following properties: each internal (decision) node specifies a test on a single attribute, each branch from a decision node corresponds to one of the possible outcomes of that test, and each leaf node is labeled with a class.

A decision tree can be used to classify a case by starting at the root of the tree and moving through it until a leaf is reached [7]. At each decision node, the case's outcome for the test at the node is determined, and attention shifts to the root of the subtree corresponding to this outcome. When this process finally (and inevitably) leads to a leaf, the class of the case is predicted to be the one labeling that leaf.
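The traversal just described can be sketched as follows; the dict-based node layout and the animal attributes are illustrative assumptions, not taken from any particular algorithm in this survey.

```python
# Sketch of classifying a case by walking the tree from root to leaf.
# The node representation (dicts) and the attribute names are our own.

def classify(node, case):
    """Follow the test outcome at each decision node until a leaf is reached."""
    while "label" not in node:             # still at a decision node
        outcome = case[node["attribute"]]  # evaluate the node's test on the case
        node = node["children"][outcome]   # move to the subtree for this outcome
    return node["label"]                   # leaf: its label is the prediction

# Tiny tree echoing the zebra example of Section 1.1.
tree = {
    "attribute": "stripes",
    "children": {
        "yes": {"attribute": "mane", "children": {
            "yes": {"label": "zebra"},
            "no": {"label": "tiger"},
        }},
        "no": {"label": "other"},
    },
}

print(classify(tree, {"stripes": "yes", "mane": "yes"}))  # zebra
```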

1.3 Building Decision Trees

Every successful decision tree algorithm (e.g., CART [8], ID3 [9], C4.5 [7]) is built around an elegantly simple greedy algorithm:

i. Pick as the root of the tree the attribute whose values best separate the training set into subsets (the best partition is one where all elements in each subset belong to the same class);

ii. Repeat step (i) recursively for each child node until a stopping criterion is met.

Examples of stopping criteria are: all instances in a node belong to the same class; no further attributes remain to split on; or the number of instances in a node falls below a minimum threshold.

The dominating operation in building decision trees is the gathering of histograms on attribute values. As mentioned earlier, all paths from a parent to its children partition the relation horizontally into disjoint subsets. Histograms have to be built for each subset, on each attribute, and for each class individually.
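Steps (i) and (ii) above can be sketched as a small recursive learner. This is a minimal entropy-based illustration in the spirit of ID3 [9]; the function names and the toy data are our own assumptions, not the exact procedure of any cited system.

```python
# Hedged sketch of the greedy induction loop using information gain
# (entropy); names and toy data are illustrative.
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a multiset of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def best_attribute(rows, labels, attributes):
    """Step (i): the attribute whose partition has the lowest weighted entropy."""
    def split_entropy(a):
        groups = {}
        for row, y in zip(rows, labels):
            groups.setdefault(row[a], []).append(y)
        return sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return min(attributes, key=split_entropy)

def build_tree(rows, labels, attributes):
    """Step (ii): recurse on each child until a stopping criterion is met."""
    if len(set(labels)) == 1:                        # pure node: stop
        return {"label": labels[0]}
    if not attributes:                               # no tests left: majority vote
        return {"label": Counter(labels).most_common(1)[0][0]}
    a = best_attribute(rows, labels, attributes)
    children = {}
    for v in {row[a] for row in rows}:
        sub = [(r, y) for r, y in zip(rows, labels) if r[a] == v]
        srows, slabels = zip(*sub)
        children[v] = build_tree(list(srows), list(slabels),
                                 [b for b in attributes if b != a])
    return {"attribute": a, "children": children}

# Toy training set in which stripes perfectly separate the two classes.
rows = [{"stripes": "yes", "size": "big"}, {"stripes": "yes", "size": "small"},
        {"stripes": "no", "size": "big"}, {"stripes": "no", "size": "small"}]
labels = ["zebra", "zebra", "horse", "horse"]
tree = build_tree(rows, labels, ["stripes", "size"])
print(tree["attribute"])  # stripes
```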

2. Problem of Imbalanced Datasets

A dataset is class imbalanced if the classification categories are not approximately equally represented. The level of imbalance (the ratio of minority-class size to majority-class size) can be as extreme as 1:99 [10]. It is noteworthy that class imbalance is emerging as an important issue in designing classifiers [11], [12], [13]. Furthermore, the class with the lowest number of instances is usually the class of interest from the point of view of the learning task [14]. This problem is of great interest because it turns up in many real-world classification problems, such as remote sensing [15], pollution detection [16], risk management [17], fraud detection [18], and especially medical diagnosis [19]–[22].

There exist techniques for developing better-performing classifiers on imbalanced datasets, generally called Class Imbalance Learning (CIL) methods. These methods can be broadly divided into two categories, namely, external methods and internal methods. External methods involve preprocessing of training datasets in order to make them balanced, while internal methods modify the learning algorithms in order to reduce their sensitivity to class imbalance [23]. The main advantage of external methods, as previously pointed out, is that they are independent of the underlying classifier. In this paper, we place more stress on external CIL methods for solving the class imbalance problem.

3. Data Balancing Techniques

Whenever a class in a classification task is underrepresented (i.e., has a lower prior probability) compared to other classes, we consider the data as imbalanced [24], [25]. The main problem in imbalanced data is that the majority classes that are represented by large numbers of patterns rule the classifier decision boundaries at the expense of the minority classes that are represented by small numbers of patterns. This leads to high and low accuracies in classifying the majority and minority classes, respectively, which do not necessarily reflect the true difficulty in classifying these classes. Most common solutions to this problem balance the number of patterns in the minority or majority classes.

Either way, balancing the data has been found to alleviate the problem of imbalanced data and enhance accuracy [24], [25], [26]. Data balancing is performed by, e.g., oversampling patterns of minority classes either randomly or from areas close to the decision boundaries. Interestingly, random oversampling is found comparable to more sophisticated oversampling methods [26] . Alternatively, undersampling is performed on majority classes either randomly or from areas far away from the decision boundaries. We note that random undersampling may remove significant patterns and random oversampling may lead to overfitting, so random sampling should be performed with care. We also note that, usually, oversampling of minority classes is more accurate than undersampling of majority classes [26] .

Resampling techniques can be categorized into three groups: undersampling methods, which create a subset of the original data-set by eliminating instances (usually majority class instances); oversampling methods, which create a superset of the original data-set by replicating some instances or creating new instances from existing ones; and, finally, hybrid methods that combine both sampling approaches. Among these categories there are several different proposals; from this point, we center our attention only on those that have been used in undersampling.
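The random undersampling and oversampling baselines described above can be sketched as follows; the example data and the fixed seed are purely illustrative.

```python
# Sketch of the random resampling baselines; a fixed seed keeps the
# result reproducible. Data values here are stand-ins for real examples.
import random

def random_undersample(majority, minority, seed=0):
    """Drop random majority examples until both classes are the same size."""
    rng = random.Random(seed)
    return rng.sample(majority, len(minority)), minority

def random_oversample(majority, minority, seed=0):
    """Duplicate random minority examples until both classes are the same size."""
    rng = random.Random(seed)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return majority, minority + extra

maj = list(range(100))         # 100 majority-class examples (IR = 10)
mino = list(range(100, 110))   # 10 minority-class examples
u_maj, u_min = random_undersample(maj, mino)
o_maj, o_min = random_oversample(maj, mino)
print(len(u_maj), len(o_min))  # 10 100
```

As the surrounding text warns, undersampling can discard significant patterns and oversampling by duplication can overfit, so both should be applied with care.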

The bottom line is that when studying problems with imbalanced data, using the classifiers produced by standard machine learning algorithms without adjusting the output threshold may well be a critical mistake. The skew against the minority (positive) class generally causes the generation of a high number of false-negative predictions, which lowers the model's performance on the positive class compared with its performance on the negative (majority) class. A comprehensive review of different CIL methods can be found in [27]. The following paragraphs briefly discuss external and internal class imbalance learning methods.
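The effect of adjusting the output threshold mentioned above can be illustrated with a small sketch; the classifier scores below are invented for illustration, not taken from any experiment in this survey.

```python
# Sketch of moving the output threshold: with skewed data, the default
# 0.5 cutoff can miss every positive example. Scores are invented.

def predict(scores, threshold):
    """Label an example positive when its score reaches the threshold."""
    return [1 if s >= threshold else 0 for s in scores]

def tp_rate(labels, preds):
    """TP rate (recall) on the positive class: TP / (TP + FN)."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    return tp / (tp + fn)

# Hypothetical scores: 4 positives, 8 negatives, every score below 0.5.
scores = [0.45, 0.40, 0.35, 0.30, 0.20, 0.15, 0.10, 0.10, 0.05, 0.05, 0.02, 0.01]
labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]

print(tp_rate(labels, predict(scores, 0.5)))   # 0.0 (every positive missed)
print(tp_rate(labels, predict(scores, 0.25)))  # 1.0 (after lowering the cutoff)
```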

The external methods are independent of the learning algorithm being used, and they involve preprocessing of the training datasets to balance them before training the classifiers. Different resampling methods, such as random and focused oversampling and undersampling, fall into this category. In random undersampling, majority-class examples are removed randomly until a particular class ratio is met [28]. In random oversampling, minority-class examples are randomly duplicated until a particular class ratio is met [27]. Synthetic Minority Oversampling TEchnique (SMOTE) [29] is an oversampling method in which new synthetic examples are generated in the neighborhood of the existing minority-class examples rather than directly duplicating them. In addition, several informed sampling methods have been introduced in [30].
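The neighborhood interpolation idea behind SMOTE [29] can be sketched as follows; this is a simplified illustration (brute-force neighbor search, toy 2-D data, assumed parameter names), not the reference implementation.

```python
# Simplified SMOTE-style generator: each synthetic example lies on the
# line segment between a minority example and one of its k nearest
# minority neighbors. Brute-force search and toy data are illustrative.
import random

def smote(minority, n_new, k=2, seed=0):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbors of x (identity check excludes x itself)
        neighbors = sorted((p for p in minority if p is not x),
                           key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)))[:k]
        nn = rng.choice(neighbors)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nn)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]
new = smote(minority, n_new=4)
print(len(new))  # 4
```

Because each synthetic point is a convex combination of two minority examples, it stays inside the minority region rather than duplicating an existing point.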

4. Evaluation Criteria for Class Imbalance Learning

This section follows a design decomposition approach to systematically analyze the different unbalanced domains.

4.1 Evaluation Criteria

To assess the classification results, we count the number of True Positive (TP), True Negative (TN), False Positive (FP) (actually negative, but classified as positive), and False Negative (FN) (actually positive, but classified as negative) examples. It is now well known that error rate is not an appropriate evaluation criterion when there is class imbalance or unequal costs. In this paper, we use AUC, Precision, F-measure, TP Rate, and TN Rate as performance evaluation measures.

Let us define a few well-known and widely used measures, taking C4.5 [7] as the baseline classifier on the popular publicly available machine learning datasets from the UCI repository at Irvine [31]:

The Area Under Curve (AUC) measure is computed by,

AUC = (1 + TPrate − FPrate) / 2   [1]

The Precision measure is computed by,

Precision = TP / (TP + FP)   [2]

The F-measure Value is computed by,

F-measure = (2 × Precision × Recall) / (Precision + Recall)   [3]

The True Positive Rate measure is computed by,

TPrate = TP / (TP + FN)   [4]

The True Negative Rate measure is computed by,

TNrate = TN / (TN + FP)   [5]
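Given the four counts defined in Section 4.1, the measures above can be computed directly; the confusion counts below are illustrative values, not results from any dataset in this survey.

```python
# The five measures of Section 4.1, computed from illustrative counts.
TP, TN, FP, FN = 30, 50, 10, 10

tp_rate = TP / (TP + FN)                      # [4] also called recall
tn_rate = TN / (TN + FP)                      # [5] also called specificity
precision = TP / (TP + FP)                    # [2]
f_measure = 2 * precision * tp_rate / (precision + tp_rate)  # [3]
fp_rate = FP / (FP + TN)
auc = (1 + tp_rate - fp_rate) / 2             # [1] AUC of a single classifier point

print(round(precision, 3), round(f_measure, 3), round(auc, 3))  # 0.75 0.75 0.792
```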

4.2 Benchmark Datasets Used in Class Imbalance Learning

Table 1 summarizes the benchmark datasets used in almost all recent studies on class imbalance learning. For each data set, the number of examples (#Ex.), the number of attributes (#Atts.), the class name of each class (minority and majority), the percentage of examples in each class, and the imbalance ratio (IR) are given. The table is ordered by IR, from low to high imbalance. Complete details for all the datasets can be obtained from Victoria López et al. [38] and the UCI Machine Learning Repository [52].

Table 1. Summary Of Benchmark Imbalanced Datasets

5. Recent Advances on Class Imbalance Learning

Currently, research in class imbalance learning mainly focuses on integrating imbalanced-class learning with other AI techniques; how to achieve this integration is one of the hottest topics in class imbalance learning research. Some recent research directions for class imbalance learning are as follows.

T. Jo et al. [32] have proposed a clustering-based sampling method for handling the class imbalance problem, while S. Zou et al. [33] have proposed a genetic-algorithm-based sampling method. Jinguha Wang et al. [34] have suggested a method for extracting minimum positive and maximum negative features (in terms of absolute value) for imbalanced binary classification. They developed two models to yield the feature extractors: Model 1 first generates a set of candidate extractors that minimize the positive features to zero and then chooses, among these candidates, the ones that maximize the negative features; Model 2 first generates a set of candidate extractors that maximize the negative features and then chooses the ones that minimize the positive features. Compared with traditional feature extraction methods and classifiers, the proposed models are less affected by the imbalance of the dataset. Iain Brown et al. [35] have explored the suitability of gradient boosting, least-squares support vector machines, and random forests for imbalanced credit-scoring data sets such as loan default prediction. They progressively increase the class imbalance in each of these data sets by randomly undersampling the minority class of defaulters, so as to identify to what extent the predictive power of the respective techniques is adversely affected, and they suggest applying random forest and gradient boosting classifiers for better performance. Salvador García et al. [36] have used an evolutionary technique to solve the class imbalance problem. They proposed a method belonging to the family of nested generalized exemplars that accomplishes learning by storing objects in Euclidean n-space; classification of new data is performed by computing the distance to the nearest generalized exemplar, and the method is optimized by selecting the most suitable generalized exemplars with evolutionary algorithms.

Jin Xiao et al. [37] have proposed a Dynamic Classifier Ensemble method for Imbalanced Data (DCEID) that combines ensemble learning with cost-sensitive learning. For each test instance, it adaptively selects the more appropriate of two kinds of dynamic ensemble approach: Dynamic Classifier Selection (DCS) and Dynamic Ensemble Selection (DES). Meanwhile, new cost-sensitive selection criteria are constructed for DCS and DES, respectively, to improve classification of imbalanced data. Victoria López et al. [38] have analyzed the performance of data-level proposals against algorithm-level proposals, focusing on cost-sensitive models, and versus a hybrid procedure that combines the two approaches. They also open a discussion on the intrinsic data characteristics of the imbalanced classification problem, which will help to follow new paths toward improving current models, mainly focusing on class overlap and dataset shift in imbalanced classification.

Yang Yong [39] has proposed a minority-class sampling method based on K-means clustering and the genetic algorithm. The K-means algorithm is used to cluster the minority-class samples, and in each cluster the genetic algorithm is used to generate new samples and validate them. Chris Seiffert et al. [40] have examined a new hybrid sampling/boosting algorithm, called RUSBoost, built from random undersampling and AdaBoost, alongside SMOTEBoost, another algorithm that combines boosting and data sampling, for learning from skewed training data. V. Garcia et al. [41] have investigated the influence of both the imbalance ratio and the classifier on the performance of several resampling strategies for imbalanced data sets. The study focuses on evaluating how learning is affected when different resampling algorithms transform the originally imbalanced data into artificially balanced class distributions.

Table 2 lists recent algorithmic advances in class imbalance learning available in the literature. Obviously, there are many other algorithms which are not included in this table; nevertheless, most of them are variations of the algorithmic frameworks given in Sections 2 and 3. A deeper comparison of the above algorithms and many others can be gathered from the reference list.

Table 2. Recent Advances in Class Imbalance Learning

María Dolores Pérez-Godoy et al. [42] have proposed CO2RBFN, an evolutionary cooperative–competitive model for the design of radial-basis function networks (RBFNs) on imbalanced domains. CO2RBFN follows the evolutionary cooperative–competitive strategy, in which each individual of the population represents a single RBF (a Gaussian function is used as the RBF) and the entire population is responsible for the final solution. This paradigm provides a framework where an individual represents only a part of the solution, competing to survive (it is eliminated if its performance is poor) while at the same time cooperating to build the whole RBFN, which adequately represents the knowledge about the problem and achieves good generalization for new patterns.

Der-Chiang Li et al. [43] have suggested a strategy that over-samples the minority class and under-samples the majority one to balance the datasets. For the majority class, they build up a Gaussian-type fuzzy membership function and use an α-cut to reduce the data size; for the minority class, they use the mega-trend diffusion membership function to generate virtual samples. Furthermore, after balancing the class sizes, they extend the data attributes into a higher-dimensional space using classification-related information to enhance the classification accuracy. Enhong Chen et al. [44] have described an approach to improve text categorization under class imbalance by exploiting the semantic context in text documents. Specifically, they generate new samples of rare classes (categories with relatively small amounts of training data) using global semantic information of the classes represented by probabilistic topic models. In this way, the numbers of samples in different categories become more balanced, and the performance of text categorization can be improved on the transformed data set. This method differs from traditional re-sampling methods, which try to balance the number of documents in different classes by re-sampling the documents in rare classes and can thereby cause overfitting. Another benefit of this approach is the effective handling of noisy samples: since all the new samples are generated by topic models, the impact of noisy samples is dramatically reduced.

Alberto Fernández et al. [45] have proposed an improved version of Fuzzy Rule Based Classification Systems (FRBCSs) in the framework of imbalanced data-sets by means of a tuning step. Specifically, they adapt the 2-tuples-based genetic tuning approach to classification problems, showing the good synergy between this method and some FRBCSs. The proposed algorithm uses two learning methods to generate the rule base for the FRBCS: the first is the method proposed in [46], which they call Chi et al.'s rule generation; the second, defined by Ishibuchi and Yamamoto in [47], is a Fuzzy Hybrid Genetic Based Machine Learning (FH-GBML) algorithm.

J. Burez et al. [48] have investigated how to better handle class imbalance in churn prediction. Using more appropriate evaluation metrics (AUC, lift), they measured the performance gains of sampling (both random and advanced under-sampling) and of two specific modeling techniques (gradient boosting and weighted random forests) over standard modeling techniques. They found that weighted random forests, as a cost-sensitive learner, perform significantly better than standard random forests.

Che-Chang Hsu et al. [49] have proposed Bayesian Support Vector Machines (BSVMs), a learning strategy that assesses the interplay between classification decisions using probabilities, the corresponding decision costs, and the quadratic program of the optimal-margin classifier. The purpose of their method is to provide a pragmatic expansion of the Bayesian approach and to assess how well it aligns with the class imbalance problem. Within this framework, they modified the objectives and constraints of the primal problem to derive an appropriate learning rule for an observed sample.

In [50], Alberto Fernández et al. have proposed working with fuzzy rule based classification systems using a preprocessing step to deal with the class imbalance. Their aim is to analyze the behavior of fuzzy rule based classification systems in the framework of imbalanced data-sets by applying an adaptive inference system with parametric conjunction operators. Jordan M. Malof et al. [51] have empirically investigated how class imbalance in the available set of training cases can impact the performance of the resulting classifier, as well as properties of the selected set. They use the k-Nearest Neighbor (k-NN) classifier, which is well known and has been used in numerous case-based classification studies of imbalanced datasets.

Conclusion

In this paper, the state-of-the-art methodologies for dealing with the class imbalance problem have been reviewed. This issue hinders the performance of standard classifier learning algorithms that assume relatively balanced class distributions, and classic ensemble learning algorithms are no exception. In recent years, several methodologies have been presented that integrate solutions for enhancing the induced classifiers in the presence of class imbalance through the use of evolutionary techniques. However, there is still no framework in which each of them can be classified; for this reason, building a taxonomy in which they can be placed is taken as our future work. Finally, we conclude that intelligence-based algorithms are the need of the hour for improving the results obtained by data preprocessing techniques and training a single classifier.

References

[1]. Juanli Hu, Jiabin Deng & Mingxiang Sui (2009). A New Approach for Decision Tree Based on Principal Component Analysis, Proceedings of Conference on Computational Intelligence and Software Engineering(pp 1-4).
[2]. Huimin Zhao & Atish P. Sinha (2005, September). An Efficient Algorithm for Generating Generalized Decision Forests, IEEE Transactions on Systems, Man, and Cybernetics -Part A : Systems and Humans(VOL.35,NO.5, pp: 287-299).
[3]. D. Liu, C. Lai & W. Lee (2009). A Hybrid of Sequential Rules and Collaborative Filtering for Product Recommendation, Information Sciences 179 (20), pp: 3505-3519.
[4]. M. Mitchell (1997). Machine Learning. McGraw Hill, New York.
[5]. David Hand, Heikki Mannila, & Padhraic Smyth (2001, August). Principles of Data Mining. MIT Press.
[6]. Jiawei Han & Micheline Kamber (2000, April). Data Mining: Concepts and Techniques. Morgan Kaufmann.
[7]. J. Quinlan (1993). C4.5 Programs for Machine Learning, San Mateo, CA:Morgan Kaufmann.
[8]. L. Breiman, J. Friedman, R. Olshen & C. Stone (1984). Classification and Regression Trees. Belmont,CA: Wadsworth.
[9]. J. Quinlan (1986). Induction of decision trees, Machine Learning – 1: 81-106.
[10]. J. Wu, S. C. Brubaker, M. D. Mullin, & J. M. Rehg (2008,Mar). “Fast asymmetric learning for cascade face detection,” IEEE Trans. Pattern Anal. Mach. Intell.,(vol. 30, no. 3, pp. 369–382).
[11]. N. V. Chawla, N. Japkowicz, & A. Kotcz, Eds (2003). Proc. ICML Workshop Learn. Imbalanced Data Sets.
[12]. N. Japkowicz, Ed.(2000). Proc. AAAI Workshop Learn. Imbalanced Data Sets.
[13]. G. M.Weiss (2004, June). “Mining with rarity: A unifying framework,” ACM SIGKDD Explor. Newslett., (vol. 6, no. 1, pp. 7–19).
[14]. N. V. Chawla, N. Japkowicz, and A. Kolcz, Eds. (2004). Special Issue Learning Imbalanced Datasets, SIGKDD Explor. Newsl.,vol. 6(1).
[15]. W.-Z. Lu & D.Wang (2008). “Ground-level ozone prediction by support vector machine approach with a cost-sensitive classification scheme,” Sci. Total. Enviro., (vol. 395, no.2-3, pp. 109–116).
[16]. Y.-M. Huang, C.-M. Hung, & H. C. Jiau, (2006). “Evaluation of neural networks and data mining methods on a credit assessment task for class imbalance problem,” Nonlinear Anal. R. World Appl., (vol. 7, no. 4, pp. 720–747).
[17]. D. Cieslak, N. Chawla, and A. Striegel (2006). “Combating imbalance in network intrusion datasets,” in IEEE Int. Conf. Granular Comput., , (pp. 732–737).
[18]. M. A. Mazurowski, P. A. Habas, J. M. Zurada, J. Y. Lo, J. A. Baker, & G. D. Tourassi (2008). “Training neural network classifiers for medical decision making: The effects of imbalanced datasets on classification performance,” Neural Netw.,(vol. 21, no. 2–3, pp. 427–436).
[19]. A. Freitas, A. Costa-Pereira, and P. Brazdil (2007). “Cost-sensitive decision trees applied to medical data,” in Data Warehousing Knowl. Discov. (Lecture Notes Series in Computer Science), I. Song, J. Eder, and T. Nguyen, Eds., Berlin/Heidelberg, Germany: Springer, (vol. 4654, pp. 303–312).
[20]. K. Kiliç, Özge Uncu & I. B. Türksen (2007). “Comparison of different strategies of utilizing fuzzy clustering in structure identification,” Inf. Sci., (vol. 177, no. 23, pp. 5153–5162).
[21]. M. E. Celebi, H. A. Kingravi, B. Uddin, H. Iyatomi, Y. A. Aslandogan, W. V. Stoecker, & R. H. Moss (2007), “A methodological approach to the classification of dermoscopy images,” Comput.Med. Imag. Grap.,(vol. 31, no. 6, pp. 362–373).
[22]. X. Peng & I. King (2008), “Robust BMPM training based on second-order cone programming and its application in medical diagnosis,” Neural Netw., (vol. 21, no. 2–3, pp. 450–457).
[23]. Rukshan Batuwita & Vasile Palade (2010, June). FSVM-CIL: Fuzzy Support Vector Machines for Class Imbalance Learning, IEEE Transactions on Fuzzy Systems, (vol. 18, no. 3, pp. 558–571).
[24]. N. Japkowicz and S. Stephen (2002), “The Class Imbalance Problem: A Systematic Study,” Intelligent Data Analysis (vol. 6, pp. 429-450).
[25]. M. Kubat and S. Matwin (1997), “Addressing the Curse of Imbalanced Training Sets: One-Sided Selection,” Proc. 14th Int'l Conf. Machine Learning,(pp. 179-186).
[26]. G.E.A.P.A. Batista, R.C. Prati, & M.C. Monard (2004), “A Study of the Behavior of Several Methods for Balancing Machine Learning Training Data,” SIGKDD Explorations,6(1): 20-29.
[27]. D. Cieslak & N. Chawla (2008), “Learning decision trees for unbalanced data,” in Machine Learning and Knowledge Discovery in Databases. Berlin, Germany: Springer-Verlag,( pp. 241–256).
[28]. G.Weiss(2004), “Mining with rarity: A unifying framework,” SIGKDD Explor. Newslett., (vol. 6, no. 1, pp. 7–19).
[29]. N. Chawla, K. Bowyer, & P. Kegelmeyer (2002), “SMOTE: Synthetic minority over-sampling technique,” J. Artif. Intell. Res.,( vol. 16, pp. 321–357).
[30]. J. Zhang & I. Mani (2003), “KNN approach to unbalanced data distributions: A case study involving information extraction,” in Proc. Int. Conf. Mach. Learning, Workshop: Learning Imbalanced Data Sets, Washington, DC,(pp. 42–48).
[31]. A. Asuncion & D. Newman (2007). UCI Repository of Machine Learning Databases (School of Information and Computer Science), Irvine, CA: Univ. of California [Online]. Available: http://www.ics.uci.edu/~mlearn/MLRepository.html.
[32]. T. Jo & N. Japkowicz(2004). “Class imbalances versus small disjuncts,” ACM SIGKDD Explor. Newslett.,( vol. 6, no. 1, pp. 40–49).
[33]. S. Zou, Y. Huang, Y. Wang, J. Wang, & C. Zhou (2008). “SVM learning from imbalanced data by GA sampling for protein domain prediction,” in Proc. 9th Int. Conf. Young Comput. Sci., Hunan, China, , (pp. 982– 987).
[34]. Jinguha Wang, Jane You, Qin Li, & Yong Xu (2012). “Extract minimum positive and maximum negative features for imbalanced binary classification”, Pattern Recognition 45: 1136–1145.
[35]. Iain Brown, Christophe Mues (2012). “An experimental comparison of classification algorithms for imbalanced credit scoring data sets”, Expert Systems with Applications 39 : 3446–3453.
[36]. Salvador García, Joaquín Derrac, Isaac Triguero, Cristobal J. Carmona, Francisco Herrera (2012). “Evolutionary-based selection of generalized instances for imbalanced classification”, Knowledge-Based Systems 25: 3–12.
[37]. Jin Xiao, Ling Xie, Changzheng He & Xiaoyi Jiang (2012). ” Dynamic classifier ensemble model for customer classification with imbalanced class distribution”, Expert Systems with Applications 39 : 3668–3675.
[38]. Victoria López, Alberto Fernández, Jose G. Moreno-Torres, Francisco Herrera (2012). “Analysis of preprocessing vs. cost-sensitive learning for imbalanced classification. Open problems on intrinsic data characteristics”, Expert Systems with Applications 39 : 6585–6608.
[39]. Yang Yong. “The Research of Imbalanced Data Set of Sample Sampling Method Based on K-Means Cluster and Genetic Algorithm”, Energy Procedia 17 : 164 – 170.
[40]. Chris Seiffert, Taghi M. Khoshgoftaar, Jason Van Hulse, & Amri Napolitano (2010,Jan). ”RUSBoost: A Hybrid Approach to Alleviating Class Imbalance”, IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS-PART A: SYSTEMS AND HUMANS,(VOL. 40, NO. 1 pp 185-197).
[41]. V. Garcia, J.S. Sanchez , & R.A. Mollineda (2012). ”On the effectiveness of preprocessing methods when dealing with different levels of class imbalance”, Knowledge-Based Systems 25 : 13–21.
[42]. María Dolores Pérez-Godoy, Alberto Fernández, Antonio Jesús Rivera, María José del Jesus (2010). ”Analysis of an evolutionary RBFN design algorithm, CO2RBFN, for imbalanced data sets”, Pattern Recognition Letters 31 :2375–2388.
[43]. Der-Chiang Li, Chiao-Wen Liu, & Susan C. Hu (2010). “A learning method for the class imbalance problem with medical data sets”, Computers in Biology and Medicine 40: 509–518.
[44]. Enhong Chen, Yanggang Lin, Hui Xiong, Qiming Luo, & Haiping Ma (2011). “Exploiting probabilistic topic models to improve text categorization under class imbalance”, Information Processing and Management 47: 202–214.
[45]. Alberto Fernández, María Josédel Jesus, & Francisco Herrera (2010). ”On the 2-tuples based genetic tuning performance for fuzzy rule based classification systems in imbalanced data-sets”, Information Sciences 180 : 1268–1291.
[46]. Z. Chi, H. Yan, T. Pham, (1996). Fuzzy Algorithms with Applications to Image Processing and Pattern Recognition, World Scientific.
[47]. H. Ishibuchi, T. Yamamoto, T. Nakashima (2005). “Hybridization of fuzzy GBML approaches for pattern classification problems”, IEEE Transactions on System, Man and Cybernetics B 35 (2) : 359–365.
[48]. J. Burez, D. Van den Poel (2009). ”Handling class imbalance in customer churn prediction”, Expert Systems with Applications 36 : 4626–4636.
[49]. Che-Chang Hsu, Kuo-Shong Wang, Shih-Hsing Chang (2011). ”Bayesian decision theory for support vector machines: Imbalance measurement and feature optimization”, Expert Systems with Applications 38: 4698–4704.
[50]. Alberto Fernández, María José del Jesus & Francisco Herrera(2009). ”On the influence of an adaptive inference system in fuzzy rule based classification systems for imbalanced data-sets”, Expert Systems with Applications 36 : 9805–9812.
[51]. Jordan M. Malof, Maciej A. Mazurowski, Georgia D. Tourassi (2012).” The effect of class imbalance on case selection for case-based classifiers: An empirical study in the context of medical decision support”, Neural Networks 25 : 141–145.
[52]. Blake, C., & Merz, C.J. (2000). UCI repository of machine learning databases. Machine-readable data repository, Department of Information and Computer Science, University of California at Irvine, Irvine, CA. Available: http://www.ics.uci.edu/mlearn/MLRepository.html.