References
[1]. Agrawal, R., Imielinski, T., & Swami, A. (1993).
Database mining: A performance perspective. IEEE
Transactions on Knowledge and Data Engineering, 5(6),
914-925.
[2]. Breiman, L., Friedman, J. H., Olshen, R. A., & Stone,
C. J. (1984). Classification and Regression Trees. CRC Press.
[3]. Chen, H. (1995). Machine learning for information
retrieval: neural networks, symbolic learning, and genetic
algorithms. Journal of the American Society for Information
Science, 46(3), 194-216.
[4]. Cheng, J., Fayyad, U. M., Irani, K. B., & Qian, Z. (1988).
Improved decision trees: A generalized version of ID3. In
Proc. Fifth Int. Conf. Machine Learning (pp. 100-107).
[5]. Deisy, C., Subbulakshmi, B., Baskar, S., & Ramaraj, N.
(2007, December). Efficient dimensionality reduction
approaches for feature selection. In International
Conference on Computational Intelligence and Multimedia
Applications (Vol. 2, pp. 121-127). IEEE.
[6]. Gehrke, J., Ganti, V., Ramakrishnan, R., & Loh, W. Y.
(1999, June). BOAT: Optimistic decision tree construction.
In ACM SIGMOD Record (Vol. 28, No. 2, pp. 169-180).
ACM.
[7]. Gehrke, J., Ramakrishnan, R., & Ganti, V. (1998,
August). RainForest: A framework for fast decision tree
construction of large datasets. In VLDB (Vol. 98, pp.
416-427).
[8]. Han, J., Cheng, H., Xin, D., & Yan, X. (2007). Frequent
pattern mining: Current status and future directions. Data
Mining and Knowledge Discovery, 15(1), 55-86.
[9]. Herling, T. J. (1995). Adoption of Computer
Communication Technology by Communication
Faculty: A Case Study. Information Development, 32(4),
986-1000.
[10]. Jin, R., & Agrawal, G. (2003, May). Communication and memory efficient parallel decision tree construction.
In Proceedings of the 2003 SIAM International
Conference on Data Mining (pp. 119-129). Society for
Industrial and Applied Mathematics.
[11]. Joshi, M. V., Karypis, G., & Kumar, V. (1998, March).
ScalParC: A new scalable and efficient parallel
classification algorithm for mining large datasets. In
Proceedings of the First Merged International Parallel
Processing Symposium and Symposium on Parallel and
Distributed Processing (IPPS/SPDP 1998) (pp. 573-579).
IEEE.
[12]. Kass, G. V. (1980). An exploratory technique for
investigating large quantities of categorical data.
Applied Statistics, 29(2), 119-127.
[13]. Loh, W. Y., & Vanichsetakul, N. (1988). Tree-structured
classification via generalized discriminant analysis.
Journal of the American Statistical Association, 83(403),
715-725.
[14]. Mehta, M., Agrawal, R., & Rissanen, J. (1996). SLIQ: A
fast scalable classifier for data mining. Advances in
Database Technology: EDBT '96 (pp. 18-32).
[15]. Mehta, M., Rissanen, J., & Agrawal, R. (1995,
August). MDL-based decision tree pruning. In KDD (pp.
216-221).
[16]. Quinlan, J. R. (1986). Induction of decision trees.
Machine Learning, 1(1), 81-106.
[17]. Quinlan, J. R. (2014). C4.5: Programs for Machine
Learning. Elsevier.
[18]. Quinlan, R. (2004). Data mining tools See5 and C5.0.
RuleQuest Research.
[19]. Ruggieri, S. (2002). Efficient C4.5. IEEE
Transactions on Knowledge and Data Engineering, 14(2),
438-444.
[20]. Safavian, S. R., & Landgrebe, D. (1991). A survey of
decision tree classifier methodology. IEEE Transactions on
Systems, Man, and Cybernetics, 21(3), 660-674.
[21]. Shafer, J., Agrawal, R., & Mehta, M. (1996,
September). SPRINT: A scalable parallel classifier for data
mining. In Proc. 1996 Int. Conf. Very Large Data Bases (pp.
544-555).