Analysis and Benchmarking of Feature Reduction for Classification under Computational Constraints

Omer Subasi, Sayan Ghosh, Joseph Manzano, Bruce Palmer, Andres Marquez

Machine Learning: Science and Technology (2024)

Abstract
Machine learning is often expensive in terms of computational and memory costs because models are trained on large volumes of data. The computational limitations of many current computing systems motivate us to investigate practical approaches, such as feature selection and reduction, that lower time and memory costs without sacrificing the accuracy of classification algorithms. In this work, we carefully review, analyze, and identify feature reduction methods that have low time and memory overheads. We then evaluate the identified methods in terms of their impact on the accuracy, precision, time, and memory costs of traditional classification algorithms. Specifically, we focus on the least resource-intensive feature reduction methods available in the Scikit-Learn library. Since our goal is to identify the best-performing low-cost reduction methods, we do not consider complex, expensive reduction algorithms in this study. In our evaluation, we find that quadratic-scale feature reduction gives the classification algorithms the best trade-off among the competing performance metrics. Results show that, on average and compared to the baselines, quadratic-scale reduction lowers overall training times by 61%, shrinks model sizes by 6x, and increases accuracy scores by 25%.
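The sketch below illustrates the kind of comparison the abstract describes: a cheap filter-style feature selector from Scikit-Learn applied before a traditional classifier, with training time and accuracy measured against an unreduced baseline. The choice of SelectKBest with f_classif, the synthetic dataset, and the interpretation of "quadratic-scale reduction" as keeping roughly the square root of the original feature count are assumptions made for illustration, not the paper's exact protocol.

```python
# Minimal sketch, assuming a low-cost univariate filter (SelectKBest + f_classif)
# and a sqrt(d) target dimensionality; the paper's exact method set and
# reduction scale are not detailed in this abstract.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in data; the paper's actual datasets are not listed here.
X, y = make_classification(n_samples=5000, n_features=400,
                           n_informative=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def fit_and_score(X_tr, X_te):
    """Train a classifier and report test accuracy plus wall-clock fit time."""
    clf = LogisticRegression(max_iter=1000)
    start = time.perf_counter()
    clf.fit(X_tr, y_train)
    elapsed = time.perf_counter() - start
    return accuracy_score(y_test, clf.predict(X_te)), elapsed

# Baseline: all features, no reduction.
base_acc, base_time = fit_and_score(X_train, X_test)

# Reduced: keep ~sqrt(d) features via a cheap univariate filter.
k = int(np.sqrt(X.shape[1]))  # assumed reading of "quadratic-scale" reduction
selector = SelectKBest(f_classif, k=k).fit(X_train, y_train)
red_acc, red_time = fit_and_score(selector.transform(X_train),
                                  selector.transform(X_test))

print(f"baseline:       acc={base_acc:.3f}, train time={base_time:.3f}s")
print(f"reduced (k={k}): acc={red_acc:.3f}, train time={red_time:.3f}s")
```

On a dataset where many features are uninformative, this setup typically shows the reduced model training faster with comparable or better accuracy, which is the trade-off the paper quantifies across its benchmark suite.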
Keywords
machine learning, feature reduction, feature selection, feature extraction, classification, memory, computational costs, Scikit-Learn library