Inverse Feature Learning: Feature learning based on Representation Learning of Error

IEEE Access (2020)

Abstract
This paper proposes inverse feature learning (IFL), a novel supervised feature learning technique that learns a set of high-level features for classification based on an error representation approach. The key contribution of this method is to learn the representation of error as high-level features, whereas current representation learning methods interpret error through loss functions computed from the differences between the true labels and the predicted ones. One advantage of this error representation is that the learned features for each class are obtained independently of the learned features for the other classes; therefore, IFL can learn incrementally, meaning that it can learn a new class's features without retraining on the existing classes. Error representation learning can also improve generalization and reduce the chance of over-fitting by adding a set of impactful features to the original data set that capture the relationship between each instance and the different classes through an error generation and analysis process. This method can be particularly effective for data sets in which the instances of each class have diverse feature representations, or for data sets with imbalanced classes. The experimental results show that the proposed IFL achieves better performance than state-of-the-art classification techniques on several popular data sets. We hope this paper opens a new path toward utilizing the proposed perspective of error representation learning in other feature learning domains.
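To make the idea of error features concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: it uses distance to each class centroid as a stand-in for the per-class "error" of an instance, appends those values to the original features, and shows that each class's error column depends only on that class's own data (so a new class adds a column without changing the existing ones). The function name `error_features` and the centroid-distance choice are assumptions for illustration only.

```python
import numpy as np

def error_features(X, y, X_query, n_classes):
    """Compute a simple per-class 'error' for each query instance:
    its Euclidean distance to each class centroid. This is an
    illustrative stand-in (assumption) for the paper's error
    generation and analysis step."""
    # Each centroid uses only its own class's data, so the error
    # column for one class is independent of the other classes.
    centroids = np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])
    # Result shape: (n_query, n_classes)
    return np.linalg.norm(X_query[:, None, :] - centroids[None, :, :], axis=2)

# Toy data: two well-separated Gaussian blobs.
rng = np.random.default_rng(0)
X0 = rng.normal(loc=0.0, scale=0.5, size=(50, 2))
X1 = rng.normal(loc=3.0, scale=0.5, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

E = error_features(X, y, X, n_classes=2)
X_aug = np.hstack([X, E])  # original features + error features
print(X_aug.shape)  # (100, 4)
```

In this sketch, `X_aug` is the augmented data set the abstract describes: the original features plus one error column per class, which a downstream classifier can then consume.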
Keywords
Training, Feature extraction, Learning systems, Machine learning, Clustering methods, Neural networks, Measurement, Representation learning of error, Inverse feature learning, Classification