Experiments in combining boosting and deep stacked networks

2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP)

Abstract
Both boosting and deep stacking sequentially train their units, taking into account the outputs of the previously trained learners. This parallel suggests the possibility of gaining advantages by combining these techniques, i.e., emphasis and injection, in appropriate ways. In this paper, we propose a first mode for such a combination by simultaneously applying a sufficiently general and flexible emphasis function and injecting the aggregated previous outputs into the learner being designed. We call this kind of classification mechanism Boosted and Aggregated Deep Stacked Networks (B-ADSNs). A series of experiments with selected benchmark databases reveals that, if carefully designed, B-ADSNs never perform worse than ADSNs (DSNs that work with aggregated output injection), and that in some cases their performance is better. We analyze and discuss the conditions for obtaining these favourable results, and, finally, we explain that there are other combination possibilities that merit further study.
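The combination the abstract describes (sequential units, an emphasis function that reweights training samples, and injection of the aggregate of the previously trained units' outputs) can be sketched in code. The following Python sketch is an illustration under assumptions of ours, not the authors' exact B-ADSN formulation: it assumes binary labels in {-1, +1}, uses mean aggregation, picks an exponential emphasis function, and emulates per-sample emphasis by weighted resampling because scikit-learn's MLPClassifier does not accept sample weights. The names train_b_adsn and predict_b_adsn are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_b_adsn(X, y, n_units=5, hidden=16, alpha=0.5, seed=0):
    """Sequentially train MLP units with aggregated-output injection and
    boosting-style emphasis. Labels `y` are assumed to be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    units, outputs = [], []
    n = len(X)
    for t in range(n_units):
        # Inject the aggregate (here: mean) of all previous unit outputs
        # as an extra input feature for the unit being designed.
        agg = np.mean(outputs, axis=0) if outputs else np.zeros(n)
        X_t = np.column_stack([X, agg])
        # Emphasis: up-weight samples the current aggregate gets wrong.
        # An exponential emphasis is one illustrative choice among many.
        w = np.exp(-alpha * y * agg)
        w /= w.sum()
        # MLPClassifier does not accept sample weights, so emulate the
        # emphasis by weighted resampling (a common boosting workaround).
        idx = rng.choice(n, size=n, p=w)
        unit = MLPClassifier(hidden_layer_sizes=(hidden,),
                             max_iter=500, random_state=seed + t)
        unit.fit(X_t[idx], y[idx])
        # Store this unit's soft output on the full set, mapped to [-1, 1].
        outputs.append(unit.predict_proba(X_t)[:, 1] * 2.0 - 1.0)
        units.append(unit)
    return units

def predict_b_adsn(units, X):
    """Replay the same injection chain at test time; the final decision
    is the sign of the aggregated soft outputs."""
    outputs = []
    for unit in units:
        agg = np.mean(outputs, axis=0) if outputs else np.zeros(len(X))
        X_t = np.column_stack([X, agg])
        outputs.append(unit.predict_proba(X_t)[:, 1] * 2.0 - 1.0)
    return np.sign(np.mean(outputs, axis=0))
```

Replaying the same injection chain at test time matters: each unit was trained with the aggregate of its predecessors' outputs as an input, so prediction must rebuild that aggregate in the same order before the final sign decision.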
Keywords
Deep learning, DSN, NN, boosting, emphasis