Two-stream convolutional networks for end-to-end learning of self-driving cars.

CoRR (2018)

Abstract
We propose a methodology that extends the concept of Two-Stream Convolutional Networks to end-to-end learning for self-driving cars with temporal cues. The system learns spatiotemporal features by simultaneously mapping raw images and pre-computed optical flows directly to steering commands. Although optical flow encodes temporally rich information, we found that 2D-CNNs are prone to capturing features only as spatial representations. We show how Multitask Learning favors the learning of temporal features via inductive transfer from a shared spatiotemporal representation. Preliminary results demonstrate a competitive improvement of 30% in prediction accuracy and stability compared to widely used regression methods trained on the Comma.ai dataset.
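To make the described architecture concrete, the following is a minimal sketch in PyTorch (framework assumed; the abstract does not name one) of a two-stream network in this spirit: one stream consumes raw RGB frames, the other pre-computed optical flow, and a fused shared representation feeds two heads for multitask learning. Layer sizes, the fusion scheme, and the auxiliary temporal target are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

def conv_stream(in_channels: int) -> nn.Sequential:
    """Small convolutional encoder; both streams share this structure (assumed sizes)."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 24, kernel_size=5, stride=2), nn.ReLU(),
        nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
        nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d((4, 4)),
        nn.Flatten(),  # -> 48 * 4 * 4 = 768 features per stream
    )

class TwoStreamSteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.spatial_stream = conv_stream(in_channels=3)   # raw RGB frame
        self.temporal_stream = conv_stream(in_channels=2)  # (dx, dy) optical flow
        self.shared = nn.Sequential(nn.Linear(768 * 2, 128), nn.ReLU())
        self.steering_head = nn.Linear(128, 1)  # main task: steering command
        self.aux_head = nn.Linear(128, 1)       # hypothetical auxiliary temporal task (e.g. speed)

    def forward(self, rgb, flow):
        # Late fusion of the two streams into a shared spatiotemporal representation.
        z = torch.cat([self.spatial_stream(rgb), self.temporal_stream(flow)], dim=1)
        h = self.shared(z)
        return self.steering_head(h), self.aux_head(h)

# Joint multitask loss: the auxiliary head pushes the shared representation to
# retain temporal information via inductive transfer (weighting is an assumption).
model = TwoStreamSteeringNet()
rgb = torch.randn(8, 3, 160, 320)
flow = torch.randn(8, 2, 160, 320)
steer_pred, aux_pred = model(rgb, flow)
steer_target, aux_target = torch.randn(8, 1), torch.randn(8, 1)
loss = nn.functional.mse_loss(steer_pred, steer_target) \
     + 0.5 * nn.functional.mse_loss(aux_pred, aux_target)
loss.backward()
```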