
NeurIPS 2021: Huawei Noah's Ark Lab Has 32 Main-Conference Papers and 3 Dataset-Track Papers Accepted

Author: Noah's Ark Lab

Date: 2021-10-20 15:50

NeurIPS is among the most prestigious AI conferences in the world. Noah's Ark Lab had 35 papers accepted this year, up from 20 at last year's conference. The accepted work covers AI theoretical foundations, AI lossless compression, computer vision, minimalist computing, Transformers, reinforcement learning, AutoML, dataset construction, and more. The 35 papers include 1 Oral, 2 Spotlights, and 3 dataset-track papers. According to official conference statistics, NeurIPS received 9,122 valid submissions this year, with an overall acceptance rate of 26%; only 3% of submissions were accepted as Spotlight papers, and the Oral acceptance rate was below 1%.

We will present the lab's research in a multi-part series of topical reports:

1. Reinforcement learning

2. Out-of-distribution research

3. AI lossless compression

4. Optimization theory

5. Dataset construction

6. Autonomous driving and foundation models

7. Efficient models

Paper List:

Reinforcement Learning:

1. Model-based reinforcement learning via imagination with derived memory.

2. A reinforcement learning based bi-level optimization framework for large-scale dynamic pickup and delivery problems.

3. An Efficient Transfer Learning Framework for Multiagent Reinforcement Learning.

4. Adaptive Online Packing-guided Search for POMDPs. 

5. Settling the Variance of Multi-Agent Policy Gradients.

6. Discovering Multi-Agent Auto-Curricula in Two-Player Zero-Sum Games.

7. Flattening Sharpness for Dynamic Gradient Projection Memory Benefits Continual Learning. 

Out-of-Distribution Generalization:

8. Towards a Theoretical Framework of Out-of-Distribution Generalization.

9. MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps.

10. No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data.

AI Lossless Compression:

11. iFlow: Numerically Invertible Flows for Efficient Lossless Compression via a Uniform Coder (Spotlight).

12. OSOA: One-Shot Online Adaptation of Deep Generative Models for Lossless Compression.

13. On the Out of Distribution Generalization of Probabilistic Image Modelling.

Optimization Theory:

14. Stability and Generalization of Bilevel Programming in Hyperparameter Optimization.

15. On Effective Scheduling of Model-based Reinforcement Learning.

16. Greedy and Random Quasi-Newton Methods with Faster Explicit Superlinear Convergence.

Dataset Track:

17. NATURE: Natural Auxiliary Text Utterances for Realistic Spoken Language Evaluation.

18. SODA10M: A Large-Scale 2D Self/Semi-Supervised Object Detection Dataset for Autonomous Driving.

19. One Million Scenes for Autonomous Driving: ONCE Dataset.

Autonomous Driving and Foundation Models:

20. Learning Transferable Features for Point Cloud Detection via 3D Contrastive Co-training.

21. Transformer in Transformer.

22. Augmented Shortcuts for Vision Transformers.

23. Neural Architecture Dilation for Adversarial Robustness.

24. SOFT: Softmax-free Transformer with Linear Complexity (Spotlight).

25. Manifold Topology Divergence: a Framework for Comparing Data Manifolds.

26. Do Neural Optimal Transport Solvers Work? A Continuous Wasserstein-2 Benchmark.

Efficient Models:

27. Learning Frequency Domain Approximation for Binary Neural Networks (Oral).

28. Dynamic Resolution Network.

29. Post-Training Quantization for Vision Transformer.

30. Handling Long-tailed Feature Distribution in AdderNets.

31. An Empirical Study of Adder Neural Networks for Object Detection.

32. Adder Attention for Vision Transformer.

33. Towards Stable and Robust AdderNets.

34. S3: Sign-Sparse-Shift Reparametrization for Effective Training of Low-bit Shift Networks.

35. Demystifying and Generalizing Binary Connect. 

[Reprint note]: This article is reprinted from Noah's Ark Lab for academic sharing only. For any questions, please contact us at report@aminer.cn.
