Split-Ensemble: Efficient OOD-aware Ensemble via Task and Model Splitting
CoRR (2023)
Abstract
Uncertainty estimation is crucial for machine learning models to detect
out-of-distribution (OOD) inputs. However, conventional discriminative deep
learning classifiers produce uncalibrated closed-set predictions for OOD data.
More robust classifiers with uncertainty estimation typically require either a
potentially unavailable OOD dataset for outlier-exposure training or a
considerable amount of additional memory and compute to build ensemble models.
In this work, we improve on uncertainty estimation without extra OOD data or
additional inference costs using an alternative Split-Ensemble method.
Specifically, we propose a novel subtask-splitting ensemble training objective,
where a common multiclass classification task is split into several
complementary subtasks. Then, each subtask's training data can be considered as
OOD to the other subtasks. Diverse submodels can therefore be trained on each
subtask with OOD-aware objectives. The subtask-splitting objective enables us
to share low-level features across submodels to avoid parameter and
computational overheads. In particular, we build a tree-like Split-Ensemble
architecture by performing iterative splitting and pruning from a shared
backbone model, where each branch serves as a submodel corresponding to a
subtask. This leads to improved accuracy and uncertainty estimation across
submodels under a fixed ensemble computation budget. Empirical study with
ResNet-18 backbone shows Split-Ensemble, without additional computation cost,
improves accuracy over a single model by 0.8%, 1.8%, and 25.5% on CIFAR-10,
CIFAR-100, and Tiny-ImageNet, respectively. OOD detection for the same backbone
and in-distribution datasets surpasses a single model baseline by,
correspondingly, 2.2%, 8.1%, and 29.6% mean AUROC. Code will be publicly
available at https://antonioo-c.github.io/projects/split-ensemble
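The core subtask-splitting idea in the abstract can be illustrated with a minimal sketch (not the authors' code; the function names and the contiguous class partition are illustrative assumptions): a K-class task is partitioned into complementary subtasks, and for each submodel the samples belonging to the other subtasks are relabeled as an extra "OOD" class, enabling OOD-aware training without any external outlier data.

```python
# Minimal sketch of subtask splitting for OOD-aware ensemble training.
# Assumption: classes are partitioned into equal-sized contiguous groups;
# the paper's actual grouping strategy may differ.

def split_subtasks(num_classes, num_subtasks):
    """Partition class indices into complementary subtasks."""
    per = num_classes // num_subtasks
    return [list(range(i * per, (i + 1) * per)) for i in range(num_subtasks)]

def relabel_for_subtask(labels, subtask_classes):
    """Map labels to subtask-local indices; samples from other subtasks
    are treated as a single extra 'OOD' class (index len(subtask_classes))."""
    ood_index = len(subtask_classes)
    local = {c: i for i, c in enumerate(subtask_classes)}
    return [local.get(y, ood_index) for y in labels]

# Example: a 10-class task (e.g. CIFAR-10) split into 2 subtasks of 5 classes.
subtasks = split_subtasks(10, 2)                 # [[0..4], [5..9]]
labels = [0, 3, 7, 9]
print(relabel_for_subtask(labels, subtasks[0]))  # → [0, 3, 5, 5]
```

Each submodel (a branch of the shared, iteratively split-and-pruned backbone) would then be trained on its subtask's relabeled data, so every other subtask's samples act as free OOD supervision.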