Minimal Assumptions for Optimal Serology Classification: Theory and Implications for Multidimensional Settings and Impure Training Data

CoRR (2023)

Abstract
Minimizing error in prevalence estimates and diagnostic classifiers remains a challenging task in serology. In theory, these problems can be reduced to modeling class-conditional probability densities (PDFs) of measurement outcomes, which control all downstream analyses. However, this task quickly succumbs to the curse of dimensionality, even for assay outputs with only a few dimensions (e.g., target antigens). To address this problem, we propose a technique that uses empirical training data to classify samples and estimate prevalence in arbitrary dimension without direct access to the conditional PDFs. We motivate this method via a lemma that relates relative conditional probabilities to minimum-error classification boundaries. This leads us to formulate an optimization problem that: (i) embeds the data in a parameterized, curved space; (ii) classifies samples based on their position relative to a coordinate axis; and (iii) subsequently optimizes the space by minimizing the empirical classification error of pure training data, for which the classes are known. Interestingly, the solution to this problem requires use of a homotopy-type method to stabilize the optimization. We then extend the analysis to the case of impure training data, for which the classes are unknown. We find that two impure datasets suffice for both prevalence estimation and classification, provided they satisfy a linear independence property. Lastly, we discuss how our analysis unifies discriminative and generative learning techniques in a common framework based on ideas from set and measure theory. Throughout, we validate our methods in the context of synthetic data and a research-use SARS-CoV-2 enzyme-linked immunosorbent assay (ELISA).
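The abstract only summarizes the method, but its central optimization can be made concrete. The sketch below is not the authors' implementation: it assumes a simple quadratic boundary function phi_theta(x) standing in for the parameterized curved-space embedding, labels a sample positive when phi_theta(x) > 0 (its position relative to a coordinate axis), and minimizes a sigmoid-smoothed relaxation of the empirical classification error while annealing the smoothing width tau toward zero, in the spirit of the homotopy-type stabilization the abstract mentions. All variable names, the form of phi, and the tau schedule are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(0)

    # Synthetic 2-D ELISA-like training data with known (pure) classes:
    # negatives cluster near the origin, positives are shifted and broader.
    X_neg = rng.normal(loc=[0.5, 0.5], scale=0.3, size=(200, 2))
    X_pos = rng.normal(loc=[1.5, 1.8], scale=0.5, size=(200, 2))
    X = np.vstack([X_neg, X_pos])
    y = np.concatenate([-np.ones(200), np.ones(200)])  # -1 negative, +1 positive

    def phi(theta, X):
        # Quadratic boundary function; phi > 0 is the "positive" side.
        a, b, c, d, e, f = theta
        x1, x2 = X[:, 0], X[:, 1]
        return a*x1**2 + b*x2**2 + c*x1*x2 + d*x1 + e*x2 + f

    def smoothed_error(theta, tau):
        # Sigmoid relaxation of the 0/1 empirical error; recovers the sharp
        # misclassification rate in the limit tau -> 0.
        return np.mean(expit(-y * phi(theta, X) / tau))

    # Homotopy/continuation: solve a sequence of progressively sharper
    # problems, warm-starting each from the previous solution. Optimizing
    # the sharp 0/1 loss directly is unstable, which is the motivation the
    # abstract gives for a homotopy-type method.
    theta = np.array([0.0, 0.0, 0.0, 1.0, 1.0, -2.0])
    for tau in [1.0, 0.3, 0.1, 0.03, 0.01]:
        theta = minimize(smoothed_error, theta, args=(tau,),
                         method="Nelder-Mead").x

    print("empirical training error:", np.mean(y * phi(theta, X) < 0))

The impure-data result can likewise be illustrated with a toy calculation. For any measurable set D, an impure dataset with prevalence q_i has measure Q_i(D) = q_i P(D) + (1 - q_i) N(D), where P and N are the positive and negative class-conditional measures. When q_1 != q_2 the 2x2 mixture system is invertible, a simple instance of the linear-independence property the abstract invokes. The snippet assumes the q_i are known purely for illustration; the paper's full treatment of unknown-class training data is not reproduced here.

    import numpy as np

    q1, q2 = 0.2, 0.7          # prevalences of the two impure training sets
    P_D, N_D = 0.9, 0.1        # true (unknown) class probabilities of a set D

    # Observable quantities: the impure datasets' empirical masses on D.
    Q1_D = q1 * P_D + (1 - q1) * N_D
    Q2_D = q2 * P_D + (1 - q2) * N_D

    # Invert the mixture system; det(A) = q1 - q2, so q1 != q2 suffices.
    A = np.array([[q1, 1 - q1],
                  [q2, 1 - q2]])
    P_hat, N_hat = np.linalg.solve(A, [Q1_D, Q2_D])
    print(P_hat, N_hat)        # recovers 0.9 and 0.1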
Keywords
optimal serology classification, training, multidimensional settings