Computationally Selected Multivalent HIV-1 Subtype C Vaccine Protects Against Heterologous SHIV Challenge
Vaccines (2025)
Duke Human Vaccine Institute
Abstract
Background: The RV144 trial in Thailand is the only HIV-1 vaccine efficacy trial to date to demonstrate any efficacy. Genetic signatures suggested that antibodies targeting variable loop 2 (V2) of the HIV-1 envelope played an important protective role. The follow-up ALVAC-prime, protein-boost trial in southern Africa (HVTN 702) failed to show any efficacy; one hypothesis for this failure is the greater diversity of subtype C viruses in southern Africa relative to CRF01_AE viruses in Thailand.
Methods: Here, we determined whether an ALVAC prime with computationally selected gp120 boost immunogens, chosen to maximize coverage of subtype C diversity in the variable V1 and V2 regions (V1V2), improved protection of non-human primates (NHPs) from a heterologous subtype C SHIV challenge compared with more traditional regimens.
Results: An ALVAC prime with trivalent subtype C gp120 boosts resulted in statistically significant protection from repeated intrarectal SHIV challenges compared with the control. Evaluation of the immunogenicity of each vaccine regimen at the time of challenge demonstrated that the different gp120 combination boosts elicited similarly high magnitudes of gp120-binding antibodies and breadth of V1V2-binding antibodies, as well as strong Fc-mediated immune responses. Little to no neutralization of the challenge virus was detected. A Cox proportional hazards analysis of five pre-selected immune parameters at the time of challenge identified ADCC against the challenge envelope as a correlate of protection. Systems serology analysis revealed that the immune responses elicited by the different vaccine regimens were distinct and identified further correlates of resistance to infection.
Conclusions: Computationally designed vaccines with maximized subtype C V1V2 coverage mediated protection of NHPs from a heterologous Tier-2 subtype C SHIV challenge.
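The correlates analysis described above treats the number of challenges until infection as a survival time and each immune parameter as a covariate in a Cox proportional hazards model. Below is a minimal sketch of such an analysis in Python using the lifelines package; the data frame, column names, and values are hypothetical illustrations, not the study's actual data or analysis pipeline.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-animal data (NOT from the study): number of
# intrarectal challenges until infection, infection status, and one
# immune parameter (e.g., an ADCC readout) measured at challenge.
df = pd.DataFrame({
    "challenges_to_infection": [3, 8, 5, 12, 12, 4, 7, 10, 2, 12],
    "infected": [1, 1, 1, 0, 0, 1, 1, 1, 1, 0],  # 0 = censored (uninfected at study end)
    "adcc_readout": [2.1, 3.4, 2.8, 4.0, 3.9, 2.0, 3.1, 3.6, 1.8, 4.2],
})

# Fit the Cox model; a hazard ratio below 1 for the immune parameter
# would indicate association with protection (lower per-challenge risk).
cph = CoxPHFitter()
cph.fit(df, duration_col="challenges_to_infection", event_col="infected")
cph.print_summary()  # hazard ratios, confidence intervals, p-values
```

In a study like this one, each pre-selected immune parameter would be tested in this way, with appropriate adjustment for multiple comparisons.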
Key words
HIV-1, subtype C, variable loops 1 and 2, correlate of protection, systems serology