The PDX Data Commons and Coordinating Center (PDCCC) for PDXNet in Support of Preclinical Research
CANCER RESEARCH (2019)
Seven Bridges Genomics
Abstract
Patient-derived xenografts (PDX) are proven models for studying novel drugs or drug combinations and for testing hypotheses in preclinical studies. The overarching goal of the PDXNet is to coordinate the development of appropriate PDX models and methods for preclinical drug testing to advance CTEP clinical development of new cancer agents.

The PDXNet is an NCI-funded consortium of six PDX Development and Trial Centers (PDTCs) and one PDCCC. Four PDTCs are responsible for developing PDXs and executing specific preclinical trials focused on cancer types including breast cancer, melanoma, and lung cancer. The other two recently awarded centers focus specifically on minority PDX models and preclinical trials. In addition to the PDTCs, the NCI Patient-Derived Models Repository (PDMR) at the Frederick National Laboratory for Cancer Research (FNLCR) is providing models and data to the PDXNet. The PDCCC is responsible for coordinating the network and for developing standards for PDX generation, data analysis, and metadata harmonization. The PDX Data Commons is built on top of existing NCI resources, leveraging the Cancer Genomics Cloud maintained by Seven Bridges Genomics, where PDXNet data are co-located with TCGA and other large-scale datasets. The PDCCC is co-led by experts from the Jackson Laboratory, who provide scientific leadership in xenograft methods and cancer biology to ensure the promulgation of standards well-suited to the PDX community.

A new portal has been set up at https://www.pdxnetwork.org/ to serve as the point of access to PDXNet resources. In addition, we established ongoing network-wide meetings to facilitate knowledge exchange, held PDXNet portal trainings, and set up working groups to tackle specific challenges. For instance, the Data Ontology working group has been building a common data ontology model specifically for PDX datasets, and we are in the process of annotating the first dataset with this new ontology on the PDXNet portal. In parallel, the Workflows working group has been building and benchmarking RNA-seq and whole-exome sequencing analysis workflows to standardize data processing among PDXNet grantees and create a harmonized PDXNet dataset. These PDX models and the accompanying data will be made available to the community for data mining and/or preclinical research.

The PDXNet is a strong step toward building consensus around PDX models, so that the power for discovery can be expanded by making multi-institutional PDX cohorts a reality. As the coordinating center, we are also working closely with the EuroPDX project to exchange standards and knowledge and to support the PDX community with a shared set of standards going forward. The PDCCC is central to this process, systematically capturing and analyzing the variables most influential to PDX models and sharing protocols and tools to make PDXs an interchangeable research currency for preclinical discovery.

Citation Format: Jacqueline Rosains, Anuj Srivastava, Wingyi Woo, Vishal Sarsani, ZiMing Zhao, Javad Noorbakhsh, Ogan D. Abaan, Christian Frech, Jack DiGiovanna, Ryan Jeon, Steve Neuhauser, Peter Robinson, Yvonne A. Evrard, Carol Bult, Jeffrey A. Moscow, Brandi Davis-Dusenbery, Jeffrey H. Chuang. The PDX Data Commons and Coordinating Center (PDCCC) for PDXNet in support of preclinical research [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2019; 2019 Mar 29-Apr 3; Atlanta, GA. Philadelphia (PA): AACR; Cancer Res 2019;79(13 Suppl):Abstract nr 1074.
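Note: the abstract states that the PDX Data Commons is hosted on the Cancer Genomics Cloud (CGC), where PDXNet data are co-located with TCGA. As an illustration of what that co-location means in practice, the following is a minimal sketch of programmatic access using the publicly documented sevenbridges-python client. It assumes a CGC account and developer auth token; the token value and the PDXNet project identifier shown are placeholders/hypothetical, not confirmed resource names from the abstract.

```python
# Minimal sketch: browse projects and files on the Cancer Genomics Cloud,
# the Seven Bridges platform the abstract names as hosting the PDX Data
# Commons. Requires: pip install sevenbridges-python
import sevenbridges as sbg

# Authenticate against the CGC API endpoint with a developer token
# (placeholder value; generate a real token from your CGC account).
api = sbg.Api(url="https://cgc-api.sbgenomics.com/v2", token="YOUR_CGC_TOKEN")

# List projects visible to the authenticated account.
for project in api.projects.query(limit=25):
    print(project.id, project.name)

# List files in one project; "pdxnet/harmonized-data" is a hypothetical
# identifier used only to show the query shape.
for f in api.files.query(project="pdxnet/harmonized-data", limit=10):
    print(f.name, f.size)
```

Because TCGA and other large-scale datasets live on the same platform, the same client and credentials would let an analysis workflow reference PDXNet and TCGA files side by side without downloading either dataset locally.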
Key words
Phenotypic Profiling