Microarray and Proteomics Expression Profiling Identifies Several Candidates, Including the Valosin‐containing Protein (VCP), Involved in Regulating High Cellular Growth Rate in Production CHO Cell Lines
BIOTECHNOLOGY AND BIOENGINEERING (2010)
Dublin City University
Abstract
A high rate of cell growth (µ), leading to rapid accumulation of viable biomass, is a desirable phenotype during scale-up operations and the early stages of production cultures. In order to identify genes and proteins that contribute to higher growth rates in Chinese hamster ovary (CHO) cells, a combined approach using microarray and proteomic expression profiling was carried out on two matched pairs of CHO production cell lines that displayed either fast or slow growth rates. Statistical analysis of the microarray and proteomic data separately resulted in the identification of 118 gene transcripts and 58 proteins that were differentially expressed between the fast- and slow-growing cells. Overlap comparison of both datasets identified a priority list of 21 candidates associated with a high growth rate phenotype in CHO. Functional analysis (by siRNA) of five of these candidates identified the valosin-containing protein (VCP) as having a substantial impact on CHO cell growth and viability. Knockdown of HSPB1 and ENO1 also affected cell growth (negatively and positively, respectively). Further functional validation in CHO using both gene knockdown (siRNA) and overexpression (cDNA) confirmed that altered VCP expression impacted CHO cell proliferation, indicating that VCP and the other genes and proteins identified here may play an important role in the regulation of CHO cell growth during log-phase culture and are potential candidates for CHO cell line engineering strategies. Biotechnol. Bioeng. 2010; 106: 42–56. © 2010 Wiley Periodicals, Inc.
Key words
growth rate, CHO, microarray, proteomics, VCP