Machine-learning-optimized Cas12a Barcoding Enables the Recovery of Single-Cell Lineages and Transcriptional Profiles
Molecular Cell (2022), SCI Quartile 1 (Q1)
Stanford University
Abstract
The development of CRISPR-based barcoding methods creates an exciting opportunity to understand cellular phylogenies. We present a compact, tunable, high-capacity Cas12a barcoding system called dual acting inverted site array (DAISY). We combined high-throughput screening and machine learning to predict and optimize the 60-bp DAISY barcode sequences. After optimization, top-performing barcodes had ∼10-fold increased capacity relative to the best random-screened designs and performed reliably across diverse cell types. DAISY barcode arrays generated ∼12 bits of entropy and ∼66,000 unique barcodes. Thus, DAISY barcodes—at a fraction of the size of Cas9 barcodes—achieved high-capacity barcoding. We coupled DAISY barcoding with single-cell RNA-seq to recover lineages and gene expression profiles from ∼47,000 human melanoma cells. A single DAISY barcode recovered up to ∼700 lineages from one parental cell. This analysis revealed heritable single-cell gene expression and potential epigenetic modulation of memory gene transcription. Overall, Cas12a DAISY barcoding is an efficient tool for investigating cell-state dynamics.
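The abstract quantifies barcode diversity in bits of Shannon entropy. As a rough illustration of how that metric relates to observed barcode counts, here is a minimal sketch; it is not the paper's analysis pipeline, and the helper name `barcode_entropy` and the toy data are assumptions for demonstration only:

```python
import math
from collections import Counter

def barcode_entropy(barcodes):
    """Shannon entropy (in bits) of an observed barcode distribution."""
    counts = Counter(barcodes)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Toy data: 8 equally frequent barcodes -> exactly 3 bits.
# A uniform distribution over 4,096 barcodes would give 12 bits;
# a skewed distribution over ~66,000 barcodes can also land near
# 12 bits, since rare barcodes contribute little to the entropy.
toy = [f"bc{i % 8}" for i in range(800)]
print(barcode_entropy(toy))  # 3.0
```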
Key words
CRISPR barcoding, machine learning, online learning optimization, Cas12a, high-throughput screening, single-cell genomics, lineage tracking, transcriptional memory, PRC2, melanoma
Related Papers
Clonally Heritable Gene Expression Imparts a Layer of Diversity Within Cell Types. Cell Systems, 2024. Cited by 18.
Bandit Theory and Thompson Sampling-Guided Directed Evolution for Sequence Optimization. NeurIPS, 2022. Cited by 6.
A Multifaceted Signal Recorder of Cellular Experiences Using Cas12a Base-Editing. Trends in Biotechnology, 2022. Cited by 0.
Developmental Neuroscience, 2023. Cited by 0.
High-Throughput Identification, Modeling, and Analysis of Cancer Driver Genes in Vivo. Cold Spring Harbor Perspectives in Medicine, 2023. Cited by 1.
PhyloVelo Enhances Transcriptomic Velocity Field Mapping Using Monotonically Expressed Genes. Nature Biotechnology, 2024. Cited by 6.
Expressed Barcoding Enables High-Resolution Tracking of the Evolution of Drug Tolerance. Cancer Research, 2023. Cited by 1.
Cell Reports Methods, 2024. Cited by 0.
Loss of YTHDC1 m6A Reading Function Promotes Invasiveness in Urothelial Carcinoma of the Bladder. Experimental & Molecular Medicine, 2025. Cited by 0.
Nature Communications, 2025. Cited by 0.