
Challenges for Quality and Utilization of Real-World Data for Diffuse Large B-cell Lymphoma in REALYSA, a LYSA Cohort.

Blood Advances (2024)

Hospices Civils de Lyon

Abstract
Real-world data (RWD) are essential to complement clinical trial (CT) data, but major challenges remain, such as data quality. REal world dAta in LYmphoma and Survival in Adults (REALYSA) is a prospective, non-interventional, multicentric cohort started in 2018 that included patients newly diagnosed with lymphoma in France. Herein is a proof-of-concept analysis on patients with first-line diffuse large B-cell lymphoma (DLBCL) to (1) evaluate the capacity of the cohort to provide robust data through a multistep validation process; (2) assess the consistency of the results; and (3) conduct an exploratory transportability assessment of 2 recent phase 3 CTs (POLARIX and SENIOR). The analysis population comprised 645 patients with DLBCL included before 31 March 2021 who received immunochemotherapy and for whom 3589 queries were generated, resulting in high data completeness (<4% missing data). Median age was 66 years, with mostly advanced-stage disease and high International Prognostic Index (IPI) score. Treatments were mostly rituximab, cyclophosphamide, doxorubicin hydrochloride, vincristine, and prednisone (R-CHOP, 75%) and reduced-dose R-CHOP (13%). Estimated 1-year event-free survival (EFS) and overall survival rates were 77.9% and 90.0%, respectively (median follow-up, 9.9 months). Regarding transportability, when applying the CTs' main inclusion criteria (age, performance status, and IPI), outcomes seemed comparable between patients in REALYSA and the standard arms of POLARIX (1-year progression-free survival, 79.8% vs 79.8%) and SENIOR (1-year EFS, 64.5% vs 60.0%). With its rigorous data validation process, REALYSA provides high-quality RWD, thus constituting a platform for numerous scientific purposes.
Keywords
Liquid Biopsies