Development and Validation of a Machine Learning Algorithm for Prediction of Platelet Transfusion Efficiency in Patients with Hematological Diseases
Blood (2019)
Southeast Univ
Abstract
Background: Prophylactic platelet transfusion protects patients with hematological diseases against bleeding events and enables invasive procedures during chemotherapy and hematopoietic stem cell transplantation. Determining the optimal transfusion dose and monitoring transfusion efficiency are pressing issues because platelet products have a short shelf life, clinical demand is rising, and the donor pool is shrinking. Many studies have explored factors that may affect the clinical efficacy of platelet transfusion and the characteristics of patients with different transfusion outcomes, but previous work has not adequately addressed the clinical needs surrounding platelet transfusion.

Methods: The aim of our study was to develop a model for evaluating the efficacy of platelet transfusion using a machine learning (ML) algorithm. Unlike traditional statistical methods, an ML algorithm learns continuously from the data and forms a self-training model, which makes it more accurate than hand-built models and largely independent of assumptions about the model and its parameters. We adopted a multi-layer fully connected neural network (MLNN) because it simulates the multi-layer interconnections of the human nervous system and is well suited to processing imprecise and fuzzy information. The neural network was used to perform a multidimensional analysis relating the factors that affect platelet transfusion efficacy to the corrected increment of the platelet count (aCCI) and to explore the correlations among these influencing factors.

Results: The study used data from 1840 platelet transfusions performed in 460 patients with hematological diseases. Participants ranged in age from 16 to 92 years (median 59.5); 199 were female and 261 were male. The data were split into two parts: two thirds served as the training set and the remaining third was used for validation. We selected 30 factors (patient-related factors and transfusion product-related characteristics) that may affect the efficacy of platelet transfusion, excluding platelet storage time (all products were stored < 2 days), and established a model that predicts platelet transfusion efficacy from the volume of platelets transfused. After the model was established, it was tested for goodness of fit, and the loss value was found to stabilize.

Conclusions: This model may be used not only to predict a patient's post-transfusion platelet count and the amount of platelets that needs to be transfused, but also to support the solution of related problems.

Disclosures: No relevant conflicts of interest to declare.
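The abstract neither defines aCCI nor shows the network itself, so the following is a minimal, hypothetical sketch (in PyTorch, a framework the paper does not specify) of a multi-layer fully connected network that maps the 30 transfusion-related factors to a predicted aCCI and is trained until the loss stabilizes, as the Results describe. The aCCI is assumed here to correspond to the conventional corrected count increment, CCI = (post-transfusion count − pre-transfusion count) × body surface area / platelet dose (× 10^11); the layer widths, optimizer, and learning rate are illustrative assumptions, not values reported by the authors.

```python
# Hypothetical sketch of the multi-layer fully connected network (MLNN)
# described in the abstract; not the authors' code.
import torch
import torch.nn as nn

class PlateletMLNN(nn.Module):
    """Maps 30 transfusion-related factors to a predicted aCCI."""
    def __init__(self, n_features: int = 30):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),   # layer widths are assumptions
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 1),            # single output: predicted aCCI
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def fit(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
        epochs: int = 200, lr: float = 1e-3) -> list[float]:
    """Train with mean-squared-error loss and return the loss history,
    so that stabilization of the loss value can be checked."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    history = []
    for _ in range(epochs):
        optimizer.zero_grad()
        pred = model(x)
        loss = loss_fn(pred, y.unsqueeze(1) if y.dim() == 1 else y)
        loss.backward()
        optimizer.step()
        history.append(loss.item())
    return history
```

In use, the 1840 transfusions would be split two thirds / one third into training and validation sets, as the abstract describes, with the training and validation losses monitored to confirm that they stabilize.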