
Switch-Type Electrochemiluminescence Aptasensor for AFB1 Detection Based on CoS Quantum Dots Encapsulated in Co-LDH and a Ferrocene Quencher.

Zhuoliang Hu, Qiqin Wang, Zelin Yang, Mengjie Chen, Maoqiang Wu, Hao Liang, Jianbin Pan, Jun Dong, Luyong Zhang, Duanping Sun

Analytical Chemistry (2025)

Abstract
Among the various aflatoxin B1 (AFB1) assays, accurate detection is difficult because false positives and false negatives frequently arise from limited sensitivity, expensive equipment, or inadequate sample pretreatment. Here, an "off-on" switch-type electrochemiluminescence (ECL) aptasensor was constructed with cobalt-sulfur quantum dots encapsulated in hollow cobalt-layered double hydroxide nanocages as an enhanced luminescent probe (Co-LDH@QDs) and a ferrocene-modified aptamer (Fc-APT) as a luminescent quencher. In the absence of AFB1, Fc-APT hybridized with complementary DNA modified with a DNA nanotetrahedron, which facilitated electron transfer between ferrocene and Co-LDH@QDs and efficiently quenched the ECL intensity into an "off" state. In the presence of AFB1, Fc-APT specifically recognized AFB1 and detached from the electrode interface, switching the sensor to an "on" state in which the ECL intensity increased with increasing AFB1 concentration. The linear range of the proposed ECL aptasensor for AFB1 detection was 0.1 pg mL⁻¹ to 10 ng mL⁻¹, with a detection limit of 0.03 pg mL⁻¹. We successfully applied the proposed ECL aptasensor to corn samples, providing an economical and simple alternative to complex and costly enzyme-linked immunoassays. The switch-type ECL aptasensor provides quick, accurate, and prospective technological support for the precise management of food safety.
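As a rough illustration of how the reported "signal-on" response could be turned into a concentration estimate, the Python sketch below fits a conventional intensity-versus-log(concentration) calibration line and inverts it. The calibration readings, the log-linear model, and the helper estimate_afb1 are all assumptions for illustration only; the abstract specifies the linear range (0.1 pg mL⁻¹ to 10 ng mL⁻¹) and the detection limit (0.03 pg mL⁻¹), not the calibration equation.

# Hypothetical calibration sketch; the intensity values below are invented for illustration.
import numpy as np

# Assumed calibration data: AFB1 concentrations (pg/mL) spanning the reported linear range
# and placeholder ECL intensities (arbitrary units).
conc_pg_per_ml = np.array([0.1, 1.0, 10.0, 100.0, 1_000.0, 10_000.0])
ecl_intensity = np.array([520.0, 980.0, 1450.0, 1930.0, 2410.0, 2880.0])

# ECL aptasensors with wide dynamic ranges are commonly calibrated as intensity vs. log10(concentration).
slope, intercept = np.polyfit(np.log10(conc_pg_per_ml), ecl_intensity, deg=1)

def estimate_afb1(intensity_au: float) -> float:
    """Invert the linear fit to estimate AFB1 concentration (pg/mL) from an ECL reading."""
    return 10 ** ((intensity_au - intercept) / slope)

# Example: a corn extract giving an ECL reading of 1700 a.u.
print(f"Estimated AFB1: {estimate_afb1(1700):.2f} pg/mL")

The log-linear form is only a common choice for calibrations spanning several orders of magnitude, such as the five decades reported here; the actual fitting function used in the paper is not given in the abstract.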