A Low Power, Large Dynamic Range, CMOS Amplifier and Analog Memory for Capacitive Sensors
OpenAlex (1996)
Abstract
This paper presents the design of a CMOS charge-to-voltage amplifier and its integration within an analog memory. Together they provide the necessary front-end electronics for the CMS electromagnetic calorimeter (ECAL) preshower detector system in the LHC experiment foreseen at the CERN particle physics laboratory. The design and measurements of the amplifier, realised as a 16-channel prototype chip in a 1.5μm bulk CMOS process, are presented. Results show a mean gain of 1.74mV/mip and a mean peaking time of 18ns, with channel-to-channel variations of σ(peak voltage) = 8% and σ(peak time) = 6.5%. The dynamic range is shown to be linear over 400mips, with an integral non-linearity (INL) of 0.05mV expressed in terms of sigma from the mean gain over the 400mip range. The measured noise of the amplifier was ENC = 1800 + 41e/pF, with a power consumption of 2.4mW/channel. The amplifier tolerates extreme levels of leakage current: the gain remains constant for up to 200μA of leakage current. The integration of this amplifier within a 32-channel, 128-cell analog memory chip, "DYNLDR", is then demonstrated. The DYNLDR samples at 40MHz with a storage time of up to 3.2μs and provides continuous Write/Read access with no dead time. Triggered data are protected within the memory until requested for readout, which is performed at 2.5MHz. The memory is designed with a steerable DC level to enable maximum dynamic-range performance. Measurements of the DYNLDR are presented that confirm the original amplifier performance. The memory itself has a very low pedestal non-uniformity (σ(ped)) of 0.9mV and a gain of 10mV/mip.
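As an illustration of how the quoted noise parameterisation ENC = 1800 + 41e/pF scales with input capacitance, the following minimal Python sketch evaluates it for a few detector capacitances. The capacitance values and the helper name enc_electrons are illustrative assumptions, not figures from the paper.

```python
# Hedged sketch: evaluate the reported noise parameterisation ENC = 1800 + 41 e-/pF
# for a few assumed detector capacitances (the capacitances are illustrative only).

ENC_CONSTANT_E = 1800.0    # capacitance-independent noise term, electrons (from the abstract)
ENC_SLOPE_E_PER_PF = 41.0  # capacitance-dependent slope, electrons per pF (from the abstract)

def enc_electrons(detector_capacitance_pf: float) -> float:
    """Equivalent noise charge in electrons for a given input capacitance."""
    return ENC_CONSTANT_E + ENC_SLOPE_E_PER_PF * detector_capacitance_pf

if __name__ == "__main__":
    for c_pf in (10.0, 30.0, 50.0):  # assumed example capacitances, pF
        print(f"C_det = {c_pf:5.1f} pF -> ENC ≈ {enc_electrons(c_pf):6.0f} e-")
```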