
A Robust Positioning Method Based on Semantic Segmentation Network for DIE Chips

Journal of Manufacturing Processes (2024)

Hunan University

Abstract
The accurate and fast positioning of DIE chips is crucial for conducting high- and low-temperature online performance tests during chip manufacturing. The original online performance test equipment for DIE chips cannot accurately locate contaminated chips, resulting in serious missed and false detections. To address this issue, this paper proposes a robust positioning method for DIE chips based on a semantic segmentation network. Unet is used to segment the two circular electrodes of the DIE chip; image processing techniques such as contour extraction and circle fitting are then employed to extract the precise location of the chip. To satisfy the requirements of fast and precise positioning, improvements such as modified convolutional blocks, compressed channels in the convolutional layers, and a weighted combination of the Dice and cross-entropy loss functions are adopted, yielding an improved Unet (named IUnet). Additionally, to address the shortage of samples for the low-temperature performance test, model-based transfer learning is employed to improve the segmentation accuracy of IUnet. The experimental results show that the proposed method achieves better positioning performance and robustness than mainstream methods, including traditional methods (template matching and Hough circle detection), target detection methods (SSD, YOLOv7, and Faster R-CNN), and semantic segmentation methods (SegNet, DeepLabV3+, Unet, Res-Unet++, and Swin-Unet). While achieving a positioning beat of 0.5 s per chip, the positioning accuracy rate of DIE chips reaches 100%, satisfying the positioning requirements of the enterprise.
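The abstract mentions that IUnet is trained with a weighted combination of the Dice and cross-entropy loss functions. The exact weights and formulation are not given in the abstract; the following is a minimal sketch of such a combined loss for a binary segmentation map, with hypothetical weights `w_dice` and `w_ce` chosen for illustration.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss over a binary probability map: 1 - 2|P∩T| / (|P|+|T|).
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def cross_entropy_loss(pred, target, eps=1e-7):
    # Binary cross-entropy averaged over pixels; clip to avoid log(0).
    p = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))))

def combined_loss(pred, target, w_dice=0.5, w_ce=0.5):
    # Weighted sum of the two terms; the paper's actual weights are not
    # stated in the abstract, so these values are placeholders.
    return w_dice * dice_loss(pred, target) + w_ce * cross_entropy_loss(pred, target)

# A perfect prediction drives both terms toward zero.
mask = np.array([[1.0, 0.0], [0.0, 1.0]])
print(combined_loss(mask, mask))
```

Combining the two terms is a common practice in segmentation: cross-entropy provides smooth per-pixel gradients, while the Dice term directly counteracts the foreground/background imbalance typical of small electrode regions.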
Key words
DIE chip, Target positioning, Machine vision, Semantic segmentation