GPU-Accelerated Deep Learning Models for High-Volume Signal Processing in VLSI Testing
Keywords:
VLSI testing, deep learning, GPU acceleration, CNN, LSTM, signal processing, defect detection, high-throughput testing.
Abstract
The rapid advancement of Very-Large-Scale Integration (VLSI) technology has created significant testing challenges due to the sheer volume and complexity of the signal data produced by modern integrated circuits. In this paper, we propose a high-throughput, GPU-accelerated deep learning framework to improve both the efficiency of signal processing and the accuracy of defect detection in VLSI testing. The proposed methodology combines a convolutional neural network (CNN) with a long short-term memory (LSTM) network to extract spatial and temporal features from scan test responses. The models run on CUDA-enabled GPUs to support real-time inference and scalable parallel processing. The framework was evaluated on both synthetic datasets and realistic industrial scan data. Experimental results show that the GPU-accelerated CNN-LSTM model achieves considerably lower inference latency and higher classification accuracy than conventional CPU-only baselines and standalone LSTM models. In particular, the proposed system attains more than a 6x processing speedup and a 6-8% improvement in detection accuracy, with negligible communication and memory overhead. These results demonstrate the industrial applicability of deep learning models for high-volume signal analysis in VLSI test flows, including integration with established inline automatic test equipment (ATE) and diagnostic systems. This work lays a foundation for real-time, scalable, and intelligent test automation in next-generation semiconductor manufacturing.
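To make the hybrid architecture concrete, the sketch below shows a minimal CNN-LSTM classifier for scan test responses in PyTorch, placed on a CUDA GPU for inference. It is an illustrative sketch only: the class name ScanCNNLSTM, the layer sizes, the input shape (batch, channels, scan length), and the class count are assumptions for the example, not the exact configuration used in the paper.

import torch
import torch.nn as nn

class ScanCNNLSTM(nn.Module):
    """Illustrative CNN-LSTM hybrid: CNN extracts spatial features from
    scan response bit streams, LSTM models temporal structure across cycles.
    Hyperparameters are assumed for demonstration, not taken from the paper."""

    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        # 1-D convolutions capture local spatial patterns along the scan chain.
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM consumes the CNN feature sequence to model temporal dependencies.
        self.lstm = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, scan_length)
        feats = self.cnn(x)             # (batch, 64, scan_length // 4)
        feats = feats.permute(0, 2, 1)  # (batch, time, features) for the LSTM
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])       # logits per defect class

# Run batched inference on a CUDA GPU when available, else fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = ScanCNNLSTM().to(device).eval()
with torch.no_grad():
    batch = torch.randn(256, 1, 1024, device=device)  # synthetic scan responses
    logits = model(batch)
print(logits.shape)  # torch.Size([256, 2])

Feeding the permuted CNN feature map into the LSTM, then classifying from the final hidden state, is one common way to couple the two stages; attention pooling over all hidden states is a frequently used alternative when defects may appear anywhere in the response sequence.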