Intelligent Design Automation in VLSI Systems: Leveraging AI for Future Electronics Applications
Keywords:
AI for EDA, VLSI Design Automation, Deep Reinforcement Learning, Graph Neural Networks, Power Optimization, Layout Synthesis, Next-Gen Electronics

Abstract
The complexity of modern VLSI (Very Large-Scale Integration) systems continues to grow with rising transistor counts, tighter design rules, and the demand for faster, more power-efficient chips. Traditional Electronic Design Automation (EDA) tools have struggled to keep pace with the challenges posed by shrinking technology nodes such as 7 nm and 5 nm and the need to satisfy multiple competing design constraints. This work introduces and validates an AI-enhanced intelligent design automation framework that applies state-of-the-art machine learning algorithms to improve design efficiency, layout quality, and power-performance trade-offs. The framework integrates Deep Reinforcement Learning (DRL), Graph Neural Networks (GNNs), and Transfer Learning across multiple stages of the VLSI design flow, including logic synthesis, floorplanning, placement, routing, and timing analysis. Evaluations were performed on industry-relevant benchmark circuits, simulated using OpenROAD- and TensorFlow-based modules. Key performance indicators included design time, power-delay product (PDP), wirelength, and thermal violation rate. Experimental results show that the proposed framework reduces overall design time by up to 27% and improves PDP by approximately 19% compared with traditional EDA methods. Improvements were also observed in congestion management, wirelength, and thermal-constraint handling at both the 7 nm and 5 nm nodes. These findings demonstrate the viability of AI-driven approaches for redefining EDA frameworks and position the proposed framework as a practical means of addressing next-generation electronics, including edge-AI hardware and increasingly complex SoCs.
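For readers unfamiliar with the PDP metric reported above, the following LaTeX fragment states its conventional definition as the product of average power dissipation and critical-path delay; this is a standard textbook formulation given for clarity, not the paper's specific evaluation formula.

% Standard definition of the power-delay product (PDP); illustrative only.
\begin{equation}
  \mathrm{PDP} = P_{\mathrm{avg}} \times t_{d}
\end{equation}
% where $P_{\mathrm{avg}}$ denotes the average power dissipation of the circuit
% and $t_{d}$ denotes its critical-path (propagation) delay.

A lower PDP therefore indicates a more favorable energy-delay trade-off, which is the sense in which the reported ~19% improvement should be read.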