Research Team Led by Prof. Byungsub Kim Solves Key Challenges in Generative Foundation Models for Analog Layout Automation

A research team led by Prof. Byungsub Kim from the Department of Electrical Engineering at POSTECH has published a paper titled “Foundation Model for Analog Layout Generation with Self-Supervised Learning” in IEEE Transactions on Circuits and Systems I (TCAS-I), one of the most prestigious journals in semiconductor circuit design. The team successfully addressed key challenges in developing foundation models for automated analog layout generation.

The research team consists of Soonkyu Jung (integrated M.S./Ph.D. student), Wonjun Choi (integrated M.S./Ph.D. student), Junwoong Choi (Ph.D. candidate), Anik Biswas (alumnus, currently at Samsung Electronics), and Prof. Byungsub Kim (advisor).

Foundation models—widely known through technologies such as ChatGPT—are a core artificial intelligence technology used for tasks such as text and image generation. These models are pre-trained on large-scale datasets and later fine-tuned for a wide range of applications. Because their performance improves systematically with larger datasets, they are increasingly regarded as an important industrial AI platform.

Researchers have long attempted to apply foundation models to semiconductor design automation, but significant challenges have remained unresolved. In particular, analog layout design is considered one of the most difficult processes to automate in semiconductor design. Even today, skilled engineers must manually draw numerous geometric patterns according to circuit schematics and design rules, making the process extremely time-consuming and costly.

To develop a foundation model for analog layout generation, the research team addressed two major challenges:

  1. Limited availability of high-quality data, since semiconductor design data is typically treated as confidential corporate information, and labeling such data requires extensive manual effort.

  2. Extreme diversity of analog layout patterns, which makes it difficult for artificial intelligence models to learn how to generate them effectively.

To overcome these issues, the researchers proposed a novel self-supervised learning approach for analog layout generation by decomposing analog layouts into smaller elements and reconstructing them during training. Using this method, the team generated approximately 320,000 training samples from six design datasets without manual labeling. After pretraining and fine-tuning, the AI model successfully performed five different layout generation tasks.
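The decompose-and-reconstruct idea behind the self-supervised labeling can be illustrated with a toy sketch. Everything here is illustrative and not the paper's actual formulation: the rectangle-based layout representation, the function name, and the masking scheme are all assumptions. The point is only that hiding elements of an existing layout and asking the model to restore them yields (input, target) training pairs with no manual labeling.

```python
import itertools
import random

def make_reconstruction_samples(layout, max_mask=2, seed=0):
    """Decompose a layout into (partial_layout, masked_elements) pairs.

    Each sample hides a small subset of elements; the self-supervised
    objective is to reconstruct the hidden elements from the remaining
    context -- no manual labels required. (Hypothetical sketch, not the
    paper's actual decomposition.)
    """
    rng = random.Random(seed)
    samples = []
    for k in range(1, max_mask + 1):
        for masked in itertools.combinations(range(len(layout)), k):
            context = [e for i, e in enumerate(layout) if i not in masked]
            target = [layout[i] for i in masked]
            samples.append((context, target))
    rng.shuffle(samples)
    return samples

# Toy layout: each element is a (layer, x, y, w, h) rectangle.
layout = [
    ("metal1", 0, 0, 4, 1),
    ("metal1", 0, 2, 4, 1),
    ("poly",   1, 0, 1, 3),
    ("poly",   3, 0, 1, 3),
]

samples = make_reconstruction_samples(layout)
print(len(samples))  # C(4,1) + C(4,2) = 4 + 6 = 10 samples
```

Because every subset of hidden elements produces a distinct training pair, even a modest set of source layouts fans out into a much larger training corpus, which is consistent with the team deriving roughly 320,000 samples from only six design datasets.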

Through this research, the team demonstrated the generalization and adaptability of foundation models, two key characteristics of this emerging AI paradigm. The researchers plan to scale the dataset further and continue developing a full foundation-model framework for automated analog layout generation.

December 2025
