Circuit Transformer: End-to-end Circuit Design by Predicting the Next Gate
CoRR (2024)
Abstract
Language, a prominent human ability to express through sequential symbols,
has been computationally mastered by recent advances in large language models
(LLMs). By predicting the next word recurrently with huge neural models, LLMs
have shown unprecedented capabilities in understanding and reasoning. Circuit,
as the "language" of electronic design, specifies the functionality of an
electronic device by cascade connections of logic gates. Then, can circuits
also be mastered by a sufficiently large "circuit model", which can conquer
electronic design tasks by simply predicting the next logic gate? In this work,
we take the first step to explore such possibilities. Two primary barriers
impede the straightforward application of LLMs to circuits: their complex,
non-sequential structure, and the intolerance of hallucination due to strict
constraints (e.g., equivalence). For the first barrier, we encode a circuit as
a memory-less, depth-first traversal trajectory, which allows Transformer-based
neural models to better leverage its structural information, and predict the
next gate on the trajectory as a circuit model. For the second barrier, we
introduce an equivalence-preserving decoding process, which ensures that every
token in the generated trajectory adheres to the specified equivalence
constraints. Moreover, the circuit model can also be regarded as a stochastic
policy to tackle optimization-oriented circuit design tasks. Experimentally, we
trained a Transformer-based model of 88M parameters, named "Circuit
Transformer", which demonstrates impressive performance in end-to-end logic
synthesis. With Monte-Carlo tree search, Circuit Transformer significantly
improves over resyn2 while retaining strict equivalence, showcasing the
potential of generative AI in conquering electronic design challenges.
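
To make the trajectory encoding concrete, below is a minimal Python sketch of serializing a small gate-level circuit into a depth-first token sequence. The names (Gate, encode_dfs) and the exact token vocabulary are illustrative assumptions, not the paper's actual format; the point is that a depth-first traversal yields a sequence a Transformer can model by next-token prediction, and that re-emitting shared subcircuits at each visit keeps the encoding "memory-less" (no token needs to refer back to an earlier position).

from dataclasses import dataclass, field
from typing import List

@dataclass
class Gate:
    kind: str                       # "AND", "NOT", or "INPUT"
    name: str = ""                  # set only for primary inputs
    fanins: List["Gate"] = field(default_factory=list)

def encode_dfs(root: Gate) -> List[str]:
    """Serialize the circuit rooted at `root` into a token trajectory
    by depth-first traversal, emitting one token per visited gate.
    Shared subcircuits are simply revisited and re-emitted."""
    tokens: List[str] = []
    def visit(g: Gate) -> None:
        if g.kind == "INPUT":
            tokens.append(g.name)   # leaf token: a primary input
        else:
            tokens.append(g.kind)   # internal token: the gate type
            for f in g.fanins:      # recurse into fanins in a fixed order
                visit(f)
    visit(root)
    return tokens

# Example: f = AND(NOT(a), b)
a = Gate("INPUT", "a")
b = Gate("INPUT", "b")
f = Gate("AND", fanins=[Gate("NOT", fanins=[a]), b])
print(encode_dfs(f))    # ['AND', 'NOT', 'a', 'b']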
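The equivalence-preserving decoding described in the abstract can be viewed as constrained next-token sampling: at each step, candidate gates that would make the partial circuit inconsistent with the target function are masked out before sampling. The sketch below shows only that masking mechanism; masked_sample is a hypothetical helper, and the boolean mask is assumed to come from an external equivalence check, which the sketch does not implement.

import numpy as np

def masked_sample(logits: np.ndarray, valid: np.ndarray) -> int:
    """Sample the next gate token, forbidding any candidate the
    (assumed) equivalence checker marks invalid. Assumes at least
    one entry of `valid` is True."""
    masked = np.where(valid, logits, -np.inf)   # rule out invalid tokens
    probs = np.exp(masked - masked.max())       # stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

# Toy usage: 4-token vocabulary; tokens 1 and 3 would break equivalence
logits = np.array([1.2, 0.7, -0.3, 2.1])
valid = np.array([True, False, True, False])
print(masked_sample(logits, valid))   # always returns 0 or 2

Because every sampled token respects the mask, the completed trajectory cannot violate the equivalence constraint, which is how the decoding process rules out hallucinated, non-equivalent circuits by construction.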