Layout decomposition for triple patterning lithography

IEEE Transactions on CAD of Integrated Circuits and Systems (2015)

Cited by 186
Abstract
As minimum feature size and pitch spacing continue to decrease, triple patterning lithography (TPL) is a possible extension of 193nm lithography along the paradigm of double patterning lithography (DPL). However, TPL layout decomposition has received very little study. In this paper, we show that TPL layout decomposition is a more difficult problem than that for DPL. We then propose a general integer linear programming (ILP) formulation for TPL layout decomposition that simultaneously minimizes the conflict and stitch counts. Since ILP scales poorly, we propose three acceleration techniques that do not sacrifice solution quality: independent component computation, layout graph simplification, and bridge computation. For very dense layouts, even with these speedup techniques, the ILP formulation may still be too slow. We therefore propose a novel vector programming formulation for TPL decomposition and solve it through an effective semidefinite programming (SDP) approximation. Experimental results show that ILP with the acceleration techniques reduces runtime by 82% compared to the baseline ILP. The SDP-based algorithm reduces runtime by a further 42%, with a tradeoff of 7% fewer stitches but 9% more conflicts. For very dense layouts, the SDP-based algorithm achieves a 140× speedup even over the accelerated ILP.
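At its core, TPL layout decomposition assigns each layout feature to one of three masks so that features closer than the minimum coloring distance (conflict edges) land on different masks. The following is a minimal illustrative sketch, not the paper's ILP or SDP solver: it models the layout as a conflict graph and brute-forces the 3-mask assignment with the fewest conflicts on toy instances. All names here (`brute_force_tpl`, `tpl_conflicts`) are hypothetical.

```python
from itertools import product

def tpl_conflicts(edges, coloring):
    """Count conflict edges whose two features share the same mask."""
    return sum(1 for u, v in edges if coloring[u] == coloring[v])

def brute_force_tpl(n, edges):
    """Exhaustively find a 3-mask assignment minimizing conflicts.

    A stand-in for the paper's ILP/SDP formulations, feasible only
    for tiny conflict graphs (3**n assignments).
    """
    best, best_cost = None, float("inf")
    for coloring in product(range(3), repeat=n):
        cost = tpl_conflicts(edges, coloring)
        if cost < best_cost:
            best, best_cost = coloring, cost
    return best, best_cost

# A triangle (K3) is 3-colorable, so it decomposes with zero conflicts;
# four mutually conflicting features (K4) force at least one conflict,
# which is one reason TPL decomposition is harder than the 2-mask DPL case.
k3 = [(0, 1), (0, 2), (1, 2)]
k4 = k3 + [(0, 3), (1, 3), (2, 3)]
```

This also illustrates why TPL decomposition is harder than DPL: DPL decomposability reduces to bipartiteness (checkable in linear time), whereas deciding 3-colorability is NP-complete, motivating the ILP formulation and the SDP relaxation of the abstract.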
Keywords
integer linear programming formulation, double patterning lithography, approximation theory, triple patterning lithography, ILP formulation, acceleration technique, pitch spacing, semidefinite programming approximation, layout decomposition, integer programming, DPL, stitch number, baseline ILP, linear programming, vector programming formulation, lithography, effective semidefinite programming, TPL, TPL decomposition, accelerated ILP, layout graph simplification, TPL layout decomposition, dense layout, bridge computation, vectors