Feb 28 2013

EUV and eBeam at SPIE ADV Litho 2013

Published by under Uncategorized

While mainstream manufacturing flows still focus on optical lithography, eBeam and EUV continue to make progress, waiting for their chance in the main flow of the fab line.

EUV is still working on power and throughput. A challenge for the industry is consolidation on both sides of advanced-node development. With ASML completing its acquisition of Cymer (shareholder approval) earlier this month, only a few companies, such as Xtreme Technologies and Gigaphoton, remain as independent developers and suppliers of EUV light sources to the industry. With the loss of one of the large scanner companies from a worldwide list of fewer than five scanner manufacturers, there are not many target users for the technology once it is developed. Additionally, the scanner companies are being squeezed by an end-customer list of only about a half-dozen fabrication companies that will be moving to the new sub-10nm processes. The new forecast for EUV is applicability at the sub-10nm (target 7nm) node.

With only a couple of end customers and very few OEMs, the cost and pace of technology development for high-power EUV sources are at risk. This year Gigaphoton showed a 20W net power output over a 168hr operating run for their new source. This output is being targeted for use with both 300mm and 450mm flows, and this dual track brings additional dilution to the development efforts. A new challenge facing the technology is that the 13.5nm light sources may not appear until the processes are already sub-wavelength, further impacting wafer throughput due to the need to introduce sub-wavelength imaging solutions rather than just “print and go” flows. These sub-wavelength solutions include working with Directed Self Assembly (DSA), OPC and multi-patterning, similar to current DUV litho flows.

Similar to EUV, eBeam is looking for its place in the flow. In an update and discussion with Aki Fujimura of D2S and the eBeam Initiative, the fine-geometry writing tool is being used for making optical masks for traditional lithography, creating masks for nano-imprint masters, and direct write on wafers. The last area faces the challenge of creating a high-throughput technology in a time frame that will still allow enough fabrication customers at the advanced nodes to justify the development by buying the machines. Companies like MultiBeam and IMS are working on high-throughput writing tools with hundreds of beams at once, allowing for reduced writing times and heating of both 300mm and 450mm wafers.

Unlike the EUV light source developers, the eBeam marketplace has a large, stable target base of mask manufacturing in addition to the wafer clients. Mask writing comprises the majority of eBeam activity in industry. The rise of nano-imprint technology, not only for semiconductors but for the high-volume disk drive industry and other precision industries, allows for a growth market independent of the migration to sub-20nm semiconductor process flows.

The masking industry supply chain is facing a bigger challenge than just imaging the geometries. The high-profile imaging challenge is felt only by the scanner and resist companies. The rest of the supply chain is challenged by compression of the industry, which may result in just one or two technologies and suppliers being adopted and no one left to fund the next-generation development.

No responses yet

Jan 28 2013

Process Nodes Step Back at CES 2013

This year’s CES show in Las Vegas had a major change in the semiconductor components driving the CE products. In the past, the show was the spotlight for the latest systems with the most advanced process nodes making their entry. This year there was a change – the focus was on application diversity, not just speed and power.

As more and more specifications for communication and data interchange are based on timing and performance windows (minimum and maximum limits), the need to move to new process nodes to improve the component and lower prices has stalled. The cost of the new processes, while reducing die size, has disproportionately increased mask tooling and test costs, so for small die and mid-range volumes, the cost benefits are not being realized. This, combined with foundry problems and availability issues at 32nm, 28nm and 22nm, resulted in a lot of 40nm, 55nm and 65nm technology on the show floor.
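
The stalled cost benefit can be seen with a little arithmetic: once mask tooling and test NRE are amortized over mid-range volumes, the newer node can actually cost more per die. A minimal sketch, with purely hypothetical wafer, NRE and volume numbers chosen only to illustrate the crossover:

```python
# Illustrative per-die cost model: amortizing mask/NRE over volume.
# All numbers are hypothetical, chosen only to show the crossover effect.

def cost_per_die(wafer_cost, dies_per_wafer, nre_cost, total_volume):
    """Unit cost = silicon cost per die + amortized NRE (masks, test development)."""
    return wafer_cost / dies_per_wafer + nre_cost / total_volume

# Mature 65nm node: cheap masks, fewer dies per wafer.
mature = cost_per_die(wafer_cost=3_000, dies_per_wafer=500,
                      nre_cost=1_000_000, total_volume=2_000_000)

# Leading 28nm node: the die shrinks, doubling dies per wafer,
# but the mask/test NRE is far higher.
leading = cost_per_die(wafer_cost=6_000, dies_per_wafer=1_000,
                       nre_cost=8_000_000, total_volume=2_000_000)

print(f"65nm: ${mature:.2f}/die, 28nm: ${leading:.2f}/die")
# -> 65nm: $6.50/die, 28nm: $10.00/die
```

With these assumed numbers, the older node wins at two million units; only at much higher volumes does the shrink pay for its NRE.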

In order to address the power and use targets in these larger nodes, chips were redesigned to support software and application-level management of power-down modes, more options for power-down modes, and finer granularity on speed control of the parts. These parts were in new cell phones and tablets as well as communications (networking), storage, and entertainment products. The plethora of Bluetooth-enabled devices (the 65 we saw where we knew the parts inside) had 55nm silicon solutions for the new 2013 products. Most of the automotive electronics were 55nm or 65nm. The imaging support and audio products were on the order of 130nm and 90nm solutions.

The 2013 CES has now shown the split in the process chase – processors and memory are on the cutting-edge nodes, and the rest of CE is on high-volume, low-cost, large-IP-library processes. The days of chasing the front of the process curve for CE are now over – application and functional diversity have once again taken over – it is time to re-spin the 1970′s again.

Dec 20 2012

Chenming Hu’s 65th B-day – FinFETs, SOI, BSIM and Art

UC Berkeley held a symposium celebrating the 65th birthday of Prof. Chenming Hu, titled Electrons to Electronics. The well-attended event was sponsored by the CITRIS program, the Center for Energy Efficient Electronics Science and the College of Engineering.

The speakers were UCB alumni who were Dr. Hu’s students and are now industry luminaries in their own right. While the discussion traveled into the depths of technology, the overall event was a festive one of celebration. The technology themes were great – EDA, FinFETs, SOI, III-V materials, BSIM models, energy efficiency, and the future of sub-20nm semiconductor technology. The commonality of these themes is the pioneering work done by Dr. Hu since the 1970′s, when these ideas and concepts were introduced; they are now the basis of the industry.

The repeated theme was the number of citations of the work, papers and inventions that were created by Dr Hu and his students. The reality is that the entire industry at 20nm and forward is embracing his developments, and a great deal of the device work to date that enabled staying on the Moore’s law curve was due to his activities.

Surrounding this deep technical discussion was the genuine enjoyment and excitement of working with him, shown and discussed in opening stories before each presentation by all the speakers. This reflects his complete role as a Renaissance man – inventor, scientist, artist, and humanist. His devotion to and caring for his students was evident from his one-on-one interaction with students and co-workers at the event.

He was joined at the event by his family, and he took the opportunity to display his other passion – art. The lobby was adorned with an exhibition of paintings created by Dr. Hu and his children, who share his passion for creative expression. One of his former students, now a senior technologist at Intel, demonstrated this strength in STEAM (science, technology, engineering, art and mathematics) by performing several songs from her recent self-produced album for the audience.

The event was a true celebration of how creativity, imagination and avid curiosity can change the world in a positive manner.

Nov 18 2012

The Sun Never Sets on the British Empire – Again

Over the centuries, the little island that is the United Kingdom has exerted a strong influence over the rest of the world. This influence was cultural, economic, religious, and technological. Just as this influence grew over long periods of time, it also receded at times, leaving Britain a minor participant in world events. Once again the British have risen to the forefront, driving an influence that is reshaping cultural patterns across the world.

Unlike their attempt to get the whole world to adopt the concept of warm beer, they have been successful as the driver and core technology behind the mobile and embedded computing revolution. Rather than dominate through manufacturing excellence, they utilized their prime resource – innovation and excellence in engineering. This engineering was used to create an IP licensing model for core processors and graphics processors along with RF technology.

This marketplace is dominated by only a few major players, two of which – ARM and Imagination Technologies – are respectively the #1 and #3 players in the IP licensing market. Their cores have long been used in automotive, communication and industrial embedded applications in the western world. The rapid rise of the mobile phone and mobile devices worldwide is the new driver for their applications. With shipping levels from their licensees in the billions of units and targeting the billions-per-year level, the marketplace for the IP is now truly global.

With the advent of these products and the soon-to-be-omnipresent Internet of Things (IoT), with its sensors and M2M communication, the touch of the hand of England will reach every corner of the earth. Wireless and IoT are bringing British technology, as manufactured by local and foreign groups, to all countries and locations – independent of religious, political and sociological structure. This time the worldwide British invasion is actually being embraced around the world, as it offers the ubiquity of a communication and information technology that is compatible with local cultural and societal models. Historically, British control was not always so palatable. This is also not to say that it is being universally praised, as some groups and societies are not open to the speed of change that is enabled by instant point-to-point communication.

The RF world has long been dominated by the UK. With ARM supplying core processors and peripherals, and Imagination Technologies a dominant supplier of graphics and communications processors, they are well positioned not only to expand this dominance for England but, for the first time, to hold the position, as the marketplace has a long life cycle for legacy technology. These two areas – IP and RF – show the impact that a small geographic area can have on the world when it properly uses its primary resource – the people.

So in the long history of the world, we once again have a period of British domination that is influencing the lives of the majority of the world. The spread of their technology has been adopted by the globe and is driving advancement and job growth worldwide, and it appears to be a mostly beneficial one. As long as we can keep their spread of influence focused on their technology and contain the proliferation of the influence of their cuisine, we will continue to benefit.

One response so far

Oct 25 2012

Semiconductors & EDA Lose an Icon – the Passing of Dr. Ivan Pesic of Silvaco

This weekend, the EDA and semiconductor worlds lost a strong advocate – Dr. Ivan Pesic, who founded and ran Silvaco Inc. Started in the early 1980′s, Silvaco was able to capture the majority of the world’s TCAD market and a large portion of the custom analog design marketplace for simulation, modeling, layout, and capture.

While there are many stories about Ivan and his company, there are a few simple realities. I had the opportunity to work with and know Ivan since I started my consulting practice in 1985, both utilizing his companies’ products & services and jointly supporting semiconductor clients. During that time, there were a few guidelines he used to run his business and support his clients. These were:

1. If you make a commitment, you are expected to honor it. This goes for schedules, scope of work, and costs.

2. Semiconductor companies are not an ATM for the EDA industry – they are there to make chips, and the tools should help them, NOT make their job harder.

3. If something is broken or wrong, fix it. Don’t justify the error.

4. If you develop and create something – software, IP, chips, workflows, etc. – you have the right to protect it from those who want to skip the R&D and just take it.

These concepts made for a very loyal long-term client base and strong support from small design groups working on new technologies. They appreciated that they were seen as a key part of the business (the client side) and not just a bother to support & development teams trying to run their own agenda. As a result, many on the outside saw “different” behavior in the way he ran the company and addressed product updates vs. the quarterly stock-market-driven firms.

While there is no denying he had a somewhat volatile personality and was a challenge to work with on some days, this was due to his passionate and unwavering belief that getting the customer to their goal was job #1.

To achieve this goal, he led every aspect of his company – from R&D, to product release schedules and features, to the product marketing and his infamous billboards, to the gardening and running the jackhammer for building renovation. While many in the EDA industry watched in disbelief, he found and held long-term clients who embraced the simple, directed goal of the company – do what is needed to help them build the best chip in the shortest amount of time.

A strong advocate for IP protection in all forms, and for providing a reasonable tool at a reasonable price, his passing is a major loss to the analog and EDA communities. He will be missed by many – clients, the industry, the many universities he supported, and mostly by those of us who were lucky enough to call him friend. To continue his work and ensure his customers are able to reach their goals, Ivan’s son, Iliya Pesic, is taking over for his dad as Chairman of Silvaco, and the business will continue to honor those same values.

Sep 22 2012

D2S launches Ebeam Mask Data Prep System

September 2012 – In a discussion with Aki Fujimura of D2S, we reviewed their product announcement of TrueMask MDP, a full-chip, model-based Mask Data Preparation system.  The software runs on a customized HPC compute platform called the D2S Computational Design Platform, which is based on both multi-core CPU and GPU computing engines and utilizes a standard server rack configuration to address the issue of performing mask data prep on a full chip in a single day.  The scalable system targets being as fast as the mask writer and can produce output of 80B shots/day up to 300B shots/day.  This will address processing a typical SOC of 1600mm2 with a shot density of 50 shots/um2 in 24 hours.
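
The stated targets are self-consistent, and the required sustained shot rate falls out of the same numbers. A quick check of the arithmetic from the figures above:

```python
# Checking the throughput target from the article: a 1,600 mm^2 SOC
# at a shot density of 50 shots/um^2, processed in one 24-hour day.

UM2_PER_MM2 = 1_000_000           # 1 mm^2 = 10^6 um^2

chip_area_mm2 = 1_600
shot_density = 50                 # shots per um^2

total_shots = chip_area_mm2 * UM2_PER_MM2 * shot_density
print(f"{total_shots / 1e9:.0f}B shots")          # -> 80B shots

# To finish in a single day, the MDP system must sustain:
shots_per_second = total_shots / (24 * 3600)
print(f"{shots_per_second / 1e6:.2f}M shots/second")   # -> 0.93M shots/second
```

So the quoted 80B-shots/day floor corresponds to processing nearly a million shots every second, sustained around the clock.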

The challenge in the system is not just throughput but also accuracy and the ability to minimize write time by employing shot reduction.  The system has been beta tested by Samsung (see paper #8522-04 at Bacus 2012 by Byung-Gook Kim) on memory patterns and the advanced curvilinear shapes required by multi-patterning systems.  These designs suffer from a systematically created, but non-correctable, random fluctuation in printed critical dimension width when created with conventional fracturing.  The variation, while tolerable at 120nm masking dimensions, becomes unmanageably large at the 60nm feature size used for 20nm processes.  The model-based methodology in the TrueMask system minimizes these variations to manageable levels.

The TrueModel engine does not provide a great deal of improvement on simple rectilinear shapes that can conform to rule-based design and checking.  It starts to bring advantages in speed and accuracy when dealing with complex shapes that usually require a table-lookup model, or overlapping shots and curvilinear shapes that cannot be described by rules or simple lookup tables (see figure).  The physics-based model addresses multiple shot sizes and is also compatible with the eBeam Initiative’s shape-based shot methodology.

D2S TrueMask Model Based MDP

The platform is targeted at large multi-run designs such as standard products and firmware-programmable platform SOCs.  The benchmarks in the test environment (100TFLOP platform configuration) provided a shot reduction of over 50% and a mean error < 0.03nm.  The modeling system is supported by an ecosystem that includes DNP, the eBeam Initiative, HOYA, JEOL, KLA-Tencor, and NuFlare.

Aug 27 2012

Hogan and Bose talk about SOC trends

August 2012 – At the NIT Alumni Association dinner in August, Jim Hogan of Vista Ventures and Ajoy Bose of Atrenta talked pretty openly about the shifts and changes in the SOC marketplace and what needs to change to allow the semi guys to hold value and profits. The challenge in the technology world today is that the economics have shifted: the market value and profits are now held in the system and its function – including IT infrastructure – not in the components and creative hardware design any more.

The shift has made application verification and the adaptability of hardware to multiple uses the driver, similar to the market forces that drove the shift from microcontrollers & workstation hardware to industry convergence on general-purpose single- and multicore x86 processing as the dominant platform. This shift for mobile, however, is not driving a single core device, but rather a large set of multiple SOCs addressing the various segments of the mobile device marketplace, including graphics, central compute, wireless & networking, and memory/storage interface. Figure 1 shows the major subsystems in these devices and the technology migration that is driving them.

Hogan - SOC Technology Roadmap

The table shows the complexity of process knowledge that is needed for the various SOCs; as a result, the whole system platform is no longer dominated by the semiconductor vendors, but by the system architect. The commonality between these two technologies is the software emulation and verification platform that is needed to check that the semiconductor device and the larger application platform (hardware system) are working properly prior to committing the design to manufacturing. The rising cost, in both schedule turnaround due to extended lithography and manufacturing steps as well as direct expense for the use of foundries and sub-contract manufacturers, requires that the hardware solutions not only work the first time out, but be fully functional in all parametric aspects.

In order to ensure these systems will work, the verification function has to move up from just covering in-chip path timing to full SOC SDK validation and multi-chip software application validation. While not a single tool, the verification platforms must interoperate to ensure that functionality at the lower levels connects with the testing environment at the higher levels. This opportunity allows new companies to enter the verification space, as there are multiple languages and application verticals that need different tool optimizations.

This trend is driving the embedded system space to diverge from a single unified market to a complex set of independent vertical markets, even in the sub category of mobile devices, that need specialized tools, technology and SOC solutions around the common hardware cores. The next wave of hardware is the dominance of the platform design at both the board and SOC level, as product differentiation is in the application software and the infrastructure to support that software on a global market basis.

Jul 24 2012

Semicon2012 – EUV pushes out

At this year’s Semicon in San Francisco, the focus on new items was the inclusion of MEMS, LEDs, sensors and the start of the 450mm wafer movement, but EUV was still the “tech of the future”. While progress was made on the light sources for the imaging system, the challenge and costs of EUV blanks, masks, and overall throughput were still not at implementable levels. It is now a sub-14nm technology.

The solution space for the 2Xnm and 1Xnm nodes is to push immersion litho with double, triple and quad patterning, and in the case of CEA-Leti, direct-write eBeam imaging for prototypes. The discussion did not extend to the inclusion of eBeam Initiative-style shape-based shots versus just shooting circles as is traditionally done. Some of the development from other vendors includes the use of polymer self-assembly on 1-2 layers as a new step. This option has a different cleaning step from traditional flows.

The litho issue was not a challenge for the MEMS and LED worlds, which are process driven rather than litho driven. The show had some major processing equipment for MEMS back ends, but they are still running older processes that allow them to be patterned with standard 193nm immersion litho. The Leti group and the IMEC groups were both working on new device structures, stacked die, interconnect and novel circuits. These studies are supported by the Alliance for Nanoscale VLSI, which includes Leti as a member.

By contrast, the big US effort is the 450mm fab building and flow being developed at the University at Albany (CNSE Albany), where IBM, Samsung, and Intel are all building the facility to create and test 450mm wafer process equipment and flows.

This year’s show was fairly small, as the sessions dominated the show floor, and the Intersolar show appeared to be the equipment center for the US.

Jun 21 2012

Update on Physical Verification at DAC

June 2012 – At the DAC show this year, we took a look at the direction of Physical Verification for the IP development and custom SOC space. The industry is still plagued with proprietary programming languages for the tools, and this is driving the central problem of rule context and interpretation.

As the design rules have increased in complexity to include optical effects and now design application effects, there is ambiguity in what the rules are trying to cover and how they impact the design flows. These interpretation effects are driving the issue of “it is clean with one verification tool, but not another”. This is a problem, as the major foundries use multiple tools for signoff, and the IP developers typically only support one. As SoCs grow more complex, IP that is acquired as “clean” from individual sources no longer shows “clean” when the blocks are simultaneously placed on the same die and checked with a single tool.

One of the major issues is the “waiver” methodology. Because the design rules admit a range of interpretation rather than being absolute, a “clean” design has several aspects and design data that “false flag” in the tools when using standard runsets. The challenge with modifying the options on these runsets is that the runset no longer conforms to the “signoff” spec from the fabs, so the end result is not “clean”. The workaround is a “waiver” methodology, where the known errors are marked in the tools so they do not continue to flag. The difficulty is that each of the PV vendors uses different flows, and some of the customers have their own legacy methods from before the tools formally supported this feature.
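
The waiver idea itself is simple, even though every vendor implements it differently. A minimal, tool-neutral sketch of the concept (the rule names, cells and coordinates below are invented for illustration; real PV tools use their own waiver formats and matching logic):

```python
# Sketch of a waiver flow: known-good "false flag" violations are
# recorded once, then filtered out of later runs so only new errors
# surface for review. Tool-neutral and purely illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Violation:
    rule: str        # design-rule name, e.g. "M1.S.2" (hypothetical)
    cell: str        # cell where the flag occurred
    location: tuple  # (x, y) coordinate of the error marker

def apply_waivers(violations, waivers):
    """Return only the violations not covered by an approved waiver."""
    return [v for v in violations if v not in waivers]

# Results of a verification run: two flags, one previously reviewed.
run_results = [
    Violation("M1.S.2", "sram_bank", (10.5, 22.0)),
    Violation("V2.EN.1", "pll_core", (3.2, 8.7)),
]

# Waiver database: the SRAM flag was reviewed and approved earlier.
waivers = {Violation("M1.S.2", "sram_bank", (10.5, 22.0))}

remaining = apply_waivers(run_results, waivers)
print(f"{len(remaining)} unwaived violation(s)")   # -> 1 unwaived violation(s)
```

The interoperability problem described above is exactly that each vendor stores and matches these waiver records differently, so a waiver database built for one tool does not transfer to another.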

This issue is aggravated at the new 20nm-and-below nodes, which also have to deal with 3D devices, designed parasitic devices that need to be extracted along with the primary device, and the shift from restricted design rules to prescriptive design rules. This change to “this is how you build a known good device” from “here are the minimums you cannot violate to get a good design” brings up a major issue of application interpretation. Most of these prescriptive rules do not take into account context or application for the devices. As a result, there needs to be a whole level of functional rules, as targeted by the Mentor PERC tool, to support these new designs.

From a performance point of view, the capacity requirements of these sub-40nm designs are taxing the limits of the IT infrastructure and compute needed to return a “same shift” result from a run. The distributed computing engines that are used for the core verification work, however, are not employed in the analysis and data review cycle. The debug and error review are still single-core, single-task, single-memory operations. This is a major schedule impediment for these designs, as the resulting large data object size is not compatible with high-speed interactive operation when used with older versions of layout editors as the graphics platform. The review of this large results data on an interactive basis is one of the few catalysts in recent years for migration away from the Virtuoso platform to layout editors from Synopsys, SpringSoft, Silvaco, and Mentor.

The main direction for these new technologies is to include PV earlier in the flow. It has already been integrated into detailed route tools with correction capabilities, and the target is now moving up through placement to IP library selection and validation. This shift is consistent with the prescriptive design rule methodology and will require a major change in signoff methods for release of the designs.

May 28 2012

New Designs Focus on Devices

In the midst of big chip announcements (Intel’s IvyBridge at 22nm, Nvidia’s Kepler at 7B devices), the discussion has centered on techniques for managing the design of these very large chips. One topic that has not been highlighted, but is apparent in the industry, is the resurgence of basic devices and device-level circuit design.

With the rise of restricted design rules, there is also a rise in restricted device design. This means there is a higher reliance on differentiating circuit function and performance through design architecture and device topology rather than just “throwing gates at it”. These new design blocks are setting new levels of power/performance.

The new circuits are not just logic gates; they are analog blocks and memories also. Flash is shifting from SLC to MLC and now a TLC structure. This requires new sense amp topologies and performance to accompany the new cells. The new analog blocks are allowing RF to be implemented in CMOS on the same die as digital functions.

This rise in device-level design is different from that of the 60′s-90′s, which focused on single blocks with small device counts due to the limitations of device simulation. New high-capacity device simulators and the use of multi-threaded and distributed computing now allow full functions and subsystems to be simulated at the device level, with optimization. These tools are well suited to statistical design methods, as are the design techniques for device-level design. These new tools are one of the fastest growing areas of the EDA community and are forging a tight relationship with the process development and operations groups.
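
As a rough illustration of the statistical design style these simulators enable, the sketch below runs a Monte Carlo yield estimate over an assumed threshold-voltage variation. Every numeric value here is a made-up assumption rather than real process data, and each trial is independent, which is exactly what makes this workload easy to spread across distributed compute:

```python
# Monte Carlo parametric-yield sketch: sample process variation on a
# transistor parameter, count how often a simple spec is met.
# All device numbers are illustrative assumptions, not process data.

import random

random.seed(42)

NOMINAL_VT = 0.45      # nominal threshold voltage, volts (assumed)
SIGMA_VT = 0.02        # 1-sigma process variation, volts (assumed)
VDD = 1.0              # supply voltage, volts
SPEC_MARGIN = 0.50     # required gate overdrive Vdd - Vt (assumed spec)

def sample_passes():
    """One Monte Carlo trial: sample Vt, test the overdrive spec."""
    vt = random.gauss(NOMINAL_VT, SIGMA_VT)
    return (VDD - vt) >= SPEC_MARGIN

trials = 100_000       # independent trials: trivially parallelizable
yield_estimate = sum(sample_passes() for _ in range(trials)) / trials
print(f"estimated parametric yield: {yield_estimate:.1%}")
```

In a real flow, each "trial" would be a full SPICE-level simulation of the block rather than a one-line formula, which is why the distributed compute described above matters.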

The emphasis on device-level, or bottom-up, design methodologies works well up to the macro cell and mega cell level. At this point, the blocks meet top-down design methods that are focused on data, I/O and bus architecture. This combination works well for advanced process technologies; however, it requires a more traditional chip-design skill set than is available from the recent wave of “C and C++ programmers” who are creating chip designs.
