Archive for August, 2009

Aug 26 2009

The Evolution of Engineering Education – by Peter Chatterjee

A discussion with Dean Jim Plummer of Stanford University’s School of Engineering.

At the 2008 IEDM conference, Dean Plummer of Stanford University gave a lunchtime keynote discussing some of the changes to engineering education that he thought were needed to keep the profession current and innovative, and to still provide a valued foundation for people entering a 40+ year career path.  This is of great interest given the ongoing news about a down worldwide economy, with people losing jobs and changing professions in all fields of work, not just engineering, and having to decide which areas of education are open, available and growing.  These decisions have become difficult because education runs on a 1-5 year cycle while the advancement and obsolescence of technology takes place on a 2-4 year cycle.

The discussion focused on the shift to “T-shaped” graduates: people with the traditional technical training and depth (forming the vertical bar of the T) who then enhance that knowledge with “horizontal” education covering applications, business and marketing, innovation and creativity from the arts, and the interaction with societal and environmental issues.  This shift is still part of an elective program, since there is a need for both types of engineer: those with a great deal of technical depth, and those with a shallower but broad technical base plus a breadth of lateral learning.

When asked how this duality between the old and the new focus works for current and new technology, the response was full compatibility.  To paraphrase: the school has about 240 engineering faculty, covering a full gradation of areas of interest, from high-level societal issues and applications, to sector-specific applications, to general engineering, to very detailed, almost “sciences”-like depth.  The faculty run fairly autonomously, securing their own funding and research topics and then attracting like-minded students to those programs.  As such, they follow their own interest path and comfort level based on the engineering programs they were trained in, and the diversity of the program comes from assembling that mix of faculty in one place.

A few of the other innovations that have been brought to the program include: engineering project classes that mimic industry (interdisciplinary teams with engineers, marketing, sociologists and business school members, rather than traditional engineering-only teams); internship opportunities in foreign locations to help students experience the global nature of the engineering supply chain; encouragement of dual majors, or a major in engineering with a minor in the arts (performing arts such as music, theater and dance, and the creative arts); and formal entrepreneurship classes.

Although not all engineering programs nationwide have adopted this direction, several are also trying it out.  Dean Plummer felt that in the long run this program shift will show benefits: students who follow the traditional programs will in fact be more tactically productive straight out of school, but within 3-5 years the specific technologies they learned are obsolete, and their continuing career path depends on the new technologies, interpersonal skills and business knowledge learned on the job.  In comparison, the “T-shaped” students arrive with a more strategic capability right out of school and tend to be promoted to positions of higher responsibility faster.

PC

Aug 14 2009

Flash Memory Summit 2009 – New Memory, Samsung, Unity Semi and HLNand

The final day of the Flash Memory Summit started with a panel on new memory technologies. The common theme across all of the technologies presented was smaller, faster, lower power, and equally or more reliable than NAND flash, targeting the non-commodity (special application) markets that are currently occupied by NAND flash.

Crocus Technologies presented their TAS MRAM design, which is targeted at SRAM and flash applications. Compared to SRAM, their product offers a 25% smaller cell, adds non-volatile capability, and has zero standby current.  Compared to NAND flash, it has a smaller cell and only a 1X area overhead for the control circuitry.  It is currently being built on a 130nm node and can be scaled.  It is targeted at cache memory, data logging, medical instrumentation, casino gaming and industrial control applications.  They are pursuing several business models – selling the standard product ICs, licensing the IP, offering a process technology licensing service, and providing a foundry service.

Unity Semiconductor presented their CMOSx passive R/W crosspoint array memory.  It is based on an ionic oxygen charge movement technology similar to the memristor.  It is a vertical, transistor-less memory cell that supports layer stacking, so it can result in a very small cell size, and it uses a small write current.  The product compares to NAND flash with a size advantage per cell, higher performance (read and write times), and lower power. It is built with a two-step manufacturing flow: standard base CMOS wafers from a commercial foundry or IDM on a larger process geometry (130nm-65nm), followed by a specialized low-temperature BEOL for the memory layers on top, using their own flow at smaller geometries.  The BEOL flow is scalable to 20nm and allows the memory cells to be placed over the control circuitry for a very dense design.  The product is targeting enterprise-class storage applications.  They are pursuing a business model of being an IC supplier along with a licensed JV partner and second-source supplier; Unity and the JV would be owner/operators of a common 300mm BEOL fab facility that uses standard fab equipment. As the designs will support terabit densities via the two-step flow, they are still in the design stage and have yet to finalize packaging and testability/BIST using the outsource service model.  This business model and the two-stage wafer processing are intended to let their inventories be set by demand flow rather than a stocking flow, which they believe will give them price stability and sustained profitability.
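
To put the stacking claim in rough perspective, here is a back-of-the-envelope density estimate for a stacked passive crosspoint array; the half-pitch and layer count below are my own assumed placeholders, not figures from the Unity presentation.

    # Illustrative density estimate for a stacked crosspoint array (assumed numbers, not Unity's).
    # A passive crosspoint cell nominally occupies ~4F^2 per layer; stacking N layers over
    # the CMOS divides the effective footprint per bit by N.
    F_nm = 65            # assumed memory-layer half-pitch (the BEOL is described as scalable to 20nm)
    layers = 4           # assumed number of stacked memory layers

    cell_area_nm2 = 4 * F_nm**2                    # footprint of one cell in one layer
    effective_area_per_bit = cell_area_nm2 / layers

    bits_per_mm2 = 1e12 / effective_area_per_bit   # 1 mm^2 = 1e12 nm^2
    print(f"{bits_per_mm2/1e9:.2f} Gbit per mm^2 (before any multi-level-cell gain)")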

SiDense presented their OTP oxide-breakdown anti-fuse 1T cell.  The memory can be made using standard process equipment and flows – no special processing, no critical poly-Si lithography – and has a predictable programming voltage based on the oxide thickness, not an implant value, for minimized variability. The design can be used in single-ended mode or differential mode; in differential mode it supports a 10ns read access time and temperature stability.  It is scalable to the 32/28nm nodes.  In addition to size and power, it offers the unique advantage of being a high-security memory: even with cross-sectioning analysis, the oxide breakdown in the cell cannot be visually differentiated as either a programmed or a blank cell.  The memory cell is available under the IP licensing model for inclusion in custom designs.
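
As a conceptual illustration of the single-ended versus differential read modes, here is a generic sketch; this is not SiDense's actual sensing circuit, and the resistance and voltage values are assumed.

    # Conceptual sketch of single-ended vs differential anti-fuse readout (assumed values).
    # A programmed (broken-down) oxide conducts; a blank cell does not.
    R_PROGRAMMED = 10e3     # assumed resistance of a programmed anti-fuse, ohms
    R_BLANK      = 1e9      # assumed resistance of an unprogrammed cell, ohms
    V_READ       = 1.0      # assumed read voltage

    def cell_current(programmed: bool) -> float:
        return V_READ / (R_PROGRAMMED if programmed else R_BLANK)

    def read_single_ended(cell: bool, i_threshold: float = 1e-6) -> int:
        # Compare one cell's current against a fixed absolute threshold.
        return 1 if cell_current(cell) > i_threshold else 0

    def read_differential(bit: int) -> int:
        # Store the bit as a complementary pair and compare the two currents;
        # the decision no longer depends on an absolute threshold, which helps
        # with speed and temperature stability.
        true_cell, comp_cell = (bit == 1), (bit == 0)
        return 1 if cell_current(true_cell) > cell_current(comp_cell) else 0

    assert read_single_ended(True) == 1
    assert read_differential(1) == 1 and read_differential(0) == 0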

Grandis displayed their STT-RAM, which is based on their spintronic memory technology first presented in 2002.  Their product is a universal memory replacement that can be used for DRAM, SRAM or flash applications.  The current design offers low-power operation, scalability below 45nm, and a much simpler design and manufacturing process than the first-generation products.  It is targeted at automotive, handset, and combined PC applications where it serves as the central memory for most NV, SRAM cache and high-performance DRAM functions.  They are targeting a business model of products, IP and technology licensing.

The final panelist was Joel Cobern, representing a group of researchers and students from UCSD, on their investigations into new NV storage technologies.

In addition to the panel, Samsung discussed their current SSD products and NAND direction, and HLNand (a MOSAID licensee) discussed their storage-class memory products.

The Samsung discussion opened with the continuing question: is the cost of the litho and process migration worth the return for the flash business?  The answer is still that, in order to meet the price points and densities the customers want, they have to shrink the design.  The shrink also addresses the new design targets of these applications, which are higher speed and lower power.  In the PC application space, the use of flash is growing: 40% of all flash units end up in netbook/notebook PCs, and 10% end up in enterprise-class products.  The business reality is that, with the architecture and performance differences between the consumer and enterprise products, 1/3 of the revenue is derived from the enterprise-class products.
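
A quick bit of arithmetic on the quoted unit and revenue shares shows why the enterprise segment matters; the split below uses only the numbers from the talk.

    # Rough arithmetic on the unit/revenue split quoted above (illustrative only).
    enterprise_unit_share = 0.10   # 10% of flash units go into enterprise-class products
    enterprise_rev_share  = 1/3    # ...but they generate 1/3 of the revenue

    other_unit_share = 1 - enterprise_unit_share
    other_rev_share  = 1 - enterprise_rev_share

    # Revenue per unit for each segment, then the ratio between them.
    enterprise_per_unit = enterprise_rev_share / enterprise_unit_share
    other_per_unit      = other_rev_share / other_unit_share

    print(round(enterprise_per_unit / other_per_unit, 1))   # ~4.5x revenue per enterprise unit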

On the issues of speed and power, the major bottleneck/driver is the controller.  The IC designs are not yet at their limits for speed and power; there is still some design room available at the current nodes.  However, power and performance beyond the controller spec do not address the price-point aspect of the products.   The new major driver is the switch to the DDR3 memory interface and the upcoming Energy Star program for data centers and servers.  This would drive a shift to DDR3 main memory and to SSDs as a replacement for hard disks in these products; if an overall shift were made, it would result in a 0.75% annual savings in the entire power usage of the United States.  The driver for this, however, is the replacement and upgrade market, which is funded by the OPEX budget NOT the CAPEX budget, and has some fairly sweeping financial and tax implications that are still being understood in light of the current world economy.
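
For a rough sense of scale, here is the 0.75% figure applied to an assumed round number for total US annual electricity use; the total is my assumption, not a figure from the presentation.

    # Back-of-the-envelope scale check on the 0.75% savings claim (assumed total, not from the talk).
    us_annual_electricity_twh = 4000        # assumed round figure for US annual electricity use, TWh
    savings_fraction = 0.0075               # 0.75% claimed savings from the DDR3 + SSD shift

    savings_twh = us_annual_electricity_twh * savings_fraction
    print(savings_twh)                      # ~30 TWh/year under these assumptions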

SSDs are still being looked at for inclusion in other applications such as CE devices and automotive, but right now the target market is SATA-interface devices for notebooks and enterprise-class machines.  The forecast is for SSD penetration in both the enthusiast (gaming) and performance (business-class and graphics) PC segments to remain in the tradeoff stage until mainstream adoption starts in 2012.

MOSAID has licensed several aspects of their IP portfolio to HLNand, which makes a high-density, high-reliability and high-performance memory module for use in SSDs.  It is based on their HyperLink NAND flash module.  The module is made from 4 tested, known-good MLC flash chips (currently Samsung 16Gb die) and a custom HyperLink bridge chip, for a single 64Gb module.  The module is assembled as a SIP, and only the HyperLink bridge buffers the bus, which allows the 4 die to be accessed simultaneously.  This provides the capability of using the HL1-266 (DDR 266) interface, supporting a 266MB/s read and write bandwidth.  The die can be connected in a daisy chain of up to 255 devices per channel, which provides for a very high-density design solution.    The SIP parts are assembled into a memory module featuring 8 parts for a 64GB byte-wide product on a 200-pin SODIMM board.  The boards support 4 channels and, due to the daisy-chained internal configuration, with the proper controller firmware and the use of an additional error bit, can support ordered simultaneous read/write along the hyper channel.  This results in a data throughput of 533MB/sec for both read and write after ECC.  The byte-wide modules are usually assembled into a system with a SATA front-end controller so it operates as a high-reliability enterprise-class SSD.  As the product is modular, should there be any failures in application, the SATA interface supports hot swapping of the drive so the enterprise is not disrupted; the drive can report which module is having an issue, and the drive can be opened and just the one module replaced rather than the whole drive.  These modules are currently shipping in volume.
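
The capacity numbers in the paragraph above fit together as follows (simple arithmetic based on the figures quoted):

    # Capacity arithmetic for the HLNand SIP and SODIMM described above (illustrative).
    die_per_sip  = 4        # known-good MLC flash die per SIP
    gbit_per_die = 16       # Samsung 16Gb die

    sip_gbit  = die_per_sip * gbit_per_die       # 64 Gbit per SIP module
    sip_gbyte = sip_gbit / 8                     # = 8 GByte per SIP

    sips_per_sodimm = 8
    sodimm_gbyte = sips_per_sodimm * sip_gbyte   # 8 x 8 GB = 64 GB per SODIMM

    print(sip_gbit, sip_gbyte, sodimm_gbyte)     # 64 Gbit, 8.0 GB, 64.0 GB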

PC

Aug 06 2009

DAC 2009 – Methodics, Lynguent, Tela Innovation, TSMC PDKs

Continuing the diversity of tool vendors at DAC, Methodics was showing their Data Management (DM) tools.  The product is targeted at supporting a diff/merge function on the design data in a Cadence 5.1 and 6.1 design environment.  The core of their tool is a change and revision control manager based on technology from the software industry.  Their current product can use either Subversion or Perforce as the revision and build engine, and they are working on a ClearCase version.  These products support revision control (RCS) and also branch and merge on the design data formats; this group of functions is called Data Management (DM) in the Methodics terminology.

The tool is targeted at supporting remote design reviews as a collaboration tool.  To support this function, the database contains only links to design objects, and there is no proprietary metadata involved. With the assumption that the customer already has a core license for Subversion, Perforce or ClearCase, the Methodics licenses are available on a per-version/per-user/per-year basis.
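
For readers unfamiliar with how a software-style revision engine maps onto design data, here is a minimal sketch of the branch/diff/merge pattern using plain Subversion commands driven from Python; this is generic svn usage against an invented repository URL, not the Methodics product itself.

    # A minimal sketch of branch/diff/merge on design data using plain Subversion
    # commands (generic svn usage, not the Methodics tool; repository URL is hypothetical).
    import subprocess

    REPO = "https://example.com/svn/chip_top"      # hypothetical repository URL

    def svn(*args):
        subprocess.run(["svn", *args], check=True)

    # Branch the trunk so a block owner can rework a cell in isolation.
    svn("copy", f"{REPO}/trunk", f"{REPO}/branches/pll_rework",
        "-m", "branch for PLL layout rework")

    # Later: see what changed on the branch relative to trunk...
    svn("diff", f"{REPO}/trunk", f"{REPO}/branches/pll_rework")

    # ...and merge it back into a trunk working copy for review and check-in.
    svn("merge", f"{REPO}/branches/pll_rework", "trunk_wc")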

Lynguent introduced a major modeling enhancement to their environment – a direct interface to the Simulink environment in addition to the existing MAST models.  Their product is a modeling tool only and is simulator agnostic, so it works with most SPICE and high-capacity simulators.  It also has an integration with the Cadence design environment, versions 5.1 and 6.1.  The resulting behavioral models for the blocks are in Verilog-A, Verilog-AMS or VHDL-AMS format.

The new version of the tool has a topology editor for the subsections of the circuits that identifies functions such as filters, gain stages, etc.  These help create the signal-flow model for the blocks and describe the event dynamics in both signal slope and time-domain response.  When used in the Rad Hard by Design mode, the resulting HDL model captures not just the post-dose degradation effects of single devices but also adjacent-device ionization effects.  This is a key enhancement for memory designs and bus-based designs that are trying to analyze SEE effects.  The new version supports hierarchical design construction, but does not require the inside of the Verilog block to keep the hierarchical order.
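
To illustrate the level of abstraction a signal-flow behavioral model works at, here is a small sketch of a gain stage feeding a first-order low-pass filter; it is written in Python purely for illustration, whereas the tool itself emits Verilog-A/AMS or VHDL-AMS.

    # Conceptual signal-flow sketch: a gain stage feeding a first-order low-pass filter.
    # (Python for illustration only; parameter values are assumed.)
    import math

    def gain_stage(x: float, gain: float = 10.0) -> float:
        return gain * x

    def make_lowpass(fc_hz: float, dt: float):
        # Discrete-time first-order low-pass: y[n] = y[n-1] + a*(x[n] - y[n-1])
        a = dt / (dt + 1.0 / (2 * math.pi * fc_hz))
        state = {"y": 0.0}
        def step(x: float) -> float:
            state["y"] += a * (x - state["y"])
            return state["y"]
        return step

    lpf = make_lowpass(fc_hz=1e6, dt=1e-8)                    # 1 MHz corner, 10 ns time step
    waveform = [lpf(gain_stage(1.0)) for _ in range(100)]     # step response settles toward 10.0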

Tela Innovation was showing some aspects of their relationship with TSMC on advanced libraries.  After the acquisition of Blaze DFM, Tela is expanding their IP & royalty model with TSMC.  The Blaze product and service is branded as “Power Trim” and focuses on post-layout power optimization.  The Tela library product and service is branded as “Area Trim”.  Their design architecture, when applied to standard rules on existing processes, results in a smaller aggregated block size for a function cell.  The reduced cell size comes from creating a “slim” library with only device-level (diffusion, contact, poly, implant) changes [FEOL]; it does not affect the metal pattern or pin locations for the block.  This reduced cell size also helps improve the yield of the design.
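
The FEOL-only claim implies a simple acceptance check: a slim cell must expose exactly the same pin names and locations as the original cell.  A hypothetical sketch of such a check (the data model and coordinates are invented, not a real library format):

    # Hypothetical sanity check for the FEOL-only claim: a slim cell should expose
    # exactly the same pin names and pin locations as the original cell, since only
    # diffusion/contact/poly/implant layers change. (Illustrative data model only.)
    original_cell = {"A": (0.2, 0.4), "B": (0.2, 1.2), "Y": (1.0, 0.8)}   # pin -> (x, y) in um
    slim_cell     = {"A": (0.2, 0.4), "B": (0.2, 1.2), "Y": (1.0, 0.8)}

    def pins_match(orig: dict, slim: dict) -> bool:
        return orig.keys() == slim.keys() and all(orig[p] == slim[p] for p in orig)

    assert pins_match(original_cell, slim_cell), "metal/pin footprint must not change"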

TSMC introduced their iPDK libraries as part of their direction for design support on 65nm and smaller processes.  The iPDK libraries are core-only libraries that rely on traditional fixed, hard I/O cells which incorporate the pads and the ESD devices.  These PDKs supplement the iRCx and iDRC tool-independent flows that are already in place.  Their iLVS kits are in progress.  An iDFM kit is being planned: at 65nm it is traditional rule-based, but for 40nm and 28nm the rules are model-based in order to accommodate both design and physical effects.  The iDFM kits will include compliance tables of which tool is used and who does what (design vs. layout vs. litho), and will include litho and CMP rules.

The iPDK libraries are based on Python and are currently supported by the Synopsys, Ciranova, Springsoft and Magma flows.  They support the standard tech files, which are fixed per process.  For place and route, there is a list of qualified tools that use the libraries, and there is also a support kit including the iRCx and iDRC packages to ensure the major 4 P&R tools (Cadence, Synopsys, Magma, Mentor) can run.    They also introduced Reference Flow 10.0, which is for 28nm processes.  This reference flow is primarily for qualification of the process and the support kits rather than an advocacy of particular tools.  Currently all the customers targeting the 28nm node have their own development tool flows.
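
Since the iPDKs are Python-based, a device parameter callback is the kind of thing they can carry.  The sketch below is purely hypothetical; the names, limits and structure are invented for illustration and are not TSMC's actual iPDK code.

    # Purely hypothetical sketch of a Python device parameter callback of the kind an
    # interoperable PDK can carry (names and limits invented; not TSMC's actual iPDK API).
    NMOS_LIMITS = {"w": (0.12, 900.0), "l": (0.06, 20.0)}   # assumed min/max in um

    def nmos_param_callback(params: dict) -> dict:
        """Clamp requested W/L to the assumed process limits and derive per-finger width."""
        out = dict(params)
        for name, (lo, hi) in NMOS_LIMITS.items():
            out[name] = min(max(out[name], lo), hi)
        out["w_per_finger"] = out["w"] / max(out.get("fingers", 1), 1)
        return out

    print(nmos_param_callback({"w": 5.0, "l": 0.03, "fingers": 4}))
    # -> l clamped to 0.06, w_per_finger = 1.25 (under the assumed limits)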

PC
