The ESL Edge

21 Apr

There is C, C and oh yes C

In the early days of ESL, one of the rallying cries was that hardware and software engineers would finally use the same language, and thus it would be possible to have a single group of engineers develop a system from concept to implementation without having to worry about what would eventually run on a processor and what would be implemented in hardware. Implementation decisions could be delayed until much later in the flow. For a while hardware engineers were worried that this would mean losing their jobs to the much larger pool of software engineers available. I tried to tell people at the time that this was a fallacy, and even today the high-level synthesis vendors try to convince us that they can synthesize nearly “the whole language”. At the same time they do not offer push-button tools that would enable someone without intimate hardware knowledge to create a good implementation. So what gives?

Quite simply, there are fundamental differences between code that is written to run on a processor and code that is intended to be implemented in hardware. The confusion arises because the two share a common syntax and semantics, but what is actually shared is a very specific subset of the language that conforms to particular idioms. Those differences primarily revolve around the way memory is used.

The bad part about software is that it runs on a processor that conforms to a von Neumann architecture. This restricts the way that memory is accessed (one item at a time) and has driven into us the mentality of sequentiality. That was a very smart move in the early days of computing, but it creates a big problem today. Having said that, it shaped the way in which software is constructed. It provided a memory access mechanism that allowed random access, and over the years software has exploited that fact. If you take a look at a computer science curriculum, there are lots of courses that are basically about data structures: how to organize and manipulate data. These rely on things such as runtime storage management, linked lists, hash tables, databases, string manipulation and much more. In fact it is hard to imagine writing any piece of software without using these kinds of constructs. Almost all of them rely on the use of pointers – things that point to where a related piece of data is. Software never needs to keep track of things such as data dependencies because they are taken care of by the language and the execution paradigm. The only time it becomes an issue is in multi-processor systems, where data coherence has to be managed.
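To make that concrete, here is a small sketch of my own (not taken from any particular tool or flow) showing the kind of everyday software idiom I mean – a dynamically built linked list that relies on the heap and on pointer chasing at every step:

    // A classic software idiom: a linked list of samples built at run time.
    // The list can grow to any length, and every traversal chases pointers
    // through the heap.
    struct Node {
        int   value;
        Node* next;   // points to a related piece of data, located who-knows-where
    };

    Node* push(Node* head, int value) {
        return new Node{value, head};   // dynamic creation of data on the heap
    }

    int sum(const Node* head) {
        int total = 0;
        for (const Node* p = head; p != nullptr; p = p->next)   // dereference after dereference
            total += p->value;
        return total;
    }

Nothing in this code says how much storage it will need, or where the next piece of data lives, until the program actually runs – which is exactly why it maps so poorly onto fixed hardware resources.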

Now let’s think about hardware. Pointers are one of the primary things that high-level synthesis does not allow, and as for dynamic creation of data – forget it. The tools will allow a pointer that is basically a handle to the beginning of an array, but don’t try to dereference it arbitrarily or to build data structures with it. Why is that? Fundamentally, hardware throws the notion of sequentiality out of the window, and so the tool has to be able to work out the data dependencies. If memory accesses happen all over the place, and in parallel, then working out those dependencies becomes a close to impossible computational task. There is another fundamental difference. When you optimize a piece of hardware you decide the exact amount and arrangement of the physical memory that will maximize throughput or minimize latency. This is expensive memory – memory that can be read from and written to in a single clock cycle (sometimes both in a single cycle). You do not want variable data sizes or access patterns that would upset those optimizations. Hardware needs more structure and regularity in its data accesses. So the kinds of software that can be implemented in hardware are those that perform manipulation on a packet of data.
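By way of contrast, here is the style of C++ that high-level synthesis is comfortable with. It is a sketch of my own, with invented sizes and coefficients, but it is representative of the packet-at-a-time idiom: fixed array sizes, static loop bounds, no heap and no pointer chasing, so the tool can work out every data dependency and decide exactly how much physical memory to build:

    // Hypothetical packet-processing kernel in the style HLS tools accept:
    // fixed data sizes, static loop bounds, no dynamic memory, no pointer chasing.
    const int PACKET_SIZE = 64;
    const int TAPS        = 4;

    void filter_packet(const int in[PACKET_SIZE], int out[PACKET_SIZE]) {
        const int coeff[TAPS] = {1, 3, 3, 1};        // illustrative coefficients only

        for (int i = 0; i < PACKET_SIZE; ++i) {      // bounds known at compile time
            int acc = 0;
            for (int t = 0; t < TAPS; ++t) {
                int idx = i - t;
                acc += (idx >= 0) ? coeff[t] * in[idx] : 0;
            }
            out[i] = acc >> 3;                       // cheap scaling, no division
        }
    }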

Recently we have heard a lot about these high-level synthesis tools being able to handle control systems, and this is true. The kinds of control we are talking about are those that primarily transfer data: interfaces, DMA controllers and buses come to mind. These have a fairly well contained state machine that can pass data packets around a system but does not manipulate them. Most of this code would never be written as software – it describes particular pieces of the hardware.

Recently, there have been several press releases that talk about different aspects of high-level synthesis. I will briefly discuss two of them. The first is from The MathWorks. They announced MATLAB Coder, “which enables design engineers to automatically generate readable, portable C and C++ code directly from their MATLAB algorithms”. The code that is generated is intended for embedded systems, and specifically to run on a processor. In an interview, I asked them about multi-processor code and about the suitability of the generated code for high-level synthesis. They reiterated that the code they produce is for running on a processor. They have some multi-processor support, but it is primarily targeted at a single processor. They have looked at what it would take to output C suitable for high-level synthesis, but have decided that such a product is not viable for them at this time. The restrictions they would have to place on the original MATLAB descriptions to make them suitable for hardware would limit the power of their existing language and system. For software engineers this new coder will save a lot of time and bridge the gap that has kept them from doing higher-level algorithmic programming.

The second release comes from Impulse, who teamed up with Convey to create a software flow for a hybrid processor/FPGA solution. The creation of accelerators from standard C code is a very difficult task, as it involves detecting which portions of the C code might be suitable for high-level synthesis and then actually managing to speed them up when the processor and the accelerator basically share the same memory structure. I have personally worked on this problem in the past, and I know how difficult it can be. There were many programs that we could not accelerate, or that required substantial rewriting to make them suitable.

C is not C is not C – each is the use of a single language to describe methods best suited to a particular form of implementation. If you don’t understand the implementation method, you will never be able to write the code in the correct manner. So software engineers are safe in their jobs, hardware engineers are safe in theirs, and high-level synthesis will continue to work on expanding the middle ground between them – but the language is not the thing that creates that middle ground!

Brian Bailey – keeping you covered
http://brianbailey.us

07 Apr

A Plethora of Hierarchies

It wasn’t so very long ago that the high-level synthesis vendors were arguing over the correct language to use. Should it be C, C++, SystemC or some other language, such as M? Their arguments ranged over several issues, including which was more abstract, which was faster to write, simulate and debug, and which one contained more detail, such as timing, concurrency and structure, among many other factors. Thankfully most of the arguments are over, because the major vendors now support C, C++ and SystemC equally. Some vendors that weren’t so inclusive have been gobbled up or quietly disappeared. Remarkably, new ones seem to be popping up to replace them. The market is big enough for perhaps two of them, not 22! But many of the issues still remain, just in a slightly different form. It really wasn’t a question of which language was better, but of which was better for describing certain aspects of a system, and which of those aspects were necessary.

In this blog, I would like to consider hierarchy – but which hierarchy? That is where it gets interesting. I remember, a long time ago, one of the big researchers (sorry, I can’t remember which one so that I could give proper credit) coined the term heterarchy. He used it to describe how several hierarchies can exist at the same time, depending on the perspective you are using. Consider this, in a very crude way, in a software program. We could consider the file hierarchy that is used to contain the source code for a program. This is clearly a hierarchy. Then some of the files may have includes, so there is a dependence between them, and include files can contain other includes, or definitions. This creates a reference type of hierarchy. When we look into the contents of the files we have the call hierarchy: which routines call other routines, which can of course be recursive. In addition, we could consider the data hierarchy: what are the basic types and enumerated types that are used, how are these combined into structures or unions, and how are those structures semantically connected together? Each of these forms a hierarchy.

If we extend the program example into the C++ arena, we can add the class hierarchy, but this becomes a lot more complex because of capabilities such as overloading and polymorphism. When we move to SystemC we set up a whole new set of hierarchies based on a different type of decomposition. This can be both a structural and a functional decomposition, and it is used to make code sharing, interface definition and concurrency easier to digest. Code sharing could also be considered IP reuse, so this is the way in which external IP blocks are divided and, if necessary, protected. It also provides some hints as to how the final implementation is to be structured, and in an EDA flow some of this hierarchy can extend way down, even into the physical implementation stage.
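As a minimal illustration (the module and signal names are my own invention, not from any particular IP library), this is the flavour of structural hierarchy that SystemC layers on top of C++: a parent module instantiates children and binds their ports, forming a tree that can be carried a long way down the flow:

    #include <systemc.h>

    // Leaf module: one stage of processing.
    SC_MODULE(Stage) {
        sc_in<int>  din;
        sc_out<int> dout;

        void run() { dout.write(din.read() + 1); }   // placeholder behaviour

        SC_CTOR(Stage) {
            SC_METHOD(run);
            sensitive << din;
        }
    };

    // Parent module: structural hierarchy built by instantiating and binding children.
    SC_MODULE(Pipeline) {
        sc_in<int>  din;
        sc_out<int> dout;
        sc_signal<int> mid;          // internal wiring between the two stages

        Stage stage0, stage1;

        SC_CTOR(Pipeline) : stage0("stage0"), stage1("stage1") {
            stage0.din(din);   stage0.dout(mid);
            stage1.din(mid);   stage1.dout(dout);
        }
    };

Either stage can be pulled out of the tree, verified or synthesized on its own, and then stitched back into the Pipeline – which is exactly the property discussed in the next paragraph.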

Structural hierarchy is very important in the hardware flow, and it is possible that, as software becomes more parallel, this concept may need to migrate over to that side of the fence as well. One of the great things about structural hierarchy is that when we trim a part of the tree, the trimming is complete in the sense that what we get is a fully functional piece of the entire design. It can be verified independently, synthesized independently (although the interfaces have to be taken into account) and many other operations can be performed on it; then, when complete, it can be stitched back into the larger context of the system. This means, in part, that the functional and structural hierarchies overlap each other, but not completely.

Similar to the capabilities in C++, interfaces provide the necessary degree of functional independence between structural entities. The separation of those interfaces can also give you options for verification that would not otherwise exist, such as being able to use those interfaces to directly provide the desired stimulus. In the context of the full design it may be very difficult to do this, or it may require very long simulation runs.

The Specman e language added yet another form of hierarchy, which it called aspect orientation. While it was not fully developed within the language, it has become a principle that many are looking towards when trying to layer capabilities onto a base model. A good example of this can be found in power modeling. You do not want to modify the original functional model with information about power; you would prefer to see this as an overlay. That way it is not possible to accidentally modify the base description, and it allows multiple different power versions to be defined from the same functional model. Similar layers could describe timing.
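Plain C++ has no real aspect mechanism, so the following is only a rough sketch of my own of the intent (the class names and energy numbers are invented): the power layer wraps the functional model from the outside, the base description is never edited, and several different power overlays could coexist for the same model:

    #include <cstdio>

    // Base functional model: knows nothing about power.
    class Alu {
    public:
        int add(int a, int b) { return a + b; }
        int mul(int a, int b) { return a * b; }
    };

    // Power "overlay": layered on top of the functional model without editing it.
    // The per-operation energy numbers are purely illustrative.
    class PowerAnnotatedAlu {
    public:
        explicit PowerAnnotatedAlu(Alu& base) : alu(base) {}

        int add(int a, int b) { energy_pj += 1.0; return alu.add(a, b); }
        int mul(int a, int b) { energy_pj += 4.0; return alu.mul(a, b); }

        double energy_pj = 0.0;   // accumulated estimate for this particular overlay
    private:
        Alu& alu;
    };

    int main() {
        Alu alu;                          // the base description stays untouched
        PowerAnnotatedAlu power(alu);     // one possible power view; others could coexist
        power.add(2, 3);
        power.mul(4, 5);
        std::printf("estimated energy: %.1f pJ\n", power.energy_pj);
    }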

With all of these hierarchies, you would think that we have plenty of ways to look at a design, and yet we are still missing some important forms of hierarchy, or at least they are not in active use. The two I would mention are model-of-computation hierarchies and behavioral hierarchy. What I mean by the first of these is the ability to bring together multiple types of models and have them work together. Famous examples of this are Ptolemy and, to a limited extent, things such as Verilog-AMS and VHDL-AMS. There are also some multi-physics simulators used for higher-level modeling of electro-mechanical or electro-optical systems. The second missing hierarchy is behavioral, and this one is perhaps more difficult to differentiate from what we have today. We can define state machines, which are a piece of functionality, but we have more problems putting together more complex state machines out of simpler ones. Several research programs have operated in this area, but little of it has found its way into tools yet. As we start to have more concurrent, interacting state machines this will become more important over time.
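To make that last point a little more concrete, here is a toy sketch of my own (not drawn from any of those research programs) of what behavioral hierarchy asks for: a parent machine whose states are themselves complete state machines, so the composite behavior is assembled from the pieces rather than flattened by hand:

    #include <cstdio>

    // A minimal sub-machine interface: each child FSM consumes events and
    // reports when it has reached its final state.
    class StateMachine {
    public:
        virtual ~StateMachine() = default;
        virtual void event(char e) = 0;
        virtual bool done() const = 0;
    };

    // Child 1: waits for a start event 's'.
    class WaitForStart : public StateMachine {
        bool started = false;
    public:
        void event(char e) override { if (e == 's') started = true; }
        bool done() const override { return started; }
    };

    // Child 2: collects three data events 'd'.
    class CollectData : public StateMachine {
        int count = 0;
    public:
        void event(char e) override { if (e == 'd') ++count; }
        bool done() const override { return count >= 3; }
    };

    // Parent machine: its states *are* the child machines, visited in order.
    class Transfer : public StateMachine {
        WaitForStart phase0;
        CollectData  phase1;
        int current = 0;
    public:
        void event(char e) override {
            StateMachine* phases[] = { &phase0, &phase1 };
            if (current < 2) {
                phases[current]->event(e);
                if (phases[current]->done()) ++current;   // hierarchical transition
            }
        }
        bool done() const override { return current == 2; }
    };

    int main() {
        Transfer t;
        const char events[] = "dsddd";        // a 'd' before the start is ignored
        for (const char* p = events; *p; ++p) t.event(*p);
        std::printf("transfer complete: %s\n", t.done() ? "yes" : "no");
    }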

Brian Bailey – keeping you covered
http://brianbailey.us

07 Apr

I am back and welcome to “The ESL Edge”

It has been about 18 months since I was actively blogging on this site, and it feels good to be back. In my last incarnation, I blogged on the Verification Vertigo feed. This time, I am changing the scope a little to blog about the emerging area of ESL. I intend to talk about languages, methodologies and tools in both the design and verification arenas. I want to look at areas that appear to be working in the industry, areas where there do not appear to be adequate solutions, and interesting research that I find out about.

If you find something that you think may be of interest to readers, please let me know. If you have new tools or significant new technology in a tool release (I am not interested in small enhancements), then please let me know. If you think a vendor is giving you some bull, let me investigate. I can always be reached at brian_bailey@acm.org.

http://brianbailey.us

15 Oct

Webcast: EDA-ESL and More Ideas from DAC

I listened in today to a webcast titled EDA-ESL and More Ideas from DAC. Quite what this had to do with DAC I am not sure, but it contained speakers from Mentor (Shabtay Matalon), Cadence (Jason Andrews) and Synopsys (Frank Schirrmeister) talking about their ESL strategies.

Several things were common to all of the presentations, including:

  • They all talked about the need to perform concurrent engineering between hardware and software and the only way to do this is to make a prototype of the hardware available sooner
  • It is not always possible to create a prototype out of models that execute in software only. Sometimes emulation and physical prototypes are necessary
  • While all three companies now have synthesis solutions, none of them mentioned it except in a glancing manner
  • While all three told roughly the same story, they are all based on very different levels of product support and differing levels of abstraction
  • They are all SystemC based in terms of interfaces

One interesting question was raised regarding the different levels of timing accuracy (PV, AT and LT in OSCI terminology). All of the vendors deferred to the standards bodies, saying that they were defining these and that they supported them. But the real question is: why are the standards bodies getting it so wrong by not defining what these terms actually mean? Thus, as expected, the answers were wishy-washy and basically said: use whatever level of timing accuracy makes sense. The only note of caution was that you may get models at different levels of timing even though they are meant to be the same.

Towards the end the moderator (Don Dingy) asked each of the panelists what was clearly a canned question.

To Cadence he asked: What are the benefits of using a virtual prototype when an FPGA is available? Answer: because it provides increased controllability and visibility and doesn’t suffer from intrusive debug.

To Synopsys he asked: What are the use-cases for a virtual prototype? Answer: five use-cases: 1) when RTL exists and you need to integrate it into a VP; 2) to provide a balance of speed and accuracy for different tasks; 3) verification of the implementation model, using the VP as a testbench for the implementation; 4) connection to the outside world, integrating with real devices; and 5) remote access, so that SW engineers don’t have to go into the lab.

To Mentor he asked: Why do you need real use-cases for power analysis? Answer: you need to actually run end-user applications to see a real power profile. He started to claim that this was unique to the Mentor tool, but Synopsys chimed in with: we do that too.

So at the end of the day, it was an interesting set of product pitches, but little was accomplished.

——————————————-

Brian Bailey – keeping you covered

08 Sep

Randal Bryant wins Kaufman award for 2009

Randal E. Bryant joins Aart de Geus, Phil Moorby, Joe Costello, Richard Newton, Alberto Sangiovanni-Vincentelli, Hugo De Man, Carver A. Mead and other EDA industry greats in being presented with the Phil Kaufman award (2009) for his contributions to the EDA industry. Randy was singled out for his contributions in the formal verification space.

Randal Bryant

Dr. Bryant’s research focuses on methods for formally verifying digital hardware and some forms of software. Notably, he developed efficient algorithms based on ordered binary decision diagrams (OBDDs) to manipulate the logic functions that form the basis for computer designs. His work revolutionized the field, enabling reasoning about large-scale circuit designs for the first time.

I spoke with Ziyad Hanna, VP of research and chief architect at Jasper Design Automation who has worked with and followed Randy’s research for many years. Ziyad says of Randy – “he is a real innovator with great hands on research. He is indeed one of the fathers of formal in the last 3 decades.”

Dr. Bryant’s Accomplishments

In 1979, while a Ph.D. student at the Massachusetts Institute of Technology, Bryant developed MOSSIM, the first switch-level simulator. This form of simulation models the transistors in a very-large-scale integrated (VLSI) circuit as simple switches, making it possible to simulate entire full-custom chips. Subsequently, Bryant and his graduate students developed the successive simulators MOSSIM II and COSMOS. Major companies used these to simulate microprocessors and other complex systems. Switch-level models were also incorporated into the Verilog Hardware Description Language.

Over time, Dr. Bryant’s focus shifted from simulation, where a design is tested for a representative set of cases, to formal verification, where the design is shown to operate correctly under all possible conditions. It was in this context that he developed algorithms based on ordered binary decision diagrams (OBDDs). His OBDD data structure provides a way to represent and reason about Boolean functions. OBDDs form the computational basis for tools that perform hardware verification, logic circuit synthesis, and test generation.

Today it is standard practice for hardware engineers to use OBDD-based equivalence checkers, and symbolic model checkers. Along with Dr. Carl Seger, now at Intel Corporation, Dr. Bryant developed symbolic trajectory evaluation (STE), a formal verification method based on symbolic simulation. STE is now in use at several semiconductor companies, and variants of it are used in commercially available property checking tools.

Dr. Bryant is an educator as well as a technical visionary. He and Carnegie Mellon Professor David O’Hallaron co-authored a book, Computer Systems: A Programmer’s Perspective, that is currently used by more than 130 schools worldwide, with translations in Chinese and Russian; a revised version is set for publication later this year.

An IEEE and ACM Fellow and a member of the National Academy of Engineering, Bryant earned a bachelor’s degree in applied mathematics from the University of Michigan, and a doctorate in electrical engineering and computer science from MIT. He was on the faculty of the California Institute of Technology from 1981 to 1984 and has been on the faculty of Carnegie Mellon University since 1984. He received the 1989 IEEE W.R.G. Baker Award for papers describing the theoretical foundations of the COSMOS simulator, and the 2007 IEEE Emmanuel R. Piore Award for his simulation and verification work.

The award is given by the Electronic Design Automation Consortium and the IEEE Council on Electronic Design Automation.

11 Aug

Interview with Tom Sandoval, CEO of Calypto

At DAC, I missed an appointment with a company CEO. While I had excuses, at the end of the day I just blew it. Many CEOs would have said “you had your chance and lost it”, but not Tom Sandoval at Calypto. He hardly seemed bothered, or at least did not show it, when I turned up to apologize. Since neither of us had any additional free slots during DAC, we arranged to talk after the show. It would have been very simple for them to just forget. But some companies have bloodhounds for their PR agencies, and in this case Diane Orr was not going to let the opportunity slip away. So thanks to both of them at the outset for treating a member of the blogosphere with such respect. Of course that is not going to buy them any favors! I will still write what I believe needs to be written, but in this case they have nothing to worry about.

Tom and I talked for well over an hour on the phone, with presentations that had been sent to me in advance and with WebEx for some more detailed technical information. At no point in the discussion did Tom appear to evade any of my questions or have anything to hide. I was also amazed at how well he conveyed the technical concepts. Most CEOs would have clammed up and said nothing in response to some of the questions I asked, but Tom gave me thoughtful answers and we exchanged ideas on things that I think are only suitable for EDA gossip rags. This is not one of them!

What we did talk about was the strategy for the company and how it seems to have changed over time. Calypto made a big splash at DAC 2005 with their sequential equivalence checking product (SLEC). Their booth was so packed at that show that some of the nearby vendors were grumbling that people could not get to their booths, or that other people had to walk through them because the aisles were too congested. Of course it could have had something to do with the stuffed toys they were giving away, but more likely it was one of those rare times when a completely new tool concept was shown for the first time. It seemed to come out of nowhere. Everyone wanted to know what it was. But a product such as SLEC is dependent on a vibrant high-level synthesis (HLS) market, which, if you remember back to 2005, was not really in place. Hmmm – sounds like a product ahead of its time.

But now in 2009 HLS is taking off and sales are ramping quite fast, and with Cadence as the new entrant on the block, there is quite a product battle raging between them, Mentor and Forte. Of course we should not forget Bluespec as well, but they are in a somewhat different class than the rest – that is for another posting. All three of these HLS vendors have a close working relationship with Calypto, as they all depend on them to some extent. As Tom put it, equivalence checking is still finding problems with RTL synthesis tools, and the bug rates for HLS are a lot higher given the rate at which this technology is advancing.

So with the HLS market catching up with them at last, it seemed strange to me that they were broadening their product line with some RTL optimization tools – particularly in the power optimization space. Tom told me that the early founders of the company had always wanted to do a system-level power optimization tool, but at the time they felt the market for this was even smaller than for SLEC (I can imagine these were not quite the same talks and messages that they gave to their early investors). He also provided an interesting twist on this that I hadn’t thought of. When Synopsys first came out with a logic equivalence checker, it was met with a high degree of skepticism. Was it the fox guarding the hen house? Weren’t they going to make the same mistakes in their equivalence checker that they made in the synthesis tool? So Calypto wanted to make a name for itself in verification first and then use some of the same technology to build design tools. Tom reminded me that the RTL market is a LOT larger than the HLS market and will be for quite a long time, and thus tools that work at the RTL level are likely to deliver more immediate gains, but sequential equivalence checking remains a core technology on top of which they will continue to develop other products.

With most of the HLS vendors relying on Calypto to provide the sequential equivalence checking, I asked Tom how long he thought it would be before one of them bought his company. At that he chuckled and said I should ask them, but he did not appear to be too uncomfortable about the position he is in. I would imagine Mentor, Cadence and Forte would not feel quite so comfortable if one of them made a move and left the others out in the cold. I am taking bets now as to who will be the first to move.

The next question I had for Tom was one I was sure he would punt on. I asked whether he saw more opportunities based on C, C++ or SystemC. While he did not say that he favored any one of them, he said that performing equivalence checking on SystemC tended to be easier because it was often closer to the RTL, and finding the right match points was relatively straightforward. With the tools based on C or C++ (or, for that matter, an early C description that is refined into SystemC), there were additional opportunities for doing C-to-C equivalence checking, for example when floating-point to fixed-point transformations or other architectural changes are made. It is important at that stage to ensure that the functionality has not been compromised.
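As a toy illustration of my own (this is not Calypto’s code or API), the kind of C-to-C refinement he is describing looks like the pair of functions below: an early floating-point gain stage and its fixed-point replacement. The equivalence question is whether the two still agree, within the intended quantization error, for every legal input:

    #include <cstdint>

    // Early algorithmic version: a floating-point gain stage.
    float scale_float(float sample) {
        const float gain = 0.75f;
        return sample * gain;
    }

    // Refined version for hardware: Q1.15 fixed point, with gain 0.75 == 24576/32768.
    // A C-to-C equivalence check has to show that the two versions agree to within
    // the chosen quantization error for all legal input samples.
    int16_t scale_fixed(int16_t sample) {
        const int32_t gain_q15 = 24576;                     // 0.75 * 2^15
        int32_t product = static_cast<int32_t>(sample) * gain_q15;
        return static_cast<int16_t>(product >> 15);         // back to Q1.15
    }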

That seemed to me to be a huge challenge, so I asked Tom how much manual intervention was necessary in order to compare at this level of abstraction. He said that their goal is for it to be completely automatic, but with large designs or complex interconnect at the architectural level some manual intervention may be required. There is some information that is passed from the HLS tool to SLEC to help it find the right match points, but even when these hints are provided, the first thing the tool has to do is verify that they are indeed true; otherwise all of the rest of its analysis may be based on false information. When the HLS tool vendors implement new optimization strategies, they do not always provide this help in the first release, and then more manual work is required; as the optimization matures, the hints are provided. I was surprised to hear that Calypto has almost weekly calls with each of the HLS vendors to ensure that the tools are kept in sync as much as possible.

At this point in the conversation, we switched over to more technical discussions about the tools themselves, and I will be reporting on that at another time.

So thanks Tom and Diane. These are the kind of calls that provide me with the information I need to do my job.

——————————————————————————————————————

Brian Bailey – keeping you covered.

brian_bailey at acm.org


05 Aug

No more press releases!!

We have suspected for some time now that EDA is in trouble. We have seen analyst coverage decline, VC funding decline, profits decline. Many startup companies have disappeared. We have seen many publications disappear because of a lack of advertising dollars. We have seen many of the editors and columnists let go, and some of them have started to move into the industry they once wrote about. One could be excused for thinking that the EDA industry made buggy whips!

But this industry is not about to disappear, even though the number of chip starts is declining – it is just going through perhaps the biggest transition it has ever had to navigate. Business models are changing, and technology advances are causing tools that were once central and important to decline while others emerge. But wait – this is normal, just happening a little faster than usual, perhaps.

So if you were hoping to hear doom and gloom from me, then sorry, you came to the wrong place. If you wanted me to say that this is the best time ever for the industry, or that the future looks the best I have ever seen, then you will not get that out of me either. What you will hear me saying is that there are enormous opportunities out there for the people or companies that can successfully see the path ahead and have the ability to capitalize upon it.

One of the areas seeing that change is the way in which news and information is reported. Right before DAC, when many bloggers received press passes for the conference, there were many articles from the old school and the new school throwing punches at each other in the editorials and blogosphere. I was so tempted to throw a few myself, but for once managed to restrain myself (oh no – something is changing in me – I have to stop it). I am not going to start throwing punches now, but I am going to start raising a few issues regarding the role that each community plays.

Ever since being put on the press list for DAC, I have received copious numbers of press releases. Every one of them went in the trash can, not because I couldn’t be bothered with them, but because they contained nothing to interest me. I don’t care if so-and-so just landed a new customer, or that the latest release of a tool runs 20% faster or has an extra feature that was already available in a competitor’s tool. I don’t care that two tools that were obtained through acquisitions have just been integrated, or that the company big-wig will be presenting somewhere. This is all irrelevant information to the blogosphere. Keep giving that to the traditional press – they know what to do with it!

Let’s start by looking at who puts the blogs together. Most of them are senior technologists or experienced marketing people. While many of them are tied to companies and only report about internal stuff, there are many other bloggers who are fully employed by a company and yet report on a wide range of topics. They do this because they have interests and opinions on things outside of what they encounter in their day jobs. They want to keep in touch with the broader industry – they use their blogs as a way of continuing their education and their contacts within the industry. These people want real information, and when they get it they feel they have something they want to pass on to their readers. It may be a condensation of the information, it may be putting it into a historical perspective, it may be relating it to other things they have observed and suggesting how it could be made better.

Now let’s look at me. According to one traditional press person who had a lot to say about bloggers at DAC, I am just a consultant who only blogs to try and prove that I know my stuff and thus will be more likely to get hired. I don’t think so. If blogs were my marketing tool, then I know they are not very effective and I probably would not spend a lot of time on them. I do it because I want companies to feed me with information – relevant technical information that I can use in a number of ways. I want current information that I can use in my books or other articles; I want to be sure that when I do give advice to companies, it is based on the best information I can obtain; I want to be able to advise companies on development directions; and I want to be able to report on directions that I see as important for the future so that I can help guide the industry. Some of these may make money for me; many of them I do as a service to the industry – especially to small start-up companies.

Let me give you a concrete example (with names and everything changed to protect the innocent and the guilty). Several months ago I received a press release from a company that talked about the integration of two products. Big deal. No “news” there. I even had a couple of traditional editors call me to ask if this was significant or not. I told them – no, it wasn’t. But what I did tell them was that along with the press release the company had provided an FAQ, and hidden in that FAQ was one sentence that got my attention. A sentence that, if true, would be something to talk about – something quite big. None of those editors bothered to follow up on the tip I had given them. But I did. I emailed the company and talked to several people and found out that this was real and working with a few early customers; it just hadn’t been fully released yet. I believed that the ramifications of that new piece of engineering would be big. I still do, and I blogged about it. I did something that the traditional press seems not to do. I could see the value in something and project the impact that it would have, because I understand the technology and I understand the needs of their customers. I also included it in my latest book, “ESL Models and their Application”, which should be out in January 2010 (sorry for the blatant plug).

Another thing that really irks me is non-disclosure agreements (NDAs). Many companies want to talk to me, but insist that I sign an NDA with them before they will share information. First off – this is stupid. If they are talking with me, it is to share information that I can use. How can I do that if I am then bound by their NDA? Many companies have made me sign an NDA and then all they “share” with me are a few high-level marketing slides that I could have created in my sleep. They somehow think this is their crown jewel – something that will totally reveal their strategy, and that if it got out then every other company would race to copy them and remove their advantage in the field. Brother!

I am currently working with another company that is providing me with a lot of internal and confidential material. Yes – I signed an NDA with them, but the point is to take that information and turn it into something that is suitable for public consumption. At the end of the project everything will become public. I am helping them get information out to the community in ways that the traditional press can no longer do. I am providing in-depth, highly technical information that will help customers decide if certain tools are right for them, what to expect from them, how to adopt them, what mistakes to try and avoid, and so on.

So it is not just the physical form of the press that is changing. Not only is print almost dead, but the Internet is only working for a small number of publications as well. It is about the type of information that is being transferred between an EDA company and its customers. This is not the kind of material that has been made available before. The content is changing, the format is changing, and the way in which companies provide information to people like me has to change as well. Stop sending the press releases, stop making me sign NDAs for useless marketing drivel, and start sending me announcements about significant technical advances, along with the contact people I can talk to for more information if and when I need it. Then I may blog about you, or include the information in some of my more substantial writing ventures, or ensure that the systems companies that seek my advice get the information they need and know who to talk to if they are interested.

EDA companies – listen up. We bloggers are part of your new information channel – it just has to be on different terms than in the past.

———————————————————————————-

Brian Bailey – keeping you covered

brian_bailey at acm.org

30 Jul

Accuracy does not imply accuracy!!

It is always great to receive compliments after giving a presentation. While many people may say “good job” or “nice presentation”, it is even better when you get the kind of comments I received after a presentation I gave at the DAC workshop on virtual platforms in San Francisco on Wednesday morning. To put the comments in a nutshell, they said: “that could be one of the most important and insightful comments I have heard at DAC this year”. So what did I say that produced such comments?

The presentation was about putting timing into virtual platforms. A virtual platform is a software model of an actual or intended system that captures functional or behavioral aspects of that system. It normally involves the use of some form of abstraction such that adequate performance can be obtained at a desired level of accuracy. It is very common to hear people say that they need very accurate timing, as this is the only way they can really know what is going on in the system. In other words, they want the same level of accuracy and detail that they have been used to at the RTL level, but with the performance that can only be obtained by applying abstractions. So while it is true that greater detail provides more accuracy about a specific transaction, that is not the whole story. There are times when the addition of accurate timing detail actually makes the information that is obtained less accurate. Let me explain.

If you want to examine how two parts of a system interact with each other, you could do this at an untimed transaction level and gain some knowledge about the amount of traffic that passes between them. The good news is that you can obtain that information very quickly. If you were to add detailed timing, you would see exactly when that data is transferred and any contention that may exist on the communications channel. This will take a lot longer to simulate, so you may have to limit the length of the simulation, or the number of cases that are considered. But what if you included just a small amount of timing information? You can then get the best of both worlds. You can see the expected data rates and any periods of time when there may be congestion, and you can do it fast, which means that you can put enough data through to get a real picture of how the data patterns may change over time, and whether there are conditions under which extremely heavy load is expected. In short, you can get an accurate statistical picture of that communication, which will allow you to tune the communications, or the architecture. This data is far more accurate than what you would have gotten from the detailed timing simulation that could only look at transactions over a short period of time.
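In SystemC/TLM-2.0 terms, that “small amount of timing” is often nothing more than a delay annotation on each blocking transaction. The sketch below is my own, with an invented 10 ns access latency (and no error checking), but it shows how little is needed to start building that statistical picture:

    #include <systemc.h>
    #include <tlm.h>
    #include <tlm_utils/simple_target_socket.h>
    #include <cstring>

    // A loosely timed memory target: full functionality plus one coarse delay per access.
    struct SimpleMemory : sc_module {
        tlm_utils::simple_target_socket<SimpleMemory> socket;
        unsigned char storage[4096];

        SC_CTOR(SimpleMemory) : socket("socket") {
            socket.register_b_transport(this, &SimpleMemory::b_transport);
        }

        void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
            unsigned char* data = trans.get_data_ptr();
            sc_dt::uint64  addr = trans.get_address();
            unsigned int   len  = trans.get_data_length();

            if (trans.is_read())
                std::memcpy(data, &storage[addr], len);
            else
                std::memcpy(&storage[addr], data, len);

            // The "small amount of timing": one illustrative number per access is
            // enough to expose congestion trends without slowing the simulation down.
            delay += sc_time(10, SC_NS);
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };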

This goes back to the definition of ESL that was captured in the book “ESL Design and Verification”, which starts: “The utilization of appropriate abstractions in order to increase comprehension about a system…”

The key element in that definition is that there is no ONE right abstraction that tells you everything you may need to know. You need to consider the abstraction and match it to the type of comprehension that you are seeking to obtain. If you want to know how an implementation works – stick to RTL. If you want to understand how a system is performing, or to get a bigger view of system-level operations, you must adjust your view of accuracy accordingly.

So, thank you to those people who made the comments. It confirms that I am helping people in the industry understand the bigger issues.

—————————————————————————————————————-

Brian Bailey – keeping you covered


28 Jul

Is DAC dying?

People came to DAC this year wanting to be depressed. The industry is in bad shape, the organizing committee made mistakes in taking away free Monday – you name it, I have heard all the excuses for why they did not expect much. As I walked through the doors of the Moscone Center I was greeted by smiles from Kevin Levine, one of the organizers of DAC. If there were problems with this year’s conference, they were not showing in him. He quickly told me where I needed to go for my registration. There, I found more smiles, and it was difficult to feel so down.

Having registered, I walked into the North Hall. As in other years, it took me an hour to walk just a few feet, as everyone I knew came to say hi, to shake hands and to ask me a multitude of questions about verification, ESL, the books – you name it. This seemed just like a DAC from years gone by. I started to ask people how they were feeling, and many said that they too felt that people wanted to be depressed, but they were having a hard time not feeling just a little bit optimistic – that we had seen the bottom, that the people had come, that DAC was not dead. In fact, one vendor I talked to said that many people, and many companies, that had told him they were not coming were in fact there. Within the first few hours of the show he had already talked to more people than he thought he would talk with during the whole show.

I had lunch with another person who was excited about some of the changes going on in the industry. We talked about the new opportunities, the new challenges, and how so many people just did not seem to notice the changes going on around them – who did not understand why some companies were falling in significance while others seemed to be making huge strides. Out of adversity comes great change, and I feel that this time, coming out of the recession will be very different from the rest.

Sure, the exhibit halls are not so full in terms of booth space or people, and sure, some companies have cut back on the number of people they send, but this does not portend the end of DAC.

Then I started to put a clearer picture together. The doom and gloom is coming from the people who do not like, or cannot prosper from, the changes that are happening in our industry. The change is affecting the whole tool chain: the fabs, the semiconductor companies, the systems companies, the EDA industry, the press. There is so much change happening that some people probably just want the “Good Ol’ Days” to return – the ones they understood, the ones that were good to them. Progress is unrelenting, not just in the technology, but in the processes and methodologies that the industry uses. Some see that change as perilous; others see it as a great opportunity. Which camp are you in?

13 Jul

Innovations in formal verification

Last week, I received another press release from Jasper Design Automation that talked about advances they have made in a number of areas, including performance (a 2X improvement in speed and memory footprint), packaging (a common back end for several front ends) and technology (the introduction of quiet trace and more parallel operations). This is not the first major product announcement they have had in the last year. Towards the end of last year, there was the announcement of ActiveDesign – a product that really excites me. For full disclosure, I am on the technical advisory board of Jasper.

This got me thinking about the other companies that have formal tools. Mentor and Cadence have been active in this area in the past, as has Synopsys. Real Intent seems to have gained some stability in CDC, a product category also being fought over by many of the other formal companies. But I do not remember seeing any announcements from any of these about significant advances for quite some time. I went back and checked – nothing that I could find on the Mentor site for the past year, nothing substantial from Real Intent, nothing from Cadence. Calypto is still a relatively new entrant, but it is attacking a different market (sequential equivalence checking), which it still has to itself.

At the same time, Bill Murray, in a recent article on SCDsource, showed that usage of formal technologies has doubled over the past three years, which means that there is a real need for, and growing adoption of, these tools. Jasper CEO Kathryn Kranen talks on Deepchip about 100% bookings growth for Jasper in 2008 compared to 2007. So are the other companies just not talking about their technical advancements, or are they just trying to get what they have working properly? Are they giving up and leaving the market to the only one who appears to be innovating? Is Jasper becoming the formal gorilla?
