The ESL Edge

15 Sep

Refreshingly different MathWorks

In my last blog, I talked about model-based design and Ken Karnofsky commented on it. Ken is the senior strategist for signal processing algorithms at MathWorks. He said that I got it somewhat wrong, so I was eager to find out more from him. He basically told me that viewing model-based design as nothing more than an abstraction method misses some important aspects of it. He also said that both graphical and textual representations can be important. For example, MathWorks has several different languages, such as Simulink, Stateflow and MATLAB. The first two are graphical languages; MATLAB is textual. In many cases, users do not restrict themselves to a single language; they use whichever is best for the task at hand, and they are free to mix them in just about any way they please.

Putting that into perspective for hardware design, we do tend to restrict ourselves to one language, and for most people that is SystemC. While you can throw in some C and C++, it really doesn’t add anything new or different. Others prefer Bluespec, which they claim is better at describing abstract hardware because you don’t have to try to extract whatever parallelism happens to exist in a serial language. Ken made a similar point in our conversation.

I then brought up the subject of hardware synthesis – what we generally call behavioral synthesis. I was told that they really consider it to be more a form of code generation, and that you can use most of Simulink, Stateflow and a subset of MATLAB to get to an RTL description. He told me that they are seeing substantial growth in this area. I asked him how much MathWorks charges for this. While he was reluctant to provide an actual figure, I got enough information out of him to ascertain that it is an order of magnitude cheaper than what traditional EDA charges for behavioral synthesis. So, immediately this says to me that either their product is much simpler, the EDA companies are charging way too much, or there is something else going on. I will come back to this point shortly.

So, in my previous blog I talked about how in the hardware world interfaces are quite central to the refinement process, and Ken agreed. He said that typically they treat this as a modeling problem as well. They approach design flows by considering a particular application and the type of people who will need that kind of solution, and then prepackage the models and interfaces necessary to make it work for that group of people. That very much resonated with me. At DAC 2010, I gave a keynote address to a system-design workshop where I said that the people most likely to be successful with an ESL flow were those who restricted the problem in ways that made it soluble. I talked about how platform developers could create tools that only had to deal with their specific platform, and how FPGA vendors had constraints on the problem space that meant they didn’t have to solve everyone’s problems at the same time – and then it dawned on me that I had never considered MathWorks in this category. They are not making these restrictions because of the market they happen to be going after; they are making choices that enable them to create solutions for targeted groups of people in a very economic manner.

And here, I think, is the key. MathWorks is attacking the market in a way that means that while they cannot solve everyone’s problem, they can provide a completely adequate solution for well-defined groups of people. They are not trying to do it all at once, and some of their customers are not the ones attempting to create the biggest or most far-reaching SoCs. At the same time, many of their customers are dealing with very complex systems, including the environments in which they operate, which may need to be modeled as well. In short, they attempt to understand and address the issues of specific classes of customers rather than trying to create a completely generic solution. Over time, as more groups of customers are included, full solutions may emerge. This basically means that they take a much more pragmatic approach to their development. Ken mentioned that he believes the typical EDA notion of a virtual prototype is overly complex for many design problems and yet, at the same time, does not address the growing need to have analog represented at this level. He talked about the difficulty of really assessing a design without including things such as high-speed wireless or RF, which are part of the tradeoff space.

Another difference between the MathWorks approach to the synthesis problem and traditional EDA is that EDA still lacks much knowledge about the embedded space. EDA always thinks about taking software and mapping it into hardware, but when you start with abstract descriptions of the complete problem, targeting software is equally valid, and this also requires code generation. Again, by focusing on specific customer problems and the environment being targeted, they can deal with this and thus have hardware/software co-generation capabilities that everyone else still thinks are not possible.

Simplify the problem until you know how to solve it, and then charge the people you are targeting a reasonable price. This seems to be the MathWorks philosophy.

Brian Bailey – keeping you covered

P.S. I also learned that they are no longer The MathWorks. They have dropped “The” from their name.

01 Sep

Grappling with Model-Based Design

A couple of weeks ago I saw a press release go by. The title was “Faraday Accelerates the Development of SoCs with Model-Based Design”. They claimed that this helped them speed up simulation by more than 200X and reduce gate count by more than 50%. Not bad I thought. But then I stopped to think for a bit and asked myself – why would model-based design do that? Do I correctly understand what model-based design actually means?

First step – I went to Wikipedia to see what they had to say. There I found out that text-based tools are inadequate for the complex nature of modern control systems. They go on to say that “Because of the limitations of graphical tools, design engineers previously relied heavily on text-based programming and mathematical models. However, developing these models was difficult, time-consuming, and highly prone to error. In addition, debugging text-based programs was a tedious process, requiring much trial and error before a final fault-free model could be created…”.

In other words, because of the limitations of graphical methods we used text methods, but these are too difficult to use, so we should use model-based techniques. OK – so what are model-based tools? Well, it appears that “These challenges are overcome by the use of graphical modeling tools […]. These tools provide a very generic and unified graphical modeling environment, they reduce the complexity of model designs by breaking them into hierarchies of individual design blocks.” Now wait a minute! We have used hierarchy even when we did gate-level design, which was graphical. These things do nothing to reduce complexity. About the only useful thing I found there was “Designers can thus achieve multiple levels of model fidelity by simply substituting one block element with another.” Ah, now that is an idea we have aspired to for quite a while, but the problem has always been the way in which interfaces map in hardware. An interface in the software world can remain the same even when the abstraction of the model itself changes, but in the hardware world we have to refine the interfaces just as much as we refine the model contents.

Cadence recently took a stab at this problem in their (OK, I was one of the co-authors) book titled “TLM-Driven Design and Verification Methodology”. Here a subset of the OSCI TLM 1.0 standard was identified and then extended with some of the notions from TLM 2.0. The result was an interface description that could be used to connect models at the TLM level and was also synthesizable. The same interface that was used to connect functional blocks together could be used to connect a functional block to an interface model, such as an APB interface, and thus the interfaces physically got refined while, at the same time, the original functional interfaces would fade away in the synthesis process. Of course, there were some tricky bits along the way. For example, many blocks needed to know how the data was to be made available, and the use of simplistic interfaces did not allow such “knowledge” to be passed across the interfaces so that they could be optimized.
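To make the idea concrete, here is a minimal, purely illustrative C++ sketch – not the interface subset defined in the book – of how a single abstract interface lets the same producer talk to either an untimed functional model or a more refined, bus-aware one, so the surrounding blocks do not change when a model is swapped or refined:

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical transaction-level interface: callers only see a blocking put.
struct packet_if {
    virtual void put(uint32_t data) = 0;
    virtual ~packet_if() = default;
};

// Untimed functional model: consumes the data immediately.
struct functional_fifo : packet_if {
    void put(uint32_t data) override {
        std::cout << "functional model consumed 0x" << std::hex << data << "\n";
    }
};

// More refined model: same interface, but it mimics a bus-style transfer
// (setup phase then access phase), the kind of detail refinement adds.
struct apb_like_fifo : packet_if {
    void put(uint32_t data) override {
        std::cout << "setup phase: select FIFO register\n";
        std::cout << "access phase: write 0x" << std::hex << data << "\n";
    }
};

// The producer is written once against the interface and never changes.
void producer(packet_if& sink) {
    for (uint32_t i = 0; i < 3; ++i)
        sink.put(0xCAFE0000u + i);
}

int main() {
    functional_fifo fast;   // early, fast model
    apb_like_fifo refined;  // later, refined model
    producer(fast);
    producer(refined);      // same producer, refined implementation behind it
}
```

In the real methodology the refinement continues all the way down to synthesizable detail, but the principle is the same: the contract the blocks agree on is the interface, not the implementation behind it.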

I have also heard from the software world that synthesizing code where interfaces were involved, or OS calls were required, generally resulted in inefficient implementations for much the same reasons. The other problem is that software synthesis does not have many of the constraints necessary to allow for anything more than localized analysis and optimization, whereas in the hardware world we basically constrain what can be described so that such optimizations become possible.

So, going back to the press release. It is clear that what they are really describing are the gains that come from employing abstraction. They talk about how abstraction enables them to explore the design architecturally and that provides the increase in simulation performance. Architectural exploration can also explain why they had reduced gate counts. While a 50% reduction in gate count is certainly not typical of the figures I hear, it is possible.

In summary, I don’t think this has anything to do with model-based design. It has to do with the gain associated with moving to a higher level of abstraction, and it really doesn’t matter whether you do that textually, graphically, or with models – you are likely to see significant productivity and optimization gains. Long live whatever development process you prefer!

Brian Bailey – keeping you covered.

17 Aug

Am I Getting Old?


For several decades, the pace of technology development has continued to accelerate. Technologies that used to take years to see significant adoption now seem to catch on overnight, and the number of things that get integrated together keeps rising. But it seems equally important that everything we buy becomes obsolete as quickly as possible so that we will have to buy a new product every couple of years – just to keep up. We used to pity our parents when they could not understand these newfangled devices, and it is clear that our children tend to gravitate towards different things than we do at times. Maybe it is a sign that we are becoming the generation that will have pity piled on it before much longer. Maybe, maybe not, but I know that I still enjoy and appreciate much of it, even though sometimes I may be slower on the uptake than I used to be.

But there are a couple of things recently that I just don’t get. I cannot see the sense in them and I cannot understand how they are ultimately going to help me. The first of these is The Cloud. They promise to offer me unlimited processing power and unlimited storage. Big deal! Disk is so cheap that I can buy terabytes of storage for less than 10 cents per gigabyte, and it can be connected at gigabit-per-second speeds. Now yes, I still have a data-security concern in that if my house were to be destroyed, I would lose all of my data and backups, but that doesn’t make me want “The Cloud”. It makes me want a disk located somewhere else that I can access through my backup software. If and when that is not enough space, I can add another disk. I expect that disk would probably be at a friend’s house – someone whom I trust – but it could also have hardware security built into it so that all of the data is encrypted. Does such a thing exist? I haven’t been able to find such a product, but if anyone knows about it, please let me know. And as for unlimited processing power – what do I need that for? The amount of time it would take to get data there and back is probably more than I would save, especially since the most computer-centric things I do are photo processing and the like.

That brings me onto the second issue, and that is security and privacy. I do not want to give my data to an unknown third party who just tells me to trust them. How do I know that their security can be trusted? How do I know that my data won’t land in the hands of someone else? How do I know that someone won’t, one day, be mining that data – anonymously of course – and providing that information to people I don’t even know, for who knows what purpose! Today was the same as usual, with news of another hacker getting into an established company and messing with their data. Why is data in The Cloud going to be any more secure than financial transaction data, or personnel data?

But it is not just about data security – the security issue is beginning to pop up everywhere. For example, GM vehicles contain OnStar. I do not believe that I can trust OnStar, or at least GM has not done enough to convince me that the system is secure. I have seen the reports about the system being used to track stolen vehicles. I have seen it being used to stop a car that police were pursuing – and who knows what else they can do. Now, if we also start to get car-to-car networking in place, what is being done to ensure that nobody can interfere with my car? I have decided that until the car companies can persuade me of the efficacy of their security, I will not buy a car equipped with that kind of capability. It is not that I don’t understand it, it is that I don’t trust it.

So maybe that is why I will get left behind in the technology race. I am no longer willing to have blind faith in the technology being produced, especially when it concerns my safety and privacy. Younger people don’t seem to even think about these issues, but then they also don’t seem to worry about what they post on Facebook, or that what they do today will be there for an eternity, for everyone to see, and may come back to bite them later in life. Kids have no fear, teenagers think they are invincible, and us old folks just want to preserve what we have – but privately.

Brian Bailey – keeping you covered.

04 Aug

Amazing but True

I am often amazed that any chips actually work. We all know that verification by simulation is based on a sampling of possible stimulus, and we also know that the number of samples we provide is a tiny, tiny fraction of all of the possible stimulus patterns. Even so, verification is taking up an ever larger percentage of our time. We use things such as functional coverage to try to steer some of those samples towards particular areas of the design, but as system complexity grows and the amount of concurrency increases, the fraction of the total stimulus space that is actually covered shrinks every day. Some people are also aware that most simulation never actually checks that things work correctly; it just looks for obvious signs that nothing has gone wrong. As the sequential depth of designs increases, constrained random is also less likely to hit many interesting cases, so more companies are being forced to spend a greater percentage of their time on directed testing. So why is it that any chips actually work?

So first of all, there are the cases where the chip doesn’t fully work, but it works well enough. This can be at two different levels: either the problems are fixable in software (problem avoided), or the functionality is reduced but still good enough to ship the product (reduced functionality).

One of the big saving graces is that few SoCs use all of the available functionality. The system places constraints that prevent many capabilities from ever being activated, which means that many problems can exist with no effect. Just think of an IP block – perhaps one that comes from a third party. It is possible that it has been used in many designs in the past, and yet as soon as you integrate it into your design, you find problems with it and start to question the quality of the block. The same is true with software as well. I can remember back to the days when I actually developed software (yes, a long time ago). Each and every time we got a new customer, we learned to expect a rash of new bugs being filed and the customer getting upset about our perceived quality levels. Those bugs came because they were using the product in different ways than the previous customers, and as soon as that first batch of bugs was cleared, they were unlikely to find many other problems. Software often constrains the functionality that can be activated in the hardware, in addition to the hardware constraints, although if software updates are possible, they can cause problems to be found in the hardware at a later date.

Let’s consider a specific example that I have heard other people mention. If we have a state machine, it is clear that we need to verify that we can reach all of the states, make the right transitions between the states, and that the right things happen in each state. If we add a second state machine, we also need to verify the same for that one. If the two state machines execute concurrently, do we then need to verify all combinations of each test in each state machine? In other words, do we have 2X the tests, or the square of the number of tests? Reality is probably somewhere in between, based on how one machine can influence the other and produce different outcomes. But how does that get formalized?
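As a back-of-the-envelope illustration (two hypothetical state machines, not any particular design), the sketch below enumerates the combined state space: verified independently they expose N + M coverage points, but run concurrently the space is up to N × M, and the interesting bugs tend to live in the cross terms.

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Two hypothetical, independent state machines.
    std::vector<std::string> fsm_a = {"IDLE", "REQ", "WAIT", "DONE"};
    std::vector<std::string> fsm_b = {"EMPTY", "FILL", "FULL"};

    // Verified in isolation: 4 + 3 = 7 states to cover.
    std::cout << "independent coverage points: "
              << fsm_a.size() + fsm_b.size() << "\n";

    // Running concurrently: every pairing is potentially a distinct scenario,
    // 4 * 3 = 12, and the count multiplies again with each machine added.
    std::size_t combos = 0;
    for (const auto& a : fsm_a)
        for (const auto& b : fsm_b) {
            std::cout << "(" << a << ", " << b << ")\n";
            ++combos;
        }
    std::cout << "concurrent coverage points: " << combos << "\n";
    // How many of these pairings are actually reachable depends on how the
    // machines interact -- which is exactly the part that is hard to formalize.
}
```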

So, in the verification world it seems as if we have never been able to get a handle on the real space that needs to be verified. How much time do we spend performing verification at the block level that could never be activated once integrated? How many actual or implied constraints exist in the system that carve out great chunks of the possible verification space? For most chips to work, it would seem as if we must be doing some of this without really thinking about it, but one would think that after doing this stuff for 40 or more years now, we would have found a way to formalize this. How much time do we waste doing simulations that don’t give us any useful information? How much more efficient could verification be?

If anyone has a better handle on this problem, I would love to hear from you. If there is any promising research that can help guide us towards the important test cases to be run I am sure we would all be thankful. Until then it would appear that we enjoy gambling and we have had a pretty good run. When does our luck run out? Do you think it is luck or a sixth sense on the part of the verification engineers?

Brian Bailey – keeping you covered

21 Jul

To the Virtual Prototype and Beyond

Ah, the joys of summer vacations and scheduling around them. I was disappointed that Synopsys was to have a press release while I was on vacation – especially when it was on one of my favorite topics – virtual prototypes. To make matters worse, by the time I got back, their key guy was about to start his vacation – but Mark Serughetti agreed to spend some time talking to me and we had a great chat.

Now, given that everyone else will have already regurgitated their release and the plentiful other pieces of information that Synopsys normally supplies with their releases, I will not bore you with that, except to put a context in place. Synopsys bought several companies recently, including CoWare and VaST, and a few years before that, Virtio. Virtualizer is the bringing together of those technologies into a single product that focuses on analysis, debug and flow integration. All models developed for those prior products are fully compatible with this product, but going forward the preferred modeling language is SystemC TLM, although any language can be used under the hood for even more performance.


They have identified two primary user groups for the product – those who create models and platforms and those who use them. Different capabilities are necessary for those two groups. They supply a library of models to go along with the product, plus some reference designs that can help people get a prototype up more quickly than starting from scratch.

When a virtual prototype is developed for SW development, most people today attempt to use it in the same way they would use a development board. They attach a debugger to it and start doing functional verification of the SW. But a virtual prototype is capable of doing so much more, and it will take time for the methodologies to change. In addition, the prototype is the place where the HW and SW come together, and this provides a whole new set of things that people could do. For example, power is not dictated by either HW or SW individually; it is affected by the way they interact with each other. The analysis capabilities that Synopsys is providing concentrate on this type of need and allow existing methodologies (compilers, debuggers, etc.) to be used for the functional verification.
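As a toy illustration of that last point (entirely made-up energy numbers and a hypothetical peripheral, nothing from Virtualizer), the same hardware can consume very different amounts of energy depending purely on how the software drives it:

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical per-event energy numbers for one peripheral block (nJ).
constexpr double kBusTransfer = 2.0;   // energy per word moved over the bus
constexpr double kWakeup      = 50.0;  // energy to bring the block out of sleep
constexpr double kIdleCycle   = 0.1;   // cost per cycle the block stays awake

// Software strategy A: wake the block for every single word.
double word_at_a_time(std::uint32_t words) {
    return words * (kWakeup + kBusTransfer + kIdleCycle);
}

// Software strategy B: wake once, burst the whole buffer, go back to sleep.
double burst(std::uint32_t words) {
    return kWakeup + words * (kBusTransfer + kIdleCycle);
}

int main() {
    const std::uint32_t words = 1024;
    std::cout << "word-at-a-time: " << word_at_a_time(words) << " nJ\n";
    std::cout << "burst transfer: " << burst(words) << " nJ\n";
    // Identical hardware, identical data moved -- the difference comes
    // entirely from the software's access pattern, which is why power has
    // to be analyzed where HW and SW meet.
}
```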

So why does it take so long for methodologies to make significant change? Mark explained that up until now, the big issue has been about the availability of models, and the ease with which a platform can be put together. Many of those problems have now been mitigated which makes it more likely that more companies will start using prototypes. But there are also secondary issues such as group communications, roles, and responsibilities in the user organizations that slow down adoption, especially for the new types of capabilities we are now beginning to see. Who creates and maintains the models, what models do they need to create and what levels of abstraction are going to be used? Companies are now gaining enough experience to decide on these issues, and to identify what they expect of the platform. Different users have different needs and a way has to be found to integrate those varying needs in an economic manner.

I asked Mark about the role he thinks Synopsys has in defining new flows for the system level. This includes things such as the definition of good verification practices for this level, how hybrid prototyping needs to be set up and used, and the impact that this can have on things such as how a model is transformed over time. He said that the customers are basically the ones who need to decide what flows work for them, and the tool must support those flows rather than the tool defining the flow. Only when there is significant alignment in the industry does it make sense to bring this into a standardized flow with added forms of automation. I think he is spot on with this, and it is part of the learning curve associated with a new abstraction, moving into ground that has never been covered before. We are where RTL methodologies were about 20 years ago, and at this point we are all in learning mode. The important thing is to be open, flexible and to listen. I think that is exactly what Synopsys is doing today and they are taking the right approach, but as an industry we have to work on moving forward as quickly as we can. We need a system-level verification methodology that includes how stimulus is generated, how coverage is defined and all that good stuff, but across multiple levels of abstraction using many different models. We also need to consider things other than functional verification now that capabilities for power and performance have been introduced, as well as functionality being distributed amongst hardware and software. Treating them as separate entities is only part of the picture.

Brian Bailey – keeping you covered

07 Jul

Are you Positive about that Verification Approach?

There have recently been several articles published that relate to System Level Virtual Prototypes (SLVPs) and the use of software to verify them. For example, Synopsys’ Achim Nohl and Frank Schirrmeister penned an article titled Software Driven Verification. In it they say “…more and more designers not only use embedded software to define functionality, but also use it to verify the surrounding hardware.” They go on to say “Such a shift is quite fundamental and has the potential to move the balance from today’s verification, which is primarily done using SystemVerilog, to verification using embedded software on the processors.” They then show the various roles that the virtual prototype can play in the verification process. It is a well-written paper, but it leaves out an enormous issue that must be addressed – one that I have written about in the past, but clearly need to come back to.

When we think about the way in which verification is performed today, we start with individual blocks written in RTL. We create testbenches for these and attempt to get 100% coverage of all of the functionality that we expect that block to perform. There may be some assumptions that we make about how the block is to be used, but in many cases the developer does not really know about them. An extreme case of this is the development of a piece of IP intended to be sold to other companies. Here the way in which the block is to be used is totally unknown, except that its usage should conform to a set of documents provided along with the IP block or to an industry standard. For a general-purpose interface block this may include many thousands of configuration register bits, different ways in which a bus can communicate with it and so much more. One way to think about it is that there are few constraints on the inputs of the block, and this means that all possible functionalities need to be verified.
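To put a rough number on that (purely illustrative figures, not from any particular IP block), even a modest set of independent configuration bits produces a space no amount of simulation can sample meaningfully, while integration constraints collapse it dramatically:

```cpp
#include <cmath>
#include <iostream>

int main() {
    // Hypothetical interface IP: 64 independent configuration bits.
    const int config_bits = 64;
    const double configurations = std::pow(2.0, config_bits);   // ~1.8e19

    // Wildly optimistic throughput: one configuration fully verified every
    // second, around the clock.
    const double seconds_per_year = 3600.0 * 24.0 * 365.0;
    std::cout << "configurations: " << configurations << "\n";
    std::cout << "years to touch each once: "
              << configurations / seconds_per_year << "\n";     // ~5.8e11 years

    // Integration changes the picture: if the SoC ties off, say, 48 of those
    // bits, only 2^16 = 65536 configurations remain reachable.
    std::cout << "after constraining 48 bits: " << std::pow(2.0, 16) << "\n";
}
```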

Now, as that block is integrated into a sub-system, the ways in which that block can be used become more constrained; certain aspects of it become fixed, and that makes certain pieces of functionality in the block unreachable. Thus for one particular instantiation of that block, some of the unit testing that was done was unnecessary. Had we known about the environment it was to be used in, we could have saved that effort. Unfortunately, if that block is used in several locations in the design, each instantiation may use a different subset of functionality, and thus we would have to ensure that the full union of functionality was covered. Even more so, if we hope to re-use that block in a future design, it is probably just worth verifying the whole thing, because making assumptions about the usage environment can lead to disasters such as the Ariane 5 rocket.

Now let’s move up to larger systems, those containing a processor. Several years back there was a debate about whether a microprocessor model should be included in the design when performing verification. The argument was that if you left the processor out, the simulation would run faster and you could pump in whatever bus cycles you wanted to fully verify the design. This could be fed by constrained random pattern generation, and it would be easier to get to verification closure. The other side of the argument was basically that the use of an ISS model does not slow down verification in any perceptible way and that you can write tests in software much more quickly, or even use some aspects of the actual software, such as drivers. This means that you start to see real-life traffic rather than make-believe bus operations. So which is right? Actually they are both right, and this brings us back to the major point. When you execute actual software you have constrained the design in that it only needs to perform the exact functionality necessary to execute that software. If you change the software, you may change the way in which it uses the hardware, and you may now execute cases that you did not execute before. There have been many cases where only certain versions of software will run on specified versions of hardware. This is a result of only having done the system verification using software. Simply put, we are under-verifying the hardware.

On the other hand, if we take the processor out of the equation and drive bus cycles, we may be verifying many conditions that could never happen in real life, because the processor is incapable of driving that sequence of operations with that timing. So this is a situation where we are over-verifying the hardware. But what is the happy middle ground? Well, the industry has not come up with an answer to that yet.

SLVPs are a game changer in many ways. They can take what used to be a bottom-up verification process and make it a top-down verification process. We can start by verifying that the abstract model of the hardware is capable of running the software, and thus we can ensure that the overall specification is correct before we even begin implementation. Now that is something we have never been able to do before. We can substitute abstract blocks within the SLVP with implementation blocks to ensure those blocks match the requirements as defined by the virtual model – but we can never stop doing block-level verification. If we do not relax the constraints at each step in the verification process, then we are likely to have software updates suddenly expose hardware problems. Of course, if you never intend to modify the software once the product has shipped, then you can avoid all block-level verification, as the system will already have been sufficiently verified for that software – but in most cases that is not true.

The terms I use, which may not be ideal, are these: when we verify that a system actually performs a task as defined by the specification, I call that an act of positive verification. So running actual software is positive verification. When we perform verification that attempts to ensure that all bugs are removed from a block or system, I call that negative verification. In real life, positive and negative verification have to be balanced, and that balance point depends on the type and usage of the system. Do you have better terms for these? If so, please let me know.

Brian Bailey – keeping you covered

15 Jun

Don’t listen to the experts. They have it backwards.

Let me take a stroll down memory lane. The year is 2009 and it is DAC. A panel put together by Lucio Lanza, a long-time EDA and semiconductor investor, came to the conclusion that chips are costing $50-100M to develop and that this is too much. They said the return figures for EDA companies are even worse. They also said that if EDA could reduce those costs to $5-10M there would be an industry rebirth.

Also, remember that Gary Smith said (I am not sure if it was in 2009 or 2010) that Cadence was no longer the top gun in EDA. The reason for this is that, for the first time in history, the bleeding-edge customers were spending more than the early majority. Cadence had traditionally served this early-majority market, but many of the top-tier companies had now realized that by spending more on tools, and getting the best tools, they could compete better. Simply put, buying second-rate tools made you a second-rate company, and there was little money to be made there.

Now we can fast forward to two events at DAC 2011. First, Gary Smith gave his address at DAC 2011 and said that the EDA industry is failing to deliver the necessary productivity. Part of the problem here is that EDA is totally infatuated with 40nm, 28nm, 20nm and smaller – things only of interest to those very few bleeding-edge customers. EDA no longer has an interest in the early majority, which may have disappeared, and the late majority can just make do with what they already have.

The second event of interest at DAC was another Lucio Lanza panel, which again said that the cost of building chips is around $100M and that nobody can afford to be a startup with these costs. They also concluded that EDA needs to fix the productivity problem, and they complained that third-party IP is not the magic wand that they had hoped for. On that panel, Behrooz Abdi (EVP at NetLogic Microsystems) made the prediction that in 5 to 10 years the top design houses will be chipless and will consist of 10-20 software engineers and system engineers.

With that as a backdrop, I say CODSWALLOP!!! (For those not in the know – that is English for nonsense.) It is just too easy to say that and spread the blame to someone else. Let’s look at a few examples. The first is Nokia. Once they were the largest mobile phone company; they almost defined what a phone should be. They used to design their own chips, their own IP, their own software. They stopped designing chips because they believed that they could produce even more value by concentrating on the system and software and leaving the hardware to the experts. Now look around you – who has a Nokia phone? They are no longer a top-tier player and have even been pushed out of many of the low-cost markets as well. So much for just being a software and systems company.

Now let’s look at Apple. They bought existing hardware or contracted out pieces of the design and built first-rate systems through good design. Having been hugely successful, they are now reducing costs by taking on more of the vertical aspects of the design, including producing their own chips. So we have the companies that the pundits believe represent the future diving off a cliff, while the opposite of what they predict is being demonstrated by the most successful design house in this country.

Here is how it does make sense. First of all, there are way too many companies out there making me-too products. They are all chasing the leader and coming out with repetitive, crap products. Design houses stopped designing a long time ago (with a few exceptions); they just regurgitate. Of course a VC is not going to give a startup money when the track record of their larger brethren is so poor – they are likely to squander it, and even if they are successful, the pack will hunt them down if they show any signs of success. But you don’t need fancy new chips to be the best on the block – Abdi is kind of right on that, but it is not 5-10 years in the future – it is now. A company that truly designs a good product will succeed. They do not need to be on the latest and greatest hardware, or have their own chips – they just need to be good designers and come up with fresh products that people want to buy. If they succeed at that first step, and if they can show they were not a one-hit wonder, then maybe they can make their products better by using custom chips, and maybe they don’t need VC money to do it. Do they need to be on the latest and greatest technology that will cost $50-$100M to design? NO – older technologies will be fine for most chips and products. Only if they hit it really, really big will it be worth going to the newfangled technologies, and all of the additional headaches that come with them.

So, my message. Design houses – stop chasing each other and building crap products, and start putting some thought into design. Be the best at what you do; don’t believe that buying the best or the smallest will somehow make you better. A good design is a good design. A bad design can be designed faster with the best tools and cost more with the latest technologies, but it is still a bad design. EDA companies – stop concentrating all your development money on the new technologies. Fix the ones you have and improve the productivity for the masses. Spend more time and attention on FPGA-related design technologies that will enable early products and prototypes to be designed quickly and cheaply, and then sell them the more expensive tools to migrate those designs onto silicon.

Brian Bailey – keeping you covered

02 Jun

Design Evolution locks you into Local Minima

Last week I was helping a friend put on an art exhibition in Columbia, CA, a preserved gold-mining town from the mid-1800s. I wrote about that in my personal blog. In the few hours that I had free, I went to nearby Jamestown and its historic short-line railway, which has been turned into a state park. It is home to one of the most famous steam engines in the US, having appeared in many films – including Rawhide, Back to the Future III and more than 200 others – since 1919. But it was another engine that really caught my attention. The drive system was completely different from any other steam engine that I had ever seen before – and that got me thinking.

In England, all of the steam engines have a common basic design. The boiler generates the steam that is led to a cylinder that lies horizontally. Some of the larger engines have a couple of cylinders there, but rather than being for extra power, I believe they are to enable reversing. If you need more power, you create bigger cylinders. The pistons transfer the power to a set of driving wheels. This is the same common design used by most of the American steam engines as well.

[Photo: the engine with the unusual drive system]

But the one shown here is different. There are three cylinders each mounted vertically.

[Photos: the three vertically mounted cylinders]

They each contribute to the turning of a drive rod, and the wheels are in turn driven by a worm gear.

[Photo: the worm gear]

In many respects this is the drive system of a car: multiple cylinders turning a crank, which is attached to the drive wheels through a worm drive in the differential. It appeared to me that it makes some very different design tradeoffs. It would be easy to add additional cylinders, but are the worm gears the weak spot in the design? How would you go about analyzing the tradeoffs? Back then they had no CAD tools to help them make such decisions, which is why it would make sense to do design by incremental refinement. But does this technique only allow you to find a local minimum – the optimal set of refinements on a sub-optimal solution?

This is something that many people are beginning to realize when they start using ESL tools. In my book “ESL Models and their Application” I provided a specific example where a company was designing a wireless receiver using a MIMO approach. There are several ways in which this can be implemented, including performing a QR decomposition or using eigen equations. They had been using the eigen equations in the past and expected to continue using them for future designs. With ESL tools available to them, they started to explore the architectural space, and that seemed to confirm that they had made the right decision – but then they found a different micro-architecture that totally favored the QR decomposition approach, producing a smaller design that operated at a higher frequency. This was not a small improvement; it was a 38% area savings and an 11.5% performance improvement. Without the ability to perform this higher-level analysis they never would have started to use a different approach. I wonder how many other designs are like that – possibly including the design of the steam engine?
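For readers who have not bumped into it, here is a minimal sketch of a QR-based receive path, assuming the Eigen C++ linear-algebra library (no relation to the eigen-equation approach mentioned above) and a made-up 4x4 channel. It is only meant to show why the QR structure appeals in hardware – one decomposition followed by a regular triangular back-substitution – not to reproduce the design from the book:

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    using Eigen::MatrixXcd;
    using Eigen::VectorXcd;

    // Hypothetical 4x4 MIMO channel H and transmitted symbol vector x.
    MatrixXcd H = MatrixXcd::Random(4, 4);
    VectorXcd x = VectorXcd::Random(4);
    VectorXcd y = H * x;                 // received vector (noise omitted)

    // QR-based zero-forcing detection: factor H = Q*R once, rotate the
    // received vector by Q^H, then recover the symbols with a triangular
    // back-substitution -- a very regular structure to map into hardware.
    Eigen::HouseholderQR<MatrixXcd> qr(H);
    VectorXcd x_hat = qr.solve(y);

    std::cout << "detection error: " << (x_hat - x).norm() << "\n";
    return 0;
}
```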

Brian Bailey – keeping you covered

19 May

Stimulus done right

So, we are nearing the end of the biggest stimulus spending package that this country has seen, and I wonder what we really have to show for it. A few filled potholes, perhaps a junction or two that flows a little easier than it once did. But in a few years' time all of that will be forgotten and the potholes will be back. I believe that this country’s greatness in technology was the result of two previous stimulus plans, namely the space race and the Cold War. Both of these fueled spending in technology, created a sense of need or pride within the country, and inspired a whole generation of kids to want to get into technology. We lived off the back of that research for a couple of decades or more, and only now is the rest of the world catching up. The total return on that initial investment must have been huge.

In Portland, there was a proposal on the ballot for one of the largest increases in property taxes in history, the money to be used to repair schools – not to improve education in any way. I know of many teachers who are being laid off at the moment, or not having their contracts for the next year renewed. I hear so many stories about education in this country ranking low and going lower compared to other countries. How then do we expect to earn our way out of the debt that we are taking on? Without a return on investment, stimulus is worthless, just like any other investment. Well, in fact it is worse than that: it is gratuitous spending that we cannot find a way to pay for.

So, what would be a good stimulus program for this country? There is one area that I can think of that inspires kids across the country and would serve to set them in the right direction for their careers. I am talking about robots – designing them, racing them, solving challenges to make them do certain things, and, OK, for the more violent amongst us, robot destruction derbies or robots that fight. These challenges call for creativity, technical skills, teamwork – all of the things that I don’t see in many of the activities the kids spend their time on today, such as video games. I believe these kids need inspiration, and this would be one way to provide it.

About a month ago, National Instruments released a new version of its LabVIEW software, specifically tailored to LEGO MINDSTORMS and high-school classrooms. Along with that, they made available several lessons that are accessible online.

Taken from their press release: “LabVIEW for LEGO MINDSTORMS completes the National Instruments and LEGO Education ‘robotics for all ages’ learning platform,” said Stephan Turnipseed, president of LEGO Education North America. “We now can deliver a framework of age-appropriate, hands-on learning technology and curricula that continuously progress with student skill level and learning objectives, from elementary all the way through university.”

And at ESC, Freescale was demonstrating their robot kit, which contains a Tower System Mechatronics Board supported by the Robot Vision Toolkit and RobotSee (a simple language with the power of C). The board has a 3-axis accelerometer and a 12-channel touch sensor.

FIRST (For Inspiration and Recognition of Science and Technology) was created by Dean Kamen in 1992 and is the world’s leading high-school robotics competition. This includes the original FIRST Robotics Competition (FRC) and the newer FIRST Tech Challenge (FTC) for ages 14–18, the FIRST Lego League (FLL) for ages 9–14, and Junior FIRST Lego League (Jr.FLL) for ages 6–9. By 2010/11 there were 245,000 students participating, and the program provided more than $14M in college scholarships. It takes 90,000 volunteers to make the program work. We have about 50 million kids in schools at the moment, and while I am not saying that this is a program for everyone, I am sure it could be expanded to include many more than the 0.5 percent it reaches today.

I want to steal a graphic from the FIRST webpage shown here:

Now that is what I call stimulus in education, and to me this country should be putting a lot more money into programs such as this. How many potholes get filled for $14M? Not many! NYC has a budget of $190.4M for paving, but because of a bad winter they are adding another $2M just to fill a few more potholes.

So what do you think would be a good stimulus program? Share your ideas here.

Brian Bailey – keeping you covered

05 May

The simulator is no longer enough

It seems as if all of the major EDA vendors have now come to the realization that RTL simulation is just not a total solution anymore. Gone are the days when it was the one tool used for all aspects of verification, from implementation through integration to system-level tests. For a while, the growing problems associated with simulation were covered up by the race to add constrained random generation capabilities, and then by the sideshow of SystemVerilog. SystemC was seen by some of the vendors as a no-value add-on that they really had no interest in promoting because it lowered the value of the simulator – they were competing with free. But now all three of the majors are adding different techniques to capture more of the verification problem throughout the flow, although perhaps in different ways.

Synopsys has placed significant effort into FPGA-based prototyping with a couple of acquisitions (Hardi, ProDesign) a few years back, plus the software flow to drive it through the acquisition of Synplicity. Then there was a raft of acquisitions that propelled it into the ESL space (CoWare, VAST, Virtio) and since then, Synopsys has been integrating these together along with its IP offerings into a more modern flow.

We have heard from Cadence, with their announcement at ESC on the one-year anniversary of EDA360, exactly how they intend to respond in this area. They have formally announced their introduction of a virtual prototyping solution and an FPGA prototype. In their case, they are using a lot of the technology from their emulation business to drive the FPGA prototype, and their unique pitch is that once the design has been readied for emulation, it is also ready for prototyping. I spoke about this a few days ago with Ran Avinun, Cadence's director of marketing for the System Design and Verification segment. He said that if each technology adds enough value, then having solutions that are fractured is fine, but the fracturing today is so large that it has a huge impact and results in technologies not being adopted. This integration between emulation and prototyping goes further than just the software chain, in that both solutions will share things such as speed-bridges and analysis software. The FPGA prototype has been in production since December, but in the releases so far there is not much technical detail about the capacity, performance or extensibility of the solution. I am sure more will become known about this over time.

The other piece of the puzzle is the virtual prototype. This is essential because it enables software to be designed, developed, integrated and verified before the hardware design has been completed, or at least stabilized. RTL simulation, emulation and prototyping all attack the problem after the design has been completed, or is close to it. This part of the Cadence solution is still in the early-adopter phase, and a full release is expected later this year.

So where is Mentor? Their press release today talked about the integration of simulation with formal technologies and with intelligent testbench technology. Neither of the other two EDA companies has been pursuing intelligent testbenches. Vista is Mentor’s virtual prototyping platform, but unlike the other companies' offerings, their prototype is geared towards the hardware architecture rather than being a platform for software. They also have a whole division working on the embedded-software part of the flow. I have heard nothing about them working on a prototyping solution, but maybe they have something up their sleeves. Several years ago I was asked who was the furthest along in putting together an ESL solution. My answer at that time was: Mentor has more pieces of the puzzle than anyone else, but is the least likely to integrate them into a comprehensive flow. That is looking truer today than ever before – or maybe Mentor just sees the future differently.

So, it is clear that all three of the major companies acknowledge that the software portion of the system is not only becoming more important, but that it presents a significant revenue opportunity for them. Each has put in place a different strategy for how to capitalize on it, and I am sure this is just the beginning of their efforts in this area.

Brian Bailey – keeping you covered.
http://brianbailey.us/blog
