
Trends in Hyper-Spectral Imaging, Cyber-Security and Auto Safety

April 25th, 2016

Highlights from SPIE Photonics, Accellera’s DVCon and automotive panels focus on the semiconductor industry’s changing role in emerging markets.

By John Blyler, Editorial Director

Publisher John Blyler talks with Chipestimate.TV executive director Sean O’Kane during the monthly travelogue of the semiconductor and embedded systems industries. In this episode, Blyler shares his coverage of two major conferences: SPIE Photonics and Accellera’s Design and Verification Conference (DVCon). He concludes with the emphasis on risk in automotive electronics that emerged from a recent market panel. Please note that what follows is not a verbatim transcription of the interview. Instead, it has been edited and expanded for readability. Cheers — JB

O’Kane: Earlier this year, you were at the SPIE Photonic show in San Francisco. Did you see any cool tech?

Blyler: As always, there was a lot to see at the show, which covers photonic and optical semiconductor-related technologies. One thing that caught my attention was the continuing development of hyperspectral cameras. For example, the start-up SCiO is prototyping a pocket-sized molecular scanner based on spectral imaging that can tell you about the molecular make-up of your food.

Figure 1: SCiO Molecular scanner based on spectral imaging technology.

O’Kane: That sounds like the Star Trek Tricorder. Mr. Spock would be proud.

Blyler: Very much so. I talked with Imec’s Andy Lambrechts at the Photonics show. They have developed a process that allows them to deposit spectral filter banks covering both the visible and near-infrared range on the same CMOS sensor. That’s the key innovation for shrinking the size and – in some cases – the power consumption. It’s very useful for quickly determining the health of agricultural crops. And it’s all thanks to semiconductor technology.


Figure 2: Imec Hyperspectral imaging technology for agricultural crop markets.

O’Kane: Recently, you attended the Design and Verification Conference (DVCon). This year, it was Mentor Graphics’ turn to give the keynote. What did CEO Wally Rhines talk about?

Blyler: His presentations are always rich in data and trend slides. What caught my eye were his comments about cyber security.

Figure 3: Wally Rhines, CEO of Mentor Graphics, giving the DVCon2016 keynote.

O’Kane: Did he mention Beckstrom’s law?

Blyler: He did. Soon, the Internet of Things (IoT) will extend the need for security to almost everything we do, which is why Beckstrom’s law is important:

Beckstrom’s Laws of Cyber Security:

  1. Everything that is connected to the Internet can be hacked.
  2. Everything is being connected to the Internet.
  3. Everything else follows from the first two laws.

Naturally, the semiconductor supply chain wants some assurance that chips are resistant to hacking. That’s why chip designers need to pay attention to three levels of security breaches: Side-Channel Attacks (on-chip countermeasures); Counterfeit Chips (supply-chain security); and Malicious Logic Inside the Chip (Trojan detection).

EDA tools will become the core of the security framework, but not without changes. For example, verification will move from its traditional role to an emerging one:

  • Traditional role: Verifying that a chip does what it is supposed to do
  • Emerging role: Verifying that a chip does nothing it is not supposed to do

This is a nice lead-in to safety-critical design and verification. Safety-critical design requires that both the product development process and the related software tools introduce no potentially harmful effects into the system, the product, or its operators and users. One example of this is the emerging certification standard in the automotive electronics space, namely ISO 26262.

O’Kane: How does this safety standard impact engineers developing electronics in this space?

Blyler: Recently, I put that question to a panel of experts from automotive, semiconductor and systems companies (see Figure 4). During our discussion, I noted that the focus on functional safety seems like yet another “Design-for-X” methodology, where “X” is the activity that you did poorly during the last product iteration – requirements, testing, etc. But ISO 26262 is a rigorous, risk-based safety standard for future automobile systems – not a passing fad.


Figure 4: Panel on design of automotive electronics hosted by Jama Software – including experts from Daimler, Mentor Graphics, Jama and Synopsys.

Mike Bucala from Daimler put it this way: “The ISO standard is different than other risk standards because it focuses on hazards to persons that result from the malfunctioning behavior of EE systems – as opposed to the risk of failure of a product. For purposes of liability and due care, reducing that risk implies a certain rigor in documentation that has never been there before.”

O’Kane: Connected cars are getting closer to becoming a reality. Safety will be a critical issue for regulatory approval.

Blyler: Indeed. Achieving that approval will encompass all aspects of connectivity – from connected systems within the automobile to other drivers, roadway infrastructure and the cloud. I think many consumers tend to focus only on the self-driving and self-parking aspects of the evolving autonomous vehicle.

Figure 5: CES2016 BMW self-parking connected car.

It’s interesting to note that connected-car technology is nothing new. It has been used in the racing industry for years at places like Sonoma Raceway near San Francisco, CA. High-performance race cars constantly collect, condition and send data throughout different parts of the car, to the driver and finally to the telemetry-based control centers where the pit crews reside. That is quite a bit different from the self-driving and parking features of consumer autonomous vehicles.

Figure 6: Indy car race at Sonoma Raceway.


Fit-for-Purpose Tools Needed for ISO 26262 Certification

April 19th, 2016

Both the product development process and third-party tool “fit-for-purpose” certification are needed for compliance with automotive ISO 26262.

By John Blyler, Editorial Director

Recently, Portland-based Jama Software announced a partnership with an internationally recognized ISO 26262 automotive testing body to obtain ISO 26262 “fit-for-purpose” certification. This accreditation will assure automotive OEMs and suppliers that the workflows they follow to define, build and test automotive-related products in the Jama tool suite meet critical functional safety requirements.

When asked the name of the testing body issuing the “fit-for-purpose” certification, Jama co-founder Derwyn Harris replied that the well-known organization could not be named until the certification was issued. Further, he emphasized that the certification was less for the designers and more for the compliance folks who define the process and obtain their own certification.

“This is the big difference between “fit for purpose” and having an actual certification. We will NOT be ISO 26262 certified,” he explained.

So how exactly does this certification help? Customers seeking ISO 26262 certification must make sure that the tools they use, and the use cases within those tools, are evaluated to determine a Tool Confidence Level (TCL) for each workflow. The TCL is a function of the Tool Impact (TI), which indicates the possibility that a tool malfunction could introduce or fail to detect an error in the system being developed, and the Tool Error Detection (TD), which measures the likelihood that such a tool problem will be detected and a suitable workaround found.

In simple words, tool vendors must be sure their software process is fit-for-purpose for functional safety development in alignment with ISO 26262 (functional safety standard for passenger vehicles).

For example, let’s assume that a company uses Jama software for traceability of critical embedded hardware-software safety requirements and the associated tests. This company will have to demonstrate how it actually uses this functionality in its workflow and assign a TCL to that flow. That TCL, along with other risk-related measures, will provide a level of confidence that the tools are fit for automotive safety-focused development.

Figure: Here’s an example of traceability showing both upstream and downstream trace relationships.
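
For readers who want a feel for how the TI and TD classifications combine, here is a minimal sketch in Python of the tool confidence level table as it is commonly summarized from ISO 26262-8. The function and its string encodings are illustrative only; they are not part of Jama’s tooling or the standard’s normative text. Roughly speaking, the higher the resulting TCL, the more tool-qualification evidence is expected.

```python
# Illustrative sketch of the ISO 26262-8 tool confidence level (TCL) table.
# TI (tool impact): "TI1" = a tool error cannot introduce or mask a fault in
# the developed item; "TI2" = it can.
# TD (tool error detection): "TD1" = high confidence that a tool error would
# be detected or prevented, "TD2" = medium confidence, "TD3" = low confidence.

def tool_confidence_level(ti: str, td: str) -> str:
    """Return the TCL for a given tool impact / tool error detection pair."""
    if ti == "TI1":
        return "TCL1"  # no possible impact, so the lowest confidence level applies
    if ti == "TI2":
        return {"TD1": "TCL1", "TD2": "TCL2", "TD3": "TCL3"}[td]
    raise ValueError(f"unknown tool impact class: {ti}")

# Example: a traceability workflow where a tool error could let a missing
# requirement slip through (TI2) but reviews make detection likely (TD2).
print(tool_confidence_level("TI2", "TD2"))  # -> TCL2
```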

“While customers can do this themselves there are aspects of the tool development process they don’t have control of or visibility into,” notes Harris. “Hence they either need to audit the vendor or the vendor needs a certification. So, long story short, by Jama having a certification we save our customers time and cost.”

The traceability example is but one of many safety related system functions that companies may need to re-evaluate to gain ISO 26262 certification for their product development process. But traceability is a key process function needed for today’s robust design. Customers seeking ISO 26262 product certification are often blocked because the third-party tools they use are not ISO 26262 “fit for purpose” certified.

General Chair Shares Insights on DVCon 2016

February 22nd, 2016

Chair Yatin Trivedi highlights the upcoming US chip design-verification show and differences with European and Asian DVCon events.

By John Blyler, Editorial Director

What’s new at this year’s annual semiconductor chip design and verification conference (DVCon), held Feb. 29 through Mar. 3, 2016, at the DoubleTree Hotel in San Jose, CA? How has the globalization of this event affected the primary show? “JB Systems” sat down with Yatin Trivedi, DVCon General Chair, to answer these questions. What follows is a portion of that interview. – JB

Blyler: How is DVCon doing?

Trivedi: For 2016, we expect total participation to be around 1,000 people – about 800 attendees and about 300 exhibitors – which will be greater than last year. The number of exhibit booths should be about 40. People are still signing up. As usual, there will be lots of networking events with qualified engineers. I like to think of DVCon as “Facebook live” for engineers. The value of the show remains the same: attendees are able to learn from their peers.

There will be two panel sessions: one moderated by Jim Hogan on where the industry goes from here and the other moderated by Brian Bailey on ESL. Other opportunities exist in the poster sessions, where people talk with the authors and other engineers. Everyone exchanges good pieces of information about what does and doesn’t work and under what conditions.

The exhibit floor provides a place to show attendees that vendor claims about solutions can actually be demonstrated.

There will be 37 papers at this year’s show plus a couple of invited talks. The CEO of Mentor Graphics, Wally Rhines, will present the invited keynote on Tuesday. Tutorials start on Monday with courses on Accellera standards given by Accellera committee members. Vendors will provide tutorials on Thursday to solve specific problems. Topics range from debug methodologies to the Universal Verification Methodology (UVM), SystemC, formal verification and more.

Blyler: Recently, DVCon has expanded into Europe and Asia. What is the latest information on those activities?

Trivedi: DVCon US is the flagship of the series. A few years back, we held the first DVCon Europe and DVCon India. We started events in these regions as a way to serve specific centers of excellence. For example, a lot of automotive work is done in Europe because of the presence of BMW, Mercedes and other automotive manufacturers. Naturally, a large community of electronic designers has developed to support these companies.

Another motivating factor is that not everybody has the opportunity to travel to the US for DVCon. European Accellera board members like ST, NXP, Infineon, ARM and others convinced us of the need for a DVCon in Europe. So we put together the first conference in 2014, which had about 200 people. At last year’s event in 2015, we had over 300 attendees. The reason for the growth was pent-up interest from local communities that could not travel. The other benefit of a local DVCon was that people who could attend would be more willing to submit technical papers.

Blyler: Did the show in India grow from the same motivation as in Europe?

Trivedi: No, it happened a little bit differently. In India, there was already an event called the India SystemC User Group, or ISCUG, which drew about 300 people. At the same time, there existed a chip design-verification (DV) community that wasn’t exactly served by ISCUG. The merging of the Open SystemC Initiative (OSCI) with Accellera presented the opportunity for DVCon to open in India with two tracks: one for ESL and SystemC, and another for design and verification (DV). The latter track provided a new platform where DV engineers could get together. At the first show in 2014, we had about 450 attendees. Last year, in 2015, we topped 600 attendees. Over a two-year track record, that’s about 30 to 40 percent growth year-over-year.

Initially, we were worried that these new conferences might cannibalize the original US conference. That fear never came true, because the paper submissions for the new shows came from local communities, as did the volunteers for the program and steering committees, the exhibitors, and so on. The attendance was also local. It was probably something we should have done earlier.

This means that DVCon has grown globally into a worldwide community of more than 2,000 people.

Blyler: Thank you.

EDA Tool Reduces Chip Test Time With Same Die Size

February 4th, 2016

Cadence combines physically-aware scan logic with elastic decompression in new test solution. What does that really mean?

By John Blyler, Editorial Director

Cadence recently announced the Modus Test Solution suite that the company claims will enable up to 3X reduction in test time and up to 2.6X reduction in compression logic wirelength. This improvement is made possible, in part, by a patent-pending, physically aware 2D Elastic Compression architecture that enables compression ratios beyond 400X without impacting design size or routing. The press release can be found on the company’s website.

What does all the technical market-ese mean? My talk with Paul Cunningham, vice president of R&D at Cadence, helps clarify the engineering behind the announcement. What follows are portions of that conversation. – JB


Blyler:  Reducing test times saves companies a lot of money. What common methods are used today?

Cunningham: Test compression is the technique of reducing test data volume and test application time while retaining test coverage. XOR-based compression has been widely used to reduce test time and cost. It works by partitioning the registers in a design into many more scan chains than there are scan pins. Shorter scan chains mean fewer clock cycles are needed to shift in each test pattern, which reduces test time.

But there is a limit to how much test time can be reduced. If the compression ratio is too high, then test coverage is lost. Even if test coverage is not lost, the test time savings eventually dry up. In other words, as you shrink the test time you also shrink the data you can feed into the compression system for fault coverage.

As I increase the compression ratio, I’m making the scan chains shorter. But I’ve got more chains while the number of scan-in pins stays constant. So every time I shrink the chains, each pattern that I’m shifting in carries fewer and fewer bits, because the width of the incoming pattern is the number of scan pins and its length is the length of the scan chain. If you keep shrinking the chain, the amount of information in each pattern decreases. At some point, there just isn’t enough information in the pattern to allow us to control the circuit and detect the faults.
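
To make that arithmetic concrete, here is a small back-of-the-envelope sketch in Python with made-up numbers; none of the figures come from Cadence. It simply shows how raising the compression ratio shrinks both the chain length and the number of bits available in each incoming pattern.

```python
# Toy design: 1,000,000 scan flops and 10 scan-in pins (hypothetical numbers).
FLOPS = 1_000_000
SCAN_PINS = 10

def pattern_geometry(compression_ratio: int):
    """Chain count, chain length and bits per pattern for a given ratio."""
    chains = SCAN_PINS * compression_ratio       # more internal chains than pins
    chain_length = FLOPS // chains               # each chain holds fewer flops
    bits_per_pattern = SCAN_PINS * chain_length  # width x depth of the shift-in data
    return chains, chain_length, bits_per_pattern

for ratio in (50, 100, 200, 400):
    chains, length, bits = pattern_geometry(ratio)
    print(f"{ratio:>4}X: {chains} chains, chain length {length}, "
          f"{bits} controllable bits per pattern")
```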

Blyler: Where is the cross-over point?

Cunningham: The situation is analogous to general relativity. You know that you can never go faster than the speed of light, but as you approach the speed of light it takes exponentially more energy. The same thing is going on here. At some point, if the length of the chain is too short, our coverage drops. And as we approach that cliff moment, the number of patterns it takes to achieve the coverage – even if we can maintain it – increases exponentially. So you can get into a situation where, for example, you halve the length of the chain but you need twice as many patterns. At that point, your test time hasn’t actually dropped, because test time is the number of patterns times the length of the chain, and the product of those two starts to cancel out. Beyond a certain point your coverage will simply drop; and as you get close to that point, you start losing any benefit because you need more and more patterns to achieve the same result.
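
The “cliff” Cunningham describes can be sketched the same way. Test time is roughly the pattern count times the chain length, so if halving the chain forces the pattern count to double in order to hold coverage, the product – and therefore the time on the tester – stops improving. The growth curve below is purely hypothetical.

```python
# Hypothetical model: 10,000 patterns suffice for long chains, but below a
# chain length of about 1,000 flops each halving of the chain roughly doubles
# the pattern count needed to keep the same coverage.
BASE_PATTERNS = 10_000

def patterns_needed(chain_length: int) -> int:
    if chain_length >= 1000:
        return BASE_PATTERNS
    return int(BASE_PATTERNS * (1000 / chain_length))  # assumed, not measured

for chain_length in (2000, 1000, 500, 250, 125):
    shift_cycles = patterns_needed(chain_length) * chain_length
    print(f"chain length {chain_length:>4}: ~{shift_cycles:,} total shift cycles")
```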

Blyler: What is the second limit to testing a chip with compression circuitry?

Cunningham: The other limit doesn’t come from the mathematics of fault detection but is related to physical implementation. In other words, the chip size limit is due to physical implementation, not mathematics (like coverage).

Most of the test community has been focused on that first limit to test-time reduction. But even a breakthrough there wouldn’t address the physical implementation challenge. In the diagram below, you can see that the big blue spot in the middle is the XOR circuit wiring, and all the wiring in red runs to and from the chains. It is quite scary in size.

Blyler: So the second limit is related to the die size and wire length for the XOR circuit?

Cunningham: Yes – there are the algorithmic limits related to coverage and pattern count (mentioned earlier), and then there are the physical limits related to wire length. The industry has been stuck because of these two things. Now for the solution. Let’s take them in reverse order, i.e., the physical limits first.

What is the most efficient way to span two dimensions (2D) with Manhattan routing? The answer is a grid, or lattice. [Editor’s Note: The Manhattan distance between two points is measured by following a grid pattern rather than the straight line between the points.]

So the lattice is the best way to get across two dimensions while giving you the best possible control of circuit behavior at all points. We’ve come up with a special XOR circuit structure that unfolds beautifully into a 2D grid. So when Modus inserts compression, it doesn’t just create an XOR circuit; it actually places it, assigning X-Y coordinates to those XOR gates. Thus, 2D compression at 400X has the same wire length as 1D at 100X.
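
As a quick illustration of the routing metric in the editor’s note, the snippet below contrasts Manhattan (grid) distance with straight-line distance between the same two points. It illustrates only the metric itself, not anything inside the Modus tool.

```python
import math

def manhattan(p, q):
    """Grid-style routing distance: only horizontal and vertical segments."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    """Straight-line distance between the same two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

a, b = (0, 0), (3, 4)
print(manhattan(a, b))  # 7 wire units along the grid
print(euclidean(a, b))  # 5.0 "as the crow flies"
```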

Blyler: This seems like a marriage with place & route technology.

Cunningham: For a long time, people did logic synthesis based only on the connectivity of the gates. Then we realized that we really had to do physical synthesis. Similarly, the industry has long realized that the way we connect up the scan chains needs to be physically aware. That’s been done. But nobody made the actual compression logic physically aware. That is a key innovation in our product offering.

And it is the compression logic that is filling the chip – all that red and blue nasty stuff. That is not scan chain but compression logic.

Blyler: It seems that you’ve addressed the wire-length problem. How do you handle the mathematics of the fault-coverage issue?

Cunningham: The industry got stuck on the idea that, as you shrink the chains, you get shorter patterns – a reduction in the amount of information that can be input. But why don’t we play the same game with the data we shift in? Most of the time, I do want really short scan chains, because that typically means I can pump data into the chip faster than before. But in doing so, there will be a few cases where I lose the ability to detect faults, because some faults really require precise control of values in the circuit. For those few cases, why don’t I shift in for more clock cycles than I shift out?

In those cases, I really need more bits of information coming in. That can be done by making the scan deeper, that is, by adding more clock cycles. In practice, it means we put sequential elements inside the decompressor portion of the XOR compressor system. Thus, where necessary, I can read in more information. For example, I might scan in for 10 clock cycles but scan out (shift out) for only five clock cycles. I’m reading in more information than I’m reading out.

In every sense of the word, it is an elastic decompressor. When we need to, we can stretch the pattern to contain more information. That stretched pattern is then transposed by 90 degrees into a very wide pattern that we shove into the scan chains.
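
A rough way to see what the extra shift-in cycles buy: with sequential elements in the decompressor, the control information per pattern scales with the number of scan-in cycles rather than with the (now very short) chain length. The numbers below are again hypothetical, not Cadence data.

```python
SCAN_PINS = 10     # hypothetical scan-in pin count
CHAIN_LENGTH = 5   # very short internal chains at a high compression ratio

def input_bits(scan_in_cycles: int) -> int:
    """Bits of control information shifted in for one pattern."""
    return SCAN_PINS * scan_in_cycles

# Conventional decompressor: shift-in depth equals the chain length.
print(input_bits(CHAIN_LENGTH))  # 50 bits available to place care bits

# Elastic decompressor: scan in for 10 cycles while the chains only need 5,
# so the same pattern carries twice the control information.
print(input_bits(10))            # 100 bits
```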

Blyler: So you’ve combined this elastic decompressor with the 2D concept.

Cunningham: Yes – and now you have changed the testing game, with 400X compression ratios and up to a 3X reduction in test time without impacting the wire length (chip size). We have several endorsements from key customers, too.

In summary:

  • 2D compression: Scan compression logic forms a physically aware two-dimensional grid across the chip floorplan, enabling higher compression ratios with reduced wirelength. At 100X compression ratios, wirelength for 2D compression can be up to 2.6X smaller than current industry scan compression architectures.
  • Elastic compression: Registers embedded in the decompression logic enable fault coverage to be maintained at compression ratios beyond 400X by controlling care bits sequentially across multiple clock cycles.

Blyler: Thank you.

Managing Complex Hardware-Software Systems – Online Course

January 27th, 2016

Managing complex technical systems presents different challenges from typical project/program management. This course will explore those differences using good systems engineering principles, numerous case studies and modern tools. Some of the contemporary topics include:

  • Implementing a strategic, tailored middle-out approach to development that incorporates legacy systems.
  • Coordinating change across many engineering and business disciplines without affecting existing tool flows or development processes.
  • Managing integration and assessing challenges between hardware and software teams.
  • Exposure to Model-based systems engineering (MBSE) techniques.
  • Modern case studies to support up-to-date systems engineering principles and concepts, including decision modeling, trade-off analysis for hardware and software systems, and the effect of organizational structure on IoT product design.
  • Practical system engineering program and design review checklists to aid in the effective and efficient implementation of overall program requirements (partially available online).

This course is based on the latest 5th edition of Wiley’s “Systems Engineering Management,” by Blanchard and Blyler.

Online course students will receive 3 CEUs from the Florida Institute of Technology or a certificate of completion that may be applied as 30 Continuing Learning Points (CLPs).

Visit the RMS Partnership page (right-hand column) to register

Last Day to Register is Thursday, February 4th

Instructor: John Blyler

Soon after you register, a member of the RMS Partnership will contact you regarding online procedures and related information.

Originally posted on JB Systems

The Dangers of Code Cut-Paste Techniques

January 20th, 2016

A common coding practice can boost development efficiency but can also lead to software maintenance nightmares and technical debt.

By John Blyler, Editorial Director

At a recent EDA event, Semi-IP Systems talked with Cristian Amitroaie, the CEO of AMIQ, about the good and bad side of developing software from code that is copied from another program. What follows is a paraphrased version of that video interview (AMIQ_DAC2015_Pt1). – JB

Cristian Amitroaie (left), the CEO of AMIQ, is interviewed by John Blyler (right), editorial director of JB Systems.

Blyler: I noticed that you recently released a copy-paste detection technology. Why is that important?

Amitroaie: We introduced these capabilities in the Verissimo Testbench Linter. We’ve had it in mind for some time, both because it is very useful – especially when you accumulate large amounts of code – and because it is fun to see how much copy-pasted code exists in your program. There are all sorts of tools that detect copy-paste in the software development world, so why not include that capability in the chip design and verification space? Also, the topic of copy-paste code development is very interesting.
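
Duplicate-code detectors in the general software world typically normalize the source text and then look for repeated windows of lines; the sketch below shows that basic idea in Python. It is a generic illustration only, not a description of how Verissimo’s checker is implemented.

```python
from collections import defaultdict

def find_duplicates(files: dict[str, list[str]], window: int = 6):
    """Report windows of `window` normalized lines that appear more than once.

    `files` maps a file name to its list of source lines. Normalization here
    is deliberately crude: strip whitespace and drop blank lines.
    """
    seen = defaultdict(list)
    for name, lines in files.items():
        cleaned = [ln.strip() for ln in lines if ln.strip()]
        for i in range(len(cleaned) - window + 1):
            key = "\n".join(cleaned[i:i + window])  # hashable window of code
            seen[key].append((name, i))
    return {k: v for k, v in seen.items() if len(v) > 1}

# Usage idea: pass {"tb_a.sv": open("tb_a.sv").readlines(), ...} and inspect
# the returned locations to see where the same block was pasted.
```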

Blyler: Is the capability to copy and paste a good or a bad thing for developers?

Amitroaie: To answer that question, you first have to see how copying and pasting are being used. While it is not a fundamental pattern used by software engineers, it is a common technique. When engineers want to build something new or solve an existing problem, they usually start from something that is working or basically doing what they need. They take what exists to use directly or to enhance it for another application. Copying and pasting is a means to start something or to enhance it. In software, it is very easy to do.

It may happen that you don’t know whether what you need is already available. For example, you may work in a big company where similar activities are being done in parallel without your knowledge. So, you develop the same thing again. Now you have code duplication in the overall code base.

Another reason to use a copy-paste approach is that junior engineers lack the senior-level skills or experience to start from scratch. They copy and then build upon existing solutions.

Whatever the reason, in time most companies will have duplicate code. The fact that you use copy-paste to solve problems isn’t bad, because you take something that works. You don’t have to start from scratch, so you save time. You tweak the existing code and use it to solve a new problem. After all, engineering is about making things work. It’s not about finding the best, most ideal solution.

Blyler: We are talking about software programmers who prefer elegant solutions, aren’t we?

Amitroaie: Yes, but today you have lots of time pressures. Plus you often don’t have enough resources to get the best solution. But you need to get a solution within the market window. So elegance tends not to be the highest priority. The top thing is to make it work. In this sense, copying and pasting is a practice that makes sense. It is also unavoidable.

The bad thing is that, as time goes by, you accumulate and duplicate more code. When a mistake is detected, you must now go to several places to fix it, not just in the original code – if there is even such a thing. It’s an interesting question: in this copy-paste world, is there such a thing as the original code? But that’s another matter.

Fixing or enhancing the code is problematic. For example, if you want to enhance an algorithm or functionality, you must remember where all the duplications are located. Many times, when you duplicate code, you don’t understand the intention of the original program. You think that you understand so you copy and paste it, adding a few tweaks. But maybe you really didn’t understand the intentions or implications of the original programmer and unknowingly insert a bug.

In this sense, copying and pasting is bad. As the code base grows, you can accumulate what is called “technical debt” from all that copy-paste activity. Technical debt results from code that hasn’t been cleaned up. We never have time to clean up the code. We say that we’ll do it later but never do. If you go to your manager with a request for code clean-up, he or she will say “no.” Who approves time for code clean-up? Very few. They all talk about it, but I’ve never seen it happen. Even though we are in the EDA market, we are still software developers and have the same challenges when trying to improve our code. I know how hard it is for a team leader to approach code clean-up. This is what is known as technical debt, which is analogous to the interest that accumulates on a financial loan. You add more and more, and the clean-up debt accumulates, adding to higher maintenance costs over time. You can end up with huge piles of code where no one knows where it starts or ends or how much is duplicated. That makes it tough to redesign the code or make it more compact. It will blow up in your face at the worst possible moment.

Blyler: The debt comes due when you are least able to afford it.

Amitroaie: Yes, it’s unavoidable. It is similar to software entropy in that it keeps accumulating. At some point, it will become more cost effective to rewrite the code from scratch than to maintain it.

The good side of copying and pasting is that it is a fundamental way of getting code developed quickly. It helps programmers advance in an efficient way, at least from a results-oriented perspective. The bad side is that you accumulate technical debt that can lead to maintenance nightmares.

Blyler: Thank you.

For more information: AMIQ EDA Introduces Duplicate Code Detection in Its Verissimo SystemVerilog Testbench Linter

Originally posted on Chipestimate.com “IP Insider”

Imagination Talks about RFIC, IP, Embedded IoT and Fabric-based SoCs

December 18th, 2015

Tony King-Smith of Imagination Technologies discusses the need for RF IC designers, IP platforms, IoT as the new embedded and fabric-based SoCs.

At this year’s Imagination Summit 2015 in Silicon Valley, John Blyler, editorial director for “IoT Embedded Systems,” sat down with Tony King-Smith, the Executive VP of Marketing at Imagination, to talk about RF design, IP platforms, embedded IoT and fabric-based SoC trends. What follows are excerpts from that conversation. – JB

Read the complete story on the Chipestimate “IP Insider” blog.

Autonomous Car Patches, SoC Rebirth, IP IoT Platforms and Systems Engineering

December 9th, 2015

Highlights include autonomous car technology, patches, IoT Platforms, SoC hardware revitalization, IP trends and a new edition of a systems engineering classic.

By John Blyler, Editorial Director, IP and IoT Systems

In this month’s travelogue, publisher John Blyler talks with Chipestimate.TV director Sean O’Kane about the recent Renesas DevCon and trends in software security patches, hardware-software platforms, small to medium businesses creating System-on-Chips, intellectual property (IP) in the Internet-of-Things (IoT) and systems engineering management. Please note that what follows is not a verbatim transcription of the interview. Instead, it has been edited and expanded for readability. I hope you find it informative. Cheers — JB


ChipEstimate.TV — John Blyler Travelogue, November 2015

Read the transcribed, complete post on the “IP Insider” blog.


Is Hardware Really That Much Different From Software?

November 30th, 2014

When can hardware be considered as software? Are software flows less complex? Why are hardware tools less up-to-date? Experts from ARM, Jama Software and Imec propose the answers.

By John Blyler, Editorial Director

The Internet-of-Things will bring hardware and software designers into closer collaboration than ever before. Understanding the working differences between the two technical domains in terms of design approaches and terminology will be the first step in harmonizing the relationship between these occasionally contentious camps. What are the real differences in hardware and software design approaches? To answer that question, I talked with technical experts including Harmke De Groot, Program Director Ultra-Low Power Technologies at Imec; Jonathan Austin, Senior Software Engineer at ARM; and Eric Nguyen, Director of Business Intelligence at Jama Software. What follows is a portion of their responses. — JB

Blyler: The Internet-of-Things (IoT) will bring a greater mix of both HW and SW IP issues to systems developers. But hardware and software developers use the same words to mean different things. What do you see as the real differences between hardware and software IP?

De Groot: Hardware IP – and in that I include very low-level software – is usually optimized for different categories of devices, i.e., devices running on small batteries or energy harvesters, devices with medium-size batteries like mobile phones and laptops, and devices connected to the mains. Software IP, especially at the higher layers (middleware and up), can more easily be developed to scale and fit many platforms with less adaptation. However, practice shows that scaling software for the IoT also has its limitations; for very resource-limited devices, special measures have to be taken. For example, having a very small sensor node retrieve data directly from the cloud and combine it with local sensor data is a partly unsolved challenge today. For mobiles, laptops and higher-performance devices there are reasonable solutions (though not perfect yet) to retrieve cloud data and combine it with the device’s sensor information in real time. For sensor devices with tighter resource constraints, running on smaller batteries, this is not so easy – especially with heterogeneous networking challenges. Sending data to the cloud (potentially via a gateway device such as a mobile phone, laptop or special router) seems to work reasonably well, but retrieving the right data from the cloud to combine with the small sensor node’s own sensor data for real-time use remains a challenge to be solved.

Austin: Personally, I see a few significant differences between hardware and software design and tools:

  1. How hard is it to change something when you get it wrong? It is ‘really hard’ for hardware, and somewhere on a spectrum from ‘really hard’ to ‘completely trivial’ in software.
  2. The trade-offs around adding abstraction to help deal with complexity. Software is typically able to ‘absorb’ more of this overhead than hardware. Also, in software it is far easier to optimize only the fast path; there usually isn’t as much impact from an unoptimised slow path as there would be in hardware.
  3. There are differences in the tool sets. This was an interesting part of an ongoing debate with my colleagues. We couldn’t quite get to the bottom of why it is so common for hardware projects to stick with really old tools for so long. Some possible ideas included:
  • The (hardware) flow is more complex, so getting something that works well takes longer, requires more investment and results in a higher cost to switch tools.
  • There’s far less competition in the hardware design space so things aren’t pushed as much. This point is compounded by the one above, but the two sort of play together to slow things down.
  • The tools are harder to write and more complex to use. This was contentious, but I think, on balance, some of the simplicity and elegance available in software comes because people solve some really tough physical issues in the hardware tools.

So, this sort of thinking led me to an analogy of considering hardware to be very low-level software. We could have a similar debate about JavaScript productivity versus C – and I think the arguments on either side would look quite similar to the software-versus-hardware arguments.

Finally, on tools, I think it might be significant that the tools for building hardware are *software* tools, and the tools for building software are *also* software tools. If a tool for building software (say, a compiler) is broken or poor in some way, the software engineer feels able to fix it. If a hardware tool is broken in some way, the hardware engineer is less likely to feel it is easy to just switch tasks quickly and fix it. So that is to say, software tools are built for software engineers by software engineers, while hardware tools are built by software engineers to be sold to companies and given to hardware engineers!

Nguyen: One of the historical differences relates to the way integrated system companies organized their teams. As marketing requirements came in, the systems engineers in the hardware group would lay out the overall design. Most of the required features and functionality were very electrical and mechanical in nature, where software was limited to drivers and firmware for embedded electronics.

Today, software plays a much bigger role than hardware and many large companies have difficulties incorporating this new mindset. Software teams move at a much faster pace than hardware. On the other hand, software teams have a hard time integrating with the tool sets, processes and methodologies of the hardware teams. From a management perspective, the “hardware first” paradigm has been flipped. Now it is a more of software driven design process where the main question is how much of the initial requirements can be accomplished in software. The hardware is then seen as the enabler for the overall (end-user) experience. For example, consider Google’s Nest Thermostat. It was designed as a software experience with the hardware brought in later.

Blyler: Thank you.

Review of Jama, ARM Techcon and TSMC OIP Shows

November 14th, 2014

October issues of the “Silicon Valley High-Tech Traveler Log” – with Sean O’Kane and John Blyler

Three events from TSMC, ARM and JAMA Software highlight the breadth and depth of IP development that (hopefully) results in manual-less consumer apps.

A few weeks ago, I attended three shows – Jama’s Software Product Delivery Summit, TSMC’s Open Innovation Platform (OIP) and ARM’s Techcon. While each event was markedly different, there was an unintentional common thread: all three dealt with the interplay between hardware and software IP systems – albeit at different levels of the supply chain.

Each of these shows characterized that interplay in a different way. For TSMC, the focus was deep semiconductor manufacturing-related IP. Conversely, Jama Software dealt with product-delivery issues in which embedded hardware and application software play a major role. At the ARM Techcon, the focus was embedded software on boards running the company’s flagship processors and ecosystem IP hardware peripherals. Why are these various instantiations of IP important?

Read the rest of the story at: IP-Based Technology without Manuals?
