March 20, 2009

Tool Economics 101

Filed under: Engineering Tools — admin @ 12:52 pm

There are many economic aspects to think about when purchasing a tool.  This post discusses some of the basics: burdened costs, assessing the value of a tool, make versus buy decisions, and training costs.

 

 

 

Burdened Costs

 

Correctly accounting for burdened costs is critical to understanding the cost of internal development.  This matters both for “make versus buy” decisions and for understanding what a productivity improvement is worth in lowered expenses. 

 

Development costs are more than just the internal developer’s salary.  The burdened cost should be the total the company spends for that person’s time: the employee’s benefits, a portion of overhead such as rent, and any other allocated costs.  For example, if there is one technician for every three hardware engineers, then one third of the technician’s burdened cost should be added to each hardware engineer’s burdened cost.  A more in-depth discussion is available at the following link (though a bit “salesy,” as they partly promote their own solution): http://www.infoplusacct.com/cms-pdf/Labor%20Burden-PDF%20version.pdf

 

For example, in high-tech a typical yearly burdened cost for a development engineer in the Bay Area is between $200,000 and $250,000.  This cost will vary depending on the region of the country, the experience of the engineer, and the particular expertise involved.  The costs can easily vary by up to 100% within the same company at the same location.  Most burdened costs for an engineer fall between $75,000 and $400,000 a year.
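As a rough illustration, here is a minimal Python sketch of how the components described above might be rolled up into a burdened cost.  All of the figures are hypothetical placeholders, not benchmarks.

```python
# Rough burdened-cost roll-up for one hardware engineer.
# All figures are hypothetical and for illustration only.

base_salary        = 150_000   # annual salary
benefits_rate      = 0.30      # benefits as a fraction of salary
overhead_per_head  = 25_000    # rent, IT, utilities allocated per engineer
technician_burden  = 120_000   # burdened cost of one technician
techs_per_engineer = 1 / 3     # one technician shared by three engineers

burdened_cost = (
    base_salary * (1 + benefits_rate)          # salary plus benefits
    + overhead_per_head                        # allocated facility/overhead costs
    + technician_burden * techs_per_engineer   # shared support staff
)

print(f"Burdened cost: ${burdened_cost:,.0f} per year")  # ~$260,000 with these inputs
```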

 

Value of a Tool

 

Give me six hours to chop down a tree and I will spend the first four sharpening the axe.

- Abraham Lincoln

 

The value of a sharp and efficient tool can be great, and there are times when selecting or building the right tool is critical.  Just imagine how taxing it would be to undertake a large software project without a compiler.  However, the first question about a tool is “Is it worth it at all?”  Does the tool provide enough value to justify purchasing or developing it? 

 

Assessing the value of a tool depends on how it fits within the project.  A tool that enables the team to take on a complex project may be invaluable.   Another tool may accelerate the completion of tasks that are on the critical path for a project.  The primary value of such a tool comes from the improved time-to-market for the overall project.  Another tool may provide a general productivity improvement for team members.  The value for such a tool may be in lowering the expenses to complete the project.  Lastly, another tool’s primary value may be in being able to debug complex issues that will hold up the project when they are encountered.  The biggest part of the value for such a tool may be the peace of mind that it brings to project management. 
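For the general productivity-improvement case, a quick back-of-the-envelope check often settles the “is it worth it at all?” question.  The sketch below uses hypothetical numbers together with the burdened-rate idea from the previous section.

```python
# Back-of-the-envelope: is a productivity tool worth its price?
# All inputs are hypothetical; adjust to your own team and burdened rates.

burdened_cost_per_year = 230_000          # one engineer, fully burdened
hours_per_year         = 2_000
hourly_rate            = burdened_cost_per_year / hours_per_year  # ~$115/hr

engineers_using_tool   = 5
hours_saved_per_week   = 2                 # per engineer
weeks_per_year         = 48

annual_savings = engineers_using_tool * hours_saved_per_week * weeks_per_year * hourly_rate
tool_cost      = 25_000                    # licenses plus first-year support

print(f"Annual savings: ${annual_savings:,.0f}")                  # ~$55,200 here
print(f"Payback period: {tool_cost / annual_savings:.1f} years")  # well under a year
```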

 

It is interesting to follow how the value of a tool evolves over time.  When a tool is a new concept, the focus is on improving time-to-market and user productivity.  Early adopters may receive tremendous gains from being first to use a tool.  As a tool becomes more commonly used, the focus becomes keeping up with your competition.  You may feel that you are being left behind, or that you will have difficulty keeping pace without the tool.  Once a tool is in widespread use, it often becomes a recruiting issue.  Candidates may view a tool set that lacks it as inferior, and being trained on the tool may become a requirement for certain positions.

 

Make Versus Buy Decisions

Most commercial tools started as internal custom tools.  Then, as the number of users grew, some companies decided they could make a profit selling tool products to those users.  With third-party commercial tools available, a company is presented with a “make versus buy” decision.

 

In doing such an analysis it is important to make sure that it is a fair “apples-to-apples” comparison.  The first step in making the comparison fair is to understand the true burdened cost of the internal development option.  Other variables also need to be taken into account.  For example, if the commercial tool has better documentation than the internal tool, you must either add time to improve the internal documentation or assess the lost productivity of using a poorly documented tool.  The total cost of ownership (TCO) of the tool needs to be assessed, including ongoing maintenance, calibration, and training.
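As an illustration only, a minimal sketch of such a comparison might look like the following.  Every input is hypothetical, and your own TCO line items (calibration, documentation, support contracts) should be added.

```python
# Apples-to-apples "make versus buy" sketch over a three-year horizon.
# The point is to compare total cost of ownership (TCO), not just the
# purchase price or the developer's salary.

YEARS = 3

# Buy: license, annual support, training
buy_tco = 40_000 + (8_000 + 5_000) * YEARS

# Make: burdened engineering time to build, then maintain and document
burdened_rate  = 230_000 / 2_000             # $/engineer-hour
build_hours    = 1_200                       # initial development
maintain_hours = 300                         # per year: fixes, docs, upkeep
make_tco = (build_hours + maintain_hours * YEARS) * burdened_rate

print(f"Buy : ${buy_tco:,.0f}")              # $79,000 with these inputs
print(f"Make: ${make_tco:,.0f}")             # $241,500 with these inputs
```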

 

There are issues that go beyond just the economics in a “make versus buy” decision.  Other factors, such as risk and control, need to be included.  The presentation at this link provides a good checklist of those issues when procuring an outside product: http://www.maxwideman.com/issacons4/iac1402/sld002.htm

 

 

 

 

Training Costs

 

For complex tools, training costs can be a significant portion of the cost of integrating the tool into a development organization. 

 

First there are the actual fees for the training class itself.  If the expertise is rare and in demand, these fees can easily run into the thousands of dollars per engineer.  On top of that is the burdened cost of the engineer’s time while taking the class, plus travel and other expenses. 

 

Lastly there are the hidden training costs.  These come from the reduced initial productivity when using a new tool.  It may take a few weeks or months before someone is fully proficient in the new tool.  They are still effectively training themselves, even though they are not in a classroom.  Also, the person who is the internal expert on the tool may become deluged with questions from colleagues during the learning phase.
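A minimal sketch, with hypothetical figures, of how the visible and hidden pieces might be added up per engineer:

```python
# Total training cost per engineer for a new tool, visible and hidden.
# All figures are hypothetical and for illustration only.

class_fee         = 3_000        # vendor training class
travel            = 1_500
hourly_rate       = 115          # burdened $/hour
class_hours       = 24           # three days in class

ramp_up_weeks     = 8            # weeks before full proficiency
hours_per_week    = 40
productivity_loss = 0.25         # average productivity lost while ramping up

visible = class_fee + travel + class_hours * hourly_rate
hidden  = ramp_up_weeks * hours_per_week * productivity_loss * hourly_rate

print(f"Visible training cost: ${visible:,.0f}")   # $7,260 with these inputs
print(f"Hidden ramp-up cost:   ${hidden:,.0f}")    # $9,200 with these inputs
```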

 

When these costs are factored in, it becomes apparent why the ease-of-use can be a critical decision criterion when selecting a new tool.  It also explains why companies are generally reluctant to switch tools, and almost never switch tools in the middle of a project.

 

 

I hope this summary has been valuable, and caused you to think about the economics of tools.  Please comment if you have a specific example that relates.

 

Rick Denker

Packet Plus, Inc.

 

July 30, 2008

Life on the Treadmill

Filed under: Engineering Tools — admin @ 8:16 pm

How many engineers think of their career as time spent on a treadmill?  It is a larger portion than most realize.

 

The majority of electrical engineers work on products that are performance based.  A fundamental, often overriding, characteristic of their product’s value is its speed of operation.  When one version of the product is finished, development starts on the next, more powerful version. 

 

This is true in the computer industry, the telecommunications industry, and certain portions of the consumer electronics industry.  The sales of semiconductors into these industries show that between two-thirds and three-quarters of electronics engineers work on an ongoing performance treadmill.

 

  

 

In searching the Internet, I found several articles that claimed either that the treadmill was slowing down or that it was continuing unabated.  There were also articles from consumers who bemoaned products becoming obsolete too fast and wanted to get off the treadmill.  A couple of interesting articles are referenced below.

 

A technology treadmill and the effects on the vision of an industry.

http://www.isa.org/intech/blog/2008/02/winners-work-technology-treadmill.html

 

The quarterly business treadmill and the effects on project management.

http://www.undocumentedfeatures.com/2008/06/16/the-90-day-treadmill/

 

However, I did not find any articles that either described how this affects the life of the engineer developing the products, or gave any advice on how to cope with being on the treadmill.

 

 

 

 

 

The Predictive Value of Moore’s Law

 

It has been said that the value of Moore’s Law (http://special-sarfunshafi.blogspot.com/2007/11/meeting-man-behind-moores-law.html) was that it allowed Intel to plan the features to incorporate into each new microprocessor.  The law let Intel estimate the number of transistors that could be incorporated into each generation of the microprocessor, and therefore determine when it would be feasible to add certain new features, such as floating-point arithmetic, to their devices.

 

There were also versions for other industries.  One I was familiar with was Bill Joy’s law of workstation performance.  It was also exponential, but not at the same rate as Moore’s Law.  This law was critical for planning a simulation accelerator product in the Electronic Design Automation (EDA) industry, where the key feature was how much faster the accelerator was than a standard workstation.  The law allowed a projection of how the product would compare to a workstation at introduction, and an estimate of how long the product would remain viable.
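A minimal sketch of that kind of projection, with a hypothetical workstation growth rate standing in for Joy’s law:

```python
# Projecting how an accelerator's advantage erodes as workstations improve.
# The growth rate is a hypothetical stand-in for a "Joy's law" style curve.

accelerator_speedup_today = 1_000   # x faster than today's workstation
workstation_growth        = 1.6     # workstation performance multiplier per year

for year in range(0, 6):
    relative_advantage = accelerator_speedup_today / (workstation_growth ** year)
    print(f"Year {year}: {relative_advantage:,.0f}x faster than a current workstation")
# Year 0: 1,000x ... Year 5: ~95x -- a rough way to bound the product's useful life.
```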

 

 

Methods Breakdown

 

As the treadmill progresses, some methods may break down or need to evolve.  In a systems design class at MIT, we were taught that changes that are an order of magnitude in size will cause unexpected components to break or will require a fundamentally new solution.  The example I remember is of a new plane that could descend more rapidly.  The plane kept being landed in Tokyo Bay on early morning flights, because the rapid descent did not give the pilots’ eyes time to adjust to the early morning sunlight as they approached Tokyo.  They were in effect flying blind until their eyes adjusted.

 

The progression of the in-circuit emulator for embedded software development, described below, is illustrative.

 

In the 1970s through the early 1980s, the preferred embedded software development tool was the in-circuit emulator (ICE).  This tool plugged into the microprocessor’s socket with a probe and acted in place of the device in the target design, with additional debugging capabilities.  However, this approach became infeasible and uneconomical as design speeds increased.  The cost to develop an ICE escalated, and supportability became questionable as the probe method of access proved increasingly unreliable.  This method effectively broke down at higher speeds.

 

In the mid-1980s, microprocessor vendors started adding debug features such as breakpoints into the device silicon to aid the embedded software developer.  This allowed a fundamentally different approach to building the software tool, proved an effective alternative to the probe, and worked well with increasing device speeds.  Today, virtually every microprocessor supports silicon-based access to debug features.

 

 

 

Focus on the Core

 

Another aspect of being on the performance treadmill is that you become acutely aware of the portions of the design that drive performance.  These become the focus of a tremendous amount of engineering effort.  Just as the critical path of a schedule gets additional attention, so does the performance core of a design.

 

The portions of the design that are not performance critical are often somewhat neglected.  They get the time that is left over after the performance critical portions are under control.

 

In the EDA industry this can be seen in how the tools are continually re-built for the new geometry of the latest semiconductor process.  The base tools get completed before any other tools get much attention.  This explains why it took so many years for timing analysis and other non-critical tools to mature into complete products.

 

One strategy for these non-critical features is to outsource them.  Once a particular function becomes large enough, it may support a third party developing it and making a business of taking over the problem.  Since it is a less critical portion of the design, the risk of depending on a third party may become acceptable.  There are many examples where this has proven successful. 

 

 

Thriving on the Treadmill

 

In summary here are three suggestions for thriving on the treadmill. 

 

 

One, step back and look at the implications of the performance treadmill.  Knowing the speed of the treadmill is important to getting ahead of potential issues.

 

Two, watch for predictable places where the system could break down, especially when attempting a change that is an order of magnitude in size.

 

Three, know the core, performance-driving parts of the design, and assess whether a third party can take over the non-critical portions.

 

 

If you know of other guidelines that you would recommend, please add a comment.

 

 

Rick Denker

 

Packet Plus™, Inc.

 

April 16, 2008

Tools and the Debug Cycle

Filed under: Engineering Tools — admin @ 1:44 pm

The efficiency of a development or debug tool needs to be looked at from the perspective of how it affects the debug cycle.  Put another way, the real goal of a compiler is to make the user more productive in building a new product, not just to run more compiles.

To analyze this real efficiency you must look at the overall debug cycle.  The cycle has three components: 1) the debug trial, 2) analyzing the results, and 3) modifying the build for the next trial.

           

The debug trial 

This is the time that measurements are being taken.  The trial needs to be run until a problem or issue that needs to be fixed occurs.  Sometimes productive execution can continue beyond the finding of the first issue.  Other times the error affects future execution and renders continued execution useless.  As a project progresses, the average length of a trial increases, because the equipment under development can run longer and longer without a failure.

Analyzing the results

This is the time to analyze the measurements that were taken during the trial.  This portion of the debug cycle is dominated by the think time as the engineer attempts to decipher the results to determine what caused the behavior.  The format and level of the information is critical to making it easy to pinpoint the issues.  In networking development, protocol analyzers translate the millions of bits that have flown by into symbolically decoded packets to greatly ease the analysis. 

Modifying the build for the next trial 

This involves running the build tool, loading the new build into the target design, and setting up the rest of the test configuration for the trial.  The build tool will vary depending upon the kind of engineer and the portion of the design that is being debugged.  Build tools include assemblers, compilers, hardware synthesis, and FPGA synthesis, among others.  Loading the build into the target design may just involve downloading to the target over a cable or network connection.  It may also involve burning the image into an EPROM or other memory device, or making multiple copies for multiple devices.  Lastly, there is setting up the other configuration variables of the target system.  This can involve physical positioning, resetting of mechanical components, and re-initializing equipment to a known state.

 

The time to accomplish the build can vary widely from a few minutes to many hours depending on the complexity.  Typically the times are between 15 minutes and one hour.  Also this time tends to increase as a project proceeds, because the size of the design increases, and the test configuration increases in complexity.
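A simple way to see how these three components interact is to model the number of complete cycles that fit in a workday.  The sketch below uses hypothetical times; the point is that the longest component dominates the number of trials per day.

```python
# How many debug cycles fit in a day?  Each cycle = trial + analysis + rebuild.
# Times are hypothetical and in minutes.

def trials_per_day(trial_min, analysis_min, rebuild_min, workday_hours=8):
    cycle = trial_min + analysis_min + rebuild_min
    return (workday_hours * 60) // cycle

print(trials_per_day(trial_min=30, analysis_min=45, rebuild_min=60))  # 3 cycles/day
print(trials_per_day(trial_min=30, analysis_min=45, rebuild_min=15))  # 5 cycles/day
```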

Changing Bottlenecks 

Early in my career as a programmer, there was controversy over how expensive computer time was.  Access to the computer was limited, and compiles were typically batched together for maximum efficiency.  Some thought that programmers did not need more compute power; they just needed to do more analysis before submitting another compile.  Today, with the low cost of compute power and the expense of engineering time, it seems silly to focus on computer efficiency instead of engineering productivity.

 

I once worked on a project to improve the performance of a software linker.  The link times were taking over one hour of CPU time.  With multiple engineers time-sharing the minicomputer, this limited them to one or two debug trials per day.  With the project we were able to reduce the CPU time required for linking by a factor of 30.  Suddenly the CPU was no longer the bottleneck; engineers could then make six to eight debug trials per day.  The time to analyze the results and access to shared resources now dominated.

 

The early hardware acceleration products in the EDA industry could run a design simulation a thousand times faster than the same simulation on a workstation.  However, the generation of the netlist to load into the accelerator became the bottleneck to productivity.  It could take hours to generate the netlist for a simulation run that would take only minutes.  This severely limited the usefulness of the accelerator.  Understandably, as acceleration products evolved, the generation of the netlist was also accelerated.

Shared Resources 

Once the tool bottleneck has been reduced or eliminated, there are often shared resources that can end up becoming a bottleneck.  These shared resources can include the lab equipment configuration, a high-performance piece of test equipment, access to a Faraday cage, and the target design hardware.  The allocation of the shared resources may need to be scheduled, and many engineers will work off hours to increase access.

Bust Trials 

Between 10% and 25% of trials are busts.  By this I mean that no valuable feedback to the design is achieved.  The causes can include a simple error in the design logic, improperly configured equipment, or outside interference.  Often the reason for the failure is discovered before any data is transmitted.  The equipment may not even be able to initialize.  This causes a quick, desperate survey of all the potential culprits.  Sometimes the issue is a complete bust and a re-build of the design is required.  Other times it is only a partial bust and the trial can be re-run from the beginning without a re-build.

 

An additional problem is that these bust trials often still take up the time of the shared resource.  Sometimes there is another engineer that has been waiting in the wings.  However, if there is not another engineer ready, the resource sits idle for some period of time.
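A rough way to size this effect, with hypothetical numbers:

```python
# Expected lab-bench time lost to bust trials.  All inputs are hypothetical.
bust_rate        = 0.15    # 10-25% of trials are busts; assume 15% here
trials_per_day   = 6
minutes_per_bust = 20      # setup and teardown before the bust is recognized

wasted = bust_rate * trials_per_day * minutes_per_bust
print(f"~{wasted:.0f} minutes of shared-resource time lost per day")  # ~18 min/day
```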

Recommendations 

Given the above, here are a few recommendations:

 

One, monitor the number of debug trials per day that engineers are able to make, and make sure that they are not being tool bound. 

           

Two, maximize the information gathered per trial after it has gotten past the bust issues.  This can be accomplished with debug tools that support interactive queries during the debug trial (see the post “Basic Debug Tools and Beyond,” http://www.chipdesignmag.com/denker/?p=28).

           

Three, look for features that help avoid additional trials.  In-place editing can avoid having to make another build, and supports exploration that goes beyond what was anticipated before the start of the trial.  Also it is important that the information is presented at the right level (see the post “Being on the Level,” http://www.chipdesignmag.com/denker/?p=29). If the engineer always has to stop to analyze, they will be bumped off the shared resource to make way for the next engineer.

 

Rick Denker

Packet Plus

 

February 5, 2008

Being on the Level

Filed under: Engineering Tools — admin @ 5:34 pm

 

I recently had a discussion with a development engineer about what makes an ideal development tool.  He responded that he wanted to be able to work at a high level most of the time, but be allowed to dive to lower levels of detail when required.  This highlights the importance of the abstraction level to the effectiveness of a development tool. 

 

The abstraction levels within a market generally follow the modules or building blocks that are constructed as the market progresses.  As an example, in the chip design world the lowest level is transistors, which has been built up to gates, then RTL (register-transfer level) descriptions, then behavioral and higher-level descriptions.  Another example is the seven-layer OSI model used throughout the networking industry.  Follow this link for a description of the OSI model:

http://encyclopedia2.thefreedictionary.com/Seven-layer+OSI+model

 

Another blog had an interesting post that discusses the progression of abstraction level in software tools. Here is the link: http://softwarejobstofresh.blogspot.com/2007/11/programming-language-to-software.html

 

To increase their productivity, engineers continually push to work at higher and higher levels of abstraction.  Imagine the difficulty of designing a chip with millions and millions of transistors with tools that work at the transistor level.  It would be a gargantuan task, involving an army of engineers.  This pressure for more productive tools has been fueled by the relatively limited supply of engineers and the onward progression of Moore’s Law (see http://special-sarfunshafi.blogspot.com/2007/11/meeting-man-behind-moores-law.html). 

 

However, at the higher levels of abstraction certain detailed information is left out, or not readily available.  When something unexpected occurs you often need to dive down a level and look at the more detailed information to determine where the problem is.  It may be in how the abstraction was built (for example, did the compiler create the right code), or it may be a problem that requires the additional detail of a lower level to diagnose.

 

When a move is made to a new higher abstraction level, the tools typically lag behind.  In software tools when high level languages started being used, the debug tools still worked at the lower level.  The engineer would look at the assembly or machine code generated, and debug using that.  Over time the debug tools improved to where the engineer can debug almost completely at the level of the high level language. 

 

Using a tool that is at a different level causes situations where the engineer is stuck performing the translation between levels.  This can be tedious and error-prone.  They may even have some additional tools to help with the manual translation, such as a hexadecimal calculator.  They may also miss important information because of the sheer volume of data that needs to be scanned at a lower level. 

 

As an example, in wireless networking there are several lab bench tools that may be used in development depending on the level of the OSI stack that the engineer is working at.  Among the tools and their corresponding level are: protocol analyzer (packet level), software emulator (instruction level), logic analyzer (signal level) and spectrum analyzer (wave level). 

 

The networking engineer generally wants to work and think at the packet level or above.  Therefore, helping the productivity of networking engineers calls for more tools at the packet level and an improved ability to move easily between abstraction levels when they need to go lower.

 

Rick Denker

Packet Plus, Inc.

January 9, 2008

Basic Debug Tools (PRINTF level) and Beyond

Filed under: Engineering Tools — admin @ 4:39 pm

The basic level of debug tools in most disciplines shares some common characteristics.  I call these basic features “PRINTF” level, because of my experiences as a software developer, and anyone who has developed a software program can easily relate.

 

Before the use of a software debugger, or the even more advanced in-circuit emulator, there was the most basic debug method, which I call “PRINTF-level” debugging.  I call it that because it involved inserting a PRINTF statement or statements into the software code.  (printf is the statement for a formatted print in the C language.) 

 

You would put the PRINTF statements at key points in your program to check the values of key variables, or document how far the program had gotten.  Although the tool was crude, it provided sufficient access to debug most programs, but not with optimal productivity.

 

The salient characteristics of PRINTF level debugging are:

 

  * Significant effort to use

You have to program all the information that you want.  You only get the level of formatting that you are willing to program in.

 

  * Measurements must be decided before the debug trial

The measurement to be made cannot be changed during the running of the debug trial.

 

   * Changes in the design

The design is altered to make the measurement.  This can change the timing or size, sometimes causing or masking a problem.

 

The next level beyond PRINTF is to minimize the need to alter the design and to make the tool interactive.  Minimizing alteration of the design may mean eliminating it completely, or making it small and predictable.  Making the tool interactive gives the user the ability to choose what information to gather during a debug trial and to control the execution flow.  These changes make a dramatic improvement in productivity.  Each debug trial with these improvements can take the place of multiple trials with a more basic tool.  Also, the time spent setting up a trial may be substantially reduced.
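To make the contrast concrete, here is a small Python stand-in for the two styles.  The embedded and networking tools discussed in this post work at very different levels, but the trade-off is the same: instrumentation decided before the run versus inspection chosen during it.

```python
# PRINTF-level debugging: the measurement is decided before the run,
# and the code itself is modified to make it.
def checksum(packets):
    total = 0
    for i, p in enumerate(packets):
        total += p
        print(f"DEBUG: i={i} p={p} total={total}")   # instrumentation baked in
    return total

# Interactive debugging: decide what to inspect during the trial instead.
def checksum_interactive(packets):
    total = 0
    for p in packets:
        total += p
        breakpoint()   # drops into pdb; inspect any variable, then continue
    return total
```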

 

In many industries the initial debug tools have shared the characteristics of PRINTF debugging, then progressed to more interactive options as the market evolved.  For example, a similar progression of tools has occurred in FPGA debugging.

 

In the networking equipment industry a majority of the debugging falls into the PRINTF category, because of several characteristics.  One, the trials must run in real-time in order to catch the difficult problems.  Two, there are multiple pieces of test and debug equipment that must be coordinated.  Three, the configurations are made of equipment from multiple vendors.  And lastly, the speed of protocol changes makes it difficult for tools to keep up.

 

In a previous post I called for improved tools for networking engineers (http://www.chipdesignmag.com/denker/?p=11) and moving to interactive tools would be a big improvement.

 

Rick Denker

Packet Plus, Inc.

December 13, 2007

The Balance of Marketing and Engineering

Filed under: High Tech Marketing — admin @ 6:13 pm

How do you create a balance between marketing and engineering?  First, set up the two functions, each with a different primary focus: marketing with a focus on the customer, and engineering with a focus on the technology.

Marketing is responsible for bringing the customer into product decisions.  This may be through many methods, including customer research, customer visits, reviewing support requests, and reviewing sales results.  They also need to make sure that they do not become a filter.  If certain discussions need a more technical person involved, marketing needs to facilitate that too.

 

Engineering is responsible for bringing technology into product decisions: what is possible, what is the best way to implement it, and what it will cost.  Engineering needs to make sure they provide good data for making the decisions, and guard against favoring the options that they want to build.

 

The amount of overlap depends on the nature of the product and market.  There is significant overlap for most technical products and there is a need for high levels of interaction. 

[Figure: overlap between the marketing and engineering functions]

The best at working this relationship often have a foot in both worlds.  The marketing people often have a technical degree and experience in engineering.  The engineering people have had significant customer interaction. 

 

Once a product plan has been agreed to, there also need to be guidelines on how to proceed as things change.  Few plans can stay the same for more than six months in our constantly changing world, but there needs to be an understanding that keeps the changes from getting out of control.

 

Marketing must realize that feature changes make engineering less efficient.  The cost of context switching is real, and can dramatically affect the productivity of the engineering team.  If the feature set changes for each customer, then marketing is not doing their job.  If the feature set never changes there is a good chance that marketing is not talking to enough customers.

 

Engineering must realize that changes in the schedule or feature set make the product introduction less optimal.  The timing of an introduction is often targeted to a specific event, such as a trade show, that will get maximum impact.  There may also be several activities done beforehand as build-up or preparation.  Changing the schedule can severely distort these plans and hurt their effectiveness.  Changing the feature set requires, at a minimum, that the documentation be modified; at worst it can force a re-positioning of the product, which can change all the marketing materials and the marketing strategy.

 

 

Rick Denker

Packet Plus, Inc.

 

October 16, 2007

The First Rule of Engineering

Filed under: High Tech Marketing — admin @ 4:03 pm

Early in my marketing career I worked with an engineer at Hewlett-Packard who liked to say that the first rule of engineering is “don’t do something stupid.”  Before you judge the rule literally, you have to understand how he used the phrase.  It was not a warning against trying new things or taking on risks; he liked to take on risks and push himself.  The other quality he had was always thinking beyond the scope of his specific area, to make sure what he did would fit within the broader context of the product.  Although the rule was often stated somewhat as a joke, I never heard him direct it at someone else as an insult.  However, he was always actively questioning to make sure he did not run afoul of it himself. 

 

In another take on the first rule of engineering, Andy Grove of Intel recently said in a speech at City College of New York that “the first tenet of engineering is, Always know what problem you’re working on.” (http://www.time.com/time/magazine/article/0,9171,1538622,00.html?iid=chix-sphere)

 

What these two rules have in common is thinking before doing.  The engineer always needs to be actively thinking about what they are doing, not just blindly implementing something.  I like both these rules, because they are based on the analytical and questioning strength of engineers. 

 

A natural consequence is that engineering needs to be active in product decisions.  They need to be able to ask why a particular feature makes sense.  This is a positive sign that they are actively thinking about the problem, and trying to make sure they are solving the right problem.

 

I also believe that engineering must stay aware of the technology options available for completing a solution.  It is another way that engineers need to be actively thinking.  They must spend a portion of their time exploring and learning about new options and methods to solve a problem. 

 

The rest of the organization needs to be tolerant and supportive of these behaviors.  They are part of having a robust and creative engineering team.

 

Also note that it does not mean that engineers alone should make all the product decisions.   In my next post I will explore how to balance the engineering and marketing to make good product decisions.

 

If anyone has their own version of the first rule of engineering that they like to use, please post it as a response.

 

Rick Denker

Packet Plus, Inc.

 

October 8, 2007

The #1 Job of Marketing

Filed under: High Tech Marketing — admin @ 4:58 pm

For a long time in my career I believed that the most important function of marketing was to bring the knowledge and understanding of the customer into the company.  This is a critical part of marketing.  However I no longer believe that it is the most important. 

           

I now firmly believe that the #1 job of marketing is to:

Assess the potential of new markets and plan the entry into the chosen new markets.

This is consistent with my earlier post, The Marketing Wedge (http://www.chipdesignmag.com/denker/?p=13).  The key thought of the Marketing Wedge is that market factors are more important than customer factors, and customer factors are more important than product factors.

 

New markets are the key to long-term continued growth and innovation.  Some of the reasons for this are:

- New markets offer the largest potential gains in revenue.  The gains from adding a new feature for an existing customer base, or from addressing a new customer in an existing market, are generally much smaller in the long term.

 

- New leading edge markets also are characterized by change and innovation.  Participating in these markets will increase your own innovation.  If your growth and innovation are sagging, explore whether you have been resting on the laurels of your current markets, or taking on the challenges of new markets.

 

- Growth does not necessarily continue forever.  Even the best of markets eventually become saturated, then stagnate and decline.

 

- There is always the threat of change to current markets, potentially forcing you into a mad scramble for new markets.  Some changes can be foreseen, but a disruptive technological change is almost impossible to predict.

 

Also remember that new markets take time to develop.  You need to start your entry into a new market years before the new revenue is needed.

 

Too often within established companies the market and channel choices are already set in stone.  Because of this, many in the marketing profession do not get exposed to this aspect of marketing.  However, the increasing pace of change in markets, the increasing complexity of the sales channel options, and the broad diversity of customers make ignoring this #1 job more dangerous to your company’s long-term prosperity. 

 

An insightful analysis of this management behavior is detailed in the book titled, The Innovator’s Dilemma by Clayton Christensen. He explains how rational management decisions can cause management to miss market waves caused by discontinuous changes. 

 

What this means for marketing is that the old model of “the next bench syndrome,” which is a very incremental approach, is even less likely to apply, and that more risks need to be taken.  It is hard to predict what you will discover in a new market, so flexibility will be needed for success.  Also, sales needs to understand that there will be more testing of the knowledge and capabilities of the sales channels.  They will need to be more flexible too.

 

If your company has not been flexing this new market muscle, it will more likely than not wake up one day to find it has to take a crash course in finding new markets in order to survive.

 

Rick Denker

Packet Plus™, Inc.

August 23, 2007

Wireless Test Environments

Filed under: Engineering Tools — admin @ 12:25 pm

It is important to understand the strengths and weaknesses of the range of wireless test environments.  Test environment refers to the setup or environment into which the device being tested and the test equipment are placed.  The four primary types of environments are: 

Faraday Cages

Faraday cages are usually large, hand-constructed, copper mesh wrapped boxes or rooms.  Because of the expense of their construction, they are typically found in the labs of large equipment manufacturers, where they are shared for testing and quality assurance.  Because Faraday cages assure a fairly noise- and interference-free environment, they are good for a wide variety of individual product tests, especially for antennas.  However, test configurations of more than a few devices can quickly congest traffic in a cage.  In addition, there may not be enough distance in the cage to test effects such as multi-path or diversity. 

Test Boxes, or RF Chambers

RF Chambers are metal boxes with absorbing material lining the inside to dampen interference.  They provide a controlled environment for much lower cost than a Faraday cage.  Typically, the DUT is placed into the test chamber, and probes are used to couple signals to/from the DUT through cables to an external test system.  In some cases the DUT and the test equipment are placed within the same test chamber, at which point this approximates a Faraday cage.  At some point, it ceases to be practical to use chambers as opposed to a Faraday cage.  Moreover, because spatial information is lost, some equipment cannot be tested in a chamber, e.g., smart antennas. 

Multiple sizes of chambers may be required for proper testing in a fully-enclosed environment.  The lower limit on the size of the chamber is dictated by the distance at which the RF near-field transitions to the RF far-field.  Objects – including the walls of the chamber itself – that are placed closer than this distance to the unit under test have a significant impact on the radiation pattern and efficiency of the unit; hence it is necessary to ensure that the chamber dimensions are greater than this distance, otherwise the test results may prove to be either irreproducible or erroneous. 
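For a rough sense of scale, the near-field to far-field transition is often estimated with the Fraunhofer rule of thumb d = 2D²/λ, where D is the largest dimension of the antenna and λ is the wavelength.  The sketch below applies it to a hypothetical 2.4 GHz device; treat it as a first approximation, not a chamber specification.

```python
# Estimating the near-field / far-field transition that sets the minimum
# chamber size, using the common Fraunhofer rule of thumb d = 2*D^2/lambda.
# A starting estimate only, not a substitute for proper RF analysis.

C = 3.0e8                      # speed of light, m/s

def far_field_distance(antenna_dim_m, freq_hz):
    wavelength = C / freq_hz
    return 2 * antenna_dim_m**2 / wavelength

# Example: 10 cm antenna at 2.4 GHz (Wi-Fi)
print(f"{far_field_distance(0.10, 2.4e9):.2f} m")   # ~0.16 m with these inputs
```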

Cabled

Cabled tests simply substitute a wired connection for the wireless connection, bypassing the antennas and directly connecting two pieces of equipment.  As a result, cabled tests are inexpensive and easy to configure, and provide good isolation from interference.  Unlike cages and chambers, they are not limited to small configurations.  However, because of the lack of interference, their results are idealized toward better performance than would actually occur in the randomness of an open-air environment.  In addition, properly performing cabled testing relies on the DUT itself being well-shielded, which may not always be the case in consumer or low-end enterprise equipment.  Finally, equipment with integral antennas (where the antenna cannot be disconnected to gain access to a connector or other attachment point for a cable) cannot be tested using this method. 

Open air

Open air is the only test environment that truly matches the way the customer will use the equipment.  Like cabled environments, open air has no size limitations or limits on the number of pieces of equipment in a configuration.  For some tests, it is ideal because it can test both the antenna and the protocol effects.  Also it is the only solution for certain location-dependent tests. 

Open air test environments can be separated into indoor and outdoor.  Indoor environments are normally actual buildings, usually with furnishings and other accoutrements characteristic of typical office buildings.  Outdoor environments are usually open spaces without obstructions, such as would be found at an antenna range.  Of these two, the indoor environment is of the most interest, as it most closely approximates the conditions under which the equipment is expected to function.  Outdoor environments are used for applications such as characterizing antenna patterns, setting baselines for range and rate, etc.

Summary of test environments

Complete testing requires a combination of test environments; a one-size-fits-all environment does not exist for wireless testing.  Ideally, test equipment should be able to accommodate all environments.  The figure below summarizes the trade-offs.

[Figure: summary table of wireless test environments]

Rick Denker

Packet Plus™, Inc

August 14, 2007

The QA Bottleneck

Filed under: Engineering Tools — admin @ 4:11 pm

In new markets a bottleneck can develop in the Quality Assurance department.  This post discusses why this occurs and outlines potential solution areas.

The Cost of a Problem 

First, it is important to look at how the cost to fix a problem depends significantly on the phase in which the problem is discovered.  The conventional wisdom has been that the cost to fix a problem goes up ten times for each stage of the development process, from engineering to the customer. 

 

This makes intuitive sense.  In engineering a fix may involve a simple re-compile/re-test.  Only one functional group in the company has been affected.  In the best case only a single person is affected.

 

Once a product has been released to QA, there has already been a tremendous investment in integration and unit testing.  In QA, a test needs to be added to the regression suite to cover the newly discovered error, and the previous testing re-run. 

 

Once a product has been released to manufacturing, the costs again go up dramatically.  They can include re-fabrication of a semiconductor device, reworking or replacing inventory, and in some cases even re-tooling the manufacturing line.

 

The costs increase dramatically again once a product is released to customers.  The costs now include both increased expenses and lost revenue.  The expenses include support time handling multiple versions, recall costs, and replacement costs.  However, the impact on customer satisfaction, reputation, and market share may swamp the expenses.  All of these costs are magnified by the size of the installed base.

 

(Note that all the costs of previous stages will be incurred for issues that are discovered in a later stage.  For example, an error that is discovered by a customer will still have manufacturing costs, QA costs, and engineering costs.)
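A tiny sketch of the arithmetic behind the ten-times rule and the cumulative effect just noted; the base cost is hypothetical, and the exact numbers matter less than how quickly they compound.

```python
# The "10x per stage" rule of thumb for the cost of fixing a problem.
# Later-stage fixes also incur all the earlier-stage costs.

stages    = ["engineering", "QA", "manufacturing", "customer"]
base_cost = 1_000   # hypothetical cost to fix in engineering

cumulative = 0
for i, stage in enumerate(stages):
    stage_cost = base_cost * 10**i      # ten times more at each stage
    cumulative += stage_cost
    print(f"Found in {stage:<13}: stage cost ${stage_cost:>9,}, "
          f"total ${cumulative:>9,}")
```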

[Figure: cost to fix a problem by development stage]

The Pressure on QA 

This cost structure already puts a lot of pressure on QA to avoid the costs of problems getting to manufacturing or to the customer.  In a new and growing market there are several factors that can combine to create a bottleneck in QA.

 

Among the factors that increase the load on QA are:  

1) The level of quality that is demanded rises. 

What was acceptable when a product was new and unique becomes unacceptable as more alternatives become available. 

 

2) The need to test with other products can increase dramatically.

As a market grows, this interoperability testing can become a significant part of the QA effort.

 

3) The average user sophistication becomes lower. 

The early adopters may be experts that can work around certain defects and still get value.  As a market moves into the early majority this will no longer be the case.           

 

4) The new product may be finding new uses.

As the product proliferates or as the price drops, the product may be penetrating new classes of customers.  Suddenly many new application scenarios need to be tested.

Potential Solution Areas 

Clearly, from looking at the costs, the solution has to come during the engineering or QA phases.  Products released prematurely just become ticking time bombs ready to explode at the wrong moment.  Actions such as outsourcing customer support are just stop-gap measures that do not address the root cause.

           

There are many tools that help to automate the QA effort.  These can be invaluable.  They can both decrease the labor costs and increase the effectiveness of testing. 

 

The best solutions are the ones that reach all the way back to the engineering phase.  They typically involve changes to fundamental processes.  Depending on the kind of issues a company has, these could include getting customer feedback earlier in the process, more unit testing in engineering, eliminating fuzzy handoffs, and clearer release criteria.  As with most organizational changes, they are better implemented when there are tools that support and reinforce the desired new behaviors.

 

Rick Denker

Packet Plus™, Inc.
