Collaboration Penalty Is Steep For Engineers

By John Blyler
System-Level Design sat down to discuss chip-design productivity and quality issues with Srinath Anantharaman, president and founder of Cliosoft; Ronald Collett, president and CEO of Numetrics Management Systems; and Michel Tabusse, CEO and co-founder of Satin Technologies. What follows are excerpts of that discussion:

SLD: How can chip-development teams improve their productivity?
Anantharaman: We all know that adding engineers to a project doesn’t increase productivity linearly. Engineers spend more time communicating and recovering from miscommunication and less time designing. This can be described by a simple equation: the effective output of N + M engineers is (N + M)/CP, where CP > 1.0 is the ‘collaboration penalty.’ The value of CP increases as the size of the team grows. Engineers need to share data with each other and be aware of changes being made by other engineers working on the project. One of the best investments a team can make to maximize productivity is to deploy tools and techniques that improve communication, institute accountability, track changes, and make data easy to recover, all without imposing an undue burden on the engineers. Software engineers have used software-configuration-management (SCM) systems for decades to help solve some of the problems of concurrent development. Unfortunately, SCM systems don’t address all of the needs of hardware teams. Hardware flows are more complex, relying on a multitude of legacy tools that generate a large volume and variety of data.
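To make the arithmetic concrete, here is a minimal sketch of that equation in Python; the rate at which CP grows with team size is an assumption chosen purely for illustration, not a figure from Cliosoft:

```python
# Illustrative model of the collaboration penalty (CP) described above.
# The growth rate of CP with team size is assumed for demonstration only.

def collaboration_penalty(team_size: int, penalty_per_engineer: float = 0.02) -> float:
    """CP > 1.0, assumed to grow with every engineer added beyond the first."""
    return 1.0 + penalty_per_engineer * max(team_size - 1, 0)

def effective_output(n: int, m: int) -> float:
    """Effective output of N + M engineers is (N + M) / CP."""
    total = n + m
    return total / collaboration_penalty(total)

if __name__ == "__main__":
    for n, m in [(5, 0), (5, 5), (10, 10), (20, 20)]:
        print(f"{n + m:2d} engineers -> {effective_output(n, m):.1f} engineer-equivalents")
```

With these assumed values, a 40-person team delivers the output of roughly 22 unencumbered engineers, which is exactly the penalty the equation describes.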
Collett: Productivity measures output per unit of effort expended to create that output. The development team’s output is the design it hands off for volume manufacture. The effort is the total labor, in person-weeks, that the team expends on the project, from concept to release-to-production. Therefore, maximum productivity occurs when the team expends the minimum effort to develop the chip. To ensure minimum effort, the project plan’s staffing level must assume that the average productivity among all team members will be the highest possible. Best-in-class is a good baseline target. The project manager then allocates only the staffing necessary to achieve the development throughput, measured as output per week, that’s required to finish the project on time. You can think of it as ‘optimally understaffing’ the project.
A project manager achieves optimal understaffing by asking the following question during the project-planning phase: If my team were to achieve best-in-class productivity, what’s the minimum staffing I need to finish the project on time? The team size should be just large enough to finish the project on schedule under the assumption that productivity will be best in class.
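As a minimal sketch of that planning question, the back-of-the-envelope calculation below uses invented numbers; the complexity units, best-in-class rate, and schedule are hypothetical, not Numetrics data:

```python
import math

# Hypothetical planning inputs; the units and values are illustrative only.
design_complexity = 1200.0   # abstract complexity units to be delivered
best_in_class_rate = 2.5     # complexity units per engineer per week
schedule_weeks = 40.0        # time-to-market constraint

# Development throughput (output per week) required to hit the schedule.
required_throughput = design_complexity / schedule_weeks

# Optimal understaffing: the smallest team that finishes on time,
# assuming it achieves best-in-class productivity.
min_staff = math.ceil(required_throughput / best_in_class_rate)

print(f"Required throughput: {required_throughput:.1f} units/week")
print(f"Minimum staffing at best-in-class productivity: {min_staff} engineers")
```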
Tabusse: EDA tools, however good they may be at accomplishing well-defined tasks and however focused on quality within their own domain, cannot solve the problem of overall quality without negatively impacting productivity. Scrutinizing megabytes of log files generated at each design iteration is hardly compatible with the overall design-schedule constraints. (And if you don’t do it, you risk overlooking something vital.) If design reviews are hurried, unfocused, or skipped altogether in a crunch to get something completed, or if handoffs are unreliable, more risk is added. And if the weekly reports that a project manager gets from his team are nothing more than manually filled Excel checklists attached to e-mails, more time will be wasted at the project-management level before the right design decisions can be made. Design-quality monitoring and reporting has now become a discipline in itself. It can make the difference between meeting delivery schedules and not meeting them, between one-pass silicon and expensive respins, and between meeting a market window and missing it entirely.

SLD: Good EDA tools are a prerequisite for any serious chip-design project. Beyond that, what else is needed to ensure a high-quality, yet productive design?
Anantharaman: EDA tools help improve the productivity of an individual designer. However, as the size of the team grows, it is of paramount importance to make sure that all of the team members are collaborating efficiently and working in unison. Even a small decrease in collaboration efficiency can erase any gains made by using the best EDA tools. Design teams must make use of automation to communicate effectively and share data and status in a timely and error-proof manner. For example, hardware-configuration-management (HCM) systems provide a very effective platform to share data and keep engineers updated on status in a timely fashion. Issue tracking, project and schedule management, instant messaging, and web conferencing are all tools that should be deployed to grease the wheels of collaboration. Though tools can help improve productivity, much depends on how you use them. Keep the tools and processes simple. Otherwise, they can become onerous and engineers won’t follow the processes correctly, or will spend too much time following them.
Collett: Putting manufacturing issues aside, design quality is a function of the amount of verification and validation that a team performs. The more verification and validation, the higher the quality. This means putting as many resources as possible on those tasks, including staffing, computing, and so on. Teams increase productivity by performing development activities more efficiently. Boosting motivation is among the most effective ways to improve efficiency. High motivation occurs when the team is set up for success. Give the team an extremely aggressive development schedule together with irrefutable facts and data that demonstrate that the schedule is achievable with the resources allocated (provided the team achieves best-in-class productivity).
Tabusse: Good EDA tools, even combined within well-automated flows, aren’t enough to produce quality designs with acceptable productivity. Beyond putting the best EDA tools into action, the most important issues to address are formalizing design flows, practices, and handoff processes. Equally important is monitoring all of the quality checks and metrics that the company considers critical, and highlighting potential deviations from approved metrics. Reducing the time spent on monitoring and automating reporting activities are also critical to successful design projects. By setting up a non-intrusive monitoring system based on specific company quality checks and metrics, each designer and design team can make decisions based on up-to-the-minute factual design data. They also can save weeks by automatically generating quality reports that highlight action items.
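A minimal sketch of that kind of non-intrusive monitoring is shown below; the check names, values, and limits are invented for illustration and do not come from any particular company’s flow or from Satin’s product:

```python
# Hypothetical quality-metric monitor: compare collected metrics against
# company-defined limits and report only the deviations that need action.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    value: float
    limit: float
    higher_is_worse: bool = True

    def deviates(self) -> bool:
        return self.value > self.limit if self.higher_is_worse else self.value < self.limit

# Example metrics as they might be pulled from tool logs (values invented).
metrics = [
    Metric("lint_errors", 3, 0),
    Metric("worst_timing_slack_ns", -0.12, 0.0, higher_is_worse=False),
    Metric("functional_coverage_pct", 87.0, 95.0, higher_is_worse=False),
    Metric("drc_violations", 0, 0),
]

action_items = [m for m in metrics if m.deviates()]
print(f"{len(action_items)} of {len(metrics)} checks need attention:")
for m in action_items:
    print(f"  {m.name}: {m.value} (limit {m.limit})")
```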

SLD: What’s the root cause of poor schedule predictability on IC-development projects?
Anantharaman: IC development is complex, often with large teams spread across multiple sites. Lack of communication and visibility into current status is a major cause of poor predictability. Changes to specifications or other ECOs may not get communicated to the engineers who need them and so come as a surprise at the end. Managers have to rely on the engineers’ status reports, which are often too rosy. Deploying an HCM system effectively can help to avoid surprises. The team is constantly aware of the changes being made and can respond quickly to them. Schedule predictability can be improved by tracking objective metrics. Issue-tracking systems (several commercial and open-source systems are available) provide an objective measure of the number and severity of open issues, as well as the rate of increase or decrease in reported issues.
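As an illustration of such objective tracking, the sketch below computes the week-over-week trend in open issues from a hypothetical export; the snapshot data and field names are invented and not tied to any specific issue tracker:

```python
# Hypothetical weekly snapshots of open-issue counts by severity, as they
# might be exported from any commercial or open-source issue tracker.
weekly_open_issues = [
    {"week": 1, "critical": 4, "major": 15, "minor": 32},
    {"week": 2, "critical": 6, "major": 18, "minor": 35},
    {"week": 3, "critical": 3, "major": 14, "minor": 30},
]

def total_open(snapshot: dict) -> int:
    """Sum all severity buckets in one weekly snapshot."""
    return sum(v for k, v in snapshot.items() if k != "week")

for prev, curr in zip(weekly_open_issues, weekly_open_issues[1:]):
    delta = total_open(curr) - total_open(prev)
    trend = "rising" if delta > 0 else "falling" if delta < 0 else "flat"
    print(f"Week {curr['week']}: {total_open(curr)} open issues "
          f"({delta:+d} vs. prior week, {trend})")
```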
Collett: By definition, the root cause of poor schedule predictability is a poor estimate of the time required to design the chip. A poor estimate means that the project was staffed incorrectly, given the design’s complexity and the time-to-market constraint imposed. Complexity includes not only the design’s intrinsic complexity, but also the stochastic nature of IC development, which introduces what I would call ‘stochastic complexity.’ Examples include spec changes, project-management issues, third-party and internal IP problems, EDA and library issues, and resource-management issues. Interestingly, both intrinsic and stochastic complexity can be accurately and reliably modeled, enabling accurate estimates of resource requirements. In a nutshell, a project slips schedule when a mismatch exists between intrinsic/stochastic complexity and resource allocation.
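One way to make that mismatch concrete is as a simple feasibility check: a project is at risk of slipping when the effort implied by intrinsic plus stochastic complexity exceeds the effort the allocated team can deliver by the deadline. The toy model below follows that reading; it is not Numetrics’ model, and every number in it is invented:

```python
# Toy schedule-risk check: the project slips when required effort exceeds
# the effort the allocated team can deliver by the deadline.
intrinsic_complexity = 1000.0   # baseline design complexity (abstract units)
stochastic_margin = 0.25        # allowance for spec changes, IP and EDA issues, etc.
productivity = 2.5              # complexity units per engineer per week
team_size = 10
schedule_weeks = 40

required_effort = intrinsic_complexity * (1 + stochastic_margin) / productivity  # person-weeks
available_effort = team_size * schedule_weeks                                    # person-weeks

if required_effort > available_effort:
    print(f"Schedule at risk: short by {required_effort - available_effort:.0f} person-weeks")
else:
    print(f"Plan feasible with {available_effort - required_effort:.0f} person-weeks of margin")
```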
Tabusse: One cause is the lack of corporate-wide reference metrics that would help teams learn from the past. (Actually, the myriad of overlapping metrics at some companies has the same effect.) Another major challenge is the absence of reliable information on the quality of the building blocks that the next project will be using.

SLD: Design teams can be resource hogs, using up all available disk space and processing power. How can such usage be controlled without adversely affecting the design?
Anantharaman: Hardware design teams are known to use up all available disk space. Design libraries are often very large. EDA tools produce large numbers of very large files. And engineers love to keep copies just in case they need them. Often, the entire project data is archived to keep a snapshot of a milestone. Engineers often think that disk space is cheap because they can go to a local electronics store and pick up a terabyte hard drive for a couple hundred dollars. However, that cannot be compared to disk space on a high-reliability NAS server in an enterprise network. Additionally, there’s the huge cost of managing disk space, which includes creating backups. Deploying an HCM system helps in multiple ways. Users no longer need to keep local backup copies of their own because all versions are managed. Any milestone can be captured with a tag or label that records the configuration of the project, without making a complete copy of the project data.
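A tiny illustration of that last point, using hypothetical data structures rather than any particular HCM product: a milestone tag records only which version of each file belongs to the configuration, instead of duplicating the data.

```python
# A milestone "tag" records the configuration (file -> version) rather than
# copying project data. The repository contents here are invented.
repository = {
    "rtl/cpu.v":       ["v1", "v2", "v3"],
    "tb/cpu_tb.sv":    ["v1", "v2"],
    "constraints.sdc": ["v1"],
}

# Tagging a milestone captures the current head version of every file.
milestone_tag = {path: versions[-1] for path, versions in repository.items()}
print("tapeout_candidate_1:", milestone_tag)

# Recreating the milestone later means checking out exactly those versions,
# not restoring a full copy of the project tree.
```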

SLD: Many engineers and their managers are reluctant to use software-as-a-service (SaaS) or cloud-computing tools due to performance (too slow) and security (too risky) concerns. How can product-lifecycle-management (PLM) software address these concerns? What portion of the overall design development could be used as a test case to boost confidence in SaaS/cloud PLM tools?
Collett: How does PLM software address those concerns? Cloud-based computing performance is a function of the software it uses to perform load balancing and transaction processing across the server farm and between the client and the cloud. So I think steady improvements in cloud-computing infrastructure software will be the answer to the performance issue. Regarding security, there are a tremendous number of very powerful measures available today. It’s different than it was 5 or 10 years ago. My company provides all of its products via SaaS and we’ve never had a security breach. Some of the top semiconductor companies in the world store their data on our systems. Preventing security breaches is a function of two things: the number of layers of security and continuous attention to the issue by senior management.
Tabusse: As stated before, our product architecture makes little use of the total available network bandwidth. End users only need a web browser, and on most occasions they don’t even notice that the application is delivered from a remote server. While most of our customers have installed the software on their own premises (for security reasons or simply by habit), some have started to use a remote server that we provide.
