Saturday, March 30, 2013

Verification Futures India 2013 - Quick recap!

Verification Futures started in the UK in 2011 and in 2013 reached India as well. It is a one-day conference organized by T&VS that provides a platform for users to share their verification challenges and for the EDA vendors to respond with existing and upcoming solutions. The conference was held on 19 March in Bangalore and turned out to be a huge success. It is a unique event, extending an opportunity to meet the fraternity and collaborate on common challenges. I thank Mike Bartley for bringing it to India and highly recommend attending it.
 
The discussions covered a variety of topics in verification, and in one way or another all the challenges pointed back to the basic issue of ‘verification closure’. The market demands designs with more functionality, a smaller footprint, higher performance and lower power, delivered in a continuously shrinking time window. Every design experiences constant spec changes until the last minute, and every function of the ASIC design cycle is expected to respond promptly. With limited resources (in both quantity and capability), verification turnaround time falls onto the critical path. Multiple approaches surfaced during the discussions at the event, giving solution seekers plenty of food for thought and the community some hope. Some of them are summarized below.
 
Hitting the coverage targets is the end goal for closure. However, the definition of these goals is bounded on one side by an individual’s ability to capture the design in a specification and on the other by the ability to translate that specification into a coverage model. Further, the disconnect with the software team aggravates this issue: the software may never exercise all capabilities of the hardware, and may hit cases that were not even imagined. HW-SW co-verification could be a potential solution to narrow down the ever-increasing verification space and to increase the useful coverage.
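To make the idea of ‘useful coverage’ concrete, here is a toy sketch in Python (in practice this would be a SystemVerilog coverage model, and every name and parameter below is purely hypothetical): it enumerates a small configuration space that a spec-driven coverage model might demand, then intersects it with the configurations the software stack actually exercises.

```python
from itertools import product

# Hypothetical coverage space derived from the hardware spec: every
# combination of transfer mode, burst length and power state.
modes        = ["read", "write", "atomic"]
burst_lens   = [1, 4, 8, 16]
power_states = ["active", "retention"]

full_space = set(product(modes, burst_lens, power_states))   # 24 bins

# Hypothetical set of configurations the software stack actually drives,
# e.g. learned from driver code or HW-SW co-verification runs.
sw_exercised = {
    ("read",   4, "active"),
    ("read",  16, "active"),
    ("write",  4, "active"),
    ("write",  8, "retention"),
}

useful = full_space & sw_exercised
print(f"spec-driven coverage space : {len(full_space)} bins")
print(f"software-exercised subset  : {len(useful)} bins")
```

The gap between the two sets is where effort can be redirected: either prune the coverage model, or push the software to exercise more of the hardware.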
 
Verifying the design and delivering one that is an exact representation of the spec has been the responsibility of the verification team. Given that the problem is compounding by the day, there may be a need to enable “Design for Verification”: designs that are correct by construction, easier to debug and built with bug-avoidance strategies during development. EDA would need to enhance tool capabilities and the design community would need to undergo a paradigm shift to enable this.
 
Constrained Random Verification has been widely adopted to hit corner cases and build high confidence in verification. However, this approach also generates redundant stimulus that covers the same ground over and over again. This means that, even with grading in place, achieving 100% coverage is easier said than done. Deploying directed approaches (such as graph-based stimulus) or formal verification has its own set of challenges, so a combination of these approaches may be needed. Which flow suits which part of the design? Is 100% proof/coverage a ‘must’? Can we come up with objective ways of defining closure with a mixed bag? The answer lies in collaboration between the ecosystem partners, including EDA vendors, IP vendors, design service providers and product developers. The key would be to ‘learn from each other’s experiences’.
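As a rough illustration of the redundancy problem (again a toy Python sketch with made-up numbers, not a real testbench), purely random stimulus keeps revisiting bins it has already covered, so every additional batch of simulations buys fewer new coverage points:

```python
import random

random.seed(0)
NUM_BINS = 200            # hypothetical functional coverage bins
covered = set()

for batch in range(1, 6):
    before = len(covered)
    # each batch fires 200 unconstrained random transactions
    for _ in range(200):
        covered.add(random.randrange(NUM_BINS))
    new_hits = len(covered) - before
    print(f"batch {batch}: {new_hits:3d} new bins, "
          f"{len(covered) / NUM_BINS:6.1%} total coverage")
```

This diminishing return is exactly where grading, directed or graph-based stimulus, and formal proofs are expected to pick up the remaining bins.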
 
If we cannot contain the problem, are there alternatives to manage the explosion? Is there a replacement for the CPU-based simulation approach? Can we avoid the constraint of limited CPU cycles during peak execution periods? Cloud-based solutions that offer elasticity, hardware acceleration that increases velocity and GPU-based platforms that enhance performance are some of the potential answers.
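Just to illustrate the elasticity idea (a hypothetical Python sketch; the pool size, test names and runner below are stand-ins for a real simulator farm), regression jobs are fanned out to however many workers happen to be available, so the peak-period bottleneck becomes the size of the pool rather than a fixed local server:

```python
from concurrent.futures import ProcessPoolExecutor
import time

# Hypothetical regression list; in reality each entry would carry a
# simulator command line, test name and random seed.
tests = [f"test_{i}" for i in range(100)]

def run_test(name):
    time.sleep(0.01)          # placeholder for an actual simulation job
    return name, "PASS"

if __name__ == "__main__":
    # POOL_SIZE is the knob that elasticity turns: on a cloud farm it can
    # grow during peak regression periods instead of being capped by the
    # compute sitting in the local server room.
    POOL_SIZE = 8
    with ProcessPoolExecutor(max_workers=POOL_SIZE) as pool:
        for name, status in pool.map(run_test, tests):
            print(f"{name}: {status}")
```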
 
The presentation from Mentor included a quote from Peter Drucker –
- What gets measured, gets done
- What gets measured, gets improved
- What gets measured, gets managed
 
While the context of the citation was coverage, it applies to all aspects of verification. To enable continual improvement we need to think beyond the constraints, monitor beyond the signals and measure beyond coverage!
 

4 comments:

  1. The crux of the verification solution -- what is not simulated, does not work. Period.

    The following is not clear in the article: "Is there a replacement for the CPU-based simulation approach? Can we avoid the constraint of limited CPU cycles during peak execution periods?"

  2. Hi Chandresh,

    Yes, what is not simulated doesn't work. But whether software will ever try using it is an important point. Chips in production have bugs; it is just that the software running on top of them doesn't stimulate those bugs, so everything appears fine. Can we use this to our advantage?

    Is there a replacement for the CPU-based simulation approach?
    The answer is hardware accelerators/emulators or GPU-based platforms.

    Can we avoid the constraint of limited CPU cycles during peak execution periods?
    The answer is cloud computing.

    All of these are works in progress. Hardware accelerators have gained traction, GPU-based platforms are still under development, and cloud adoption faces the challenge of security.

  3. Hardware accelerators are a replacement for the CPU-based simulation approach.
    Gaurav, doesn't this solution become an application-specific Design for Verification platform? Presently, the software-based verification platform uses a generic approach to code coverage and functional verification.
    In the case of a hardware-accelerator design, this approach may depend more on the application/design under consideration. Nevertheless, this solution may take less time for DFV, and the isolation between the design engineer and the verification engineer is reduced.
    My question is: should this be a specific task accomplished only during the design and development phase?
    Rajeshwari

  4. Rajeshwari,

    Thanks much for reading the post & sharing your thoughts.
    HW acceleration is a complementing technology; it comes into the picture when CPU-based approaches stagnate, and it is costly too.
    On the specific task: so far we have always focused on designing as per the spec and verifying that the design is an exact representation of the spec. We need to think in the direction where the working/verified design is an exact representation of the software requirements, to reduce the overall verification goals.
