Saturday, March 30, 2013

Verification Futures India 2013 - Quick recap!

Verification Futures started in the UK in 2011 and in 2013 reached India as well. It is a one-day conference organized by T&VS, providing a platform for users to share their verification challenges and for EDA vendors to respond with current and upcoming solutions. The conference was held on 19 March in Bangalore and turned out to be a huge success. It’s a unique event, extending an opportunity to meet the fraternity and collaborate on discussing challenges. I thank Mike Bartley for bringing it to India and highly recommend attending it.
 
The discussions covered a variety of topics in verification, and in one way or another all the challenges pointed back to the basic issue of ‘verification closure’. The market demands designs with more functionality, a smaller footprint, higher performance and lower power, delivered in a continuously shrinking time window. Every design experiences constant spec changes until the last minute, expecting all functions of the ASIC design cycle to respond promptly. With limited resources (in terms of both quantity and capability), the turnaround time for verification falls on the critical path. Multiple approaches surfaced during the discussions at the event, giving enough food for thought to solution seekers and hope to the community. Some of them are summarized below.
 
Hitting the coverage goals is the end point for closure. However, the definition of these goals is limited on one side by an individual’s ability to capture the design in a specification and on the other by the ability to converge that specification into a coverage model. Further, a disconnect with the software team aggravates the issue: the software may not exercise all capabilities of the hardware and may actually hit cases no one even imagined. HW-SW co-verification could be a potential solution to narrow down the ever-increasing verification space and to increase the useful coverage.
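To make the idea of a coverage model concrete, here is a minimal SystemVerilog sketch for a hypothetical packet interface; the type, class and bin names are assumptions for illustration, not tied to any specific design or methodology.

```systemverilog
// Minimal sketch of a coverage model for a hypothetical packet interface.
// The type, class and bin names are illustrative assumptions only.
typedef enum bit [1:0] {READ, WRITE, ATOMIC} pkt_kind_e;

class packet_cov;
  pkt_kind_e kind;
  bit [9:0]  len;

  covergroup cg;
    cp_kind : coverpoint kind;
    cp_len  : coverpoint len {
      bins small_pkt  = {[1:64]};
      bins medium_pkt = {[65:512]};
      bins large_pkt  = {[513:1023]};
    }
    // Crosses capture combinations the software may never exercise
    cx_kind_len : cross cp_kind, cp_len;
  endgroup

  function new();
    cg = new();
  endfunction

  // Called by a testbench monitor for every observed packet
  function void sample_pkt(pkt_kind_e k, bit [9:0] l);
    kind = k;
    len  = l;
    cg.sample();
  endfunction
endclass
```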
 
Verifying designs and delivering one that is an exact representation of the spec has been the responsibility of the verification team. Given that the problem compounds by the day, there may be a need to enable “Design for Verification”: designs that are correct by construction, easier to debug and built with bug-avoidance strategies during development. EDA would need to enhance tool capabilities, and the design community would need to undergo a paradigm shift to enable this.
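One commonly cited ingredient of making designs easier to debug is embedding assertions alongside the RTL, so that design intent is checked, and bugs are localized, at the source. Below is a minimal SystemVerilog assertion sketch for a hypothetical valid/ready handshake; the module and signal names are assumptions for illustration.

```systemverilog
// Sketch of designer-embedded assertions for a hypothetical valid/ready handshake.
// Module and signal names (valid, ready, data) are illustrative assumptions.
module handshake_checker (
  input logic        clk,
  input logic        rst_n,
  input logic        valid,
  input logic        ready,
  input logic [31:0] data
);
  // Once asserted, valid must hold until the transfer is accepted
  property p_valid_stable;
    @(posedge clk) disable iff (!rst_n)
      valid && !ready |=> valid;
  endproperty
  a_valid_stable : assert property (p_valid_stable)
    else $error("valid dropped before ready");

  // Data must not change while a transfer is pending
  property p_data_stable;
    @(posedge clk) disable iff (!rst_n)
      valid && !ready |=> $stable(data);
  endproperty
  a_data_stable : assert property (p_data_stable)
    else $error("data changed while transfer pending");
endmodule
```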
 
Constrained Random Verification (CRV) has been adopted widely to hit corner cases and ensure high confidence in verification. However, this approach also leads to redundant stimulus generation, covering the same ground over and over again. This means that, even with coverage grading in place, achieving 100% coverage is easier said than done. Deploying directed approaches (like graph-based stimulus) or formal has its own set of challenges. A combination of these approaches may be needed. Which flow suits which part of the design? Is 100% proof/coverage a ‘must’? Can we come up with objective ways of defining closure with a mixed bag? The answer lies in collaboration between the ecosystem partners, including EDA vendors, IP vendors, design service providers and product developers. The key would be to ‘learn from each other’s experiences’.
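As a reference point for what constrained-random stimulus looks like, here is a minimal SystemVerilog sketch for a hypothetical bus transaction; the class, field names and distribution weights are illustrative assumptions only.

```systemverilog
// Minimal constrained-random stimulus sketch for a hypothetical bus transaction.
// The class, field names and distribution weights are illustrative assumptions.
class bus_txn;
  rand bit [31:0] addr;
  rand bit [4:0]  burst_len;
  rand bit        is_write;

  // Keep stimulus within the legal space
  constraint c_legal {
    addr inside {[32'h0000_0000 : 32'h0000_FFFF]};
    burst_len inside {[1:16]};
  }
  // Bias generation toward boundary values to hit corner cases sooner
  constraint c_corner {
    burst_len dist {1 := 4, 16 := 4, [2:15] :/ 2};
  }
endclass

module tb;
  initial begin
    bus_txn txn = new();
    repeat (100) begin
      if (!txn.randomize())
        $error("randomization failed");
      // drive txn onto the interface here ...
    end
  end
endmodule
```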
 
If we cannot contain the problem, are there alternatives to manage the explosion? Is there a replacement for the CPU-based simulation approach? Can we avoid the constraint of limited CPU cycles during peak execution periods? Cloud-based solutions offering elasticity, hardware acceleration to increase velocity and GPU-based platforms for enhanced performance are some potential options.
 
The presentation from Mentor included a quote from Peter Drucker –
- What gets measured, gets done
- What gets measured, gets improved
- What gets measured, gets managed
 
While the context of the citation was coverage, it applies to all aspects of verification. To enable continual improvement we need to think beyond the constraints, monitor beyond the signals and measure beyond coverage!
 

Sunday, March 3, 2013

Over-verification: an intricate puzzle

For verification, it was an eventful week. DVCON 2013 kept everyone busy with record attendance at the sessions and with the tweets & blogs that followed them. The major highlight of this year’s conference was the release of the latest update to the SystemVerilog standard, IEEE 1800-2012, with free PDF copies made available courtesy of Accellera.
 
With verification constantly increasing its claim on the ASIC design schedule while retaining its position as a major factor in silicon re-spins, verification planning was a hot topic of discussion at DVCON. Some of the interesting points that came out of a panel discussion on verification planning were –
 
- Verification plan is not just a wish list. You have to define how you’re going to get there.
- Problem is not over-planning, but over-verifying designs because there has not been enough planning.
- Biggest objection we hear is we don’t have time to capture a verification plan. But you'll lose more time if you don't.
- What’s useful in verification planning is “ruthless prioritization.” You can never get it all done.
- My biggest challenge is getting marketing input into my verification plan.
- Failure to plan means planning to fail.
 
Last week, I guest blogged on a similar topic based on a recent survey conducted by Catherine & Neil. Clearly, the issue of poor planning gets highlighted in all areas of product development.
 
Traditionally, verification plans were just a list of features to be verified, addressing ‘what to verify’. With the emergence of CRV (constrained random verification), the plans started including a second aspect, i.e. ‘how to verify’. Further, to bring focus to this never-ending verification problem, CDV (coverage driven verification) was adopted. Verification plans now started including target numbers in terms of coverage (code, functional and assertions) to define ‘when are we done’. With a given set of resources, when the ASIC design schedule is imposed on the verification plan, meeting the goals is a challenge. There arises a need to prioritize verification in terms of features. Remember: any code that is not verified will not work!
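One way the ‘when are we done’ targets show up in practice is as explicit goals and weights on the coverage model. The SystemVerilog sketch below assumes a hypothetical DMA block; the signal names, bins, weights and goal values are illustrative assumptions rather than a prescription.

```systemverilog
// Sketch: encoding 'when are we done' as explicit, prioritized coverage goals.
// Assumes a hypothetical DMA block; names, bins, weights and goals are illustrative.
module dma_coverage (
  input logic       clk,
  input logic [3:0] channel_id,
  input logic [4:0] burst_len
);
  covergroup dma_features_cg @(posedge clk);
    option.goal = 100;            // closure target for the group as a whole

    cp_channel : coverpoint channel_id {
      option.weight = 2;          // high-priority feature counts more toward closure
    }
    cp_burst : coverpoint burst_len {
      bins single  = {1};
      bins max_len = {16};
      bins others  = default;
      option.weight = 1;          // lower-priority feature
    }
  endgroup

  dma_features_cg cg_inst = new();

  // The overall figure can be queried at end of test, e.g. via $get_coverage()
endmodule
```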
 
To enable this “ruthless prioritization”, collaboration is required among the marketing, software and hardware groups to align with the design objectives. Everyone needs to understand the potential end applications and the preferred ways the design achieves them. In the case of IPs, this could mean that the initial releases target a limited set of applications based on the customers on board. Once that is achieved, ‘over-verifying’ can take over to further close the grey areas. In the case of an SoC, it is a tough call. While design cost continues to increase with shrinking geometries, break-even may not happen if the SoC serves only a limited set of applications (as a result of limited verification). The problem gets aggravated further with specifications changing on the fly. A platform-based approach, where variants of the SoC are churned out frequently, could be a potential solution, but again defining the platform and prioritizing the features boils down to the same problem. A tough nut to crack!
 
The whole point of over-verifying comes to the forefront because verification is the long pole in the schedule. What if designs could be ‘over-verified’ within the timelines? What does it take to achieve that? Are tools like intelligent testbenches, formal verification, hardware acceleration or cloud computing the solution? If yes, what is the associated cost, and what impact does it have?
 
There is no easy answer to any of these questions. In the words of Albert Einstein, “The world we have created is a product of our thinking. It cannot be changed without changing our thinking.”
 
Probably when an answer comes out, we will be on the road to commoditizing hardware. Till then, ‘over-verification’ is what we have to live with!