Monday, October 14, 2013

Trishool for verification

It’s the time of the year when I try to correlate Mythology with Verification. Yes, the festive season is back in India, and this is the time when we celebrate the fact that good prevails over evil. Given the diversity of Indian culture, there are a variety of mythological stories about demigods triumphing over evil. Well, for us in the verification domain, it is the BUG that plays the evil, preventing us from achieving first-silicon success. While consumerism of electronic devices worked wonders in increasing the size of the business, it also forced ASIC teams to gear up to develop better products on shrinking schedules. Given that verification is the long pole in achieving this goal, verification teams need a solution that confirms the functionality on chip is intact in a time-bound fashion. Rising design complexity has further transformed verification into a problem so diverse that no single tool or methodology can solve it. Clearly a multi-pronged approach.... a TRISHOOL is required!
 
 
TRISHOOL is a Sanskrit word meaning 'three spears'. The symbol is polyvalent and is wielded by the Hindu God Shiva and the Goddess Durga. Even Poseidon, the Greek god of the sea, and Neptune, the Roman god of the sea, are known to carry it. The three points have various meanings and significance. One common explanation is that Lord Shiva uses the Trishool to destroy the three worlds: the physical world, the world of culture drawn from the past, and the world of the mind representing the processes of sensing and acting. In a physical sense, three spears would be far more fatal than a single one.
 
So basically we need a verification strategy equivalent to a Trishool, to confirm that our efforts converge in rendering a fatal blow to the hidden bugs. The verification tools available today correlate well with the spears of the Trishool and, if put together, would weed out bugs in a staged manner, moving from IP to SoC verification. So which are the three main tools that should be part of this strategy to nail down the complex problem we are dealing with today?
 
Constrained Random Verification (CRV) is the workhorse for IP verification, and reuse of that effort at the SoC level makes it the main, central spear of the verification Trishool strategy. CRV is complemented by the other two spears on either side, i.e. Formal Verification and Graph-based Verification. The focus of CRV is more at the IP level, wherein the design is stressed under constraints to attack the planned features (verification plan & coverage) and also to hit corner cases or areas not comprehended. As we move from the IP to the SoC level, we face two major challenges. One is repetitive code, typically combinational in nature, where formal techniques can prove the functionality comprehensively with only limited simulation cycles. The second is developing SoC-level scenarios that cover all aspects of the SoC and not just the integration of IPs. The third spear, graph-based verification, comes to the rescue at this level, where multi-processor integration, use cases, low-power scenarios and performance simulations are enabled with much greater ease.
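To make the central spear a little more concrete, here is a minimal constrained-random sketch in SystemVerilog. The transaction name, fields and constraint values (bus_txn, addr, burst_len, the alignment and the distribution weights) are purely illustrative and not tied to any particular protocol; the point is simply how constraints keep stimulus legal and biased toward planned features while randomization still reaches corners we did not think of.

```systemverilog
// Illustrative only: a tiny constrained-random transaction.
// Field names and constraint ranges are invented for this example.
class bus_txn;
  rand bit [31:0] addr;
  rand bit [7:0]  burst_len;
  rand bit        is_write;

  // Keep the stimulus legal, yet biased toward interesting regions.
  constraint legal_c  { addr[1:0] == 2'b00; }                             // aligned accesses only
  constraint corner_c { burst_len dist { 1 := 5, [2:15] := 3, 16 := 2 }; } // weight the extremes
endclass

module crv_demo;
  initial begin
    bus_txn t = new();
    repeat (10) begin
      if (!t.randomize()) $error("randomization failed");
      $display("addr=%h write=%0d burst_len=%0d", t.addr, t.is_write, t.burst_len);
    end
  end
endmodule
```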
 
Hope this festive season, while we enjoy the food & sweets, the stories and enactments we experience around us will also give enough food for thought towards a Trishool strategy for verification.
 
Happy Dussehra!!!
 

Sunday, October 6, 2013

Essential ingredients for developing VIPs

The last post, Verification IP: Build or Buy?, initiated some good offline discussions over email & with verification folks during my visits to customers. Given the interest, here is a quick summary of important items that need to be taken care of while developing a VIP or evaluating one. Hopefully they will further serve the purpose of helping you decide on Build vs. Buy :)
 
1. First & foremost is the quality of the VIP. Engineers would advocate that quality can be confirmed by extensive validation & reviews. However, nothing beats a well-defined & documented process that ensures predictability & repeatability. It has to be a closed loop, i.e. defining a process, documenting it & monitoring it to ascertain that it is followed. This helps in bringing all team members in sync to carry out a given task, provides clarity to the schedule and acts as a training platform for new team members.
 
2. Next is the architecture of the VIP. Architecture here means a blueprint that conveys what to place where. In the absence of a common architecture, different VIPs from the same vendor would assume different forms. This affects user productivity, as he/she would need to ramp up separately for each VIP. Integration, debugging & developing additional code would consume extra time & effort. Due to the inconsistency across products, VIP maintenance would be tough for the vendor too. A good architecture is one that leads to automation of the skeleton while providing guidelines on making the VIP simulator-, HVL- and methodology-agnostic.
 
3. With the wide adoption of accelerators, the need for synthesizable transactors is rising. While transactors may not be available with the initial releases of a VIP, having a process in place for how to add them when needed, without affecting the core architecture of the VIP, is crucial. The user-level APIs shouldn’t change, so that the same set of tests & sequences can be reused on either a simulator or an accelerator.
 
4. While the architecture-related aspects are generic, defining the basic transaction element, interface and configuration classes for the different components of the VIP is protocol specific (a minimal sketch of such classes follows after this list). This partitioning is essential for preserving the VIP development schedule & incorporating flexibility in the VIP for future updates based on customer requests or protocol changes.
 
5. Talking about protocol-specific components, it is important to model the agents to support a plug & play architecture. Given the introduction of layered protocols across domains, it is essential to provide flexible APIs at the agent level so that the agent can be morphed differently based on the use case without affecting the core logic.
 
6. Scoreboard, assertions, protocol checkers & the coverage model are essential ingredients of any VIP. While they are part of the release, the user should be able to enable/disable them; for assertions, this control is required at a finer level (see the second sketch after this list). Also, the VIP should not restrict the user to the vendor-provided classes: the user should be able to override any or all of these classes as per requirement.
 
7. Debugging claims the majority of verification time. The log messages, debug & trace information generated by the VIP should all converge in aiding faster debug and root-causing of the issue at hand. Again, not all information is desired every time, so controls for enabling different levels of detail based on the focus of verification are required.
 
8. Events notify the user of what is happening and can be used to extend the checkers or coverage. While having a lot of events helps, too many of them can affect simulator performance, so control knobs to enable/disable events are desirable.
 
9. Talking about simulator performance, it is important to avoid JUGAAD (workarounds) in the code. There are tools available that can comment on code reusability & performance; incorporating such tools into the development process is key to clean code.
 
10. As in design, the VIP should be able to gracefully handle reset at any point during operation. It also needs to support error injection capabilities.
 
11. Finally, a detailed protocol compliance test suite with a coverage model needs to accompany the VIP delivery.
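
To tie points 4 and 5 together, here is a minimal UVM-flavoured sketch of the protocol-specific pieces: a transaction item, a configuration object and an agent that is morphed through its configuration. Every name here (my_vip_item, my_vip_cfg, my_vip_agent and their fields) is hypothetical and does not correspond to any particular protocol or vendor VIP.

```systemverilog
// Illustrative UVM skeleton only; all names are hypothetical.
import uvm_pkg::*;
`include "uvm_macros.svh"

// Basic transaction element (point 4)
class my_vip_item extends uvm_sequence_item;
  rand bit [15:0] addr;
  rand bit [31:0] data;
  `uvm_object_utils(my_vip_item)
  function new(string name = "my_vip_item");
    super.new(name);
  endfunction
endclass

// Configuration object deciding how the agent is morphed (points 4 & 5)
class my_vip_cfg extends uvm_object;
  uvm_active_passive_enum is_active = UVM_ACTIVE; // driver+sequencer vs. monitor-only
  bit                     is_host   = 1;          // e.g. host vs. device flavour
  `uvm_object_utils(my_vip_cfg)
  function new(string name = "my_vip_cfg");
    super.new(name);
  endfunction
endclass

// Agent built from the config, so the same code serves multiple use cases (point 5)
class my_vip_agent extends uvm_agent;
  my_vip_cfg cfg;
  `uvm_component_utils(my_vip_agent)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    if (!uvm_config_db#(my_vip_cfg)::get(this, "", "cfg", cfg))
      `uvm_fatal("NOCFG", "my_vip_cfg not supplied by the environment")
    // The monitor is always built; the driver/sequencer only when cfg.is_active
    // is UVM_ACTIVE. Construction of the actual components is omitted for brevity.
  endfunction
endclass
```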
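And for the control knobs of points 6 to 8, here is a sketch of a knobs object the VIP could expose, plus the standard UVM mechanisms (verbosity settings and factory overrides) that give the user the remaining control. Again, every field and class name is hypothetical.

```systemverilog
// Illustrative knobs only; field names are hypothetical.
import uvm_pkg::*;
`include "uvm_macros.svh"

class my_vip_knobs extends uvm_object;
  bit enable_scoreboard  = 1; // point 6: end-to-end checking on/off
  bit enable_coverage    = 1; // point 6: coverage model on/off
  bit enable_assertions  = 1; // point 6: finer, per-check control would sit alongside this
  bit enable_item_events = 0; // point 8: fire analysis events only when someone listens
  `uvm_object_utils(my_vip_knobs)
  function new(string name = "my_vip_knobs");
    super.new(name);
  endfunction
endclass

// Point 7: debug verbosity is best left to the standard reporting controls, e.g.
//   simulator command line:  +UVM_VERBOSITY=UVM_DEBUG
//   from the test:           env.agent.set_report_verbosity_level_hier(UVM_HIGH);
//
// Point 6: overriding a vendor class via the factory, without touching VIP code:
//   my_vip_item::type_id::set_type_override(user_item::get_type());
```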
 
These are the essential ingredients. Fancy toppings are still possible, though, to differentiate the VIP from alternative solutions.
 
Looking for more comments & further discussions ....
 