Sunday, December 29, 2013

Sequence Layering in UVM

Another year has passed, and the verification world saw further advancements in dealing with rising complexity. At the core of verification there are two pillars that have been simplifying this complexity all along. First is REUSABILITY at various levels, i.e. within a project from block to top level, across projects within an organization, and by deploying VIPs or a methodology like UVM across the industry. Second is raising ABSTRACTION levels, i.e. moving from signals to transactions and from transactions to further abstraction layers so as to deal with complexity more effectively. Sequence layering is one concept that incorporates the best of both.
 
What is layering?
 
In simple terms, layering is where one encapsulates a detailed/complex function or task and moves it to a higher level. In day to day life layering is used everywhere, e.g. instead of describing a human being as an entity with 2 eyes, 1 nose, 2 ears, 2 hands, 2 legs etc., we simply say human being. When we go to a restaurant and ask for an item from the menu, say barbeque chicken, we don’t explain the ingredients and the process to prepare it. The latest example is apps on handhelds, where a single click on an icon makes everything relevant available. In short, we avoid the complexity of implementation and the associated details while making use of generic stuff.
 
What is sequence layering?
 
Applying the concept of layering to sequences in UVM (or any methodology) improves code reusability by letting you develop at a higher abstraction level. This is achieved by adding a layering agent derived from uvm_agent. The layering agent doesn’t have a driver though; all it has is a sequencer and a monitor. A higher level sequence item is now associated with this layering sequencer. It connects to the sequencer of the lower level protocol using the same mechanism as the sequencer-driver connection inside a uvm_agent. The lower level sequencer runs only one sequence (a translation sequence that takes the higher level sequence item and translates it into lower level sequence items) as a forever thread. Inside this sequence we call get_next_item, similar to what a uvm_driver does. The item is received from the higher level sequencer, translated by this lower level sequence and given to its driver. Once done, an item_done style response is passed back from this lower level sequence to the layering sequencer, indicating that it is ready for the next item.
 
Figure 1 : Concept diagram of the layering agent
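To make the handshake concrete, here is a minimal sketch of such a translation sequence in SystemVerilog/UVM. The class and item names (hl_item, ll_item, hl_sequencer, translate) are illustrative placeholders and not from any specific library:

class hl_to_ll_translation_seq extends uvm_sequence #(ll_item);
  `uvm_object_utils(hl_to_ll_translation_seq)

  // Handle to the higher level (layering) sequencer; set by the env/agent
  uvm_sequencer #(hl_item) hl_sequencer;

  function new(string name = "hl_to_ll_translation_seq");
    super.new(name);
  endfunction

  virtual task body();
    hl_item hl_tr;
    ll_item ll_tr;
    forever begin
      // Pull the next higher level item, just like a driver's get_next_item()
      hl_sequencer.get_next_item(hl_tr);

      // Translate the higher level item into a lower level protocol item
      ll_tr = ll_item::type_id::create("ll_tr");
      translate(hl_tr, ll_tr);

      // Send the lower level item to the protocol driver via this sequencer
      start_item(ll_tr);
      finish_item(ll_tr);

      // Tell the layering sequencer we are ready for the next item
      hl_sequencer.item_done();
    end
  endtask

  // Protocol specific mapping, kept abstract in this sketch
  virtual function void translate(hl_item hi, ll_item lo);
    // e.g. lo.addr = hi.addr; lo.data = hi.payload; ...
  endfunction
endclass

This sequence is started on the lower level protocol sequencer, while regular higher level sequences run on the layering sequencer and remain unaware of the underlying protocol.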
 
On the analysis front, the same layering agent also has a monitor at the higher level. This monitor is connected to the monitor of the lower layer protocol. Once a lower layer packet is received, it is passed on to this higher level monitor, where it is translated into a higher level packet based on the configuration information. The higher level packet is then given to the scoreboard for comparison. As a result we need only one scoreboard for all potential configurations of an IP, and the layering agent’s monitor does the job of translation.
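On the analysis side, a corresponding layering monitor can be sketched as a uvm_subscriber that listens to the lower level monitor and republishes higher level items; again the names (ll_item, hl_item, layering_config, unpack_from_lower) are assumptions for illustration only:

class layering_monitor extends uvm_subscriber #(ll_item);
  `uvm_component_utils(layering_monitor)

  // Higher level packets go out here, typically to a single common scoreboard
  uvm_analysis_port #(hl_item) hl_ap;

  // Configuration object describing the active protocol mapping (illustrative)
  layering_config cfg;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    hl_ap = new("hl_ap", this);
  endfunction

  // Called by the lower level monitor's analysis port
  virtual function void write(ll_item t);
    hl_item hl_tr = hl_item::type_id::create("hl_tr");
    // Rebuild the higher level packet based on the configuration information
    hl_tr.unpack_from_lower(t, cfg);
    hl_ap.write(hl_tr);
  endfunction
endclass

In the environment, the lower level monitor's analysis port connects to this subscriber's analysis_export, so the scoreboard always compares higher level packets regardless of which lower layer protocol is configured.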
 
Example Implementation
 
I recently co-authored a paper on this subject at CDNLive 2013, Bangalore, and we received the Best Paper Award! The paper describes the application of sequence layering on a project where our team verified a highly configurable memory controller supporting multiple protocols on the processor side and a number of protocols on the DRAM front. A related blog post is here.
 
I would like to hear from you if you have implemented the above or similar concepts in any of your projects. If you would like to see any other topic covered on this blog, do drop an email to siddhakarana@gmail.com (anonymity guaranteed if requested).
 
Wish you all a happy & healthy new year!!!
 
 

Monday, October 14, 2013

Trishool for verification

It’s that time of the year when I try to correlate mythology with verification. Yes, the festive season is back in India, and this is when we celebrate the fact that good prevails over evil. Given the diversity of Indian culture, there are a variety of mythological stories about demigods overcoming evil. For us in the verification domain, it is the BUG that plays the evil, preventing us from achieving first silicon success. While consumerism of electronic devices worked wonders in growing the business, it also forced ASIC teams to gear up to develop better products in shrinking schedules. Given that verification is the long pole in achieving this goal, verification teams need a solution that ensures the functionality on chip is intact in a time bound fashion. The rising design complexity has further transformed verification into a diverse problem that no single tool or methodology can solve. Clearly a multi pronged approach.... a TRISHOOL is required!
 
 
TRISHOOL is a Sanskrit word meaning 'three spears'. The symbol is polyvalent and is wielded by the Hindu God Shiva and Goddess Durga. Even Poseidon, the Greek God of the sea, and Neptune, the Roman God of the sea, are known to carry it. The three points have various meanings and significance. One common explanation is that Lord Shiva uses the Trishool to destroy the three worlds: the physical world, the world of culture drawn from the past, and the world of the mind representing the processes of sensing and acting. In the physical sense, three spears together would be far more lethal than a single one.
 
So we basically need a verification strategy equivalent to a Trishool, one that makes our efforts converge into a fatal blow to the hidden bugs. The verification tools available today correlate well with the spears of the Trishool and, put together, would weed out bugs in a staged manner moving from IP to SoC verification. So which are the 3 main tools that should be part of this strategy to nail down the complex problem we are dealing with today?
 
Constrained Random Verification (CRV) is the workhorse for IP verification, and the reuse of these efforts at SoC level makes it the main, central spear of the verification Trishool strategy. CRV is complemented by the other two spears on either side, i.e. Formal Verification and Graph based Verification. The focus of CRV is mainly at the IP level, where the design is stressed under constraints to attack the planned features (verification plan & coverage) and also hit corner cases or areas not comprehended. As we move from IP to SoC level, we face two major challenges. One is repetitive, typically combinational logic, where formal techniques can prove the functionality comprehensively without burning simulation cycles. The second is developing SoC level scenarios that cover all aspects of the SoC and not just the integration of IPs. The third spear, graph based verification, comes to the rescue at this level, where multi processor integration, use cases, low power scenarios and performance simulations are enabled with much ease.
 
Hope this festive season, while we enjoy the food & sweets, the stories and enactments we experience around us also give enough food for thought towards a Trishool strategy for verification.
 
Happy Dussehra!!!
 

Sunday, October 6, 2013

Essential ingredients for developing VIPs

The last post, Verification IP : Build or Buy?, initiated some good offline discussions over email and with verification folks during my customer visits. Given the interest, here is a quick summary of the important items that need to be taken care of while developing a VIP or evaluating one. Hopefully they will further help you decide on Build vs. Buy :)
 
1. First & foremost is the quality of the VIP. Engineers would advocate that quality can be confirmed by extensive validation & reviews. However, nothing beats a well defined & documented process that ensures predictability & repeatability. It has to be a closed loop, i.e. defining a process, documenting it & monitoring to ascertain that it is followed. This helps bring all team members in sync for a given task, provides clarity to the schedule and acts as a training platform for new team members.
 
2. Next is the architecture of the VIP. Architecture means a blueprint that conveys what to place where. In the absence of a common architecture, different VIPs from the same vendor would assume different forms. This affects user productivity, as he/she would need to ramp up separately for each VIP. Integration, debugging & developing additional code would consume extra time & effort. Due to inconsistency across the products, VIP maintenance would be tough for the vendor too. A good architecture is one that leads to automation of the skeleton while providing guidelines to keep the VIP simulator and HVL+methodology agnostic.
 
3. With the wide adoption of accelerators, the need for having synthesizable transactors is rising. While transactors may not be available with initial releases of the VIP, having a process in place on how to add them when needed without affecting the core architecture of the VIP is crucial. The user level APIs shouldn’t change so that the same set of tests & sequences can be reused for either simulator or accelerator.
 
4. While architecture related stuff is generic, defining the basic transaction element, interface and configuration classes for different components of the VIP is protocol specific. This partitioning is essential for preserving the VIP development schedule & incorporating flexibility in the VIP for future updates based on customer requests or protocol changes.
 
5. Talking about protocol specific components, it is important to model the agents supporting a plug & play architecture. Given the introduction of layered protocols across domains, it is essential to provide flexible APIs at the agent level so as to morph it differently based on the use cases without affecting the core logic.
 
6. Scoreboard, assertions, protocol checkers & coverage model are essential ingredients of any VIP. While they are part of the release, the user should be able to enable/disable them; for assertions, this control is required at a finer level. Also, the VIP should not restrict the user to the vendor provided classes: the user should be able to override any or all of these classes as per requirement. (A minimal sketch of such controls follows this list.)
 
7. Debugging claims the majority of verification time. The log messages, debug & trace information generated by the VIP should all converge in aiding faster debug and root causing the issue at hand. Again, not all information is desired every time, so controls for enabling different levels based on the focus of verification are required.
 
8. Events notify the user on what is happening and can be used to extend the checkers or coverage. While having a lot of events helps, too many of them affect simulator performance. Having control knobs to enable/disable events is desirable.
 
9. Talking about simulator performance, it is important to avoid JUGAAD (workarounds) in the code. There are tools available that can comment on code reusability & performance. Incorporating such tools as part of the development process is key to clean code.
 
10. As in design, the VIP should be able to gracefully handle reset at any point during operation. It also needs to support error injection capabilities.
 
11. Finally, a detailed protocol compliance test suite with coverage model needs to accompany the VIP delivery.
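As a minimal illustration of points 6, 7 and 8 above, such knobs typically live in a configuration object passed down through uvm_config_db. All names below are purely illustrative and not tied to any particular VIP:

class my_vip_config extends uvm_object;
  `uvm_object_utils(my_vip_config)

  // Point 6: enable/disable checks and coverage; assertions usually get finer control
  bit enable_scoreboard = 1;
  bit enable_coverage   = 1;
  bit enable_assertions = 1;

  // Point 7: debug/trace verbosity of VIP messages
  uvm_verbosity debug_level = UVM_MEDIUM;

  // Point 8: broadcast per-transaction events only when the user asks for them
  bit enable_item_events = 0;

  function new(string name = "my_vip_config");
    super.new(name);
  endfunction
endclass

// In the test:
//   my_vip_config cfg = my_vip_config::type_id::create("cfg");
//   cfg.enable_coverage = 0;            // e.g. drop coverage for a debug run
//   cfg.debug_level     = UVM_HIGH;     // ask the VIP for more trace
//   uvm_config_db #(my_vip_config)::set(null, "uvm_test_top.env.vip*", "cfg", cfg);
//
// Inside a VIP component:
//   if (!uvm_config_db #(my_vip_config)::get(this, "", "cfg", cfg))
//     `uvm_fatal("NOCFG", "VIP configuration object not found")
//   if (cfg.enable_coverage)
//     cov = my_vip_coverage::type_id::create("cov", this);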
 
These are the essential ingredients. Fancy toppings to differentiate the VIP from alternative solutions are still possible, though.
 
Looking for more comments & further discussions ....
 
 

Sunday, August 18, 2013

Verification IP : Build or Buy?

Consumerism of electronic products is driving SoC companies to tape out multiple product variants every year. Demand for faster, lower power, more functionality and interoperability is forcing the industry to come up with standard solutions for the different interfaces on an SoC. In the past couple of years, tens of new protocols have shown up on silicon and an equal number of protocols have been revised, with their descriptions spreading across thousands of pages. Reusability is the key to conquering this level of complexity, both for design and verification. The licensing models for IPs & VIPs vary, and many design houses are still in a dilemma over ‘Build vs. Buy’ for verification.
 
Why BUILD?
 
Points that run in favour of developing in-house VIP solutions include –
 
- Cost of licensing the VIP that front loads the overall design cost for a given project.
- Availability of the VIP for the desired HVL, methodology & simulator is not guaranteed.
- Encrypted VIP code aggravates the debug cycle delaying already aggressive schedules.
- VIP & simulator from different vendors lead to further delay in root causing issues.
- Verification environment developed with a VIP ties you to a vendor.
- DUT specific customizations need to be developed around the VIP. Absence of adequate configurability in available solutions poses a high risk to verification.
 
Why BUY?
 
While obvious, reasons why to procure the VIP include –
 
- Reusability advocates focusing on the features that differentiate the final product and leaving innovation on standard solutions to the relevant experts.
- Developing a VIP comes with a cost. A team needs to be identified, built and maintained throughout, with the risk that attrition hits at critical times.
- Time to market is important. Developing/upgrading an in-house VIP may delay the product itself.
- For new protocols or upgrades to existing ones, there is a ramp up associated with protocol knowledge, and this increases the risk with internally developed solutions.
- The probability of finding a bug, and of the end product being interoperable, is higher with third party solutions that have been exercised across different designs.
- Architecting a VIP is easier said than done. The absence of an architecture & process leads to multiple issues.
- In-house solutions may not be reusable across product lines (different applications) or projects due to missing configurability at all levels. Remember, verification is all about JUGAAD, and that philosophy doesn’t work for VIP development.
- With the increasing adoption of hardware acceleration/emulation for SoC verification, there is a need to develop transactors to reuse the VIP, an additional effort which a vendor would otherwise take care of.
- A poorly developed VIP can badly affect simulator/accelerator performance in general and at SoC level in particular, which in turn affects the productivity of the team. To be competitive, vendors have to focus on this aspect, which is easy to miss in internal solutions.
- External solutions come with example cases and a ready to use env, giving a jumpstart to verification. With in-house solutions the verification team may end up experimenting to bring up the environment, adding to delays.
 
Clearly the points in favour of BUY outweigh the BUILD ones. In fact, the ecosystem around VIP is evolving, and solutions are emerging for the issues that favour BUILD too. With standard HVLs and methodologies like UVM, simulator agnostic VIP is relatively easy to find. Multiple VIP vendors and design service providers with a VIP architecture platform are getting into co-development of VIPs to solve the problems of specific language, methodology, encryption and availability of transactors for acceleration. Customization of VIPs to address DUT specific features or to enable transition from one vendor solution to another is also on the rise through such engagements.
 
With this, the debate within the organization needs to move from BUILD vs BUY to defining the selection criteria for COLLABORATION with vendors who can deliver the required solution with quality at desired time.
 
If you are still planning to build one, do drop a comment on why.
 

Sunday, June 23, 2013

Leveraging Verification manager tools for objective closure

Shrinking schedules topped with increasing expectations in terms of functionality opened the gates for reusability in the semiconductor industry. Reusability (internal or external) is constantly on the rise, both in design & verification. Following are some of the trends on reusability presented by Harry Foster at DVCLUB UK, 2013, based on the 2012 Wilson Research Group study commissioned by Mentor Graphics.
 



Both IP and SoC now demand periodic releases targeting specific features for a customer or a particular product category. It is important to be objective about verifying the design for the given context, so that the latest verification tools & methodologies do not dilute the required focus. With verification claiming most of the ASIC design schedule in terms of effort & time, conventional schemes fail to manage verification progress and provide predictable closure. There is a need for a platform that helps direct the focus of CRV, brings in automation around coverage, provides initial triaging of failures and aids in methodical verification closure. While a lot of this has been done using in-house scripts, significant time is spent maintaining them. There are multiple solutions available in the market, and the beauty of being in consulting is that you get to play around with most of them, considering customer preferences towards a particular EDA flow.
 
QVM (Questa Verification Manager) is one such platform provided by Mentor Graphics. I recently co-authored an article (QVM : Enabling Organized, Predictable and Faster Verification Closure) published in Verification Horizons, DAC 2013 edition. It is available on Verification Academy, or you can download the paper here too.

Wednesday, May 1, 2013

Constrained Random Verification flow strategy

The explosive growth of the cellular market has affected the semiconductor industry like never before. Product life cycles have moved to an accelerated track to meet time to market. In parallel, engineering teams are in a constant quest to add more functionality on a given die size with higher performance and lower power consumption. To manage this, the industry adopted reusability in design & verification, and IPs & VIPs have carved out a growing niche market. While reuse happens either by borrowing from internal groups or buying from external vendors, the basic question that arises is whether the given IP/VIP will meet the specifications of the SoC/ASIC. To ensure that the IP serves the requirements of multiple applications, thorough verification is required. Directed verification falls short of meeting this target, and that is where Constrained Random Verification (CRV) plays an important role.
 
A recent independent study conducted by Wilson Research Group, commissioned by Mentor Graphics, revealed some interesting results on the deployment of CRV.
 
In the past 5 years –
- System Verilog as a verification language has grown by 271%
- Adoption of CRV increased by 51%
- Functional coverage by 65%
- Assertions by 70%
- Code coverage by 46%
 
- UVM grew by 486% from 2010 to 2012
- UVM is expected to grow by 46% in next 12 months
- Half of the designs over 5M gates use UVM
 
A well defined strategy with Coverage Driven Verification (CDV) riding on CRV can really be a game changer in this competitive industry. Unfortunately, most groups have no defined strategy and pick ad hoc approaches, only to lose focus during execution. At a very basic level, the focus of CRV is to generate random legal scenarios to weed out corner cases or hidden bugs not easily anticipated otherwise. This is enabled by developing a verification environment that can generate test scenarios under the direction of constraints, automate the checking and provide guidance on progress. CDV, on the other hand, uses CRV as the base while defining Simple, Measurable, Achievable, Realistic and Time-bound coverage goals. These goals are represented in the form of functional coverage, code coverage or assertions.
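As a bare-bones illustration of CRV plus CDV in SystemVerilog, a constrained random transaction can carry its own coverage model; the fields, ranges and bins below are invented for the example:

class bus_txn extends uvm_sequence_item;
  `uvm_object_utils(bus_txn)

  rand bit [31:0] addr;
  rand bit [7:0]  len;
  rand bit        is_write;

  // Constraints keep the random stimulus within the legal space
  constraint c_legal_addr { addr inside {[32'h0000_0000 : 32'h0000_FFFF]}; }
  constraint c_legal_len  { len  inside {[1:16]}; }

  // Coverage model measures progress towards the defined goals
  covergroup cg;
    cp_len  : coverpoint len { bins single = {1}; bins burst = {[2:16]}; }
    cp_kind : coverpoint is_write;
    len_x_kind : cross cp_len, cp_kind;  // e.g. long write bursts as corner cases
  endgroup

  function new(string name = "bus_txn");
    super.new(name);
    cg = new();
  endfunction

  // Typically called from the monitor or scoreboard on each observed item
  function void sample_cov();
    cg.sample();
  endfunction
endclass

Regressions then randomize bus_txn under these constraints, while the covergroup (together with code coverage and assertions) reports how close the run is to the agreed goals.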
 
The key to successful deployment of CDV+CRV is avoiding redundant simulation cycles while ensuring the overall goals, both defined (coverage) and perceived (verified design), are met. Multiple approaches to enable this are in use –
- Run random regressions while observing coverage trend analysis until incremental runs stop hitting additional coverage. Analyze the coverage results and feed them back into the constraints to hit the remaining scenarios.
- Run random regressions and use coverage grading to come up with a defined regression suite. Use this for faster turnarounds, with a set of directed tests hitting the rest.
- Look for advanced graph based solutions that help you attain 100% coverage with an optimal set of inputs.
 
To define a strategy the team needs to understand the following –
- Size of design, coverage goals and schedule?
- Availability of HW resources (server farm & licenses)?
- Transition from simulator to accelerator at any point during execution?
- Turnaround time for regressions with above inputs?
- Room to run random regressions further after achieving coverage goals?
- Does the design services partner bring in complementing skills to meet the objective?
- Does the identified EDA tool vendor support all requirements to enable the process, i.e. simulator, accelerator, verification planner, VIPs, verification manager to run regressions, coverage analysis, coverage grading, trend analysis and other graph based technologies?
 
A sample flow using CRV is given below -
 


 

Saturday, March 30, 2013

Verification Futures India 2013 - Quick recap!

Verification Futures started in 2011 in the UK and in 2013 touched ground in India too. It is a one day conference organized by T&VS, providing a platform for users to share their verification challenges and for the EDA vendors to respond with existing and upcoming solutions. The conference was held on 19th March in Bangalore and turned out to be a huge success. It’s a unique event, extending an opportunity to meet the fraternity and collaborate on challenges. I thank Mike Bartley for bringing it to India and highly recommend attending it.
 
The discussions covered a variety of topics in verification, and somehow all the challenges pointed back to the basic issue of ‘verification closure’. The market demands designs with more functionality, smaller footprint, higher performance and lower power, delivered in a continuously shrinking time window. Every design experiences constant spec changes till the last minute, expecting all functions of the ASIC design cycle to respond promptly. With limited resources (in terms of quantity and capabilities), the turnaround time for verification falls onto the critical path. Multiple approaches surfaced during the discussions at the event, giving enough food for thought to solution seekers and hope to the community. Some of them are summarized below.
 
Hitting the coverage parameters is the end goal for closure. However, the definition of these goals is bounded on one side by an individual’s ability to capture the design in the specification and on the other by the ability to converge it into a coverage model. Further, a disconnect with the software team aggravates this issue: the software may not exercise all capabilities of the hardware, and may actually hit cases not even imagined. HW-SW co-verification could be a potential solution to narrow down the ever-increasing verification space and increase the useful coverage.
 
Verifying designs and delivering one that is an exact representation of the spec has been the responsibility of the verification team. Given that the problem is compounding by the day, there may be a need to enable “Design for Verification”: designs that are correct by construction, easier to debug, and developed with bug avoidance strategies. EDA would need to enhance tool capabilities, and the design community would need to undergo a paradigm shift to enable this.
 
Constrained Random Verification has been widely adopted to hit corner cases and ensure high confidence in verification. However, this approach also leads to redundant stimulus generation, covering the same ground over & over again. This means that, even with grading in place, achieving 100% coverage is easier said than done. Deploying directed approaches (like graph based) or formal has its own set of challenges. A combination of these approaches may be needed. Which flow suits which part of the design? Is 100% proof/coverage a ‘must’? Can we come up with objective ways of defining closure with a mixed bag? The answer lies in collaboration between the ecosystem partners including EDA vendors, IP vendors, design service providers and the product developers. The key would be to ‘learn from each other’s experiences’.
 
If we cannot contain the problem, are there alternatives to manage the explosion? Is there a replacement for the CPU based simulation approach? Can we avoid the constraint of limited CPU cycles during the peak execution period? Cloud based solutions extending elasticity, increased velocity with hardware acceleration, or enhanced performance using GPU based platforms are some potential answers.
 
The presentation from Mentor included a quote from Peter Drucker –
- What gets measured, gets done
- What gets measured, gets improved
- What gets measured, gets managed
 
While the context of the citation was coverage, it applies to all aspects of verification. To enable continual improvement we need to think beyond the constraints, monitor beyond the signals and measure beyond coverage!
 

Sunday, March 3, 2013

Over-verification : an intricate puzzle

For verification, it was an eventful week. DVCON 2013 kept everyone busy, with record attendance at the sessions and a stream of tweets & blogs that resulted from them. The major highlight of this year’s conference was the release of the latest update to the System Verilog standard, IEEE 1800-2012, with free PDF copies made available courtesy of Accellera.
 
With verification constantly marching to increase its claim on the ASIC design schedule while retaining its position as a major factor in silicon re-spins, verification planning was a hot topic of discussion at DVCON. Some of the interesting points that came out of a panel discussion on verification planning were –
 
- Verification plan is not just a wish list. You have to define how you’re going to get there.
- Problem is not over-planning, but over-verifying designs because there has not been enough planning.
- Biggest objection we hear is we don’t have time to capture a verification plan. But you'll lose more time if you don't.
- What’s useful in verification planning is “ruthless prioritization.” You can never get it all done.
- My biggest challenge is getting marketing input into my verification plan.
- Failure to plan means planning to fail.
 
Last week, I guest blogged on a similar topic based on a recent survey conducted by Catherine & Neil. Clearly, the issue of poor planning gets highlighted in all areas of product development.
 
Traditionally, verification plans were just a list of features to be verified, addressing ‘what to verify’. With the emergence of CRV, the plans started including a second aspect, i.e. ‘how to verify’. Further, to bring focus to this never ending verification problem, CDV was adopted. Verification plans now started including target numbers in terms of coverage (code, functional and assertions) to define ‘when are we done’. With a given set of resources, when the ASIC design schedule is imposed on the verification plan, meeting the goals is a challenge, and there arises a need to prioritize the features to verify. Remember, any code that is not verified will not work!
 
To enable this “ruthless prioritization”, collaboration is required among marketing, software and hardware groups to align with the design objectives. Everyone needs to understand the potential end applications and the preferred ways for the design to achieve them. In case of IPs, this could mean that the initial releases target a limited set of applications based on the customers on board. Once that is achieved, ‘over-verifying’ can take over to further close the grey areas. In case of SoCs, it is a tough call. While design cost continues to increase with diminishing dimensions, break even may not happen with limited applications (as a result of limited verification) of the SoC. The problem gets aggravated further with the specifications changing on the fly. A platform based approach, where variants of the SoC are churned out frequently, could be a potential solution, but again defining the platform and prioritizing the features boils down to the same problem. A tough nut to crack!
 
The whole point of over-verifying comes to the forefront because verification is the long pole in the schedule. What if designs could be ‘over verified’ within the timelines? What does it take to achieve that? Are tools like intelligent test benches, formal verification, hardware acceleration or cloud computing a solution? If yes, what is the associated cost and what does it affect?
 
There is no easy answer to any of these questions. In the words of Albert Einstein, “The world we have created is a product of our thinking. It cannot be changed without changing our thinking.”
 
Probably when an answer emerges, we will be on the road to commoditizing the hardware. Till then, 'over-verification' is what we have to live with!

Sunday, January 27, 2013

Evolution of the test bench - Part 2

In the last post, we looked into the directed verification approach, where the test benches were typically dumb while the tests comprised the stimuli and monitors. Verification progress was linearly related to the number of tests developed and passing. There was no concept of functional coverage, and even the usage of code coverage was limited. Apart from HDLs, programming languages like C & C++ continued to support the verification infrastructure. Managing the growing complexity and the constant pressure to reduce the design schedule demanded an alternate approach for verification. This gave birth to a new breed of languages – HVLs (Hardware Verification Languages).
 
HVLs
 
The first in this category, introduced by Verisity, was popularly known as the ‘e’ language. It was based on AOP (Aspect Oriented Programming) and required a separate tool (Specman) in addition to the simulator. This language spearheaded the entry of HVLs into verification and was followed by ‘Vera’, based on OOP (Object Oriented Programming) and promoted by Synopsys. Along with these two languages, SystemC tried to penetrate this domain with support from multiple EDA vendors but couldn’t really gain wide acceptance. The main idea promoted by all these languages was CRV (Constrained Random Verification). The philosophy was to empower the test bench with drivers, monitors, checkers and a library of sequences/scenarios. The generation of tests was automated, with the state space exploration guided by constraints and progress measured using functional coverage.
 
Methodologies
 
As adoption of these languages spread, the early adopters started building proprietary methodologies around them. To modularize development, BCLs (Base Class Libraries) were developed by each organization. Maintaining local libraries and continuously improving them while ensuring simulator compatibility was not a sustainable solution. The EDA vendors came forward with methodologies for each of these languages to resolve this issue and standardize the usage of the language. Verisity led the show with eRM (e Reuse Methodology), followed by RVM (Reference Verification Methodology) from Synopsys. These methodologies helped put together a process to move from block to chip level and across projects in an organized manner, thereby laying the foundation for reuse. Though verification was progressing at a fast pace with these entrants, there were some inherent issues that left the industry wanting more. The drawbacks included –
 
- Requirement for an additional tool license beyond the simulator
- Simulator efficiency took a hit because control passed back & forth to this additional tool
- These solutions had limited portability across simulators
- As reusability picked up, finding VIPs based on the chosen HVL was difficult
- Hardware accelerators started picking up, and these HVLs couldn’t complement them completely
- Ramp up time for engineers moving across organizations was high
 
System Verilog
 
To move to the next level of standardization, Accellera decided to improve on Verilog instead of driving e or Vera as the industry standard. This led to the birth of System Verilog, which proved to be a game changer in multiple respects. The primary motivation behind SV was to have a common language for design & verification and to address the issues with the other HVLs. The initial thrust came from Synopsys, which declared Vera open source and contributed to the definition of System Verilog for verification. Further, Synopsys in association with ARM evolved RVM into VMM (Verification Methodology Manual) based on System Verilog, providing a framework for early adopters. With IEEE recognizing SV as a standard (1800) in 2005, the acceptance rate increased further. By this time Cadence had acquired Verisity after its quest to promote SystemC as a verification language. eRM was transformed into URM (Universal Reuse Methodology), which supported e, SystemC and System Verilog. This was followed by Mentor proposing AVM (Advanced Verification Methodology) supporting System Verilog & SystemC. Though System Verilog settled the dust by claiming the maximum footprint across organizations, the availability of multiple methodologies introduced inertia into industry wide reusability. The major issues faced included –
 
- Learning a new methodology almost every 18 months
- The methodologies had limited portability across simulators
- A verification env developed using a VIP from one vendor was not easily portable to another
- Teams were confused about the roadmaps of these methodologies given varying industry adoption
 
Road to UVM
 
To tone down this problem, Mentor and Cadence merged their methodologies and came up with OVM (Open Verification Methodology), while Synopsys continued to stick to VMM. Though the problem was reduced, there was still a need for a common methodology, and Accellera took the initiative to develop one. UVM (Universal Verification Methodology), largely based on OVM and deriving features from VMM, was finally introduced. While IEEE recognized ‘e’ as a standard (1647) in 2011, it was already too late. Functional coverage, assertion coverage and code coverage all joined together to provide the quantitative metrics to answer ‘are we done’, giving rise to CDV (Coverage Driven Verification).
 
Suggested Reading - Standards War