Continuing the discussion on CRV, here is a case study where a performance-intensive core was verified from scratch. The team started with:
- Architecture documents for each block and integrated core.
- Reference model for each block developed for architecture validation and early software development.
- Periodic deliveries of core tests from architecture validation team, reusable at block and core level.
- Sufficient compute cycles, tool licenses and access to a hardware accelerator to speed up verification at core level.
- Timeline for Architecture to RTL freeze = 13 months.
The plan: develop block-level test benches targeting CRV, capable of reusing system-level tests, to enable staged bring-up of functionality in parallel. The top-level test bench would reuse monitors and assertions from block level and simulate system-level tests only (no CRV). Sign-off criteria were 100% functional coverage, code coverage (line, expression and FSM) and assertion coverage at block level, plus toggle coverage at core level.
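To make the CDV loop concrete, here is a minimal conceptual sketch in Python, not a real SystemVerilog TB: the packet fields, constraint ranges and coverage bins are hypothetical stand-ins. Stimulus is randomized under constraints, and random tests run until the functional-coverage model reports the sign-off goal.

```python
import random

# Hypothetical functional-coverage model: bins for a block's packet interface.
# A real flow would express these as SystemVerilog covergroups.
COVERAGE_BINS = {
    "len_small": lambda p: p["length"] <= 64,
    "len_jumbo": lambda p: p["length"] > 1500,
    "prio_high": lambda p: p["priority"] == 3,
    "err_inject": lambda p: p["error"],
}

def random_packet():
    """Constrained random stimulus: the legal ranges act as the constraints."""
    return {
        "length": random.randint(1, 9000),   # constraint: 1..9000 bytes
        "priority": random.randint(0, 3),    # constraint: 2-bit priority field
        "error": random.random() < 0.1,      # constraint: ~10% error injection
    }

def run_until_coverage(goal=1.0, max_tests=100_000, seed=1):
    """Generate random tests until the functional-coverage goal is met."""
    random.seed(seed)
    hit = set()
    for n in range(1, max_tests + 1):
        pkt = random_packet()
        for name, bin_fn in COVERAGE_BINS.items():
            if bin_fn(pkt):
                hit.add(name)
        if len(hit) / len(COVERAGE_BINS) >= goal:
            return n, sorted(hit)
    return max_tests, sorted(hit)

tests_needed, bins_hit = run_until_coverage()
print(f"Coverage goal met after {tests_needed} random tests: {bins_hit}")
```

The rare bin (`len_small`) dominates the test count, which mirrors why coverage closure on corner cases drives CRV regression volume.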
Why limit CRV to block level?
- Test simulation time at core level was expected to be 12-48 hours, throttling iterations and affecting the schedule.
- Low probability of finding an RTL bug using CRV at a level where the focus is integration verification.
- Test cases from the architecture validation team would cover most of the use-case scenarios.
- Bring-up of a constrained random TB for the core would require a staged approach, i.e. appending one block at a time and stabilizing it; verification closure in the given time was unpredictable.
The estimated effort was 200 man-months, and the team started with one engineer per block, taking a coverage-driven verification (CDV) approach. After the test plan, block-level test benches were developed using a standard methodology, with code shared across similar functionalities. The test benches were brought up using system-level tests executed at block level. While RTL debugging started, coverage models were developed to realize the test plan and hooked onto the TB. Next, CRV support was added to the test bench. The random tests were initially directed towards areas not yet explored by the system tests, exercising the DUT from all fronts. When the system tests stagnated for a data path within a block, the random tests continued to weed out bugs and hit scenarios for coverage closure.

By the time the system tests started passing for individual blocks, the team was ready with the core-level TB, and every engineer finishing block level moved on to contribute at the top level. Tests at core level consumed a lot of simulation cycles (i.e. idle time for engineers), so a couple of engineers focused on a constrained random TB for adjacent blocks (the choke point from a performance perspective) to regress the RTL further. No specific coverage goals were planned at this level; the emphasis was on hitting random scenarios. Finally, the team delivered quality RTL with acceptable slippage in schedule.
Following CDV and a standard methodology for CRV helped in defining, tracking and achieving goals in an organized manner. Maintenance and rework claimed limited overhead, and reusing components from block to top level was smooth, with minimal effort.
With the CRV TB, the bug rate increased drastically, averaging 7 bugs per week per block for at least 6 weeks. Scenario generation grew exponentially, while the tests delivered by the architecture validation team continued to grow only linearly. The team was able to uncover hidden bugs and hit corner cases that were difficult to cover otherwise. Scenarios stressing performance bottlenecks were easy to generate through CRV.
The CRV TB for adjacent blocks revealed an interesting outcome. Converging on the valid constraints was challenging: the set of valid constraints at block level needed additional constraints once a new block was introduced into the DUT data path. This TB uncovered 3 bugs, two with a software workaround and one critical, where the two blocks would hang in a particular scenario. The latter would have been difficult to catch through system tests.
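The constraint-layering problem above can be sketched as follows. This is a minimal Python illustration with hypothetical `burst_len` and `addr_align` fields: the adjacent-block TB inherits the block-level constraints and tightens them for the new block in the data path, with rejection sampling standing in for a real constraint solver.

```python
import random

class BlockAStimulus:
    """Hypothetical block-level stimulus: 'legal' defines the valid traffic
    for block A in isolation (a Python stand-in for SV constraints)."""
    def legal(self, txn):
        return 1 <= txn["burst_len"] <= 16           # block A accepts bursts 1..16

    def randomize(self, rng):
        # Rejection sampling stands in for a constraint solver.
        while True:
            txn = {"burst_len": rng.randint(1, 32),
                   "addr_align": rng.choice([1, 2, 4, 8])}
            if self.legal(txn):
                return txn

class AdjacentBlocksStimulus(BlockAStimulus):
    """Adjacent-block TB: block B in the data path adds constraints on top of
    the block-level ones, narrowing the valid stimulus space."""
    def legal(self, txn):
        return (super().legal(txn)
                and txn["burst_len"] <= 8            # hypothetical FIFO depth limit in B
                and txn["addr_align"] >= 4)          # B needs wider address alignment

rng = random.Random(7)
block_txn = BlockAStimulus().randomize(rng)          # valid for A alone
pair_txn = AdjacentBlocksStimulus().randomize(rng)   # valid for A and B together
```

Layering the new constraints in a subclass keeps the block-level set reusable, which is essentially what reusing block-level TB components while adding guidance for the new block amounts to.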
Lesson learnt: if you cannot introduce CRV at the top level, deploy it at whatever level you can!