In response to my earlier post on CRV, Daniel Payne raised an interesting question on Semiwiki – "Is there any difference in choosing a VHDL, Verilog, SystemC or SystemVerilog simulator from Cadence, Synopsys or Mentor and then using the CRV approach?"

Before answering this, let's quickly review the CRV approach. Constrained Random Verification involves choosing an HVL, defining the test bench architecture and developing constraints to generate legal random stimuli. When this test bench is used to simulate the DUT, a random seed value and a simulator become part of the verification environment. The seed helps in reproducing a failure only if the other inputs, i.e. the test bench architecture (component hierarchy) and the set of constraints, are held constant. Any change to these inputs may lead to different results even with the same seed value. The random seed and the constraints are fed to the constraint solver integrated in the simulator to generate random values.
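As a minimal sketch of seeding and reproducibility (the class and field names here are illustrative, not from any particular environment):

```systemverilog
// Hypothetical stimulus class -- names invented for illustration.
class packet;
  rand bit [7:0] addr;
  rand bit [7:0] data;
  // Legal stimulus: keep addresses in an assumed mapped range.
  constraint legal_addr_c { addr inside {[8'h10:8'h7F]}; }
endclass

module tb;
  initial begin
    packet p = new();
    // Re-seed this object's RNG. Rerunning with the same seed
    // reproduces the same sequence ONLY while the class hierarchy,
    // constraints and simulator/solver stay unchanged.
    p.srandom(32'd42);
    repeat (3) begin
      if (!p.randomize()) $fatal(1, "randomize() failed");
      $display("addr=%0h data=%0h", p.addr, p.data);
    end
  end
endmodule
```

Adding or removing even an unrelated `rand` field or constraint can shift the values produced for the same seed, which is why the whole environment must be frozen to reproduce a failure.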

Let's assume the HVL used is SystemVerilog (IEEE 1800). The standard specifies the data types, how to randomize, how to define constraints and what the legal outcomes of constraint expressions are. It does not specify the algorithms involved or the order in which the constraint solver should generate the outcome. The implementation of the solver is left to the sole discretion of the simulator R&D teams. Simulators from different vendors may also support multiple solvers that trade off memory usage against performance. Therefore, a given input may not give the same results when run on –

- Simulators from different sources OR

- Different versions of the same simulator OR

- Different solvers within the same simulator.
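A hypothetical sketch makes the point: the constraint below has many legal solutions, and the LRM only standardizes which solutions are legal, not which one is produced or in what order. Different simulators, versions or solver modes may therefore legitimately emit different value sequences from the same seed.

```systemverilog
class xfer;
  rand bit [3:0] len;
  rand bit [3:0] burst;
  // Every (len, burst) pair with 0 < burst <= len satisfies this
  // constraint. Which pair a solver picks on each randomize() call,
  // and in what order, is implementation-defined.
  constraint rel_c { len > 0; burst > 0; burst <= len; }
endclass
```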

All the constraint solvers claim to solve standard puzzles like Sudoku, the N-Queens problem etc. Some of them even claim to solve these standard problems in limited or minimum iterations. While these claims may be correct, their reproducibility depends entirely upon how the constraints are defined. The capability of any constraint solver to return results in a given time primarily depends upon the representation of the constraints. For a given equation, the solver may sometimes time out, yet if the constraints expressing that equation are restructured it returns results instantly.

So, what should one look for in a simulator when implementing the CRV approach?

All the industry-standard simulators have stable solvers that vary in performance, and there is no standard way to benchmark them. However, one should consider other important aspects too -

- Support of the data types planned for implementing the test bench.

- Ease of debugging the constraint failures.

The next important part is how to define clean and clear constraints that help the solver produce results quickly. Following the KISS (Keep It Simple Silly) principle sets the right ground. Split the constraints into smaller equations, particularly if the expressions involve multiple mathematical operators and data types. This gives better control over the constraint blocks and helps the solver resolve them easily. Defining easy-to-solve constraints improves the predictability of the outcome, i.e. whether the generated values fall in the legal range and how quickly they can be generated.
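A hypothetical before/after sketch (the `dma_desc` fields are invented for illustration) shows the splitting idea:

```systemverilog
class dma_desc;
  rand int unsigned size, count, limit;

  // Harder on the solver: one dense constraint mixing several
  // operators and variables in a single expression.
  // constraint dense_c { (size * count * 4) < limit && size > 0 && count > 0; }

  // Easier: small, single-purpose constraint blocks that can be
  // read, debugged and partitioned independently.
  constraint size_c  { size  inside {[1:64]}; }
  constraint count_c { count inside {[1:16]}; }
  constraint limit_c { limit inside {[1:4096]}; size * count * 4 < limit; }
endclass
```

Splitting also helps during debug: a single offending block can be turned off with `constraint_mode(0)` to isolate which relation makes the set unsatisfiable.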

Coming back to the main question, the answer is

**YES**, different simulators will give different results (both values and speed) when using the CRV approach. This will also affect regression turnaround time and coverage closure. However, there is no standard benchmark available for the constraint solvers in these simulators. It is up to the verification engineer to define the constraints so that the outcome is as solver independent as possible.

NOTE – This doesn't mean that the solver will never hit a bottleneck. Even with well-defined constraints, a constraint solver may sometimes fail to deliver results.