Friday, September 9, 2016

Quick chat with Alok Jain : Keynote speaker, DVCon India 2016

Alok Jain
All of us have heard the story of the woodcutter and the importance of the advice to “Sharpen your axe”. It applies well to everything we do, including verification. Two decades ago, the focus of a verification engineer was predominantly on “What to Verify”. As complexity grew, “How to Verify” became equally important. To enable this, EDA teams rolled out multiple technologies & methodologies. As we try to assimilate & integrate these flows amidst first-time-silicon and cost pressures, it is important for us to sharpen our axe through continuous learning, applying the right tool for the right job, and applying it effectively.

Alok Jain, Senior Group Director in the Advanced Verification Division at Cadence, will be speaking along similar lines in his DV track keynote on Day 1 at DVCon India 2016. With 20+ years of industry experience, Alok leads the Advanced Verification Division at Cadence India. Having been associated with different verification technologies over the past two decades, Alok candidly shared his views on the challenges beyond complexity that verification teams need to focus on. Here is a curtain raiser for his talk, "Verification of complex SoCs".

Alok, your keynote topic focuses on challenges in verification beyond the complexity resulting from Moore’s law. Tell us more about it.

The keynote is going to focus on challenges and potential solutions for verification of complex SoCs. Verifying a complex SoC consisting of tens of embedded cores and hundreds of IPs is a major challenge in the industry today. One of the big challenges is performance and capacity. Given the size and complexity of modern SoCs, tests can run for 18-24 hours or even more. One has to figure out how to get the best verification throughput. Another challenge is the generation of test benches and tests. The test benches have to be developed in a way that achieves good performance in both simulation and hardware acceleration. Tests have to be created that stress the SoC under application use cases, low power scenarios, and multi-core coherency scenarios. The tests have to be reusable across pre-silicon and post-silicon verification and validation platforms. Yet another challenge is coverage. One has to measure verification coverage across formal, simulation, and acceleration platforms at the SoC level to know when verification is done. The final challenge is how to effectively debug across RTL, test bench, and embedded software on multiple verification platforms.

In the last decade, advancements in verification were focused primarily on unifying HVL(s) & methodologies. What changes do you foresee in verification flows ‘Beyond UVM’?

UVM is very well suited for IP, sub-system and some specific aspects of SoC verification. However, UVM is not the best approach for general SoC verification. UVM was essentially developed for “bottom-up” verification, where the focus is on trying to exhaustively verify IPs and sub-systems. SoCs require a more “top-down” approach, where the focus is on stressing the SoC under important application use cases. There is a need to reuse SoC content across simulation, emulation, FPGA and post-silicon. UVM is optimized for simulation and is too slow and heavy for high-speed platforms. Finally, there is a need to drive software stimulus on CPUs in coordination with hardware interfaces, and it is difficult in UVM to drive and control software and hardware interfaces together. All of this pushes us to explore options beyond UVM. The keynote will cover some more insights into options beyond UVM.
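To make the hardware/software coordination point concrete, here is a minimal sketch (not from the keynote; all class and sequencer names are hypothetical) of a UVM virtual sequence that synchronizes stimulus on two interface agents. UVM handles this kind of orchestration well in simulation, but the moment part of the stimulus has to run as C code on an embedded core, or on an emulator, this style becomes hard to reuse.

```systemverilog
// Minimal sketch of a UVM virtual sequence coordinating two interface agents.
// Assumes uvm_pkg is available; dma_item, eth_item, dma_cfg_seq and eth_rx_seq
// are hypothetical placeholders for classes that would exist in a real bench.
import uvm_pkg::*;
`include "uvm_macros.svh"

class soc_use_case_vseq extends uvm_sequence;
  `uvm_object_utils(soc_use_case_vseq)

  // Sequencer handles for two hypothetical interface agents
  uvm_sequencer #(dma_item) dma_sqr;
  uvm_sequencer #(eth_item) eth_sqr;

  function new(string name = "soc_use_case_vseq");
    super.new(name);
  endfunction

  task body();
    dma_cfg_seq dma_seq = dma_cfg_seq::type_id::create("dma_seq");
    eth_rx_seq  eth_seq = eth_rx_seq::type_id::create("eth_seq");
    // Run DMA programming and Ethernet traffic in parallel. In a real SoC
    // use case, the DMA programming would be C code executing on an
    // embedded core, which plain UVM cannot drive or synchronize directly.
    fork
      dma_seq.start(dma_sqr);
      eth_seq.start(eth_sqr);
    join
  endtask
endclass
```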

The rise of IoT is stretching design demands to both extremes, i.e. server-class vs. edge-node devices. How do you see verification flows catering to these demands?

Several of the requirements for IoT verification are similar to the ones for complex SoCs. But then there are some unique additional requirements from the IoT world. The first is simply the cost of verification. For complex SoCs, the cost of verification has been steadily rising. For IoT applications, one has to consider alternative methods and flows that can reduce the cost. One option is to use some form of a correct-by-construction approach, where the design is specifically done in a way that enables a simpler form of verification. Another approach is to put much more emphasis on reuse. This includes horizontal reuse, which is portability across multiple platforms, and vertical reuse, which is reuse from IP to sub-system to SoC. Another requirement is verification throughput for designs with considerably more analog, mixed-signal and low-power content. Finally, one has to devise verification techniques and flows that can cater to the security and safety requirements of modern IoT applications.

Formal took a while to become mainstream. The rise of Apps in Formal seems to have accelerated this adoption. What’s your view on this?

Yes, I do agree that apps have considerably accelerated the pace of adoption of formal. Traditionally, formal tools have been developed and used by formal PhDs and experts. The main charter and motivation of these experts was to solve the coolest and hardest problems in formal verification. It was only after some time that both sides (developers and users) started realizing that formal can be used in a much more practical and usable way by engineers to solve specific problems. This led to the development of various formal apps, which greatly enabled the mainstream usage of formal.

This is the 3rd edition of DVCon India. What are your expectations from the conference?

I am expecting to attend keynotes, technical papers and panel discussions that give me an understanding of some of the latest work in the domain of design and verification of IPs, sub-systems and SoCs. In addition, I am looking forward to the opportunity to network with some of my peers from industry and academia.

Thank you Alok!


Come join us in this exciting journey to contribute, collaborate, connect & celebrate @ DVCon India 2016!

Disclaimer: “The postings on this blog are my own and do not necessarily reflect the views of Aricent”

Saturday, September 3, 2016

Quick chat with Sushil : Keynote speaker, DVCon India 2016

Sushil Gupta
A very famous Urdu verse, which translates to “When I started I was alone; slowly others joined and a caravan formed”, truly describes the plethora of challenges in SoC verification that continue to abound as design complexity marches north. It started with growing the logic on the silicon and moved to performance before power took over. While we still juggle the PPA implications, time-to-market pressure along with the demand for cost-effective, secure, customized solutions adds further spice to the problem.

Sushil Gupta, Group Director in the Verification group at Synopsys, covers these problems & potential solutions in his keynote titled “Today’s SoC Verification Challenges: Mobile and Beyond” on Day 2 of DVCon India 2016. Sushil joined Synopsys in 2015 as part of the acquisition of Atrenta. He has 30 years of industry experience spanning various roles in engineering management and leadership at EDA and VLSI design companies. Here is a quick excerpt of the conversation with Sushil around this topic –

Sushil, your keynote topic focuses on challenges in verification associated with the next generation of SoCs. Tell us more about it.

We have seen the chip design industry shift its focus from computers and networking to Systems on Chip (SoCs) for mobility – smartphones, tablets, and other consumer devices. The next wave of SoCs goes beyond mobility into IoT, automotive, robotics, etc. These SoCs integrate hundreds of functions into a single chip along with a complete software stack with drivers, operating system, etc. The result is a 10X increase in verification complexity in continually shrinking market windows. My talk focuses on these challenges and how verification solutions must scale to address them effectively.

Reuse of IP/sub-systems is the key trend with SoCs today. Do you think that reuse from third parties adds to the challenges in verification? If yes, how?

IP/sub-system reuse (both third-party and in-house) helps accelerate the integration of multiple functions into a single chip. However, these IPs/sub-systems can come from multiple sources with heterogeneous design and verification flows. The resulting SoCs are extremely complex, with millions of lines of RTL and testbench code, protocols, assertions, clock and power domains, and billions of cycles of OS boot.

Do you think progress in verification methodologies & flows has reached a point where consolidation is key to allowing verification engineers to use the best of each? Any specific trends that you would like to highlight on this?

Integrated verification platforms are key to verification convergence. Verification now extends beyond functional verification into low power verification, debug automation, static and formal verification, early software bring-up and emerging challenges with safety, security and privacy. This requires not only best-in-class verification tools and engines, but also native integrations between the tools to enable seamless transitions and faster convergence.

Sushil, you have had a significant stint with formal at Atrenta. What are your thoughts on the adoption of formal coming to the mainstream? How does the trend look moving forward?

Formal is fast becoming mainstream because it can catch bugs that are otherwise very difficult to detect. Advancements in the performance, debug and capacity of formal verification tools have enabled formal to become an integral part of a comprehensive SoC verification flow. The emergence of formal ‘Apps’ for clock and reset domains, low power, connectivity, sequential equivalence, coverage exclusions, etc. has enabled a broad range of design and verification engineers to benefit from formal verification without the need to be a formal “expert”.
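As a hedged illustration of why such apps lower the barrier, consider connectivity checking. The sketch below (signal and module names are invented for this example) shows the kind of check a formal connectivity app generates and proves automatically from a connection specification, so the user never has to write a testbench for it:

```systemverilog
// Hypothetical connectivity checker, intended to be bound to the SoC top.
// Signal and module names are invented; a formal connectivity app would
// generate and prove equivalent checks automatically from a connection spec.
module pad_conn_check (
  input logic       clk,
  input logic       rst_n,
  input logic [1:0] pad_mux_sel,
  input logic       uart_tx,
  input logic       pad_out
);
  // When the pad mux selects function 1, the pad output must follow the
  // UART transmit signal one clock later, for every possible input sequence.
  property p_uart_tx_connectivity;
    @(posedge clk) disable iff (!rst_n)
      (pad_mux_sel == 2'd1) |=> (pad_out == $past(uart_tx));
  endproperty

  assert_uart_tx_conn: assert property (p_uart_tx_connectivity)
    else $error("pad_out did not follow uart_tx when selected");
endmodule
```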

This is the 3rd edition of DVCon India. What are your expectations from the conference?

Speaking from my own experience, having started my career with TI India in 1986, India has a very rich pool of design and verification expertise. I hope to learn about the latest challenges and innovations in verification and look forward to working with our customers and partners on new breakthroughs.

Thank you Sushil!

Join us on Day 2 (Sept 16) of DVCon India 2016 at Leela Palace, Bangalore to attend this keynote and other exciting topics.

Disclaimer: “The postings on this blog are my own and do not necessarily reflect the views of Aricent”

Saturday, August 27, 2016

Quick chat with Wally : Keynote speaker, DVCon India 2016

Walden C. Rhines
It takes a village to raise a child! Correlating this with the growth of an engineer: YES, it does require Contribution from many & Collaboration with many. While our respective teams play the role of a family, the growth is accelerated when we Connect beyond these boundaries. DVCon India is one such platform enabling all of these for the design, verification & ESL community. The 3rd edition of DVCon India is planned for September 15-16 at Leela Palace, Bangalore.

The opening keynote on Day 1 is from Walden C. Rhines, CEO & Chairman, Mentor Graphics. It is always a pleasure to hear his insights on the semiconductor & EDA industry. This year, he has picked an interesting topic – “Design Verification: Challenging Yesterday, Today and Tomorrow”. While we all wait with excitement to hear him on Sept 15, Wally was kind enough to share his thoughts on some queries that came up after I read the brief about his keynote. Below is an unedited version of the dialogue for you.

Wally, your keynote topic is an excellent start to the program, discussing the challenges head-on. Tell us more about it.

Our industry has done a remarkable job of addressing rising complexity in terms of both design and verification productivity. What’s changed recently in verification is the emergence of a new set of requirements beyond the traditional functional domain. For example, we have added clocking, power, performance, and software requirements on top of the traditional functional requirements, and each of these new requirements must be verified. While the continual development of new standards and methodologies has enabled us to keep pace with rising complexity and remain productive, we are seeing that requirements for security and safety are becoming more important and could ultimately pose challenges more daunting than those we have faced in the past.

In the last few years, ESL adoption has improved a lot. Is it the demand to move to a higher abstraction level or the convergence of diverse tool sets into a meaningful flow that is driving it?

Actually, a little of both. Historically, our industry has addressed complexity by raising abstraction when possible. For example, designers now have the option of using C, SystemC, or C++ as a design entry language combined with high-level synthesis to dramatically shorten the design and verification cycle by producing correct-by-construction, error-free, power-optimized RTL.

Moving beyond high-level synthesis, we are seeing new ESL design methodologies emerge that allow engineers to perform design optimizations on today’s advanced designs more quickly, efficiently, and cost-effectively than with traditional RTL methodologies by prototyping, debugging, and analyzing complex systems before the RTL stage. ESL establishes a predictable, productive design process that leads to first-pass success when designs have become too massive and complex for success at the RTL stage.

The rise of IoT is stretching design demands to both extremes, i.e. server-class vs. edge-node devices. How does the EDA community view this problem statement?

Successful development of today’s Internet of Things products involves the convergence of best practices for system design that have evolved over the past 30 years. However, these practices were historically narrowly focused on specific requirements and concerns within a system. Today’s IoT ecosystems combine electronics, software, sensors, and actuators, all interconnected through a hierarchy of various complex levels of networking. At the lowest level, the edge node as you referred to it, advanced power management is fundamental for the IoT solution to succeed, while at the highest level within the ecosystem, performance is equally critical. Obviously, EDA solutions exist today to design and verify each of these concerns within the IoT ecosystem. Yet more productivity can be achieved with more convergence of these solutions where possible. For example, there is a need today to eliminate the development of multiple silos of verification environments that have traditionally existed across various verification engines, such as simulation, emulation, prototyping, and even real silicon used during post-silicon validation. In fact, work has begun within Accellera to develop a Portable Stimulus standard, which will allow engineers to specify the verification intent once in terms of stimulus and checkers, which can then be retargeted through automation to a diverse set of verification engines.

Wally, you seem to love India a lot! We see frequent references from you about the growing contribution of India to the global semiconductor community. Any specific trends that you would like to highlight?

Perhaps one of the most striking findings from our 2016 Wilson Research Group Functional Verification Study is how India is leading the world in terms of verification maturity. We can measure this phenomenon by looking at India’s adoption of SystemVerilog and UVM compared to the rest of the world, as well as India’s adoption of various advanced functional verification techniques, such as constrained-random simulation, functional coverage, and assertion-based techniques.

This is the 2nd time you will be delivering a keynote at DVCon India. What are your expectations from the conference?

I expect that the 2016 DVCon India will continue its outstanding success as a world-class conference, growing in both attendance and exhibitor participation, while delivering high-quality technical content and enlightening panel discussions.

Thank you Wally! We look forward to seeing you at DVCon India 2016.

Disclaimer: “The postings on this blog are my own and do not necessarily reflect the views of Aricent”

Sunday, June 26, 2016

Marlin & Dory way of 'Finding Bugs'

On our return after watching ‘Finding Dory’, my son asked, “Dad, if you were to find Dory, would you be able to do that?” I said, “Of course!” Next came HOW? I reminded him that my job is to Find Bugs, so I already know the tricks of the game. That made him super excited and wanting to know more about it. Given that this time the reference was picked by him, I decided to continue with it to explain further.

In the movie, Marlin and Nemo were finding Dory inside the Marine Life Institute (MLI); likewise, we find bugs inside a design called a System on Chip (SoC). The SoC has a lot of similarity to the MLI in the sense that it is big and complex. Just as the MLI had different sections, our SoC has different blocks where Dory (Bug) can be found. And it is not only the sections but also the inter-connections that are equally important. When we look for Dory (Bug) inside these blocks we call it IP verification, and when our focus is on the inter-connections we call it Integration or SoC Verification.

Image Source : http://www.socwall.com/desktop-wallpaper/7462/dory-and-marlin-by-ryone/

We start off our quest the Marlin way, i.e. “Assess the situation, evaluate, and plan it out”. We call it the Directed Verification approach, wherein we understand the design, prepare a plan on where and how we will look around for Dory (Bug), and then execute accordingly. During this process we also keep asking around (reviews with designers & peers) to let us know if we are missing out on anything. So if Dory is somewhere around, there is a chance we may sight her. But since Dory doesn’t think much before acting, she is unpredictable. There is always a possibility that we may not find her as per our plan.

My son’s eyeballs zoomed…. THEN?

Then we also do what Marlin & Nemo did, i.e. follow “What would Dory do?” My son jumped: “She wouldn’t think twice and would just be random.” Yes! We pick the Dory way, and we call it Random Verification. We search randomly everywhere in an unplanned sequence and, guess what, the chances that we will find Dory (Bug) increase. To make it more effective, we add weights and constraints to the randomness so as to further improve our chances of finding her. The approach now becomes Constrained Random Verification (CRV). While following this random pattern we also take note (coverage) of all the places we have visited, to avoid repeating the same place again and to save time. Now we can find her faster. Tracking coverage on top of CRV is called Coverage Driven Verification (CDV). So if we missed finding Dory (Bug) using the Marlin way (Directed Verification), we still have an option to find her the Dory way (CRV).
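For the engineers reading along, here is a minimal, illustrative SystemVerilog sketch of the Dory way: a randomized packet with weighted constraints (CRV) plus a covergroup that keeps the note of where we have already visited (CDV). The class and field names are made up for this post.

```systemverilog
// Constrained-random stimulus with functional coverage (illustrative only).
class dory_packet;
  rand bit [7:0] addr;
  rand bit [1:0] kind;   // 0: read, 1: write, 2: burst

  // The "weights and constraints" on the randomness: bursts are rarer,
  // and addresses stay inside the legal range.
  constraint c_kind { kind dist {0 := 4, 1 := 4, 2 := 1}; }
  constraint c_addr { addr inside {[8'h10 : 8'hEF]}; }

  // The note of where we have already visited (functional coverage).
  covergroup cg;
    coverpoint addr { bins low = {[8'h10:8'h7F]}; bins high = {[8'h80:8'hEF]}; }
    coverpoint kind;
    cross addr, kind;
  endgroup

  function new();
    cg = new();
  endfunction
endclass

// Usage sketch: randomize repeatedly and sample coverage after each visit.
module tb;
  initial begin
    dory_packet p = new();
    repeat (100) begin
      void'(p.randomize());
      p.cg.sample();
    end
  end
endmodule
```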

That settled my son for a while, till he pointed out again, “Dad, maybe you should seek help from Bailey, the beluga whale, who can find Dory faster than anyone using echolocation”. I smirked and told him that we have our Bailey too, and we call it Formal Verification. But then, Bailey was dependent on the whale voice between Dory & Destiny, the whale shark, without which he couldn’t be of much help. Similarly, in formal we are dependent on the assertions that connect the tool to the bug in the design. The effectiveness of this approach depends purely on the quality of the voice (assertions) and the connection (covering all parts of the design) between Dory & Destiny. But yes, if that is in place, it is really fast & effective.
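And here is what the whale voice looks like in practice: a small, hypothetical SVA assertion (the signal names are invented) that a formal tool can check exhaustively across the design, but only for the behavior the assertion actually describes.

```systemverilog
// The "whale voice" in SystemVerilog: a hypothetical handshake assertion.
// Signal names are invented; the formal tool can only find what the
// assertion describes, just as Bailey can only follow the voice he hears.
module req_gnt_check (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic gnt
);
  property p_req_gets_grant;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;   // every request must be granted within 4 cycles
  endproperty

  assert_req_grant: assert property (p_req_gets_grant)
    else $error("Request was not granted within 4 cycles");
endmodule
```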

Now convinced that his dad would be able to find Dory, my son asked, “So once you have found Dory, what do you do next?”

I laughed and told him that we don’t have to find only one Dory (Bug). There are many of them, and the address and architecture of the institute (new SoC) also keep changing. So we just keep Finding Dory (Bug)!!!

Disclaimer: "The postings on this blog are my own and do not necessarily reflect the views of Aricent"

Sunday, June 12, 2016

Learning Verification with Angry Birds

What do you say when your little one asks you, “Dad! What do you do?” Well, I said, “I am an engineer”. For his age, he knew what a driver, a doctor and a policeman do. So the next question was, “What does an engineer do?” I pointed to different man-made things around us to explain what all an engineer does. As an inquisitive kid, he wanted to know if I built them all. That is when I tried to explain that different engineering functions build different artifacts. So the question came back: “What do you do?” Finally, I told him, “I find BUGS in designs”. The next one was HOW? Given that he had watched The Angry Birds Movie recently & loves to play the game, I picked up from there to explain what a verification engineer really does.

Figure 1 : Labeled screenshot of The Angry Birds game 
As in the figure above, the screenshot of the game is what we call the TESTBENCH. The target seen on the right is called the DESIGN UNDER TEST, or DUT in short. Our goal is to hammer the DUT with minimum iterations such that all the BUGS inside it, like the pigs above, get kicked out. On the left you see a series of angry birds waiting to take the leap. We refer to them as the PACKETS or SEQUENCE ITEMS. They are all from the same base class, “angry_birds”, i.e. they have certain characteristics in common while each one has some different features, so as to ensure we hit the DUT differently. We sequence these birds (sequence items) in such a way as to generate different scenarios to weed out the pigs (bugs). This scenario is called a TEST CASE.

The catapult shown is known as the DRIVER in our testbench. It takes an angry bird (sequence item) and throws (drives) it onto the DUT at different points known as the INTERFACES of the DUT. Once the angry bird (sequence item) hits the DUT, there is an inbuilt MONITOR in the game (testbench) that confirms whether the flight taken was useful or not, and if it was, how much. If the hit results in the correct outcome, the SCOREBOARD gives a go-ahead, and this leads to the score we get, which we call COVERAGE. The high score is the maximum coverage achieved with this test case.

When we are able to kill all the pigs (here, bugs) hidden in different parts of the DUT, we are all set to move to another screen, i.e. a new test case targeting another part of the DUT. Once all tests at a given level pass, we move to the next level, which is a little tougher. We can call it moving vertically, i.e. block to subsystem to SoC/top, OR moving horizontally within a given scope, i.e. more complex test scenarios or stress tests. Usually, by the time we have passed all levels, another version of the game is released and we move on to that one, i.e. the next PROJECT.
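For readers who want to peek under the hood, here is a tiny, hypothetical version of the “angry_birds” base class written as a UVM sequence item: the common traits live in the base class, and each derived bird adds its own random twist so every hit on the DUT is a little different.

```systemverilog
// Hypothetical "angry_birds" base class written as a UVM sequence item.
// Assumes uvm_pkg is available; all names and fields are made up here.
import uvm_pkg::*;
`include "uvm_macros.svh"

class angry_bird extends uvm_sequence_item;
  rand bit [7:0] speed;   // how hard the catapult (driver) throws it
  rand bit [3:0] angle;   // where on the DUT (interface) it will land

  constraint c_legal { speed > 0; angle inside {[1:12]}; }

  `uvm_object_utils_begin(angry_bird)
    `uvm_field_int(speed, UVM_ALL_ON)
    `uvm_field_int(angle, UVM_ALL_ON)
  `uvm_object_utils_end

  function new(string name = "angry_bird");
    super.new(name);
  endfunction
endclass

// A derived bird adds its own special move on top of the common traits.
class blue_bird extends angry_bird;
  rand bit splits_into_three;
  `uvm_object_utils(blue_bird)

  function new(string name = "blue_bird");
    super.new(name);
  endfunction
endclass
```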

After explaining it to my son, I felt he would be fascinated by my work. He thought about it and said, “Dad, so you don’t really work, you go to the office and play”!!!

All I could tell him was, “Become a Verification Engineer and you too can play at work”!!!

Disclaimer: "The postings on this blog are my own and do not necessarily reflect the views of Aricent"

Monday, May 23, 2016

.....of Errors & Mistakes in Verification

Miniaturization of devices has led to packing more functionality onto a given slice of silicon. An after-effect of that is the heating of the device due to increased power consumption, and the need to discover innovative ways of cooling these components. As electronics went wireless, the concern about power came to the forefront; after all, who wants to recharge the battery every second hour? Different techniques have been adopted since then to address this growing concern. One such technique is letting parts of the silicon go into hibernation and triggering a wake-up when needed. My hibernation from blogging was no different, except that though I received many pokes during this time, the triggers probably weren’t effective enough to tantalize the antennae of the blogger in me. It was only at a recent verification event hosted by Mentor Graphics that my friend Ruchir Dixit, Technical Director – India at Mentor Graphics, introduced the event with an interesting thought touching on the basics of verification. The message completely resonates with the idea of this blog, exploring verification randomly but rooted in the basics, and I took it as a sign to get the ball rolling again. To start with, I am sharing the thoughts that actuated this restart. Thank you, Ruchir, for allowing me to share them.


Source: Slides from Ruchir Dixit - 'Verification Focus & Vision' presented at Verification Forum, Mentor Graphics, India

Before we unfold the topic further, have you ever wondered why computers only spell out ERRORS & not MISTAKES?

Let’s start by understanding the basic difference between an error & a mistake. A mistake is usually a choice that turns out to be wrong because the outcome is wrong. Mistakes are made when a free choice is made, either accidentally or performance-based, but they can be prevented or corrected. An error, on the other hand, is a violation of a golden reference or set of rules that would have led to a different action and outcome. Errors are typically a result of a lack of knowledge, not of choice. That is the reason a computer doesn’t make mistakes and only throws errors on the screen when it is unable to move forward on a pre-defined set of actions or sees a violation of them. And that, again, is why you see Warnings & Errors from our EDA tools and not Mistakes :) Machines don’t make mistakes… we do!

Now talking about verification, the sole reason why we verify is BUGS! And the source of these BUGS is the ERRORS & MISTAKES committed as part of code development.

Mistakes, as we understood earlier, are the result of a free choice. While no one wants to make a bad choice, these still creep into the code due to distraction or coding in a hurry. Preventing or correcting such mistakes is a matter of basic discipline, and that is where EDA tools come to the rescue in assisting you to make the right choice.
 
Errors typically happen due to ignorance about the subject or partial knowledge leading to wrong assumptions. This could further find its roots in incomplete documentation or an incorrect understanding of the subject. Given that documentation & the resulting conclusions are rather subjective, it is hard to define the right way to document anything. The only way to minimize errors is to prevent them from occurring by defining a clear set of rules that need to be followed, and that is where ‘Methodology’ comes into the picture. A classic example is having a template generator for UVM code to ensure the code is correct by construction & integrates seamlessly at different levels. Having coding guidelines is another way to reduce errors. Uncovering the remaining errors is where the tests become important: unless we stimulate a scenario, we may not know what & where the error is.
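As a hedged illustration of the correct-by-construction idea, a UVM template generator typically emits a skeleton like the one below (component and item names are placeholders): the factory registration, constructor and phase hooks are machine-generated, so the boilerplate that is easiest to get wrong is never hand-typed, and only the marked section is written by the engineer.

```systemverilog
// Skeleton a UVM template generator might emit (all names are placeholders).
// The factory registration, constructor and phase hook are generated and thus
// correct by construction; only the TODO section is written by the engineer.
import uvm_pkg::*;
`include "uvm_macros.svh"

class my_block_driver extends uvm_driver #(my_block_item);
  `uvm_component_utils(my_block_driver)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  virtual task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);
      // TODO: drive 'req' onto the interface signals
      seq_item_port.item_done();
    end
  endtask
endclass
```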

So while errors & mistakes are unavoidable, it is the deployment of the right set of methodologies and tools that leads to bug-free silicon… in time… every time!

After writing this post, I was tempted to say that ‘To ERR is HUMAN and to FORGIVE or VERIFY is DIVINE!’

But then that would be a MISTAKE again :)

Happy Bug Hunting!!!


Disclaimer: “The postings on this blog are my own and do not necessarily reflect the views of Aricent”