Friday, February 22, 2019

Quick chat with Tom Fitzpatrick : recipient of Accellera Technical Excellence Award


Moore’s law, the driving force behind the evolution of the semiconductor industry, states that the complexity on silicon doubles every 2 years. To sustain this outcome for the past 50+ years, however, several enablers had to evolve at the same or a much faster pace. Verification as a practice unfolded as part of this journey and entered the mainstream, maturing with every new technology node. To turn the wheel around every time, various spokes in the form of languages, EDA tools, flows, methodologies, formats & platforms get introduced. This requires countless hours of contribution from individuals representing a diverse set of organizations & cultures, putting the canvas together for us to paint the picture.

Tom Fitzpatrick
At DVCon, Accellera recognizes the outstanding achievement of an individual and his/her contributions to the development of standards. This year, Tom Fitzpatrick, Vice Chair of the Portable Stimulus Working Group and member of the UVM Working Group, is the recipient of the 8th annual Accellera Technical Excellence Award. Tom, who represents Mentor at Accellera, has more than 3 decades of rich experience in this industry. In a quick chat with us, he shares his journey as a verification engineer, technologist & evangelist!!!

Many congratulations Tom! Tell us about yourself & how did you get started in the verification domain?

Thanks! I started my career as a chip designer at Digital Equipment Corporation after graduating from MIT. During my time there, I was the stereotypical “design engineer doing verification,” and learned a fair amount about the EDA tools, including developing rather strong opinions about what tools ought to be able to do and not do. After a brief stint at a startup, I worked for a while as a verification consultant and then moved into EDA at Cadence. It was in working on the rollout of NC-Verilog that I really internalized the idea that verification productivity is not the same thing as simulator performance. That idea is what has really driven me over the years in trying to come up with new ways to make the task of verification more efficient and comprehensive.

Great! You have witnessed verification evolving over decades. How has your experience been on this journey?

I’m really fortunate to have “grown up” with the industry over the years, going from schematics and vectors to where we are now. I had the good fortune to do my Master’s thesis while working at Tektronix, being mentored by perhaps the most brilliant engineer I have ever known. I remember the board he was working on at the time, which had both TTL and ECL components, multiple clock domains, including a voltage-controlled oscillator and phase-locked loop, and he got the whole thing running on the first pass doing all of the “simulation” and timing analysis by hand on paper. That taught me that even as we’ve moved up in abstraction in both hardware and verification, if you lose sight of what the system is actually going to do, no amount of debug or fancy programming is going to help you.

For me, personally, I think the biggest evolution in my career was joining Co-Design Automation and being part of the team that developed SUPERLOG, the language that eventually became SystemVerilog. Not only did I learn a tremendous amount from luminaries like Phil Moorby and Peter Flake, but the company really gave me the opportunity to become an industry evangelist for leading-edge verification. That led to working on VMM with Janick Bergeron at Synopsys and then becoming one of the original developers of AVM and later OVM and UVM at Mentor. From there I’ve moved on to Portable Test and Stimulus as well.

So, what according to you were the key changes that have impacted the verification domain the most?

I think there were several. The biggest change was probably the introduction of constrained-random stimulus and functional coverage in tools like Specman and Vera. Combined with concepts like object-oriented programming, these really brought verification into the software domain, where you could model things like the user accidentally pressing multiple buttons simultaneously and other things that the designer didn’t originally think would happen. It was huge for the industry to standardize on UVM, which codified those capabilities in SystemVerilog so users were no longer tied to those proprietary solutions; the fact that UVM is now the dominant methodology in the industry bears that out. As designs have become so much more complex, including having so much software content, I hope that Portable Stimulus will serve as the next catalyst to grow verification productivity.
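To make Tom's point concrete, here is a minimal SystemVerilog sketch (hypothetical names, not from the interview) of constrained-random stimulus paired with functional coverage: the solver generates legal but unplanned button combinations, and the covergroup records whether the interesting ones, like simultaneous presses, actually occurred.

    // A hypothetical "user input" transaction for the multiple-buttons example.
    class button_txn;
      rand bit [3:0] buttons;      // one bit per button; several may be pressed at once
      rand int unsigned hold_ns;   // how long the combination is held

      constraint c_hold { hold_ns inside {[1:1000]}; }

      covergroup cg;
        // Did the generator ever produce a multi-button press?
        cp_presses: coverpoint $countones(buttons) {
          bins none = {0}; bins single = {1}; bins multi = {[2:4]};
        }
        cp_hold: coverpoint hold_ns { bins short_hold = {[1:10]}; bins long_hold = {[11:1000]}; }
      endgroup

      function new();
        cg = new();
      endfunction
    endclass

    module tb;
      initial begin
        button_txn t = new();
        repeat (100) begin
          if (!t.randomize()) $fatal(1, "randomize() failed");
          t.cg.sample();           // record what the solver actually produced
        end
        $display("input coverage = %0.2f%%", t.cg.get_coverage());
      end
    endmodule

The covergroup is where the "unthought-of scenarios" idea shows up: coverage measures whether the messy stimulus was actually exercised, rather than trusting that randomness found it.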

Tom, you have been associated with Accellera for a long time & have contributed to multiple standards in different capacities. How has your experience been working on standards?

My experience with standards has been entirely self-inflicted. It started when I was at Cadence and heard about a committee standardizing Verilog but that there were no Cadence people on the committee. I kind of joined by default, but it’s turned out to be a huge part of my career. Aside from meeting wonderful people like Cliff Cummings, Stu Sutherland and Dennis Brophy, my work on standards over the years has given me some unique insights into EDA tools too. I’ve always tried to balance my “user side,” where I want the standard to be something I could understand and use, with my “business side,” where I have to make sure that the standard is supportable by my company, so I’ve had to learn a lot more than someone in my position otherwise might about how the different simulators and other tools actually work. On a more practical note, working on standards committees has also helped me learn everything from object-oriented programming to Robert’s Rules of Order.

You have been one of the key drivers behind development of Portable Test and Stimulus Standard (PSS). How was your experience working on this standard compared to UVM?

Good question! UVM was much more of an exercise in turning existing technology into an industry standard, which involved getting buy-in from other stakeholders, including ultimately VMM advocates, but we didn’t really do a whole lot of “inventing.” That all happened mostly between Mentor and Cadence in developing the OVM originally. We also managed to bring most of the VMM Register Abstraction Layer (RAL) into UVM a bit later.

Portable Stimulus has been different for two reasons. First, I’m the vice-chair of the Working Group, so I’ve had to do a lot more planning than I did for UVM. The other is that, since the technology is relatively new, we had the challenge of combining the disparate capabilities and languages used by existing tools into a new declarative language that has different requirements from a procedural language like SystemVerilog. We spent a lot of time debating whether the standard should include a new language or whether we should just use a C++ library. It took some diplomacy, but we finally agreed to the compromise of defining the new language and semantics, and then producing a C++ library that could be used to create a model with the same semantics. To be honest, we could have played hardball and forced a vote to pick only one or the other, but we wanted to keep everyone on board. Since we made that decision, the working group has done a lot of really great work.

What are the top 2-3 challenges that you observe we as an industry need to solve in verification domain?

Remember when I said earlier that verification productivity is about more than simulator performance? Well, with designs as big and involved as they are today – and only going to get more so – we’re back at the point where you need a minimum amount of performance just to be able to simulate the designs and take advantage of things like UVM or Portable Stimulus without it taking days. This is actually part of the value of Portable Stimulus: the engine can now be an emulator, FPGA prototype or even silicon, so you get both the performance to produce results relatively quickly and the productivity as well.

The other big challenge, I think, is going to be the increasing software content of designs. Back when I started, “embedded software” meant setting up the hardware registers and then letting the hardware do its thing. That made verification relatively easy, because RTL represents physical hardware, which doesn’t spontaneously appear and disappear the way software does. We’ve spent the last ten or so years learning how to use software techniques in verification to model the messy stuff that happens in the real world and to make sure that the hardware would still operate correctly. When you start trying to verify a system whose software could spontaneously spawn multiple threads to make something happen, it becomes much harder. Trying to get a handle on that for debug and other analysis is going to be a challenge.

But perhaps the biggest challenge is going to be just handling the huge amounts of data and scenarios that are going to have to be modelled. Think about an autonomous car, and all of the electronics that will have to be verified in an environment that needs to model lots of other cars, pedestrians, road hazards and tons of other stuff. When I let myself think about that, it seems like it could be a larger leap than we’ve made since I was still doing schematic capture and simulating with vectors. I continue to be blessed to now work for a company like Siemens, which is actively engaging this very problem.

Based on your vast experience, any words of wisdom for practicing & aspiring verification engineers?

I used to work with a QA engineer who was great at finding bugs in our software. Whenever a new tool or user interface feature came out, he would always find bugs in it. When I asked him how he did it, he said he would try to find scenarios that the designer probably hadn’t thought about. That’s pretty much what verification is. Most design engineers are good enough that they can design a system to do the specific things they think about, even specific corner cases. But they can’t think of everything, especially with today’s (and tomorrow’s) designs. Unfortunately, if it’s hard for the design engineer to think of, it’s probably hard for the verification engineer to think of too. That’s why verification has become a software problem – because that’s the only way to create those unthought-of scenarios.

Thank you Tom for sharing insights & your thoughts.  
Many congratulations once again!!!


DVCon US 2019 - February 25-28, 2019 - Double Tree Hotel, San Jose, CA

Tuesday, November 6, 2018

Quick chat with Srini Maddali : Keynote speaker Accellera Day India 2018

Distant communication has come a long way over the last century! The 64 kbps telephone line enabling real-time communication revolutionized traditional messenger-based means. Wireless communication brought in another revolution, enabling people to talk on the go! While this eased up communication, it also took technology to remote places, adding millions of users to this network. With wireless & internet joining hands, a new world of possibilities opened up that continues to evolve and amaze us by the day! Qualcomm continues to lead the effort of bringing wireless technology and associated enablers to the mainstream. The verification team at Qualcomm works relentlessly every day to overcome the challenges posed by this accelerated rise in complexity. As we look forward to the 5G-enabled world, what are some of these challenges & expectations from the next generation of verification tools & technologies?

Srini Maddali

Srini Maddali, Vice President, Technology at Qualcomm India Design Center, leads the Mobile SoC development Engineering teams. With a focus on low-power, high-performance SoC design and enabling rapid ramp to volume of these designs, Srini has first-hand knowledge of these challenges, which he will be sharing as part of his keynote at Accellera Day India at Radisson Blu Bengaluru on November 14, 2018. A quick chat with Srini unfolded the adventurous ride that awaits the verification community as we embrace the proliferation of IoT through the 5G route. Read on!!!

Srini, you plan to discuss the topic “Challenges in Validation of Low Power, High Performance, and Complex SoCs at Optimal Cost”. Tell us more about it?

Year over year, complexity, performance and power requirements have been increasing rather drastically. In addition, the time from the start of SoC development to customer deployment is shrinking. These pose significant challenges to SoC development teams, both in design and validation.

For our team, putting together a test plan comprehending the use-case requirements of each domain in these Systems (a System that is on a monolithic die) and then validating concurrent use cases involving multiple domains operating independently, creating conditions that stress the designs, is quite challenging. On top of this, the team must validate the performance aspect of each domain independently, validate concurrent scenarios, and simulate use cases beyond the spec of the system, ensuring a graceful exit each time.

The challenges become multi-fold when simulating power profiles comprising a number of power domains with a huge number of system use cases interacting dynamically.

The cost aspect goes beyond resources; it is really the time aspect of delivering the SoC to the customer on a tight schedule.

Combine all of this with multiple SoCs being developed in parallel, to leverage the best across the chips, and validating the entire SoC builds up into a multi-dimensional challenge for any team. As part of the team, I witness this with every chip development, and multiple developments per year. I will be covering these aspects in my talk.

WoW!!! Srini, you are throwing light on the whole iceberg! Wondering how your team’s experience has been with the philosophy that verification takes 70% of the design cycle?

As mentioned, with the design complexities and the system use cases to validate, the verification effort starts from the architecture phase of the design and runs till customer sampling. Teams engaging right from the architectural-level discussions helps define the test plans, the task details and the task priorities, covering all aspects of system validation, i.e. functional, performance, power, etc. On top of this, our teams leverage formal and emulation platforms to cover any gaps in the coverage.

Srini you mentioned leveraging Formal technology. Has it been offloading simulation tasks or trailblazing verification?

Formal has been an integral part of our verification methodology, and our teams leverage its capability in validating our designs/IPs and SoCs. We use formal right from providing a jumpstart to design validation, before deploying UVM or other techniques, all the way to the last mile of coverage closure.
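To give a flavor of what that jumpstart can look like (a minimal sketch with hypothetical signal names, not Qualcomm code), a couple of SystemVerilog assertions are often all a formal tool needs to start exhaustively checking an interface before any UVM environment exists; the same properties later run in simulation as monitors during coverage closure.

    // Hypothetical request/grant rules for a small arbiter interface.
    module req_gnt_checks (
      input logic clk,
      input logic rst_n,
      input logic req,
      input logic gnt
    );
      // Every request must be granted within 4 cycles.
      a_gnt_latency: assert property (@(posedge clk) disable iff (!rst_n)
                                      req |-> ##[1:4] gnt);

      // A grant must never appear without a request in the previous cycle.
      a_no_spurious: assert property (@(posedge clk) disable iff (!rst_n)
                                      gnt |-> $past(req));
    endmodule

    // Attach the checker to the RTL without modifying it (hypothetical paths):
    // bind arbiter req_gnt_checks u_chk (.clk(clk), .rst_n(rst_n), .req(req), .gnt(gnt));

A formal tool proves such properties for all input sequences, which is what makes them useful before directed or random stimulus is available.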

UVM as a methodology was proposed to solve some of these challenges at the IP level & aid at the SoC level too. How does your team view UVM’s contribution in alleviating these challenges?

With the challenges described, creating a verification environment for every SoC is very difficult and inefficient. Having a modular verification environment that enables porting an IP/design environment to the SoC and reusing it across SoCs/designs helps improve efficiency and quality. UVM enabled us to scale this to our needs and certainly helped alleviate the challenges of handling the complexity as well as managing multiple SoC developments. UVM has served the complexities of the last several years very well. With designs/SoCs becoming more and more complex, though, managing a UVM testbench is itself becoming a challenge.
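The vertical reuse Srini refers to can be sketched in a few lines of UVM (hypothetical class names, assuming a typical IP-to-SoC flow): the block-level environment is instantiated unchanged inside the SoC environment, and only configuration changes, e.g. its agents become passive because the integrated fabric now drives the interface.

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    // IP-level environment, developed once for block-level verification.
    class dma_env extends uvm_env;
      `uvm_component_utils(dma_env)
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
      // agents, scoreboard and coverage are built here at the IP level...
    endclass

    // SoC-level environment reuses the IP environment as-is.
    class soc_env extends uvm_env;
      `uvm_component_utils(soc_env)
      dma_env m_dma_env;  // the block-level env, unchanged

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        m_dma_env = dma_env::type_id::create("m_dma_env", this);
        // At SoC level the DMA agents monitor instead of drive, since the real
        // fabric now stimulates the interface (assumes agents read "is_active").
        uvm_config_db#(uvm_active_passive_enum)::set(this, "m_dma_env.*",
                                                     "is_active", UVM_PASSIVE);
      endfunction
    endclass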

Srini, you also mentioned using emulation. How has your team’s experience been with this platform?

With the number of clock domains, power domains, and infra systems, some designs can be a bit tight for emulation platforms. Based on the need and complexity, we deploy all techniques, including hybrid emulation, to cover the SoC.

A lot of these challenges can be addressed with an integrated, holistic approach to verification & validation. Do you believe Portable Stimulus, aiming along those lines, would provide a solution to them?

Yes. With Portable Test & Stimulus, once the vendor tools start supporting the full feature set as defined in the standard, it shall enable validation at the context level, extending the ability to leverage block/IP-level validation at the system level. This will help cover system-level scenarios effectively and get coverage at the context and use-case level.

Qualcomm is a pioneer in cellular technology. 5G would enable a system of systems across all domains. Do you observe Safety & Security as the next set of challenges already standing outside the door?

Safety & security are always a challenge, with constant news about vulnerabilities detected in systems. With 5G, we will have systems that can operate differently based on the use case/environment, e.g. streaming a movie or video is a high-bandwidth mode, vs an automotive environment that operates with very low latency and guaranteed service, vs an AI mode leveraging cloud compute, and so on. Security and safety will be even more critical for systems that morph based on the need/environment. It is a very interesting and equally challenging topic for sure.

Srini, this year we are having Accellera Day for the first time in India. What are your expectations from the event?

It is indeed nice to see. Having such events helps the VLSI community come together, share their ideas and views, and learn about the latest trends in the vast verification universe.

Thank you Srini!

Accellera Day India 2018 is getting hosted at Radisson Blu, Bangalore on 14th November 2018. Register now!!!

Monday, March 19, 2018

Negative Testing in functional verification!!!


Imagine someone on an important call and the mobile device reboots suddenly! The call was to inform them that the devices installed at the smart home seem to be behaving erratically, with only elderly parents & kids to provide any further details. On booting up, the smartphone flashes that there has been a security breach and data privacy has been compromised. Amidst this chaos, the car’s cruise control didn’t respond to pressing of the pedals!!! Whew!!!.... nothing but one of the worst nightmares in the age of technology we live in! But what if some of it could be true someday? What if the user has little or no idea about that technology?

The mobile revolution has enabled the common man to access technology and use it for different applications. Data from Internet World Stats suggests that internet adoption worldwide has increased from 0.4% of the world population in 1995 to 54.4% in 2017. Related data also indicate that a sizable portion of the users are aged & illiterate. The ease of use has potentially driven this adoption further, with the basic assumption that devices would be functioning correctly 24x7 even if used incorrectly out of ignorance. The same assumptions are seamlessly getting extended to safety-critical domains such as Medical & Auto, introducing several unknown risks for the user.

So how does this impact the way we verify our designs?

Traditionally, verification is assumed to be ensuring that the RTL is an exact representation of the specifications. Given that the state space based on the design elements is so huge, a targeted verification approach covering positive verification has been the practice all throughout. Here, no proof of a bug is assumed to be equal to proof of no bug! The only traces of anything beyond this approach include –

- Introducing an asynchronous reset during test execution to check that the design boots up correctly again (sketched after this list).
- Introducing stimulus triggering exceptions in the design.
- Simulating architecture or design deadlock scenarios.
- Playing around with key signals per clock for low-power scenarios and reviewing the corresponding design response.
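
As a flavor of the first item above, here is a minimal SystemVerilog sketch (hypothetical signals and timing, assuming an active-low asynchronous reset): fire the reset at a random point mid-traffic, deliberately unaligned to the clock, then re-run a smoke sequence to confirm the design recovers.

    module tb;
      logic clk = 0;
      logic rst_n;
      always #5 clk = ~clk;

      // ... DUT instance and the normal stimulus would go here ...

      // Negative test: yank reset in the middle of traffic, then confirm recovery.
      initial begin
        int unsigned t_hit;
        rst_n = 0;
        #25 rst_n = 1;                         // normal power-on reset
        if (!std::randomize(t_hit) with { t_hit inside {[100:10_000]}; })
          $fatal(1, "randomize() failed");
        #t_hit rst_n = 0;                      // asynchronous mid-test reset...
        #17    rst_n = 1;                      // ...deasserted off the clock edge
        // From here, replay a boot/smoke sequence and check the DUT reaches idle.
      end
    endmodule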


But as we move forward, with security and safety becoming key requirements of the design, is this good enough? There is a clear need to redefine the existing approach and bring negative testing into the mainstream! Negative testing ensures that the design can gracefully handle invalid inputs, unexpected user behavior, potential security threats or defects such as structural faults introduced while the device is operational. Amidst shrinking design schedules, negative testing really requires creative thinking coupled with focused effort.

To start with, it is important to question the assumptions used while defining the verification plan for the design. Validating those assumptions itself can lead to a set of scenarios to be verified under this category. Next, review the constraints applied while generating stimulus to list out potential illegal inputs of interest. Caution should be taken in defining this list, as the state space would be large. Reviewing it in the context of the end application (Context Aware Verification) would surely help in narrowing down this illegal stimulus set. Further to this, faults need to be injected at critical points inside the DUT using EDA tools or innovative testbench techniques. This is important for safety-critical applications, where the design needs to respond to random faults and exit gracefully, notifying about the fault or even correcting it. Of course, not to forget that appropriate coverage needs to be applied to measure the reach of this additional effort.
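As a flavor of the fault-injection idea (a testbench-side sketch with hypothetical hierarchy and signal names; commercial fault-injection tools automate this at scale), SystemVerilog force/release can plant a stuck-at fault on a critical node, and the test fails unless the design reports it:

    // Plant a stuck-at-0 fault on a (hypothetical) parity net and expect the
    // DUT's error reporting to fire; silent corruption is a test failure.
    task automatic inject_stuck_at_fault();
      force tb.dut.u_mem_ctrl.parity_bit = 1'b0;  // simple structural fault model
      repeat (50) @(posedge tb.clk);              // let traffic flow over the fault
      release tb.dut.u_mem_ctrl.parity_bit;

      assert (tb.dut.err_irq)
        else $error("fault injected, but no error was reported");
    endtask

A companion covergroup over the injected fault sites would provide the "measure the reach" part mentioned above.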

As we step into an era of billions of devices empowering humans further, it is crucial that this system of systems is defect-free, especially where it touches the safety-critical parts of our lives. Negative testing is a potential way forward for ensuring the reliability of designs for such applications. As is always said –

Better safe than sorry!


Sunday, March 4, 2018

Portable Stimulus : redefining the verification playing field yet again!

In the last 3+ decades, verification has come a long way! Right from quick testing by designers to dedicated verification teams moving from directed testing to constrained-random and adding elements of formal apps at times, it has been an eventful journey! Standardization of UVM enabled a common framework with which the fraternity marched forward in sync. Horizontal reuse, viz. at the IP level, experienced the maximum benefits of UVM, while vertical reuse, viz. from IP to SoC level, observed limited returns. Apart from the methodology, verification has also proliferated beyond simulation or emulation into virtual prototyping, FPGA validation, post-silicon functional validation & HW-SW co-verification. Today, the reuse aspects are not limited to IP-to-SoC or across projects, but extend between platforms too. It is extremely important to realize reuse of efforts at any level across the different vehicles enabling first-silicon success. The challenge, however, is that each of these vehicles involves multiple stakeholders like architects (chip, SW, system), SystemC modelling engineers, RTL designers, verification engineers, prototyping experts, post-silicon debuggers and SW developers, each defining & driving a stage of the SoC design cycle. Different goals focusing on a specific stage, different approaches to solving these problems and different means of representing solutions have made the task of reuse across the platforms a convoluted puzzle!!!

To solve this problem, Accellera initiated a task force called the Portable Stimulus Working Group (PSWG) that reviewed the concern & potential solutions. After long & regular sessions of intense activity over the last couple of years, the group has come up with a proposal in the form of a standard. A beta release of the preliminary version of the Portable Test and Stimulus Standard is now open for public review.

The basis of the solution relies upon taking the stimulus definition across platforms to an abstraction layer where the focus is on what is needed instead of how it shall be implemented. The idea is to understand & represent the action/outcome, i.e. the behavior of the DUT when the test runs. While representing these actions, the focus is on what the inputs and outputs are, what resources are required in the DUT, and their relationships with each other. The how part, i.e. the implementation, is left to a hidden layer (read: EDA tool) to generate the required stimulus for the target platform based on the scenario definition. The actions referred to above are all declarative, not procedural or executable by themselves. A set of these static actions can be used to construct a scenario, analyzed for coverage to determine the paths to be traversed & dumped into a test format by the hidden layer.

To represent these actions, the PSWG has proposed 2 formats - a Domain Specific Language (DSL) that is close to SystemVerilog, and a restricted C++ library. Both formats have equivalent semantics and are completely interchangeable, such that for each line in the DSL there is an exactly equivalent C++ library entry. If one defines the actions of an IP in the DSL and another IP with the C++ library, both can be read together to generate a SoC-level scenario for the target platform.

While the road from developing each testcase to a testbench that generates tests has been long, it’s time to take another step, in the direction of standardizing the stimuli! This would feed the testbenches for IP/SoC verification & be reused by the other workhorses of verification & validation in the SoC design cycle. Remember, reuse is one of the keys to keeping up with Moore's law!!!

The public review period for this proposal is open until Friday, March 30, 2018. Download & review NOW!!!

Sunday, February 18, 2018

Moana of Verification!!!

Dear Readers!!!! Welcome back!!!

During this lull period of sharing thoughts, I realized that even blogging faces the same dilemma as verification: “How much is enough?”. Finally, as a verification engineer should think, I decided to continue as much as I can, with the hope of improving my blogging frequency. Wish there were a Moore’s law for bloggers too 😊!!! Since this is the first one of this year (Happy New Year folks!!!) & by this time of the year the intent, action and discussions on new year resolutions have mostly died down, let’s start from there. While speaking to budding DV engineers on their new year resolutions around verification, I discovered that somehow the focus on the end goal is missing & they seem to be drifting in all directions. Reason? Well! Possibly many –
- The never-ending gap between industry expectations & academia.
- Missing core elements in the job description of a verification engineer.
- Overwhelming solutions that enable verification….. etc.

Still confused on what I mean? Let me explain!

When I started my career as a verification engineer with legacy directed verification approaches, all I learnt was that, to be successful, you need to have - a Nose for Bugs! In a way, that was also because one needed to maximize returns from each test, so you handcrafted each one to really mine bugs. With rising silicon complexity, the verification domain has been experiencing consistent advancements enabling us to verify designs faster & better. During this shift, the expectations from verification engineers also kept changing, i.e. demanding experience with a new flow, language or methodology. The rise of SV & UVM further accelerated this shift, giving a taste of elaborate & sometimes exotic code development opportunities to DV engineers. While this continued, reuse gained momentum on the design side & eventually in verification too. Due to this, code development gave way to reuse &, with the latest verification flows/methodologies, verification engineers started spending more time on derivative designs that demand debug capabilities over development.

Coming back to our budding engineers’ discussion: learning a new flow/methodology ends up creating expectations of development work, which may not always be needed. This trend has often led to the confusion of multiple directions, where the engineer ends up settling on the means, losing focus on the end goal. As a DV engineer, our goal is to find bugs! Approaches like directed/constrained-random/formal, or languages, methodologies & platforms like simulators/emulators etc., are all means to hunt bugs. While expertise in many or all of these is important, and occasionally may even lead to a career, the expectation from a verification engineer is really to catch bugs!

How does this relate to the topic of the blog?

Well! Moana is a film about a girl named Moana, daughter of the chief of a Polynesian village. She is chosen by the ocean to reunite a mystical relic with a goddess. When a blight strikes her island, Moana sets sail in search of Maui, a legendary shapeshifting demigod, in the hope of returning the heart of Te Fiti and saving her people. In this journey of hers, she discovers that her tribe were ‘voyagers’ who had forgotten their virtue and settled as villagers on an island that would soon die down. She not only saves the island & her people but leads them back to their original selves, a journey which is more exciting & enriching!

Similarly, UVM, formal apps, emulation etc. are all means to find bugs in your verification journey. Don’t just settle for the means, which might sometimes be short-lived, but shoot for the end goal & be the Moana of Verification!!! A worthy resolution to pursue 😊!!!

What was your resolution on verification this New Year???

Sunday, September 10, 2017

Quick chat with Vishal Dhupar : Keynote speaker DVCon India 2017

Vishal Dhupar
Imagine learning how to ride a bicycle! You learn to balance - pedal - ride in a straight line - turn - ride in busy streets - All set!!! It takes step-by-step learning & then, if you are offered a different bicycle, you would try to apply the “truths” you discovered in your earlier learning process & quickly pick up the new one too. Our machines so far perform the tasks they are programmed for and, as obedient followers, carry out the required job. However, the new wave of technology is striving to make machines more intelligent: to not only seek but offer assistance, to make our decision making better, to help an ageing population store & retrieve memories that fade, and much more!!! Sounds interesting? Conniving? …???

Vishal Dhupar, Managing Director – Asia South at Nvidia, will be discussing the Re-Emergence Of Artificial Intelligence Based On Deep Learning Algorithms as part of the invited keynote on Day 1 of DVCon India 2017. Passionate about the subject, Vishal shares the background & what lies ahead for us in the domain of AI & deep learning. Extremely useful for beginners to practitioners!!!

Vishal, your keynote focusses on AI & deep learning – an intricate & interesting topic. Tell us more about it?

Curiously, the lack of a precise, universally accepted definition of AI probably has helped the field to grow, blossom, and advance at an ever-accelerating pace. Claims about the promise and peril of artificial intelligence are abundant, and growing.

Several factors have fueled the AI revolution, which will be the premise of my talk. I will touch upon how machine learning is maturing and being propelled dramatically forward by deep learning, a form of adaptive artificial neural networks. This leap in the performance of information processing algorithms has been accompanied by significant progress in hardware and software systems technology. Characterizing AI depends on the credit one is willing to give synthesized software and hardware for functioning appropriately and with foresight. I will be touching upon a few examples of AI advancements.

How do we differentiate between machine learning, artificial intelligence & deep learning?

Machine learning, deep learning, and artificial intelligence all have relatively specific meanings, but are often broadly used to refer to any sort of modern, big-data-related processing approach. You can think of deep learning, machine learning and artificial intelligence as a set of concentric circles nested within each other, beginning with the smallest and working out. Deep learning is a subset of machine learning, which is a subset of AI. When applied to a problem, each of these would take a slightly different approach and hence produce a delta in the outcome.


Artificial Intelligence is the broad umbrella term for attempting to make computers think the way humans think, be able to simulate the kinds of things that humans do and ultimately to solve problems in a better and faster way than we do. Then, machine learning refers to any type of computer program that can “learn” by itself without having to be explicitly programmed by a human. Deep learning is one of many approaches to machine learning. Other approaches include decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks. Deep learning was inspired by the structure and function of the brain, namely the interconnecting of many neurons.

Some of the discussions on deep learning are intriguing. Does it lead to machines taking over jobs?

Machines are getting smarter because we’re getting better at building them. And we’re getting better at it, in part, because we are smarter about the ways in which our own brains function.

Despite the massive potential of AI systems, they are still far from solving many kinds of tasks that people are good at, like tasks involving hand-eye coordination or manual dexterity; most skilled trades, crafts and artisanship remain well beyond the capabilities of AI systems. The same is true for tasks that are not well-defined, and that require creativity, innovation, inventiveness, compassion or empathy. However, repetitive tasks involving mental labor stand to be automated, much as repetitive tasks involving manual labor have been for generations.

Let me give you an example: your calculator is smarter than you are in arithmetic already; your GPS is smarter than you are in spatial navigation; Google and Bing are smarter than you are in long-term memory. And we're going to take these different types of thinking and put them into, say, a car. The reason we want to put them in a car, so the car drives itself, is because it's not driving like a human. It's not thinking like us. That's the whole feature of it. It's not being distracted, it's not worrying about whether it left the stove on, or whether it should have majored in finance. It's just driving.

Which domains do you see adopting these techniques faster & benefiting from them?

In healthcare, deep learning is expected to extend its roots into medical imaging, translational bioinformatics, public health policy development using inputs from EHRs, and beyond. Rapid improvements in computational power, fast data storage and parallelization have contributed to the rapid uptake of deep learning, in addition to its predictive power and its ability to generate automatically optimized high-level features and semantic interpretation from the input data.

It seems the ASIC design flow/process can equally benefit from these techniques. Your views on it?

Deep learning in its elements is an optimization problem. Its application in any workflow or design process where there is scope for optimization carries enormous benefits. With respect to the design, fab and bring-up of ICs, deep learning helps with inspection of defects, determination of voltage and current parameters, and so on. In fact, at NVIDIA we carry out rigorous scientific research in these areas. I believe as we unlock more methods of unsupervised learning, we’ll discover and explore many more possibilities of efficient design where we don’t entirely depend on large volumes of labelled data, which are hard to get in such a complex practice.

What are the error rates in execution we can expect with deep learning? Can we rely on machines for life-critical applications?

Deep learning will certainly out-perform us in a few specific tasks with very low error rates. For example, classification of images is a task where models can be far more accurate than mortals. Consider the case of language translation: today machines are capable of such efficient and economic multi-lingual translation that it just wouldn’t be possible for a person. [Recently MSFT’s speech recognition systems achieved a word error rate of 5.1%, on par with humans.] When we look into health care, where life-critical decisions are made, deep learning can be used to improve accuracy, speed and scale in solving problems like screening, tumor segmentation, etc., and not necessarily in declaring a person alive or otherwise!

In all the instances we just saw, state-of-the-art capabilities are developed in very specific and highly verticalized applications. Machines are smarter than us in these applications, but nowhere close to our general intelligence in piecing these inputs together to make logical conclusions. From a pure systems and software standpoint, we will need guard rails, i.e. fail-safe heuristics that back up a model when it operates outside its boundaries, to maintain fault tolerance.

This is the 4th edition of DVCon in India. What are your expectations from the conference?

While the 20th century was marked by the rise and dominance of the United States, the next 100 years are being dubbed the Asian Century by many prognosticators. No country is driving this tectonic shift more than India, with its tech talent. NVIDIA is a world leader in artificial intelligence technologies and is doing significant work to train the next generation of deep learning practitioners. Earlier this year we announced our plans to train 100,000 developers in FY18 in deep-learning skills. We are working across academia and the startup community to conduct training in deep learning. I’m keen to understand the enthusiasm of the attendees in these areas and how NVIDIA can provide a bigger platform and bring the AI researchers and scientists community together.

Thank you Vishal!

Join us on Day 1 (Sep 14) of DVCon India 2017 at Leela Palace, Bangalore to attend this keynote and other exciting topics!


Disclaimer: "The postings on this blog are my own and not necessarily reflect the view of Aricent"

Sunday, August 27, 2017

Quick chat with Ravi Subramanian : Keynote speaker DVCon India 2017

Dr. Ravi Subramanian
For many decades, the semiconductor industry followed Moore’s law, transforming what we once called a discrete chip carrying a function on silicon into a small IP inside today’s SoC. As we continue to debate beyond Moore, more than Moore, or the stagnation of this law, and step into the world of IoT, we realize that the system is no longer a single SoC; instead, it is a conglomeration of multiple tiny & large systems working in tandem, producing interesting use cases & enhancing user experience. But are we, as the verification engineering workforce, ready with the required skills along with the right arsenal of tools and efficient workhorses to ride through this new challenge?

Dr. Ravi Subramanian, Vice President and General Manager of Mentor’s IC Verification Solutions Division, shares a holistic view on this subject in his opening keynote on Day 2 at DVCon India 2017. The talk, titled Innovations in Computing, Networking, and Communications: Driving the Next Big Wave in Verification, dives into the convergence of different technologies and its impact on verification. A quick chat with Ravi revealed the excitement that we all can look forward to in his talk, as well as the future that lies ahead for all of us. Read on!!!

Ravi, your keynote focusses on the drivers of the next big wave in verification. Tell us more about it?

Yes, my talk will focus on the amazing innovations our industry is developing with respect to computing, networking, and communications. These include the changing nature of computing, the dramatic changes in networking and storage, and the disruptive effect of new broadband communications. Yet the next big wave in design is actually the convergence of these technologies, which is driving today’s internet-of-things and autonomous systems revolution. A common theme across these emerging systems is the need for low power, security, and safety, whether you are talking about devices on the edge or high-availability systems in the cloud. These new challenges have opened innovation opportunities for us to rethink the way we approach verification.

IoT is driving the convergence of different technologies. How would it affect the way we verify the systems today?

To answer your question, I first want to step back in time to provide a framework for today’s challenges. In the 1990s, the concept of separation of concerns was introduced into engineering. Essentially, the idea is that verification becomes more productive if we focus on verifying orthogonal concerns or requirements of the design separately, versus trying to verify multiple concerns combined. For example, during this period of time, we learned that it is more efficient to verify functional concerns and physical concerns in separate simulation runs. This approach to verification worked well up to about 10 years ago. The emergence of mobile devices introduced new low-power requirements that made it difficult to separate concerns. For example, today we see that physical concerns (such as low power management) can now directly affect the functional behavior of a device. Hence, these concerns need to be verified together. Bringing together physical, electrical, and functional has become mandatory.

The key point is that convergence of computing, networking, and communication, which is driving IoT, has introduced new layers of verification requirements that did not exist years ago, and the interaction of these requirements has had a profound effect on the way we must verify systems today.

What are the solutions that the EDA industry is driving to enable this next big wave in verification?

One contributing factor to growing verification complexity is the emergence of new layers of verification requirements, as I previously mentioned. For example, beyond the traditional functional domain, we have added clock domains, power domains, mixed-signal domains, security domains, safety requirements, software, and then obviously, overall performance requirements. Hence, we see the next big wave in verification on multiple fronts:

- Continuing introductions of focused solutions optimized for specific verification concerns. Examples of these focused solutions include: formal apps focused on verifying security features within a design, or power apps used to provide complete RTL power exploration and accurate gate-level power analysis within emulation.
- Emerging system-level analysis solutions, which provide new metrics and insight into the fully integrated SoC. This becomes essential for system-level performance analysis. The IoT SoC, for example, is a different beast than today’s state-of-the-art networking SoC.
- Greater convergence across multiple verification engines (e.g., simulation, emulation, and FPGA prototyping), which will improve productivity. The new Accellera Portable Stimulus standard will help facilitate this convergence and foster the introduction of new verification solutions.
Do you see domain-specific solutions like automotive or machine learning etc. getting enabled for verification?

Yes, in fact there are multiple opportunities to leverage big data analytics to solve many system-level analysis problems. Machine learning is only one approach used today for big data analytics; however, there are others. Now, concerning domain-specific solutions in the automotive space, formal technology is being leveraged to improve productivity related to safety fault analysis.

Do you expect all workhorses (simulation, emulation & formal) to play a critical role in verifying these new converged system-level designs?

Obviously, this depends on the design. A project developing sensors for an IoT edge solution has different verification requirements than a project developing an automotive SoC containing multiple CPU and GPU cores, a coherent fabric, and multiple complex interfaces. Nonetheless, with increased design integration, multiple verification engines are required today that address the growing volume of verification requirements.

This is the 4th edition of DVCon in India. What are your expectations from the conference?

DVCon, in general, is recognized as the premier conference on the application of languages, tools, methodologies and standards for the design and verification of electronic systems and integrated circuits. And DVCon India, which has continued to grow in both attendance and exhibitor participation, is no exception. I expect DVCon India 2017 will continue to deliver high-quality technical content and provide valuable networking opportunities for its attendees. It is the premier venue to share state-of-the-art developments and connect the creative minds working on these developments.

Thank you Ravi!

Join us on Day 2 (Sep 15) of DVCon India 2017 at Leela Palace, Bangalore to attend this keynote and other exciting topics.


Disclaimer: “The postings on this blog are my own and do not necessarily reflect the views of Aricent”

Sunday, August 20, 2017

Quick chat with Apurva Kalia : Keynote speaker DVCon India 2017

Apurva Kalia
The advancements in the semiconductor industry started picking up with the rise in performance of processors driving the computer industry. Next, the mobile segment opened the floodgates when the PC market stagnated, & then low power with smaller dimensions, on top of performance, drove the innovation in silicon implementation. The industry today is at a crossroads once again, awaiting the next big thing. Automotive is one of the key areas to get the ball rolling yet again. But then, each domain has its own characteristics that need to be aligned to!

Apurva Kalia, Vice President of R&D focusing on Automotive solutions at Cadence, picks an interesting topic for his DV track keynote on Day 1 at DVCon India 2017. With the auto industry shifting gears into autonomous cars, the question worth asking is – Would you send your child to school in an autonomous car? Yes, that’s the theme of Apurva’s keynote, and here’s a sneak peek at this topic.

Apurva, your keynote focusses on ‘autonomous cars’ – the talk of the town these days. Tell us more about it?

Well, there is a major inflection point coming up in automotive electronics. We all know that Moore’s Law driven advances in cost per transistor and capacity have been holding up for many years. Complex chips are now possible within a cost factor that was not possible earlier. Moreover, advances in algorithms, especially machine learning, now enable much more complex processing, especially vision-based processing, to be done in real time. Both these trends, coming together with advances in sensor technology, have enabled systems to be created which can detect their environment quite accurately and in real time. This is the basis of autonomous driving. Also, as we know, every few years the semiconductor industry looks for the next big trend which will drive fab capacity. The above factors are pushing autonomous driving to be the talk of the town.

Security & Safety are emerging areas resulting from this topic. How does this change the way we verify our designs?

As I described above, with autonomous driving really taking off, these systems are becoming mission-critical for the automobile. This means that the system needs to be safe and secure. It is inconceivable for a car to stop working at 80 kmph on a highway! Also, with the car needing to be connected to other cars and even to infrastructure and the internet, the system is open to attacks and vulnerable. Therefore, these systems need to be made safe and secure to ensure the safety and security of the automobile.

What are the solutions that the EDA industry is driving to enable ISO 26262 requirements from a process & product perspective?

ISO 26262 is the main standard that defines the safety requirements for automobiles. It is a very comprehensive standard which places requirements on all automotive systems. In fact, edition 2 of the standard – coming out in Jan 2018 – will focus especially on semiconductors. Given the excitement around automotive electronics and autonomous systems, the EDA industry needs to retool rapidly to address this need. Ensuring safety in these designs requires additional design and verification flows, methodologies and tool changes. The EDA industry needs to step up to define and create the flows, methodologies and tools required.

What are your views on the couple of accidents that happened in the US with autonomous cars? What could have been done better?

We are at the early stages of this technology. Unfortunately, as with any new technology, it will take time to stabilize. In the meantime, during this stabilization period, unfortunate things like these accidents can happen. Organizations and individuals who are early adopters of these technologies take these risks, but they also contribute in a big way to the advancement of these technologies. However, with the proper use of tools, implementation of standards, and focus on new solutions, we can avoid these kinds of accidents.

How do you observe the adoption of autonomous cars across the globe & in India?

Autonomous cars are here to stay. They are solving real problems in real environments. We already have examples of autonomous cars on real roads – driving very safely. In fact, there are statistics which show that autonomous cars will actually cut down on accidents and fatalities, most of which are caused by human error. Last year, I saw an engineering college in Delhi demonstrate an autonomous vehicle in Govindpuri – one of the most congested areas of Delhi. So this technology is real and works. I think it is just a matter of a few years before we see this mainstream.

Do you see all workhorses (simulation, emulation & formal) playing a critical role in realizing auto-grade designs?

Yes – all current EDA technologies – not just verification technologies, but even implementation technologies – need to be upgraded to support safety and security design and verification. All engines will need enhancements and special features to support these new requirements and flows.

This is the 4th edition of DVCon in India. What are your expectations from the conference?

I have seen DVCon India grow from humble beginnings to an excellent conference today. I think this conference provides a very good platform to share and discuss new trends in design and verification. I look forward to stimulating conversations on new flows and technologies. This conference attracts many design companies and all EDA vendors in India – what better assemblage of the right people for these discussions.

Thank you Apurva!

Join us on Day 1 (Sep 14) of DVCon India 2017 at Leela Palace, Bangalore to attend this keynote and other exciting topics.


Disclaimer: “The postings on this blog are my own and do not necessarily reflect the views of Aricent”