
How do software simulation, hardware simulation, and prototype verification work?

In the chip development process, the verification phase is the front line of chip design and, it is fair to say, decides success or failure. Before a chip enters production, its design must be shown to fully comply with the specification, with all potential risks resolved and all defects corrected. This prevents uncorrectable hardware bugs from surfacing after tape-out and reduces the risk of problems in later stages. As the scale and complexity of chips grow, so does the difficulty of verification; how to reduce verification complexity while ensuring correctness and efficiency is the core problem of verification.

The central object is the DUT (Design Under Test), the design being verified: the RTL (Register Transfer Level) code we have written. The process starts from requirements, which are divided into subsystem modules and further refined into functional modules, and then described in hardware description language files at the RTL level. Verification engineers then build a test platform (testbench) around these design files, which models and controls the DUT's inputs and environment, including functional models, input stimuli, or live data interaction, as shown in Figure 1. In essence, the entire verification process exists to confirm the correctness of the DUT and ensure that the chip meets its specification.


Facing complex design code, how do we ensure its correctness? Functional verification is the key battle. The verification methods engineers commonly use include software simulation, hardware simulation, and prototype verification, each with its own strengths and weaknesses. Choosing the right verification tool at each design stage, to improve productivity and accelerate verification convergence, is therefore especially important. All of these methods revolve around the DUT. Below we discuss in detail how software simulation, hardware simulation, and prototype verification each work around the DUT.

1. Software Simulation

Software simulation verifies the function and characteristics of a digital circuit design described in a hardware description language. It checks whether the circuit design matches the designer's intent by simulating the hardware's behavior in a computer environment. Simulation is a key link in implementing the design correctly: based on the model written in an HDL such as VHDL or Verilog, it checks whether the functions in the design behave as intended.

A simplified simulation verification system is shown in Figure 2: test vectors (TestVector) run on the test platform (Testbench), the DUT and testbench execute together on the (software-based) simulation system, and the final results are compared with the expected results.
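As a minimal sketch of this setup (the DUT here is a hypothetical 8-bit adder, used purely for illustration), a SystemVerilog testbench instantiates the DUT, applies test vectors, and compares each output against the expected result:

    // Hypothetical DUT: an 8-bit adder described at the RTL level
    module adder (
      input  logic [7:0] a, b,
      output logic [8:0] sum
    );
      assign sum = a + b;
    endmodule

    // Testbench: generates stimulus, drives the DUT, checks the results
    module tb_adder;
      logic [7:0] a, b;
      logic [8:0] sum;

      adder dut (.a(a), .b(b), .sum(sum));     // the DUT under verification

      initial begin
        repeat (100) begin
          a = $urandom_range(255);             // random test vectors
          b = $urandom_range(255);
          #1;                                  // let combinational logic settle
          if (sum !== a + b)                   // compare with the expected result
            $error("mismatch: %0d + %0d gave %0d", a, b, sum);
        end
        $finish;
      end
    endmodule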

However, as chip designs grow, a traditional test platform written in plain Verilog/SystemVerilog is no longer enough to cover the required test scenarios efficiently; for example, communication between the basic components of the test platform, and the construction, management, and reuse of those components, become hard to handle. UVM (Universal Verification Methodology) emerged to address this.
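As a minimal sketch of what this looks like (the class names below are generic placeholders, not tied to any particular tool), a UVM test builds its environment through the UVM factory, which is what makes the same environment reusable and overridable across projects:

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    // Reusable verification environment; a real one would contain agents,
    // a scoreboard, and coverage collectors built in the same way
    class my_env extends uvm_env;
      `uvm_component_utils(my_env)
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
    endclass

    // The test creates the environment via the factory, so other projects
    // (or other tests) can reuse or override its components
    class base_test extends uvm_test;
      `uvm_component_utils(base_test)
      my_env env;
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        env = my_env::type_id::create("env", this);
      endfunction
    endclass

    // Top-level module that starts the selected UVM test
    module tb_top;
      initial run_test("base_test");
    endmodule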

In complex system-level chip design, UVM provides a robust test platform capable of handling a large number of design and verification tasks. The main advantage of UVM is reusability: it allows designers to reuse the same verification environment across multiple projects, greatly improving efficiency. In addition, because UVM is an industry standard, using it makes it easier to collaborate with other design teams and to use the UVM components they create.

Generally speaking, software simulation is divided into functional simulation, post-synthesis simulation, and timing simulation, corresponding to the steps after RTL design entry, after synthesis, and after layout and routing, respectively.

Functional Simulation: Also known as RTL simulation, this is the first step in simulation verification, also called pre-simulation. The goal is to confirm whether the design's functionality meets expectations under ideal conditions. At this stage, we simulate the design's output under specific inputs to verify its behavior. This is like a "dress rehearsal" for the device under test (DUT), allowing us to detect logical errors in the design without physical hardware.

Post-Synthesis Simulation: In the post-synthesis simulation phase, the goal is to confirm whether the synthesized circuit structure aligns with the design intent. At this stage, we use synthesis tools to convert hardware description language (HDL) code into a logic netlist. Then, we simulate with this netlist to ensure that the post-synthesis circuit behavior is consistent with the design intent.

Timing Simulation: Finally, in the timing simulation or post-layout simulation phase, we consider timing issues that the design may encounter in actual hardware and processes. This includes component delays, wiring delays, power, and thermal issues. At this stage, we use more complex simulation models, such as those that consider delay information, to more accurately simulate the hardware behavior.
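As an illustration of how delay information enters this stage (the cell, instance, and file names below are assumptions), a gate-level netlist is usually back-annotated with the SDF file written out by the place-and-route tool, using the standard $sdf_annotate system task:

    // Stand-in for a library cell in the gate-level netlist; its specify
    // block delays are overridden by the values in the SDF file
    module inv_cell (input a, output y);
      assign y = ~a;
      specify
        (a => y) = (0.1, 0.1);
      endspecify
    endmodule

    module tb_gate;
      reg  a;
      wire y;
      inv_cell u_dut (.a(a), .y(y));           // gate-level netlist instance

      initial begin
        // Apply the cell and interconnect delays extracted after layout/routing
        $sdf_annotate("chip_post_route.sdf", u_dut);
        a = 0; #5 a = 1; #5 $finish;
      end
    endmodule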

In each phase, we control the inputs and environment of the DUT by building a test platform (Testbench) and compare the DUT's output with expectations. The common goal of these three simulation applications is to ensure that our chip design meets the expected functionality and performance at each stage.

Taking S2C's PegaSim software simulation tool as an example: it is a high-performance, mixed-language commercial digital simulation tool that adopts innovative architectural algorithms to achieve a high-performance simulation and constraint-solver engine. It provides extensive support for SystemVerilog, Verilog, VHDL, and the UVM methodology, supports timing back-annotation and gate-level post-layout simulation, and offers functional coverage, code coverage analysis, and other features. Its innovative software architecture allows the simulator to run on different processor architectures, such as x86-64, RISC-V, and ARM.

Although software simulation is indispensable for engineers, under the current business model its simulation capability and computing power are tied to the software license: commercial simulation vendors charge per license. In actual use, however, engineers find it difficult to match the effective computing power and tool requirements from experience alone, as shown in Figure 5.

S2C's PegaSim (Xin Shen Chi) software simulation tool, in addition to the traditional licensing model, has adopted an innovative business model that provides an on-demand online simulation cloud platform. When running regression tests and random-stimulus-driven coverage closure on the DUT, it can achieve what is shown in Figure 6. This meets the diverse needs of enterprises and helps solve the problems of tight license availability, insufficient computing power, and long-term occupation of licenses by design engineers, giving engineers on-demand, effectively unlimited simulation capacity and improving the verification team's efficiency.

2. Hardware Simulation

Although software simulation is easy to use, cost-effective, and offers rich debugging capabilities, once it encounters a large-scale digital circuit design, the more complex the structure, the longer the simulation takes, and its benefits become limited. Debugging the chip design on hardware through specialized equipment, namely hardware simulation and prototype verification, is therefore one of the important solutions.

The operating speed and debugging efficiency of hardware simulation are much higher than those of software simulation, because it can perform automated, accelerated simulation and debugging of the complete chip design; it is widely used for RTL functional verification in the early stages of large-scale SoC design.

Hardware simulation first compiles the hardware design (usually written in HDL, such as Verilog or VHDL), and then loads the compiled design. In some systems, the design may be loaded into specialized hardware (such as FPGA). Once the design is loaded, hardware simulation can run the design and observe its behavior. Hardware simulation usually provides tools for observing and debugging the internal state of the design. Finally, engineers can analyze the correctness of the design according to the results, find and solve problems, and optimize the design.

Hardware simulation can provide much faster simulation than software simulation while also reproducing how the hardware actually behaves at run time. This makes it very useful in the hardware design and verification process, especially for complex, large-scale hardware systems. A hardware simulation system consists of two main parts, hardware and software. Taking S2C's OmniArk (Xin Shen Ding) enterprise-level hardware simulation system as an example, the hardware part is built from many FPGAs and can be expanded to hundreds of FPGAs, while the software part consists of compilation (Compile), runtime (Runtime), and debugging (Debug).

Compilation: During the compilation phase, the software automatically maps the design under test (DUT) onto the hardware simulation system for high-speed simulation. The compilation process is shown in Figure 8.

Operation: The Runtime software controls the entire hardware simulation process at run time and allows the hardware simulation to support different user modes. Its core components include the runtime database, runtime library, software-hardware interface, and user interaction interface. Supported modes include ICE (In-Circuit Emulation), TBA (Transaction-Based Acceleration), and QEMU mode, among others, and multiple users can use the equipment simultaneously.
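The core idea of TBA, for example, is that an untimed transaction from the software side of the testbench is expanded into cycle-accurate pin activity by a synthesizable transactor (bus functional model) running on the hardware side, so that far fewer, higher-level exchanges cross the software-hardware boundary. A vendor-neutral sketch of such a write transactor follows (the interface and task names are assumptions):

    // Minimal write transactor: one task call corresponds to one untimed
    // transaction, which the transactor turns into cycle-accurate activity
    // on a simple valid/ready handshake interface.
    module write_transactor (
      input  logic        clk,
      output logic        valid,
      output logic [31:0] addr,
      output logic [31:0] data,
      input  logic        ready
    );
      task automatic write (input logic [31:0] a, input logic [31:0] d);
        @(posedge clk);
        valid <= 1'b1;
        addr  <= a;
        data  <= d;
        do @(posedge clk); while (!ready);     // hold until the DUT accepts
        valid <= 1'b0;
      endtask
    endmodule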

Debugging: Hardware simulation offers debugging capabilities close to those of software simulation. Static probes, dynamic probes, and built-in logic analyzers (ILA) can be used to observe signal data and achieve full signal visibility, while the ReadBack/WriteBack function can be used to set or restore signal values.

In addition, hardware simulation comes with dedicated verification IP (VIP) that provides the verification interfaces the hardware simulation system requires. For example, OmniArk (Xin Shen Ding) supports APB, AHB, AXI4, AXI4-Stream, AXI4-Lite, UART, SPI, I2C, DDR, Ethernet, USB, PCIe, SPI Flash, NAND Flash, and more. This covers the common interface protocols and meets the needs of most verification applications, and S2C can also develop additional VIP according to customer needs.

OmniArk (Xin Shen Ding) provides not only a hardware acceleration platform but also a range of innovative supporting software, with features such as automatic syntax error correction for user designs, Smart P&R technology, ABS (Auto-Block Select) technology, and various signal capture methods. These allow users to achieve MHz-level simulation acceleration, a fully automatic and intelligent compilation flow, powerful debugging capabilities, and multiple simulation and verification modes. With its rich VIP library, it is suited to system-level verification of ultra-large-scale, high-end general-purpose chip designs and can meet the needs of different verification scenarios.

In summary, hardware simulation typically integrates dedicated circuits and logic to accelerate the simulation process. Its speed can usually reach hundreds of kHz or even MHz, while functional simulation in software usually runs at tens to hundreds of Hz; hardware simulation is therefore thousands to hundreds of thousands of times faster than software simulation. This makes hardware simulation very useful for verifying complex designs: it runs at higher speeds and provides feedback and results faster, which is crucial for verifying and debugging the design.
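To put those numbers in perspective with an illustrative calculation (the figures are assumptions, not measurements): exercising one second of real time on a design clocked at 1 GHz means simulating about 10^9 clock cycles. At 100 Hz, a software simulator needs roughly 10^7 seconds, close to four months, for that single second of chip activity, while a hardware simulation system running at 1 MHz finishes the same workload in about 1,000 seconds, under 20 minutes.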

3. Prototype Verification

In complex integrated circuit design, prototype verification is another key verification technique. Its purpose is to test and verify the circuit design early, on prototype hardware close to the final chip, and to confirm the chip's correctness by running at speeds close to the final silicon. Prototype verification maps the design onto an FPGA array, reproduces the chip's functions and application environment to verify its overall functionality, and provides an environment for on-chip software development. Because prototype verification runs closer to real-chip speed than hardware simulation, software engineers can use it to develop low-level software; this pre-tape-out co-development of hardware and software is its most irreplaceable value.

The following are the key steps a DUT goes through in prototype verification, including design partitioning, system-level timing analysis after partitioning, programming and download, and functional verification and debugging.

Design Partitioning: At the start, the complex design, that is, the DUT, must be partitioned to fit the resource limits of the FPGAs. Because a single FPGA usually cannot accommodate an ultra-large-scale design, dedicated tools are used to divide the design logic into smaller parts, each mapped to one or more FPGAs. The partitioning must preserve the integrity of the overall design while minimizing the number of cross-FPGA signals, in order to reduce inter-FPGA path delays and improve system performance. A typical RTL-level partitioning flow is shown in Figure 9.

Logic synthesis in this flow transforms the DUT into a netlist the FPGA can understand. Inserting TDM (time-division multiplexing) into the partitioned design is another key step affecting the performance of the partitioned system: there are usually far more interconnect signals between the partitioned FPGA designs than physical connections, so TDM transmits these interconnect signals over the limited physical resources by multiplexing them in time. Mapping and place-and-route then map the synthesized design onto specific FPGA resources, such as lookup tables, flip-flops, and DSP blocks, and perform layout and routing.

Timing Analysis: Timing analysis ensures that the design meets all timing requirements when running on the FPGAs, including the requirements of each FPGA and of the overall system. Because the user's original design has been partitioned, the timing path delays introduced by partitioning must be considered; these delays mainly come from TDM and from cross-FPGA connections, and can easily reach tens of nanoseconds. Paths that fail timing may cause the design to stop working properly, in which case timing can be improved by optimizing timing constraints, refining the design, adding pipelining, adjusting partition boundaries, and optimizing placement, so that the design reaches the expected clock frequency with lower path delays.
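As an illustrative calculation (the numbers are assumptions): if 4,000 signals are cut between two FPGAs but only 400 physical traces connect them, a TDM ratio of at least 10:1 is needed. Each crossing must then serialize its group of signals, transmit them over the shared traces, and deserialize them on the far side, so even with a fast TDM transfer clock the added latency per hop easily reaches the tens of nanoseconds mentioned above. This is why cross-FPGA paths, rather than the logic inside any single FPGA, usually set the ceiling on the achievable system clock frequency.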

Since the frequency at which the prototype verification system can operate is a key factor in measuring system performance, how to improve the operating frequency of the system is also a problem that often needs to be considered. Common practices include adjusting partition boundaries, optimizing the TDM of the partition results, using layout and routing constraints, and using timing-driven partitioning algorithms to reduce the delay of critical paths and improve system performance.

Programming and Downloading: After mapping and place-and-route, compile the design into FPGA bit files, set up the interconnection network between the FPGAs, and download each bit file to its corresponding FPGA. After download, configure the global clock, global reset, and other peripheral IP as needed, so that the DUT can run correctly on the prototype.

Functional Verification and Debugging: This stage mainly tests the correctness of the DUT's functionality when running on the FPGA. We can test the DUT through actual hardware interfaces or virtual IO interfaces to verify whether it meets expectations.

How to debug the partitioned design is also an important issue to consider in prototype verification. In addition to the application-level debugging and monitoring tools built into the user's design, designers also need to capture and analyze the signal waveforms when the design is running. For this application scenario, the MDM Pro debugging solution provided by S2C supports collaborative debugging of multiple FPGAs, supports a high-speed sampling frequency of up to 125MHz, and the maximum waveform storage capacity can reach 64GB, which can effectively solve the collaborative debugging problem of multiple FPGAs in prototype verification.
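As a rough, illustrative calculation of what that capacity means (the probe count is an assumption): capturing 1,024 probe signals at the full 125 MHz sampling rate produces 1,024 bits, or 128 bytes, per sample, about 16 GB of trace data per second, so a 64 GB waveform store holds roughly four seconds of continuous full-rate capture across the cooperating FPGAs.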

Taking S2C's Prodigy (Xin Shen Tong) prototype verification solution as an example, it provides a timing-driven RTL-level partitioning algorithm that enables a fully automatic partitioning and compilation flow. Its built-in incremental compilation capability helps users iterate versions quickly, greatly improving development and verification efficiency.

In summary, because a prototype processes data in parallel just like the real chip, its high performance, together with hardware daughter cards connected to real-world data, can expose more hidden bugs. By contrast, the stimulus models used in software simulation differ somewhat from real data and cannot cover every corner case, which is exactly where prototype verification is needed. With prototype verification, driver development can begin as soon as the basic functions of the SoC are verified, before tape-out, and customers can even be given chip demonstrations and pre-sales engagements before tape-out. This greatly shortens the overall verification cycle and accelerates time to market.

4. Summary

Software simulation, hardware simulation, and prototype verification, through their respective strengths, provide a comprehensive and efficient verification approach for chip design, helping to accelerate the entire chip development cycle while ensuring design correctness.

Driven by advanced process nodes, heterogeneous computing architectures have gradually become mainstream in chip design. Since different computing units have their own architectures and ways of processing information, verification methods suited to their characteristics must be adopted. To shorten time to market, major chip design companies have reached a consensus that different simulation and verification tools should be chosen at different design stages to improve verification efficiency, a strategy now widely applied in large-scale chip design.

The heterogeneous verification method of S2C emerged against this background. It uses a variety of different verification methods, such as software simulation (Xin Shen Chi), hardware simulation (Xin Shen Ding), and prototype verification (Xin Shen Tong), integrating multiple verification methods, continuously innovating verification tools and verification processes, and conducting collaborative simulation and cross-verification around the device under test (DUT) to ensure the correct design of the chip. A series of EDA tools from S2C conduct a thorough and comprehensive inspection of the DUT, checking its functions and performance at all levels and aspects. This series of work can effectively identify and fix problems in the design, thereby greatly shortening the development cycle of the chip.

About S2C

Since establishing its Shanghai headquarters in 2004, S2C has focused on integrated circuit EDA. As a well-known EDA solutions provider, the company concentrates on front-end verification of digital chips, covering verification cloud services, architectural design, software simulation, hardware simulation, and prototype verification tools. It has built good cooperative relationships with more than 600 enterprises at home and abroad, serving digital circuit designs for artificial intelligence, high-performance computing, image processing, data storage, and signal processing, with wide application in end markets such as the Internet of Things, cloud computing, 5G communication, smart healthcare, and automotive electronics.

The company's headquarters is located in Shanghai, and it has established a global technology research and development and market service network, with branches or offices in Beijing, Shenzhen, Xi'an, Hong Kong, Tokyo, Seoul, and San Jose.

S2C's technical strength in the field of EDA has been widely recognized by the industry. Through years of hard work, it has built a dual advantage in technology and market in the field of prototype verification. It has also participated in the formulation of China's EDA group standards and undertaken a number of national and local major scientific research projects.
