Hardware and software co-verification employing deferred...

Data processing: structural design, modeling, simulation, and emulation – Simulating electronic device or electrical system – Circuit simulation

Reexamination Certificate

Details

US Classifications: C703S019000, C703S022000, C703S026000, C703S020000

Type: Reexamination Certificate

Status: active

Patent number: 06356862

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to the field of digital system design verification. More specifically, the present invention relates to design verification of digital systems whose development efforts are neither hardware nor software dominant.
2. Background Information
The majority of digital systems being designed today are task-specific embedded systems that consist of standard and/or custom hardware as well as standard and/or custom software. Standard hardware typically includes off-the-shelf microprocessors/micro-controllers, memory, etc., whereas custom hardware is implemented with programmable logic devices (PLDs) or Application Specific Integrated Circuits (ASICs). The hardware architecture binds and constrains these resources and provides a framework on which software processes execute. Standard software typically includes a real-time operating system (RTOS) and configurable device drivers, whereas custom software is the embedded application. The software architecture defines how these processes communicate. The complexity of these embedded systems varies widely from the low to the high end, depending on the market segment and product goals. They can be found in almost everything we encounter in our daily lives: communication systems ranging from the phone on our desk to large switching centers, automobiles, consumer electronics, etc.
Some embedded systems are software dominant in their development effort, in that most of the design effort is focused on implementing the functionality in software, while standard or previously designed hardware is employed. Thus, even though the software-dominant character of these systems typically makes them much more cost sensitive, they can be readily validated by compiling and debugging the software under development on existing hardware, using a compiler, a debugger and other related software tools.
Other embedded systems are hardware dominant, in that most of the design effort is focused on implementing the functionality in PLDs or ASICs. The original software content of these systems tends to be small. Typically, these embedded systems are found in applications where performance is critical. For these systems, hardware emulation and/or simulation techniques known in the art appear to adequately serve the design verification needs. In the case of emulation, the hardware is “realized” by configuring the reconfigurable logic and interconnect elements of the emulator; the configuration information is generated by “compiling” a formal behavioral specification/description of the hardware. In the case of simulation, a simulation model is developed. For more complex hardware, since it is very difficult, if not outright impossible, to model all of the hardware's behaviors, some accuracy is often sacrificed. For example, a microprocessor is often modeled by a “bus interface model”, i.e. only the different bus cycles that the processor can execute are modeled. The modeled bus cycles are driven in timed sequences representative of typical bus transactions or bus activities for invoking specific conditions.
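For illustration only (this sketch is not from the patent; all class, field, and cycle names are hypothetical), a bus interface model can be reduced to a timed sequence of modeled bus cycles, as in the following Python sketch:

    # Minimal sketch of a "bus interface model": only the bus cycles the
    # processor can execute are modeled, not its internal behavior.
    # All names and the three cycle kinds are hypothetical.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class BusCycle:
        kind: str        # e.g. "READ", "WRITE", "IDLE"
        address: int
        data: int
        duration: int    # length of the cycle in clock ticks

    class BusInterfaceModel:
        def __init__(self, sequence: List[BusCycle]):
            self.sequence = sequence

        def run(self):
            # Drive the modeled cycles in a timed sequence; a real model
            # would toggle pin-level signals inside a hardware simulator.
            time = 0
            for cycle in self.sequence:
                print(f"t={time:4}: {cycle.kind} "
                      f"addr={cycle.address:#06x} data={cycle.data:#04x}")
                time += cycle.duration

    # A timed sequence representative of a typical bus transaction.
    BusInterfaceModel([
        BusCycle("READ", 0x1000, 0x42, 4),
        BusCycle("IDLE", 0x0000, 0x00, 2),
        BusCycle("WRITE", 0x1004, 0x99, 4),
    ]).run()
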
Embedded systems that are the most difficult to validate are those that are neither software nor hardware dominant, in that both parts play an equally important role in the success of the system. Due to increased time-to-market pressures, hardware and software are usually developed in parallel. Typically, the hardware designers would validate the hardware design using a hardware simulator or emulator. Concurrently, the software designers would validate the software using an instruction set simulator on a general purpose computer. The instruction set simulator simulates execution of compiled assembly/machine code for determining software correctness and performance at a gross level. These instruction set simulators often include facilities for handling I/O data streams, to simulate to a very limited degree the external hardware of the target design. Typically, instruction set simulators run at speeds of ten thousand to over a million instructions per second, depending on their level of detail and the performance of the host computer on which they are run.
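The fetch-decode-execute loop such a simulator performs can be sketched minimally as follows (the three-instruction machine here is hypothetical, not any real instruction set):

    # Minimal sketch of an instruction set simulator: compiled machine
    # code is interpreted one instruction at a time. The tiny ISA
    # (LOADI/ADD/OUT) is hypothetical, chosen only to show the loop.
    def simulate(program, max_steps=1000):
        regs = [0] * 4                      # tiny register file
        pc = 0                              # program counter
        for _ in range(max_steps):
            if pc >= len(program):
                break
            op, a, b = program[pc]          # fetch and decode
            if op == "LOADI":               # regs[a] = immediate b
                regs[a] = b
            elif op == "ADD":               # regs[a] += regs[b]
                regs[a] += regs[b]
            elif op == "OUT":               # I/O facility: emit regs[a]
                print("out:", regs[a])
            pc += 1
        return regs

    simulate([("LOADI", 0, 5), ("LOADI", 1, 7), ("ADD", 0, 1), ("OUT", 0, 0)])
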
Traditionally, the hardware and software would not be validated together until at least a prototype of the hardware, with a sufficient amount of functionality implemented and stabilized, became available. The software is then executed with a hardware simulator, very often in cooperation with a hardware modeler (a semiconductor tester) to which the hardware prototype is coupled. The hardware simulator provides the hardware modeler with the values on the input pins of the prototype hardware, and the modeler in turn drives these values onto the actual input pins of the prototype. The hardware modeler then samples the output pins of the prototype hardware and returns these values to the hardware simulator. Typically, only one to ten instructions per second can be achieved, which is substantially slower than instruction set simulation.
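The per-cycle handshake between the hardware simulator and the hardware modeler can be sketched as follows (a hypothetical stub, not the patent's method; drive_pins and sample_pins stand in for the tester's interface to the physical prototype):

    # Sketch of the simulator/modeler loop: the simulator supplies input
    # pin values, the modeler drives them onto the prototype's pins,
    # samples the output pins, and returns the values to the simulator.
    def drive_pins(values):
        drive_pins.last = values            # stub: remember the driven values

    def sample_pins():
        # Stub for sampling the prototype's output pins; here the
        # "prototype" simply inverts its single input.
        return {"out": drive_pins.last["in"] ^ 1}

    def co_simulate(stimulus):
        for input_values in stimulus:       # computed by the hardware simulator
            drive_pins(input_values)        # modeler drives the input pins
            outputs = sample_pins()         # modeler samples the output pins
            print(input_values, "->", outputs)

    co_simulate([{"in": 0}, {"in": 1}])
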
Recently, an increasing amount of research effort in the industry has gone into improving hardware and software co-verification. New communication approaches such as “message channels”, implemented e.g. using UNIX® “pipes”, have been employed to facilitate communication between the hardware and software models (UNIX is a registered trademark of Santa Cruz Operation, Inc.). Other efforts have allowed the models to be “interconnected” through “registers”, “queues”, etc. In U.S. Pat. Nos. 5,771,370 and 5,768,567, an optimizing hardware-software co-verification system was disclosed.
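Such a message channel can be illustrated with a minimal, UNIX-only sketch (the textual message format is a hypothetical choice, not one prescribed by the patents cited above):

    # Sketch of a "message channel" between a software model and a
    # hardware model, implemented with a UNIX pipe. Requires a
    # UNIX-like OS for os.fork().
    import os

    r, w = os.pipe()
    if os.fork() == 0:
        # Child: the "hardware model" reads bus requests off the channel.
        os.close(w)
        with os.fdopen(r) as channel:
            for line in channel:
                print("hardware model received:", line.strip())
        os._exit(0)
    else:
        # Parent: the "software model" writes bus requests into the channel.
        os.close(r)
        with os.fdopen(w, "w") as channel:
            channel.write("READ 0x1000\n")
            channel.write("WRITE 0x1004 0x99\n")
        os.wait()
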
Notwithstanding these advances, the fundamental fact remains that a rather detailed hardware model is required before it is possible to verify software concepts within the hardware context. As a result, the concurrent development of hardware and software has to proceed without the ability to verify the interfaces between the hardware and software being contemplated. Furthermore, without such verification, the partitioning choices on what is to be implemented in hardware versus software have become increasingly difficult as the complexity of embedded systems continues to increase.
To address this problem, the industry looks to higher levels of abstraction to describe the hardware components, i.e. describing the hardware components only in terms of their functionality and not their implementations. These high-level models are commonly referred to as “system models”, and are executed within a system or transaction level simulator, which typically executes 10-100× faster than historical simulators that operate at a lower level of abstraction. With this enhanced execution rate, system or transaction level simulators are much more capable of keeping up with the simulators employed to execute the software.
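The source of the speedup can be illustrated with a sketch (hypothetical names; not from the patent) in which a whole memory operation is a single function call rather than a series of pin-level events:

    # Sketch of a transaction-level "system model" of a memory: each
    # read or write is one call, replacing the many pin-level events
    # (address phase, strobes, wait states) of a signal-level model.
    class TransactionLevelMemory:
        def __init__(self, size):
            self.mem = bytearray(size)

        def read(self, addr):
            return self.mem[addr]

        def write(self, addr, data):
            self.mem[addr] = data & 0xFF

    m = TransactionLevelMemory(256)
    m.write(0x10, 0x42)
    print(hex(m.read(0x10)))    # -> 0x42
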
However, these system or transaction level simulators have not gained wide acceptance, because the combined performance has been constrained by the communication between the simulators. As the general performance of the simulators increases, the performance of the synchronization and transfer of information between them becomes even more important.
A successful synchronization strategy must ensure the consistency of the verifications, i.e. for a fixed set of stimuli, the verification results must always be the same. Preferably, it should allow both verifications to proceed concurrently, without impeding performance. Furthermore, it should allow either verification to operate as a source or sink for information, and it should be flexible in the resolution of synchronization periods. Prior art synchronization strategies typically either require the verifications to be performed in lock step, resulting in substantial degradation of software verification performance, or employ a roll-back mechanism, which imposes a significant memory burden on the co-verification system. Thus, an improved approach to co-verification synchronization is desired.
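For contrast with the improved approach the patent contemplates, the lock-step strategy criticized above can be sketched as follows (stub simulators, hypothetical names): both sides exchange a message every step, so neither may run ahead, and the faster simulator is dragged down to the pace of the exchange.

    # Sketch of lock-step synchronization between two stub simulators.
    # Each step blocks on an exchange; for a fixed stimulus the
    # interleaving, and hence the result, is always the same.
    def software_step(t):
        return f"sw@{t}"                    # stub software simulator

    def hardware_step(t, msg):
        return f"hw@{t} saw {msg}"          # stub hardware simulator

    def lock_step(steps):
        for t in range(steps):
            msg = software_step(t)          # software advances one step
            print(hardware_step(t, msg))    # hardware advances one step

    lock_step(3)
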
SUMMARY OF THE INVENTION
