Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Type: Reexamination Certificate
Date filed: 1998-10-30
Date issued: 2001-08-07
Examiner: Nguyen, Than (Department: 2187)
Cross-reference classifications: C711S118000, C711S139000, C711S141000, C711S145000
Status: active
Patent number: 06272599
ABSTRACT:
FIELD OF THE INVENTION
The present invention pertains to computers having cache-based architectures, and more particularly to an apparatus and a method for improving the worst-case execution time (WCET) of the central processing unit (CPU) of such computers.
BACKGROUND OF THE INVENTION
Cache memory is a small, fast buffer located between the CPU and the main system memory of a computer. Cache memory is well known in the art and is used in conventional computers to store recently accessed data and instructions so that such information can be quickly accessed again, thereby increasing the operating speed of the CPU. See, for example, Chi, C. H. and Dietz, H., “Unified Management of Registers and Cache Using Liveness and Cache Bypass,” Proceedings of the ACM Conference on Programming Language Design and Implementation, 344-355 (1989).
This increase in CPU throughput results from two factors. First, since the main system memory cycle time is typically much longer than the CPU clock period, the CPU can access data and instructions stored in the cache memory more quickly than it can access such information from the main system memory of the computer. Second, accessing information from the cache memory rather than from the main system memory reduces the CPU's utilization of the available main system memory bandwidth, thereby allowing other devices on the system bus to use the main system memory without interfering with the operation of the CPU.
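For illustration only, the first factor can be quantified with a simple effective-access-time calculation; the latencies and hit rate below are assumed values chosen for this sketch and are not figures taken from the patent.

/* Illustrative only: effective memory access time with and without a cache,
 * using assumed (hypothetical) latencies and hit rate. */
#include <stdio.h>

int main(void)
{
    const double cache_hit_ns = 10.0;   /* assumed cache access time       */
    const double main_mem_ns  = 70.0;   /* assumed main-memory access time */
    const double hit_rate     = 0.95;   /* assumed fraction of cache hits  */

    /* effective time = hit_rate * hit_time + (1 - hit_rate) * (hit_time + memory_time) */
    double with_cache = hit_rate * cache_hit_ns +
                        (1.0 - hit_rate) * (cache_hit_ns + main_mem_ns);

    printf("without cache: %.1f ns per access\n", main_mem_ns);
    printf("with cache:    %.1f ns per access\n", with_cache);
    return 0;
}

Under these assumed numbers the effective access time drops from 70 ns to 13.5 ns per access, which is the speedup the first factor describes.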
The improvement in computer system performance provided by cache memory is particularly important in high-performance systems running time-critical applications, such as those used in the telecommunications field, where quick response time and dependability are essential. However, the average execution time metric used in non-real-time applications cannot provide the stringent real-time performance guarantees required for such time-critical applications. By contrast, the WCET can be used to provide such guarantees. Accordingly, obtaining optimum WCET performance for such time-critical applications is important to ensuring that system constraints are met.
However, real-time applications run on computers having cache-based architectures suffer from a significant drawback: the behavior of such systems is unpredictable because of thrashing and other forms of cache interference, which can render the cache useless. For example, thrashing can occur when a called function is mapped to the same cache line as its caller, because code linkers and code generators do not seek to minimize the WCET. Thrashing can also occur when a long sequence of instructions, larger in size than the direct-mapped cache in which such instructions are to be stored, repeats in a loop, so that instructions at the beginning of the loop conflict with instructions at the end of the loop. Loop unrolling can produce such long sequences of instructions.
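The second form of interference can be seen in the following sketch, which assumes an illustrative 8 KB direct-mapped cache; the cache geometry and addresses are assumptions made for the example, not parameters from the patent. Two instructions of an oversized loop that lie one cache size apart map to the same line and therefore evict each other on every iteration.

/* Sketch of why a loop body larger than a direct-mapped instruction cache thrashes.
 * All sizes and addresses below are assumed for illustration. */
#include <stdio.h>

#define LINE_SIZE   32u
#define NUM_LINES   256u
#define CACHE_SIZE  (LINE_SIZE * NUM_LINES)    /* 8 KB direct-mapped cache */

static unsigned line_index(unsigned addr)
{
    return (addr / LINE_SIZE) % NUM_LINES;     /* direct-mapped placement */
}

int main(void)
{
    unsigned loop_start = 0x1000;              /* start of the loop body (assumed)  */
    unsigned loop_size  = CACHE_SIZE + 512u;   /* loop body larger than the cache   */

    unsigned a = loop_start;                   /* instruction near the loop start   */
    unsigned b = loop_start + CACHE_SIZE;      /* instruction one cache size later,
                                                  still inside the oversized loop   */

    printf("0x%x -> cache line %u\n", a, line_index(a));
    printf("0x%x -> cache line %u\n", b, line_index(b));
    printf("same line: %s, so they evict each other on every iteration\n",
           line_index(a) == line_index(b) ? "yes" : "no");
    (void)loop_size;
    return 0;
}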
It is therefore an object of the present invention to provide an apparatus and a method for overcoming the foregoing drawback by reducing thrashing in the cache, thereby improving the WCET performance, and with it the real-time performance, of applications run on computers having cache-based architectures.
SUMMARY OF THE INVENTION
An apparatus and method for improving the WCET performance of applications run on computers having cache-based architectures by setting cache/no-cache bits to selectively control the caching of different regions of address space, thereby reducing thrashing in the cache memory and improving the real-time performance of such applications.
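As a rough illustration of the general idea only, a per-region cache/no-cache bit might be consulted to decide whether an access is cached or bypasses the cache. The region size, table layout, and names below are assumptions made for this sketch and are not the structure claimed in the patent.

/* Minimal sketch (assumed design, not the patented structure): one no-cache bit
 * per address-space region; accesses to marked regions bypass the cache. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define REGION_SHIFT  12u                 /* 4 KB regions (assumed)                */
#define NUM_REGIONS   16u                 /* small illustrative address space      */

static bool no_cache_bit[NUM_REGIONS];    /* one cache/no-cache bit per region     */

static bool should_cache(uint32_t addr)
{
    uint32_t region = (addr >> REGION_SHIFT) % NUM_REGIONS;
    return !no_cache_bit[region];         /* bypass the cache when the bit is set  */
}

int main(void)
{
    /* Mark one region non-cacheable, e.g. a buffer known to thrash the cache. */
    no_cache_bit[3] = true;

    uint32_t hot_code = 0x00001100;       /* region 1: cached                      */
    uint32_t big_buf  = 0x00003040;       /* region 3: bypasses the cache          */

    printf("0x%08x cached? %s\n", (unsigned)hot_code, should_cache(hot_code) ? "yes" : "no");
    printf("0x%08x cached? %s\n", (unsigned)big_buf,  should_cache(big_buf)  ? "yes" : "no");
    return 0;
}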
REFERENCES:
patent: 4075686 (1978-02-01), Calle et al.
patent: 5247639 (1993-09-01), Yamahata
patent: 5745728 (1998-04-01), Genduso et al.
Johnson et al., “Run-Time Cache Bypassing,” IEEE Transactions on Computers, vol. 48, no. 12, Dec. 1999, pp. 1338-1354.
Chi et al., “Improving Cache Performance by Selective Cache Bypass,” Proceedings of the 22nd Annual Hawaii International Conference on System Sciences, vol. 1: Architecture Track, 1989, pp. 277-285.
Assignee: Lucent Technologies Inc.
Primary Examiner: Nguyen, Than
Attorney/Agent: Zimmerman, Jean-Marc