Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Reexamination Certificate
2001-11-09
2002-10-22
Verbrugge, Kevin (Department: 2187)
C711S129000, C712S206000
active
06470422
ABSTRACT:
BACKGROUND
The invention relates to buffer memory management in a system having multiple execution entities.
A buffer memory can be a relatively small, fast memory placed between a memory and another device that is capable of accessing the memory. An example of a buffer memory is a cache memory located between a processor and system memory (which typically is relatively large and slow) to reduce the effective access time required by the processor to retrieve information from the system memory. In some systems, a multi-level cache system may be used for further performance improvement. A first-level cache (L1 cache) may be implemented in the processor itself, and a second-level, typically larger cache (L2 cache) is externally coupled to the processor.
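The benefit of such a hierarchy can be sketched with a simple average-access-time calculation. The hit ratios and latencies below are illustrative assumptions, not figures from the patent; a lookup is assumed to try the L1 cache, then the L2 cache, then system memory:

```python
# Sketch: effective access time of a two-level cache hierarchy.
# All hit ratios and latencies are illustrative assumptions.

def effective_access_time(l1_hit, l2_hit, t_l1, t_l2, t_mem):
    """Average time per access: L1 hit, else L2 hit, else system memory.

    l2_hit is the fraction of L1 misses that hit in the L2 cache;
    times accumulate because each lower level is tried in turn.
    """
    return (l1_hit * t_l1
            + (1 - l1_hit) * l2_hit * (t_l1 + t_l2)
            + (1 - l1_hit) * (1 - l2_hit) * (t_l1 + t_l2 + t_mem))

# Example: 90% L1 hits, 80% of the remainder hit in L2 (times in ns).
avg = effective_access_time(0.90, 0.80, 1, 10, 100)
```

With these assumed numbers the average access costs about 4 ns, far closer to the L1 latency than to the 100 ns system memory, which is the effect the hierarchy is designed to achieve.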
Further, in some conventional memory systems, a cache memory may include separate instruction and data cache units, one to store instructions and the other to store data. During operation, a processor may fetch instructions from system memory and store them in the instruction cache unit. Data processed by those instructions may be stored in the data cache unit. If information, such as an instruction or data, requested by the processor is already stored in the cache memory, then a cache memory hit is said to have occurred. A cache memory hit reduces the time needed for the processor to access information stored in memory, which improves processor performance.
However, if information needed by the processor is not stored in the cache memory, then a cache miss is said to have occurred. When a cache miss occurs, the processor has to access the system memory to retrieve the desired information, which degrades memory access time while the processor waits for the slower system memory to respond to the request. To reduce cache misses, different cache management policies have been implemented. One of several mapping schemes may be selected, including, for example, a direct mapping scheme or a set associative mapping scheme. A set associative cache memory that implements k-way associative mapping, e.g., 2-way associative mapping, 4-way associative mapping, and so forth, generally provides a higher hit ratio than a direct mapped cache memory. One of several replacement policies may also be specified to improve cache memory hit ratios, including a first-in-first-out (FIFO) or least recently used (LRU) policy. Another configurable feature of a cache memory is the update policy, which specifies how the system memory is updated when a write operation changes the contents of the cache. Update policies include a write-through policy or a write-back policy.
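The mapping and replacement policies described above can be sketched together in a few lines. The model below is a k-way set-associative cache with LRU replacement, assuming simple integer block addresses; the class and parameter names are illustrative, not taken from the patent:

```python
# Sketch of a k-way set-associative cache with LRU replacement.
# Block addresses are plain integers; no data payload is modeled.
from collections import OrderedDict

class SetAssociativeCache:
    def __init__(self, num_sets, ways):
        self.num_sets = num_sets
        self.ways = ways
        # One OrderedDict per set: maps tag -> data, ordered by recency.
        self.sets = [OrderedDict() for _ in range(num_sets)]

    def access(self, block_addr):
        """Return True on a hit, False on a miss (filling the line)."""
        index = block_addr % self.num_sets      # set selected by address
        tag = block_addr // self.num_sets       # tag disambiguates within the set
        s = self.sets[index]
        if tag in s:
            s.move_to_end(tag)                  # mark most recently used
            return True
        if len(s) >= self.ways:
            s.popitem(last=False)               # evict the least recently used way
        s[tag] = None                           # fill the line on a miss
        return False

cache = SetAssociativeCache(num_sets=4, ways=2)
hits = [cache.access(a) for a in (0, 4, 0, 8, 4, 0)]
```

In this trace all six addresses map to the same set; with only two ways, the third address conflicts and LRU eviction limits the run to a single hit, illustrating why higher associativity generally raises the hit ratio.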
Conventionally, a system, such as a computer, may include multiple application programs and other software layers that have different data flow needs. For example, a program execution entity, such as a process, task, or thread, associated with a multimedia application may transfer large blocks of data (e.g., video data) that are typically not reused. Thus, access of these types of data may cause a cache to fill up with large blocks of data that are not likely to be reused.
In filling a cache memory, data used by one execution entity may replace data used by another execution entity, a phenomenon referred to as data cache pollution. Data cache pollution caused by the activities of one execution entity may increase the likelihood of cache misses for another execution entity, which may reduce overall system performance.
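The pollution effect can be demonstrated with a small shared LRU cache: a streaming entity's large, never-reused transfer evicts another entity's working set, turning that entity's would-be hits into misses. The workload sizes here are illustrative assumptions:

```python
# Sketch: data cache pollution in a shared LRU cache.
from collections import OrderedDict

def count_misses(cache_size, accesses):
    """Run an access trace through an LRU cache; return the miss count."""
    cache, misses = OrderedDict(), 0
    for addr in accesses:
        if addr in cache:
            cache.move_to_end(addr)             # hit: refresh recency
        else:
            misses += 1
            if len(cache) >= cache_size:
                cache.popitem(last=False)       # evict least recently used
            cache[addr] = None
    return misses

working_set = list(range(4))                    # entity A reuses 4 blocks
stream = [("s", i) for i in range(16)]          # entity B streams 16 blocks once

# Alone, A's second pass hits entirely: 4 cold misses, then 4 hits.
alone = count_misses(8, working_set + working_set)

# Interleaved with B's stream, A's blocks are evicted before its second pass.
polluted = count_misses(8, working_set + stream + working_set)
```

In the polluted trace, all of entity A's second-pass accesses miss even though its working set easily fits in the cache, which is the performance loss the partitioned buffer described below is meant to avoid.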
A need thus exists for a memory architecture that provides improved memory performance.
SUMMARY
In general, according to an embodiment, a system includes a processor and a plurality of execution entities executable on the processor. A buffer memory in the system has multiple buffer sections. Each buffer section is adapted to store information associated with requests from a corresponding one of the multiple execution entities.
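A minimal sketch of this idea follows, assuming each section is managed independently with LRU and that a fixed map from entity identifier to section selects the section; the class and scheme are illustrative, not the patent's specified implementation:

```python
# Sketch: a buffer divided into per-entity sections, so one entity's
# fills cannot evict another entity's data.
from collections import OrderedDict

class PartitionedBuffer:
    def __init__(self, section_size, entity_ids):
        self.section_size = section_size
        # One independent LRU-ordered section per execution entity.
        self.sections = {eid: OrderedDict() for eid in entity_ids}

    def access(self, entity_id, addr):
        """LRU lookup confined to the requesting entity's section."""
        section = self.sections[entity_id]
        if addr in section:
            section.move_to_end(addr)           # hit: refresh recency
            return True
        if len(section) >= self.section_size:
            section.popitem(last=False)         # evictions stay within the section
        section[addr] = None
        return False

buf = PartitionedBuffer(section_size=4, entity_ids=["video", "ui"])
buf.access("ui", 1)                 # ui entity's block is cached
for i in range(100):
    buf.access("video", i)          # a large streaming transfer by video
still_cached = buf.access("ui", 1)  # ui's block survives the stream
```

Because the streaming entity's evictions are confined to its own section, the other entity's data remains resident, directly addressing the pollution problem described in the background.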
Other features will become apparent from the following description and from the claims.
REFERENCES:
patent: 4905141 (1990-02-01), Brenza
patent: 5479636 (1995-12-01), Vanka et al.
patent: 5551027 (1996-08-01), Choy et al.
patent: 5809524 (1998-09-01), Singh et al.
patent: 5875464 (1999-02-01), Kirk
patent: 5909695 (1999-06-01), Wong et al.
patent: 5960194 (1999-09-01), Choy et al.
patent: 5963972 (1999-10-01), Calder et al.
patent: 5966726 (1999-10-01), Sokolov
patent: 6058456 (2000-05-01), Arimilli et al.
patent: 6061763 (2000-05-01), Rubin et al.
patent: 6112280 (2000-08-01), Shah et al.
patent: 6161166 (2000-12-01), Doing et al.
patent: 6182194 (2001-01-01), Uemura et al.
patent: 6205519 (2001-03-01), Aglietti et al.
patent: 6269425 (2001-07-01), Mounes-Toussi et al.
patent: 6295580 (2001-09-01), Sturges et al.
patent: 0 856 797 (1998-08-01), None
Gary Tyson et al., A Modified Approach to Data Cache Management, Proceedings of MICRO-28, pp. 93-103 (Dec. 1995).*
Jude A. Rivers et al., On Effective Data Supply for Multi-Issue Processors, Proceedings of the 1997 ICCD, pp. 1-10 (Oct. 1997).*
Robert Stepanian, Digital StrongARM SA-1500, Presentation at Microprocessor Forum 1997, pp. 1-8 (Oct. 1997).*
Dongwook Kim et al., A Partitioned On-Chip Virtual Cache for Fast Processors, Journal of Systems Architecture, pp. 519-529 (Nov. 1996).
Cai Zhong-ning
Nakanishi Tosaku
Trop Pruner & Hu P.C.