Method, system, and program for managing requests to a cache...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


U.S. Classifications: C711S110000, C711S144000, C710S039000, C710S052000, C710S054000


Patent number: 06438651 (active)

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a method, system, and program for managing a cache and, in particular, managing read and write requests to a cache to process enqueue and dequeue operations with respect to a queue.
2. Description of the Related Art
In communication devices, such as Asynchronous Transfer Mode (ATM) and other devices, different processes need to sequentially access data in real-time streaming. Circular buffers are data structures used to buffer data for such sequential data accessing. One process, referred to as the producer, places data in the circular buffer and another process takes data out of the buffer. Circular buffers may be used to buffer data transferred in either direction between a network communication link, such as an ATM switch, and a network node. The circular buffer includes a sequence of entries and a head entry and tail entry. New entries are added at the tail entry and data is accessed from the head entry. Whenever a new entry is added beyond the last possible entry in the circular buffer, a wrap-around occurs, which means that the latest entries are stored in the first entry of the circular buffer.
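The enqueue and dequeue behavior described above can be sketched as a small Python class. This is a minimal illustration, not code from the patent; names are chosen for clarity, and a separate full flag is used to distinguish the full and empty cases, in which the head and tail otherwise coincide.

```python
class CircularBuffer:
    """Fixed-size FIFO ring. A full flag disambiguates the
    head == tail case, which occurs both when empty and when full."""

    def __init__(self, size):
        self.entries = [None] * size
        self.head = 0      # oldest entry; the dequeue side
        self.tail = 0      # next free slot; the enqueue side
        self.full = False  # set when an enqueue makes tail catch head

    def enqueue(self, block):
        if self.full:
            raise OverflowError("buffer full")
        self.entries[self.tail] = block
        # Wrap-around: past the last entry, the tail returns to entry 0.
        self.tail = (self.tail + 1) % len(self.entries)
        self.full = self.tail == self.head

    def dequeue(self):
        if self.head == self.tail and not self.full:
            raise IndexError("buffer empty")
        block = self.entries[self.head]
        self.head = (self.head + 1) % len(self.entries)
        self.full = False
        return block
```

Entries are removed in insertion order, and an enqueue after a wrap-around reuses the slot freed by the oldest dequeue.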
FIG. 1 shows a circular buffer in an “empty” state 2, a “partially full” state 4, and a “full” state 6. The circular buffer contains entries storing consecutive data blocks in the memory, a head pointer 6, and a tail pointer 8. Circular buffers are used for accessing data blocks at sequential memory addresses. Circular buffers typically operate under a FIFO (first in, first out) scheme, where the oldest data in the circular buffer is dequeued from the head 6 and new data is added, or enqueued, at the tail 8. When a new entry is enqueued at the tail 8, the tail 8 is incremented to point to the next available entry position.
The head 6 points to the oldest entry in the circular buffer. The entry pointed to by the head 6 is dequeued; as in a FIFO arrangement, the oldest entry is removed before any later added entries. After dequeuing the entry at the head 6, the head is incremented to point to the next oldest entry, which comprises the data block in memory logically following the dequeued data block.
The circular buffer 4 is partially filled. The tail 8 points to the next available free entry and the head 6 points to the oldest entry in queue 4. As an entry is added at the tail 8 of the queue 4, the tail 8 moves downward to point to the next free entry and, as an entry is removed, the head 6 moves downward to point to the next oldest entry in the queue following the dequeued entry. When adding and removing entries, the circular buffer moves forward in a circular, worm-like manner, where the circle may expand and contract depending on how data is added and removed.
In the full buffer 6, the head 6 and tail 8 point to the same entry. In preferred embodiments, a full/not full flag indicates whether a queue is full. The full buffer 6 has its full flag set to 1, or “on.”
Although circular queues such as the one shown above, as well as other implementations of circular queues, are known in the art, there is a need to provide improved methods for accessing the data in the queues.
SUMMARY OF THE PREFERRED EMBODIMENTS
Preferred embodiments disclose a method, system, and program for managing a cache. Upon receiving a data access request to a data block in a memory, a determination is made as to whether the requested data block is maintained in a cache line entry in the cache. If so, the determined cache line entry is accessed to perform the data access request. A first flag associated with the accessed cache line entry is set “on” if the data access request is a read request.
In embodiments where the data access request is a write request to update the requested data block in the memory, the update data for the data block is written to the cache line entry when the determined entry in the cache line is accessed. A second flag associated with the accessed cache line entry is set “on” to indicate that the entry was subject to a write hit.
In further embodiments, the entries in the cache line may be flushed to the memory if the second flag for each entry is set “on”.
In still further embodiments, each cache line includes a valid flag set “on” if the cache line includes data from the memory and set “off” if it does not. If the requested data block is not maintained in any cache line entry, one cache line having its valid flag set “off” is selected, and consecutive data blocks from the memory, including the requested data block, are fetched and written to the selected cache line.
In yet further embodiments, one cache line having its valid flag set “on” and the first and second flags for each entry set “off” is selected if no cache line has its valid flag set “off”. Alternatively, one cache line having its valid flag set “on”, the first flag for each entry set “off”, and the second flag for each entry set “on” is selected if no cache line has its valid flag set “on” and the first and second flags for each entry set “off”. The selected cache line is flushed to the memory if the selected cache line has each second flag set “on” before writing the fetched consecutive data blocks to the selected cache line.
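The victim-selection order described in the preceding paragraphs can be sketched as follows. All names here (`CacheLine`, `select_line`, `flush`) and the line size of four entries are illustrative assumptions, not taken from the patent; `flush` is stubbed rather than modeling an actual write-back.

```python
N = 4  # entries (data blocks) per cache line; chosen arbitrarily

class CacheLine:
    def __init__(self):
        self.valid = False
        self.base = None              # memory address of first staged block
        self.data = [None] * N
        self.read_hit = [False] * N   # the "first flag" per entry
        self.write_hit = [False] * N  # the "second flag" per entry

def flush(line):
    """Write the line's entries back to memory (stubbed here)."""
    line.valid = False

def select_line(lines):
    """Pick a victim line for a fill, in the order of preference
    given in the summary."""
    # 1. Prefer a line whose valid flag is "off".
    for ln in lines:
        if not ln.valid:
            return ln
    # 2. Else a valid line with the first and second flags "off"
    #    for every entry.
    for ln in lines:
        if not any(ln.read_hit) and not any(ln.write_hit):
            return ln
    # 3. Else a valid line with every first flag "off" and every
    #    second flag "on"; it is flushed to memory before reuse.
    for ln in lines:
        if not any(ln.read_hit) and all(ln.write_hit):
            flush(ln)
            return ln
    return None  # every remaining line is still being accessed
```

In this sketch, a line with some entries accessed and others not falls through all three tests, which matches the goal of protecting lines still in use by the queue.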
Preferred embodiments of the present invention include a cache management scheme that optimizes read and write hits for queues having data that is accessed in a consecutive manner, such as a circular buffer queue. By maintaining consecutive data addresses from the memory in a cache line, read and write hits are assured as each subsequent read/dequeue and write/enqueue request to the circular buffer queue is to a next consecutive address in the memory. If a cache line includes n consecutive data blocks in memory, then the next n access requests to the queue will have hits to the cache after the n data blocks are staged into the n entries in the cache line.
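A toy miss counter illustrates the consecutive-hit property claimed above: once a fill stages n consecutive blocks into a cache line, the next sequential queue accesses hit until the line is exhausted. The function below is an illustration under stated assumptions (a single tracked line, fills aligned to line boundaries), not the patent's mechanism.

```python
N = 4  # blocks per cache line (illustrative)

def simulate(addresses):
    """Count hits and misses for a stream of block addresses,
    modeling one cache line of N consecutive blocks."""
    cached = set()           # addresses currently staged
    hits = misses = 0
    for a in addresses:
        if a in cached:
            hits += 1
        else:
            misses += 1
            base = a - (a % N)                     # align fill to line
            cached = {base + i for i in range(N)}  # stage N blocks
    return hits, misses
```

For eight consecutive queue addresses and a four-entry line, this yields two fills and six hits, i.e., one miss per n sequential accesses.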
In preferred embodiments, flags are added to the cache to ensure that cache lines currently being accessed by requests to the circular buffer are not selected for replacement. A cache line is currently being accessed if there are entries in the cache line that have not been accessed during access operations to the circular buffer. This optimizes cache access because such lines that have not had every entry accessed still have further entries that will be requested during subsequent access requests to the queue, which for circular buffers are to the next subsequent data blocks in memory.


REFERENCES:
patent: 5502833 (1996-03-01), Byrne et al.
patent: 5537552 (1996-07-01), Ogasawara et al.
patent: 5682513 (1997-10-01), Candelaria et al.
patent: 5923900 (1999-07-01), Soell et al.
patent: 5940864 (1999-08-01), Arimilli et al.
patent: 5996049 (1999-11-01), Arimilli et al.
patent: 6032228 (2000-02-01), Islam et al.
patent: 6141733 (2000-10-01), Arimilli et al.
Andrew S. Tanenbaum, “Structured Computer Organization” ©1976, 1984 and 1990, pp. 209-215, Cache Memory.
