Hardware mechanism for managing cache structures in a data...

Electrical computers and digital processing systems: memory – Addressing combined with specific memory configuration or... – Addressing cache memories

Reexamination Certificate


Details

Classification: C711S118000, C711S137000
Type: Reexamination Certificate
Status: active
Patent number: 06216199

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to storage controller systems, and more particularly to a hardware mechanism for managing cache structures in a storage controller system.
2. Description of Related Art
Host computer systems often connect to one or more storage controllers that provide access to an array of storage disks. A host system requests access to data at a virtual disk location, and a storage controller accesses a corresponding physical storage location on one of the storage disks and provides access to the data by the host system. Typically, a storage controller includes one or more microprocessors that communicate the data between the storage disk and the host system.
A common feature of a storage controller system is disk caching, which involves temporarily storing contiguous regions of data from a storage disk in a cache memory unit. Accesses to a memory unit typically complete much more quickly than accesses to a storage disk. To access a storage location, a storage system microprocessor specifies a physical storage location on the storage disk. However, instead of retrieving only the specified storage location, the microprocessor also retrieves (i.e., “prefetches”) a subsequent, sequential portion of the storage disk, loading it into the cache memory. The reasoning behind this caching of sequential data is that storage disk accesses are most likely sequential in nature. Therefore, it is probable that the next access by the microprocessor will be to the next sequential storage location on the storage disk, which has already been loaded into the cache memory during the prefetch. When the requested data is found in the cache memory, it is referred to as a “cache hit.” In contrast, when the requested data is not found in the cache memory, it is referred to as a “cache miss,” which requires that the storage control system perform the normal, slower retrieval of the requested data from the disk array. With a cache hit, the microprocessor in the storage controller avoids the time-consuming access to the storage disk and instead quickly reads the data from the cache memory and returns it to the host system. In most circumstances, disk caching results in increased performance of the data storage system.
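For illustration only, the read-ahead behavior described above can be sketched in C. Everything here (the block size, the prefetch depth, the direct-mapped lookup, and the disk_read stub) is a hypothetical simplification for clarity, not the patent's design:

#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE   512   /* bytes per disk block (assumed)  */
#define PREFETCH_LEN 8     /* blocks fetched ahead on a miss  */
#define CACHE_SLOTS  1024  /* cache capacity in blocks        */

typedef struct {
    uint64_t lba;          /* disk address cached in this slot */
    int      valid;
    uint8_t  data[BLOCK_SIZE];
} cache_slot_t;

static cache_slot_t cache[CACHE_SLOTS];

/* Stand-in for the slow physical disk access. */
static void disk_read(uint64_t lba, uint8_t *buf)
{
    memset(buf, (int)(lba & 0xFF), BLOCK_SIZE);
}

/* Read one block; on a miss, also prefetch the sequential run after it. */
void cached_read(uint64_t lba, uint8_t *out)
{
    cache_slot_t *slot = &cache[lba % CACHE_SLOTS];   /* direct-mapped */

    if (slot->valid && slot->lba == lba) {
        memcpy(out, slot->data, BLOCK_SIZE);          /* cache hit: fast path */
        return;
    }

    /* Cache miss: fetch the requested block plus the next PREFETCH_LEN
     * blocks, betting that the access pattern is sequential. */
    for (uint64_t i = 0; i <= PREFETCH_LEN; i++) {
        cache_slot_t *s = &cache[(lba + i) % CACHE_SLOTS];
        disk_read(lba + i, s->data);
        s->lba   = lba + i;
        s->valid = 1;
    }
    memcpy(out, cache[lba % CACHE_SLOTS].data, BLOCK_SIZE);
}

A real controller would use an associative lookup and an eviction policy; the sketch shows only the hit/miss asymmetry: a hit costs a memory copy, while a miss pays for a disk access.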
One method of identifying a storage location is called Logical Block Addressing (LBA). LBA is an addressing scheme developed to overcome the 528 megabyte limit imposed by the original addressing standard for IDE (Integrated Drive Electronics) disk drives. Effectively, LBA is used with SCSI and IDE disk drives to translate the specified cylinder, head, and sector parameters of the disk drive into addresses that can be used by an enhanced BIOS to access the disk drive. In SCSI systems, a Logical Unit Number (LUN) is preferably combined with the LBA address to constitute a storage location identifier. Alternative addressing schemes employing storage location identifiers also exist, including ECHS (Extended Cylinder, Head, Sector addressing), “Large,” and “Big IDE” addressing.
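As a concrete illustration, the classic CHS-to-LBA translation is a single formula. How a LUN is then combined with the LBA into one identifier is implementation-specific, so the bit packing below is purely an assumption:

#include <stdint.h>

#define HEADS_PER_CYLINDER 16   /* assumed drive geometry */
#define SECTORS_PER_TRACK  63   /* sectors are 1-based    */

/* Standard CHS -> LBA formula:
 *   LBA = (C * heads_per_cylinder + H) * sectors_per_track + (S - 1) */
static uint32_t chs_to_lba(uint32_t c, uint32_t h, uint32_t s)
{
    return (c * HEADS_PER_CYLINDER + h) * SECTORS_PER_TRACK + (s - 1);
}

/* Hypothetical storage location identifier: LUN in the upper bits,
 * LBA in the lower bits. The real layout is implementation-defined. */
static uint64_t make_location_id(uint8_t lun, uint32_t lba)
{
    return ((uint64_t)lun << 32) | lba;
}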
Typically, a storage processor manages a disk cache by manipulating cache structures in main memory. However, the operations of inserting, deleting, and searching cached elements are highly processor-intensive and divert processing power from other functions of the storage control system. Furthermore, cache management relies heavily on memory access to manipulate the cache structures and, therefore, can consume a significant amount of processor bus bandwidth if managed by the storage processor in processor memory. Also, manipulation of the cache management structures by the storage processor can dilute the storage processor's first and second level caches. A need exists for a high performance cache management state machine capable of reducing the impact of cache management on the storage processor, processor memory bus, and processor caches.
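To make that cost concrete, here is a minimal software cache directory of the kind the storage processor would otherwise have to manage itself. A hash table of linked entries is one common choice; all names and sizes are illustrative, not taken from the patent:

#include <stdint.h>
#include <stdlib.h>

#define BUCKETS 4096

typedef struct dir_entry {
    uint64_t          key;        /* search key (storage location identifier) */
    uint32_t          block_addr; /* cache block address                      */
    struct dir_entry *next;
} dir_entry_t;

static dir_entry_t *buckets[BUCKETS];

static size_t hash(uint64_t key) { return (size_t)(key % BUCKETS); }

/* Search: walk the chain; each hop is a dependent memory access. */
dir_entry_t *dir_search(uint64_t key)
{
    for (dir_entry_t *e = buckets[hash(key)]; e; e = e->next)
        if (e->key == key)
            return e;
    return NULL;   /* not cached */
}

/* Insert: push a new entry at the head of its bucket's chain. */
int dir_insert(uint64_t key, uint32_t block_addr)
{
    dir_entry_t *e = malloc(sizeof *e);
    if (!e)
        return -1;
    e->key        = key;
    e->block_addr = block_addr;
    e->next       = buckets[hash(key)];
    buckets[hash(key)] = e;
    return 0;
}

/* Delete: unlink and free the matching entry, if present. */
void dir_delete(uint64_t key)
{
    dir_entry_t **pp = &buckets[hash(key)];
    while (*pp) {
        if ((*pp)->key == key) {
            dir_entry_t *dead = *pp;
            *pp = dead->next;
            free(dead);
            return;
        }
        pp = &(*pp)->next;
    }
}

Every search hop is a dependent pointer chase through main memory, and every entry touched lands in the storage processor's first and second level caches: exactly the bus traffic and cache dilution the hardware mechanism is meant to eliminate.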
SUMMARY OF THE INVENTION
A method for managing data stored in a cache block in a cache memory is provided. The cache block is located at a cache block address in the cache memory. The data in the cache block corresponds to a storage location in a storage array identified by a storage location identifier. A cache management command and a processor memory address are received from a storage processor. The processor memory address is associated with a search key based on the storage location identifier. The search key is transferred from the processor memory in accordance with the processor memory address. A cache management structure is manipulated in accordance with the cache management command and the search key.
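Read as software, the claimed steps amount to the following flow. This is only an illustrative analogue of the hardware behavior; the command codes and the cms_* helpers are hypothetical names, not the patent's interfaces:

#include <stdint.h>

typedef enum { CMD_INSERT, CMD_DELETE, CMD_SEARCH } cache_cmd_t;

/* Stub operations on the cache management structure; a real command
 * processor would drive something like the hash-table directory
 * sketched earlier. */
static void     cms_insert(uint64_t key) { (void)key; }
static void     cms_delete(uint64_t key) { (void)key; }
static uint32_t cms_search(uint64_t key) { (void)key; return 0; }

/* Steps mirrored from the summary: receive a command and a processor
 * memory address, transfer the search key from processor memory, then
 * manipulate the structure per the command and the key. */
uint32_t handle_command(cache_cmd_t cmd, const uint64_t *proc_mem_addr)
{
    uint64_t search_key = *proc_mem_addr;  /* fetch key from processor memory */

    switch (cmd) {
    case CMD_INSERT: cms_insert(search_key); return 0;
    case CMD_DELETE: cms_delete(search_key); return 0;
    case CMD_SEARCH: return cms_search(search_key);
    }
    return 0;
}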
A system for managing data stored in a cache block in a cache memory is also provided. The cache block is located at a cache block address in the cache memory, and the data in the cache block corresponds to a storage location in a storage array identified by a storage location identifier. A storage processor accesses the cache block in the cache memory and provides a cache management command to a command processor. A processor memory coupled to the storage processor stores a search key based on the storage location identifier corresponding to the cache block. The command processor, which is coupled to the storage processor, receives the cache management command specified by the storage processor and transfers the search key from the processor memory. A cache management memory stores a cache management structure including the cache block address and the search key. A cache management processor is coupled to the cache management memory by a second link and manipulates the cache management structure in a linked data structure within the cache management memory in accordance with the cache management command and the search key.
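One plausible layout for a node of the cache management structure held in the cache management memory is sketched below. The field names and widths are assumptions, chosen only to match the elements the summary names (a search key, a cache block address, and the links of a linked data structure):

#include <stdint.h>

typedef struct cms_node {
    uint64_t         search_key;   /* storage location identifier (e.g., LUN + LBA) */
    uint32_t         block_addr;   /* cache block address in cache memory           */
    struct cms_node *prev;         /* links of the linked data structure            */
    struct cms_node *next;
} cms_node_t;

Fixed-size nodes of this kind are what would let a hardware state machine fetch exactly one structure length per cache management memory access, as the advantages discussed next suggest.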
An embodiment of the present invention provides several advantages over the prior art. By employing a hardware-oriented cache manager, a storage processor in a storage control system can continue performing other work in parallel with the cache manager, improving overall performance of the storage control system. Furthermore, the instruction working set size for the storage processor is greatly reduced because the storage processor is not responsible for accessing and managing the cache management structures in cache memory. In addition, a hardware state machine embodying a cache manager may optimize its cache management memory accesses to retrieve the appropriate structure lengths during its prefetch. By minimizing the cache management operations performed by the storage processor, it is possible to maximize the performance of a storage control system, especially one that controls large disk caches.


REFERENCES:
patent: 5008820 (1991-04-01), Christopher, Jr. et al.
patent: 5751993 (1998-05-01), Ofek et al.
patent: 5761501 (1998-06-01), Lubbers et al.
patent: 5960452 (1999-09-01), Chi
patent: 6115790 (2000-09-01), Schimmel
