Cache mechanism for shared resources in a multibus data...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

Classifications: C711S148000, C711S152000

Type: Reexamination Certificate

Status: active

Patent number: 06397295

ABSTRACT:

BACKGROUND OF THE INVENTION
This invention relates generally to data processing systems and more particularly to a method and apparatus for simplifying data caching in memory systems accessible from more than one data bus.
As is known in the art, data processing systems generally include several types of processing resources which can be interconnected by one or more communications buses. In addition to the processor resources, data processing systems also generally include some sort of memory which is typically shared among the processor resources. As is common in many computer systems, these processing resources can act independently to perform different processing tasks. Because each of the processing resources can act independently, situations can arise in which several resources need the same shared memory resources within the system.
One example of a data processing system which includes several processing resources coupled to a common memory over two or more buses is a data storage system such as the Symmetrix family of data storage systems manufactured by EMC Corporation. These storage systems are typically capable of being coupled to several different host computers at any given time and provide storage services to each of those computers independently. In order to support simultaneous transactions among a plurality of host computers, the storage system includes several host controllers for managing the communication between the host computers and the storage system. In addition, the Symmetrix storage systems mentioned above include several disk controllers, each responsible for managing one or more arrays of disk-type storage devices.
In addition to the host controllers and disk controllers mentioned above, the storage subsystem can also contain a very large global memory which is used to manage the transfer of data from the host computers to the storage devices, as well as the transfer of data from the storage devices back to the host computers.
During the operation of the data storage system described above, it is often necessary for any one of the host controllers or disk controllers to access data, either in the global memory or on a disk drive which another host controller or disk controller has just written.
The disk controllers and the host controllers, in order to increase their throughput, often include cache memories which store data to be written to either global memory or a disk drive, or which store data that has just been read from global memory or a disk drive. In the latter case in particular, if a second request for data arrives that is the same as, or physically near, a previous request, the controller need not access global memory or the disk drive; the data is immediately available from its local cache memory. Such caching advantageously increases the throughput of the device, as is well known in the field. Situations occur, however, in which more than one controller can write to a memory or storage device. In these instances, previously read and cached data can become "stale" when another controller or processor writes to the same location in memory. If all of the write operations occur over a single bus, then each processor that can access the memory can monitor the bus and, whenever a write operation occurs, determine whether that write modifies information currently held in its own cache memory. If it does, the data in the cache memory can either be discarded or overwritten with the correct new data.
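As a rough sketch of this single-bus snooping scheme (the structure and names below are illustrative, not taken from the patent), a controller might invalidate a cache entry whenever a write it observes on the bus hits a cached address:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define CACHE_ENTRIES 256

    /* Hypothetical controller-local cache: tagged addresses plus validity. */
    struct cache_entry {
        uint32_t addr;   /* memory address of the cached word        */
        uint32_t data;   /* cached copy of the word                  */
        bool     valid;  /* false once the copy is known to be stale */
    };

    static struct cache_entry cache[CACHE_ENTRIES];

    /* Invoked for every write transaction snooped on the single shared bus.
     * If the written address is cached, the local copy is now stale, so
     * discard it (alternatively it could be overwritten with the new data). */
    void snoop_write(uint32_t written_addr)
    {
        for (size_t i = 0; i < CACHE_ENTRIES; i++) {
            if (cache[i].valid && cache[i].addr == written_addr)
                cache[i].valid = false;
        }
    }

Because the full address of every write is visible on the single bus, invalidation here is exact: only the entry actually overwritten is discarded.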
When a memory or a storage system can be accessed through two separate buses, however, it becomes more difficult to determine whether data in the cache memory of a processor is "stale". The processor could be connected to both buses; however, since each bus may typically have 100 or more pin terminals, and since "pin real estate" is often at a premium, it may not be practicable to connect each of two or more buses to each and every processor. As a result, the cache memory, under these circumstances, may or may not be accurate, and unless great care is taken, or substantial "pin real estate" is used, errors will occur.
Accordingly, the invention advantageously enables each processor in a memory or storage system having two or more buses to monitor, with acceptable penalty, the activity on each bus, albeit in a different manner for each class of bus, in order to determine when or whether data in its cache memory has become "stale". This advantageously enables better and faster access from the plural processors and controllers to the memory system, while at the same time enabling high-speed throughput with the knowledge that no stale-data errors will occur and that all data will be "fresh" and up to date. Other advantages of the invention include limiting the "pin real estate" required to monitor the plural buses of the system in order to maintain and update a processor's own cache memory.
SUMMARY OF THE INVENTION
The invention relates to a data processing system having a first and a second bus, a memory system connected to be read from each of the first and second buses, at least one first bus processor connected to the first bus for reading data from the memory system, at least one second bus processor connected to the second bus for writing data to the memory system, and the first bus processor having a cache memory. The invention features each first bus processor being connected to all of the lines of the first bus and to less than all of the lines of the second bus, and storing in its cache memory data read from the memory system by the processor over the first bus. The connected first bus processor invalidates or discards any data stored in its cache memory corresponding to an identified section of the memory system which, from its connections to the second bus, could have been changed by a data write occurring on the second bus by one of the second bus processors.
In particular embodiments, the data processing system features a plurality of first bus processors and a plurality of second bus processors, each processor able to read and write data from and to the memory system over its respective connected first and second bus, each processor having a cache memory for caching previously read data, and each processor connected to less than all the lines of the other bus for determining when a write operation to the memory system occurs and approximately what section of the memory system the write operation affected.
In another aspect, each processor monitors its fully connected bus to update its cache memory in response to write operations by other processors able to write on that bus, while being connected to only those address and control lines of the second bus necessary to determine when a write operation occurs and which section of memory is affected.
In a particular embodiment of the invention, a processor is connected to at least half of the address lines of the other bus.
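A minimal sketch of this partial monitoring, assuming purely for illustration a 32-bit address bus of which only the upper 16 address lines are wired to the snooping processor: a write observed on the second bus then identifies only a 64 KB section of memory, so every cached entry falling within that section must be conservatively invalidated:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define CACHE_ENTRIES 256

    /* Assumption for illustration: only the upper 16 of 32 address lines
     * are physically connected, so a snooped write resolves an address
     * only down to a 64 KB "section" of the memory system. */
    #define SECTION_MASK 0xFFFF0000u

    struct cache_entry {
        uint32_t addr;
        uint32_t data;
        bool     valid;
    };

    static struct cache_entry cache[CACHE_ENTRIES];

    /* Invoked when a write is detected on the partially connected second
     * bus. partial_addr carries only the monitored high-order address
     * bits, so every cached entry whose address lies in the same section
     * is discarded, even though only one word in that section may
     * actually have changed. */
    void snoop_partial_write(uint32_t partial_addr)
    {
        uint32_t section = partial_addr & SECTION_MASK;

        for (size_t i = 0; i < CACHE_ENTRIES; i++) {
            if (cache[i].valid && (cache[i].addr & SECTION_MASK) == section)
                cache[i].valid = false;  /* conservative: whole section suspect */
        }
    }

The trade-off described above follows directly: the fewer address lines connected, the coarser the section mask becomes and the more entries are needlessly invalidated per snooped write; connecting at least half of the address lines keeps that over-invalidation penalty acceptable while still saving pin real estate.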
In other aspects, the invention relates to a method for maintaining the integrity of a controller cache memory. The method operates in the environment of a data processing system having a first and a second bus, a memory system connected to be read from each of the first and second buses, at least one first bus processor connected to the first bus for reading data from the memory system, at least one second bus processor connected to the second bus for writing data to the memory system, and the first bus processor having a cache memory. The method features monitoring, by each first bus processor, less than all of the lines of the second bus; storing, in each connected first bus processor's cache memory, data read from the memory by the processor over its first bus; and invalidating data stored in the connected first bus processor's cache memory which corresponds to an indicated section of the memory system which, from its monitoring of the second bus, could have been changed by a data write occurring on the second bus.
