Cache having virtual cache controller queues

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Details

Classification codes: C711S141000, C711S142000, C711S144000, C711S162000, C711S203000
Type: Reexamination Certificate
Status: active
Patent number: 06502168

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field:
The present invention relates in general to an improved method and system for data processing. In particular, the present invention relates to an improved method and system for data communication in a data processing system and an improved method and system for cache management in a data processing system.
2. Description of the Related Art:
A conventional symmetric multiprocessor data processing system may include a number of processors that are each coupled to a shared system bus. Each processor may include an on-board cache that provides local storage for instructions and data, execution circuitry for executing instructions, and a bus interface unit (BIU) that supports communication across the shared system bus according to a predetermined bus communication protocol.
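For concreteness, a minimal C sketch of the topology this paragraph describes; the sizes, field names, and types are illustrative assumptions, not taken from the patent.

```c
#include <stdint.h>

#define NUM_PROCESSORS  4           /* illustrative SMP size */
#define MAX_OUTSTANDING 8           /* outstanding bus requests per BIU (assumed) */

/* Bus interface unit: queues communication requests bound for the
 * shared system bus and speaks the predetermined bus protocol. */
typedef struct {
    uint32_t pending_addr[MAX_OUTSTANDING];
    int      pending_count;
} biu_t;

/* One processor: on-board cache, execution circuitry (elided), and a BIU. */
typedef struct {
    uint8_t cache[32 * 1024];       /* local storage for instructions and data */
    biu_t   biu;
} processor_t;

/* All processors couple to the one shared system bus. */
typedef struct {
    processor_t cpu[NUM_PROCESSORS];
} smp_system_t;
```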
In conventional multiprocessor data processing systems, each BIU maintains a single queue of all outstanding communication requests generated within the processor. The communication requests indicate a request address and a request source within the processor. To promote maximum utilization of the system bus, the BIUs typically service the communication requests utilizing split bus transactions, which permit multiple bus transactions to be chronologically interleaved. For example, the BIU of a first processor may gain ownership of the system bus and initiate a first bus transaction by driving an address and appropriate control signals. The first processor may then relinquish ownership of the system bus while awaiting receipt of data associated with the address in order to permit a second processor to perform a portion of a second bus transaction. Thereafter, the device from which the first processor requested data may complete the first bus transaction by driving the requested data, which is then latched by the BIU of the first processor. To allow devices snooping the system bus to identify the bus transaction to which each transaction portion belongs, each BIU assigns each of its bus transactions an arbitrary bus tag that is transmitted during each tenure of the bus transaction. The bus tags are typically assigned cyclically out of a pool of bus tags equal in number to the maximum number of concurrent bus transactions supported by the device. For example, the BIU of a device supporting a maximum of eight concurrent bus transactions assigns one of eight low-order 3-bit tags to each of its bus transactions. The bus tags are stored by the device in association with the appropriate queue entries.
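As a concrete illustration of the conventional scheme just described, here is a hedged C sketch of cyclic bus-tag allocation out of a pool of eight 3-bit tags; the function names and data layout are assumptions, not the patent's.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define NUM_TAGS 8                 /* 3-bit tag pool: max 8 concurrent transactions */

/* One outstanding split-bus transaction tracked by the BIU. */
typedef struct {
    uint32_t address;              /* request address driven in the first tenure */
    bool     in_use;
} txn_entry_t;

static txn_entry_t txn_queue[NUM_TAGS];
static unsigned    next_tag;       /* cyclic allocation pointer */

/* Dynamically allocate the next free tag, cycling through the pool.
 * Returns the 3-bit tag, or -1 if all eight transactions are outstanding. */
int allocate_bus_tag(uint32_t address)
{
    for (unsigned i = 0; i < NUM_TAGS; i++) {
        unsigned tag = (next_tag + i) % NUM_TAGS;
        if (!txn_queue[tag].in_use) {
            txn_queue[tag] = (txn_entry_t){ .address = address, .in_use = true };
            next_tag = (tag + 1) % NUM_TAGS;
            return (int)tag;       /* transmitted during every tenure of the transaction */
        }
    }
    return -1;                     /* stall until a tag is deallocated */
}

/* Associative step a snooping device performs: map a snooped tag back to
 * the bus transaction it belongs to. */
txn_entry_t *match_snooped_tag(unsigned tag)
{
    return txn_queue[tag].in_use ? &txn_queue[tag] : NULL;
}
```

It is exactly this allocate/deallocate and associative-match machinery that the next paragraph identifies as a complexity and latency cost.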
Although employing split bus transactions tends to maximize bus utilization, supporting split bus transactions concomitantly increases device complexity due to the allocation logic required to dynamically allocate and deallocate bus tags and the associative logic utilized to determine which bus transaction is associated with each snooped bus tag. In addition, decoding each bus tag prior to routing retrieved data to the appropriate request source within a processor introduces latency in a critical timing path, thereby degrading processor performance.
A second source of performance problems within a conventional data processing system is the manner in which caches handle updates. Data processing system caches are typically set associative and accordingly contain a number of congruence classes that each include a number of ways or members. Each of the members can store a cache line of data, for example, 64 bytes. As is well known to those skilled in the art, the cache line stored within each congruence class member is recorded in an associated directory entry utilizing a tag portion of the cache line address. The directory entry also stores the current coherency state of the associated congruence class member. An update to a congruence class member therefore entails an update of either or both of the tag and coherency state of the corresponding directory entry. Prior art data processing systems handle an update operation by holding off all other processor requests and snoops mapping to a congruence class member until the update to the corresponding directory entry is complete. This deferral of service for processor requests and snoops causes de facto serialization of cache requests in cases in which multiple requests specifying the same congruence class member are received.
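The set-associative organization described above can be sketched in C as follows; the MESI-style states, 128 sets, and 8 ways are assumptions for illustration (the text specifies only the 64-byte line size).

```c
#include <stdint.h>

#define NUM_SETS   128             /* congruence classes (assumed) */
#define NUM_WAYS   8               /* members per congruence class (assumed) */
#define LINE_BYTES 64              /* cache line size from the text */

/* Coherency state; a MESI-style encoding is assumed here. */
typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } coherency_t;

/* Directory entry for one congruence-class member: the tag portion of the
 * cache line address plus the member's current coherency state. */
typedef struct {
    uint32_t    tag;
    coherency_t state;
} dir_entry_t;

static dir_entry_t directory[NUM_SETS][NUM_WAYS];

/* Address decomposition: offset | set index | tag. */
static inline uint32_t set_of(uint32_t addr) { return (addr / LINE_BYTES) % NUM_SETS; }
static inline uint32_t tag_of(uint32_t addr) { return (addr / LINE_BYTES) / NUM_SETS; }

/* Look up an address; returns the hit way or -1 on a miss. */
int directory_lookup(uint32_t addr)
{
    dir_entry_t *set = directory[set_of(addr)];
    for (int way = 0; way < NUM_WAYS; way++)
        if (set[way].state != INVALID && set[way].tag == tag_of(addr))
            return way;
    return -1;
}

/* Prior-art update: all other requests and snoops mapping to this member
 * are held off until this directory write completes. */
void directory_update(uint32_t set, int way, uint32_t new_tag, coherency_t s)
{
    directory[set][way].tag   = new_tag;
    directory[set][way].state = s;
}
```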
As should thus be apparent, it would be desirable to provide an improved method and system for data processing. In particular, it would be desirable to provide an improved method and system for identifying split bus transactions. In addition, it would be desirable to provide a method and system for performing cache updates that minimize contention for frequently accessed cache lines.
SUMMARY OF THE INVENTION
It is therefore one object of the present invention to provide an improved method and system for data processing.
It is another object of the present invention to provide an improved method and system for data communication in a data processing system.
It is yet another object of the present invention to provide an improved method and system for cache management in a data processing system.
The foregoing objects are achieved as is now described. According to a first aspect of the present invention, a data processing system is provided that includes a communication network to which multiple devices are coupled. A first of the multiple devices includes a number of requesters, each of which is permanently assigned a respective one of a number of unique tags. In response to a communication request by a requester within the first device, the tag assigned to that requester is transmitted on the communication network in conjunction with the requested communication transaction. According to a second aspect of the present invention, a data processing system includes a cache having a cache directory. A status indication indicative of the status of at least one of a plurality of data entries in the cache is stored in the cache directory. In response to receipt of a cache operation request, a determination is made whether to update the status indication. In response to a determination that the status indication is to be updated, the status indication is copied into a shadow register and updated. The status indication is then written back into the cache directory at a later time.
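A minimal C sketch of the second aspect's shadow-register flow, reusing the directory layout from the background sketch; the two-phase function split and all names are assumptions about how the summary's description could look, not the patent's implementation.

```c
#include <stdint.h>
#include <stdbool.h>

/* Same directory layout as in the background sketch (assumed sizes). */
typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } coherency_t;
typedef struct { uint32_t tag; coherency_t state; } dir_entry_t;

#define NUM_SETS 128
#define NUM_WAYS 8
static dir_entry_t directory[NUM_SETS][NUM_WAYS];

/* First aspect (not shown in code): each requester within a device is
 * permanently assigned one unique tag, eliminating the dynamic tag
 * allocation and associative matching of the conventional scheme. */

/* Shadow register holding a working copy of one directory entry. */
typedef struct {
    dir_entry_t entry;             /* the status indication being updated */
    uint32_t    set;
    int         way;
    bool        valid;
} shadow_reg_t;

static shadow_reg_t shadow;

/* On a cache operation request that requires an update: copy the status
 * indication into the shadow register and update the copy, rather than
 * holding off other requests while the directory itself is rewritten. */
void begin_directory_update(uint32_t set, int way, coherency_t new_state)
{
    shadow.entry       = directory[set][way];  /* copy into shadow register */
    shadow.entry.state = new_state;            /* update performed on the copy */
    shadow.set         = set;
    shadow.way         = way;
    shadow.valid       = true;
}

/* At a later time: write the updated status indication back into the
 * cache directory. */
void commit_directory_update(void)
{
    if (shadow.valid) {
        directory[shadow.set][shadow.way] = shadow.entry;
        shadow.valid = false;
    }
}
```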
The above as well as additional objects, features, and advantages of the present invention will become apparent in the following detailed written description.


REFERENCES:
patent: 4807110 (1989-02-01), Pomerene et al.
patent: 5832250 (1998-11-01), Whitaker
patent: 6049851 (2000-04-01), Bryg et al.
patent: 6338123 (2002-01-01), Joseph et al.
patent: 1263762 (1989-10-01), None
patent: 3127157 (1991-05-01), None
patent: 5210640 (1993-08-01), None
