Method and system for accessing a cache memory within a...

Electrical computers and digital processing systems: memory – Address formation – Address mapping

Reexamination Certificate


Details

C711S205000, C711S206000

Reexamination Certificate

active

06226731

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates to data processing in general, and in particular to a method and system for enhancing the speed of memory access within a data-processing system. More particularly, the present invention relates to improved methods and systems for accessing a cache memory within a data-processing system. Still more particularly, the present invention relates to address translation in microprocessors. In addition, the present invention relates to the translation of virtual and real addresses for cache memory access.
2. Description of the Related Art
Microprocessors, usually implemented as a central processing unit (CPU) on a single integrated circuit chip within a data-processing system, lie at the heart of modern electronic and computer devices. As technology becomes increasingly complicated, the need arises for microprocessors that are faster and more efficient than previous microprocessor versions. One method of increasing the speed of such microprocessors is to decrease the amount of time necessary to access memory locations associated with such microprocessors.
A cache memory is one type of memory that may be associated with a typical microprocessor, either integral with the microprocessor chip itself or separate. The cache memory is also associated with a main memory located within the data-processing system. The cache memory is a special-purpose storage buffer, smaller and faster than main storage, utilized to hold a copy of instructions and data obtained from the main memory and likely to be needed next by the microprocessor. The cache memory typically functions as a storage buffer that contains frequently accessed instructions and data. The cache memory can be thought of as a specialized high-speed memory in which frequently utilized data values are duplicated for quick access.
A typical cache memory, for example, stores the contents of frequently accessed random access memory (RAM) locations and the addresses where these data items are stored. When the microprocessor references an address in memory, the cache memory checks to see whether it holds that address. If the cache memory does hold the address, the data is returned to the microprocessor; if it does not, a regular memory access occurs.
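The hit-or-miss behavior described above can be sketched in a few lines of Python. This is an illustrative model only, not the patent's implementation; the names `main_memory`, `cache`, and `read` are hypothetical.

```python
# Minimal sketch: a cache modeled as an address-keyed dictionary
# backed by a slower main memory (all names are illustrative).
main_memory = {0x1000: "data_A", 0x2000: "data_B"}  # example RAM contents
cache = {}  # address -> copy of data

def read(address):
    """Return the data at address, filling the cache on a miss."""
    if address in cache:          # cache holds the address: fast path
        return cache[address]
    data = main_memory[address]   # regular (slow) memory access
    cache[address] = data         # keep a copy for the next reference
    return data

print(read(0x1000))  # miss: fetched from main memory
print(read(0x1000))  # hit: served from the cache
```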
A common technique for organizing the main memory within a data-processing system for memory access is to divide the main memory into blocks of contiguous locations called pages, each page having the same number of lines, and each line having the same number of bytes. Accordingly, an address utilized to access the main memory typically includes a page number, a line number, and a byte location, and is commonly referred to as a real address (RA) or physical address. However, when a virtual addressing scheme is being utilized, the access address is instead referred to as an effective address (EA) or virtual address. Given the fact that instructions or data are relocatable within the virtual addressing scheme, the effective address or virtual address must be mapped back to a corresponding real address or physical address that specifies an actual location within the main memory. Moreover, because the main memory is conceptually divided into pages, as mentioned previously, the low-order bits of an effective address that typically identify a byte within a page of the main memory usually do not require any translation; only the high-order bits of the effective address are required to be translated to a corresponding real page address that specifies the actual page location within the main memory.
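The page/line/byte decomposition can be shown with simple bit arithmetic. The field widths below (a 32-bit address with 4 KiB pages and 64-byte lines) are example parameters chosen for illustration, not values taken from the patent.

```python
# Hedged sketch: splitting a 32-bit effective address into page, line,
# and byte fields. The widths are illustrative assumptions: 64 bytes
# per line (6 bits) and 64 lines per page (6 bits).
BYTE_BITS = 6
LINE_BITS = 6
PAGE_BITS = 32 - LINE_BITS - BYTE_BITS

def split_address(addr):
    byte = addr & ((1 << BYTE_BITS) - 1)
    line = (addr >> BYTE_BITS) & ((1 << LINE_BITS) - 1)
    page = addr >> (BYTE_BITS + LINE_BITS)
    return page, line, byte

page, line, byte = split_address(0x0001_2345)
# Only `page` requires translation; `line` and `byte` pass through unchanged.
```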
In order to increase the speed of access to data stored within the main memory, modern data-processing systems generally maintain the most recently used data in the cache memory. The cache memory has multiple cache lines, with several bytes per cache line for storing information from contiguous addresses within the main memory. Each cache line essentially comprises a boundary between blocks of storage that map to a specific area in the cache memory or high-speed buffer. In addition, each cache line has an associated “tag” that typically identifies a partial address of a corresponding page of the main memory. Because the information within the cache may come from different pages of the main memory, the tag provides a convenient way to identify to which page of the main memory a cache line belongs.
In a typical cache memory implementation, information is stored in one or several memory arrays. In addition, the corresponding tags for each cache line are stored in a structure known as a directory or tag array. Usually, an additional structure, called a translation lookaside buffer (TLB), is also utilized to facilitate the translation of a virtual address to a real address during a cache memory access. Cache memory access thus involves reading out a line of the cache and its associated tag. The real address from the translation array is then compared with the real address from the tag array. If the two real addresses are identical, the cache line that was read out is the desired line for the effective or virtual address computed by the program.
In order to access a byte in a cache memory with an effective or virtual address, the line portion (mid-order bits) of the effective or virtual address is utilized to select a cache line from the memory array, along with a corresponding tag from the directory or tag array. The byte portion (low-order bits) of the effective or virtual address is then utilized to choose the indicated byte from the selected cache line. At the same time, the page portion (high-order bits) of the effective address is translated via the segment register or segment lookaside buffer and TLB to determine a real page number. If the real page number obtained by this translation matches the real address tag stored within the directory, then the data read from the selected cache line is the data actually sought by the program. This is commonly referred to as a cache “hit,” meaning that the requested data was found in the cache memory. If the real address tag and translated real page number do not agree, a cache “miss” occurs, meaning that the requested data was not stored in the cache memory. Accordingly, the requested data must be retrieved from the main memory or elsewhere within the memory hierarchy.
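The lookup sequence just described can be sketched as follows. This is a toy direct-mapped model under stated assumptions; the dictionaries `tlb`, `cache_data`, and `cache_tags` and the function `cache_access` are illustrative names, not structures from the patent.

```python
# Hedged sketch: a direct-mapped cache indexed by the line bits, with a
# toy TLB mapping effective page numbers to real page numbers.
tlb = {0x12: 0xA7}                         # effective page -> real page
cache_data = {13: b"cached line bytes"}    # line index -> cache line data
cache_tags = {13: 0xA7}                    # line index -> real-page tag

def cache_access(eff_page, line, byte):
    real_page = tlb.get(eff_page)          # translate high-order bits
    tag = cache_tags.get(line)             # read tag from the directory
    if real_page is not None and tag == real_page:
        return cache_data[line][byte]      # tags agree: cache "hit"
    return None                            # cache "miss": go to main memory

assert cache_access(0x12, 13, 0) == b"cached line bytes"[0]   # hit
assert cache_access(0x99, 13, 0) is None                      # miss
```

On a real machine the tag read and the TLB translation proceed in parallel, as the text notes; the sequential code above only models the final comparison.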
Both address translation and cache access involve comparison of a value read from one array with another value read from a different array. In the case of address translation, the virtual segment identifier associated with a given effective address and stored in a segment register or segment lookaside buffer is compared with the virtual address stored as part of an entry in the translation lookaside buffer. Similarly, the translated real page number is compared with the real page number read from the cache tag array to determine whether the accessed line in the cache belongs to the required real page.
With a direct-mapped cache, only one line from the group of corresponding lines across all pages in real memory can be stored in the cache memory at a time; in order to achieve a better “hit” ratio, a set-associative cache is sometimes utilized instead. For example, with an N-way set-associative cache, corresponding lines from N different pages may be stored. Since all entries can be distinguished by their associated tags, it is always possible to resolve which of the N lines having the same line number contains the information a program requested. The resolution requires comparison of the translated real page number to the N tags associated with a given line number. Each comparison generates an input to an N-to-1 multiplexor to select an appropriate cache line from among the N possibilities.
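The N-way resolution can be sketched in software, with the N tag comparisons standing in for the select inputs of the N-to-1 multiplexor. The set contents and names below are hypothetical examples, not from the patent.

```python
# Hedged sketch: a 4-way set-associative lookup. Each set holds N
# (tag, data) ways; empty ways are marked with None.
N = 4
sets = {13: [(0xA7, "line from page A7"), (0x3C, "line from page 3C"),
             (None, None), (None, None)]}

def set_associative_lookup(real_page, line_index):
    for tag, data in sets.get(line_index, []):   # compare against all N tags
        if tag is not None and tag == real_page: # at most one way can match
            return data                          # "mux" selects this way
    return None                                  # miss in every way

assert set_associative_lookup(0x3C, 13) == "line from page 3C"
assert set_associative_lookup(0xFF, 13) is None
```

In hardware the N comparisons occur simultaneously; the loop above serializes them only for clarity.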
Regardless of the cache architecture being utilized, the critical path for address translation still includes a translation lookaside buffer, a directory or tag array, and a group of comparison circuits, which must be utilized during a cache access to select an appropriate cache line.
