Cache blocking of specific data to secondary cache with a...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

C711S139000, C711S122000, C711S163000, C711S203000, C711S206000, C711S144000, C711S145000

active

06343345

ABSTRACT:

FIELD OF THE INVENTION
This invention relates generally to computer networks and, more specifically, to the utilization of caches within intermediate nodes of a computer network.
BACKGROUND OF THE INVENTION
A computer network is a geographically distributed collection of interconnected subnetworks for transporting data between nodes, such as computers. A local area network (LAN) is an example of such a subnetwork; a plurality of LANs may be further interconnected by an intermediate node, called a router, to extend the effective “size” of the computer network and increase the number of communicating nodes. The nodes typically communicate by exchanging discrete frames or packets of data according to predefined protocols. In this context, a protocol consists of a set of rules defining how the nodes interact with each other.
Each node typically comprises a number of basic subsystems including a processor subsystem, a main memory subsystem and an input/output (I/O) subsystem. In particular, the main memory subsystem comprises storage locations typically composed of random access memory (RAM) devices which are addressable by the processor and I/O subsystems. In the case of a router, data such as non-transient data (i.e., instructions) and transient data (i.e., network data passing through the router) are generally stored in the addressable storage locations.
Data is transferred between the main memory and processor subsystems over a system bus that typically consists of control, address and data lines. The control lines carry control signals specifying the direction and type of transfer. For example, the processor issues a read request signal to transfer data over the bus from an addressed location in the main memory to the processor. The processor then processes the retrieved data in accordance with instructions obtained from the memory. The processor thereafter issues a write request signal to store the results in an addressed location in the main memory.
The data transferred between the processor and main memory subsystems must conform to certain timing relationships between the request signals and the data on the bus. Access time is defined as the time interval between the instant at which the main memory receives a request signal from the processor and the instant at which the data is available for use by the processor. If the processor operates at a fast rate and the access time of the main memory is slow as compared to the processor rate, the processor must enter a wait state until the request to memory is completed, thereby adversely affecting the processing rate of the processor. This problem is particularly significant when the memory request is a read request, since the processor is unable to operate, that is, process data, without the requested information.
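To make the cost of such a stall concrete, the short C sketch below computes the number of wait states for a single main-memory read; the cycle time and access time are purely illustrative assumptions, not figures from the patent.

#include <stdio.h>

int main(void)
{
    int cpu_cycle_ns  = 5;   /* assumed: 200 MHz processor clock   */
    int mem_access_ns = 60;  /* assumed: main-memory access time   */

    /* Total cycles the read occupies, rounded up, minus the one
     * cycle a zero-wait-state access would take. */
    int total_cycles = (mem_access_ns + cpu_cycle_ns - 1) / cpu_cycle_ns;
    int wait_states  = total_cycles - 1;

    printf("a %d ns read costs %d cycles -> %d wait states\n",
           mem_access_ns, total_cycles, wait_states);
    return 0;
}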
A high-speed primary cache memory may be used to alleviate this situation. The primary cache is typically located on the processor and has an access speed that is closer to the operational speed of the processor; thus, use of the cache increases the speed of data processing by providing data to the processor at a rapid rate. The cache operates in accordance with the principle of locality; that is, if a memory location is addressed by the processor, it will probably be addressed again soon and nearby memory locations also will tend to be addressed soon. As a result, the cache is generally configured to temporarily store most-recently-used data. When the processor requires data, the cache is examined first. If the data is not located in the cache (a cache “miss”), the main memory is accessed. A block mode read request is then issued by the processor to transfer a block of data, including both the required data and data from nearby memory locations, from the main memory to the cache.
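The lookup-then-fill behavior just described can be sketched in a few lines of C. The sketch below assumes a direct-mapped cache with 64 lines and 16-byte blocks; the geometry, the cache_read helper, and the stand-in main_memory array are illustrative inventions, not structures from the patent.

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define NUM_LINES  64   /* assumed: 64-line direct-mapped cache */
#define BLOCK_SIZE 16   /* assumed: 16-byte blocks              */

struct cache_line {
    int      valid;
    uint32_t tag;
    uint8_t  data[BLOCK_SIZE];
};

static struct cache_line cache[NUM_LINES];
static uint8_t main_memory[1 << 16];  /* stand-in for main memory */

/* Read one byte; on a miss, fetch the whole surrounding block
 * (the "block mode read" described above) before satisfying it. */
uint8_t cache_read(uint32_t addr)
{
    uint32_t block  = addr / BLOCK_SIZE;
    uint32_t index  = block % NUM_LINES;
    uint32_t tag    = block / NUM_LINES;
    uint32_t offset = addr % BLOCK_SIZE;
    struct cache_line *line = &cache[index];

    if (!line->valid || line->tag != tag) {   /* cache miss */
        memcpy(line->data, &main_memory[block * BLOCK_SIZE], BLOCK_SIZE);
        line->tag   = tag;
        line->valid = 1;
    }
    return line->data[offset];
}

int main(void)
{
    main_memory[0x1234] = 42;
    printf("%u\n", cache_read(0x1234));  /* miss: block fill, then read */
    printf("%u\n", cache_read(0x1235));  /* hit: nearby byte (locality) */
    return 0;
}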
A primary cache is faster, and more expensive to implement, than main memory and, because of its higher cost, smaller. To supplement such an expensive primary cache, a secondary cache may be employed. The secondary cache does not operate as fast as the primary cache, chiefly because the secondary cache is typically coupled to a processor bus within the processor subsystem; operations occurring over the processor bus generally execute at a slower clock speed than that of the primary cache internal to the processor. Yet data accesses to the secondary cache still occur faster than those to main memory.
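One common way to quantify the benefit of such a hierarchy is the average memory access time, in which each level's misses pay the next level's latency. The C sketch below works this out for hypothetical hit rates and latencies; none of the numbers come from the patent.

#include <stdio.h>

int main(void)
{
    /* Illustrative latencies and hit rates only. */
    double l1_ns  = 5.0,  l1_hit = 0.90;  /* on-chip primary cache  */
    double l2_ns  = 20.0, l2_hit = 0.95;  /* bus-attached secondary */
    double mem_ns = 100.0;                /* main memory            */

    /* Average access time: miss traffic at each level pays the
     * latency of the level below it. */
    double amat = l1_ns
                + (1.0 - l1_hit) * (l2_ns + (1.0 - l2_hit) * mem_ns);

    printf("average access time: %.1f ns\n", amat);
    return 0;
}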
Typically, a random access main memory is logically organized as a matrix of storage locations, wherein the address of each location comprises a first set of bits identifying the row of the location and a second set of bits identifying the column. A cache memory, such as the primary or secondary cache, holds a number of blocks of data, with each block containing data from one or more contiguous main memory locations. Each block is identified by a cache address. The cache address includes memory address bits that identify the corresponding memory locations. These bits are collectively called the index field. In addition to data from main memory, each block also contains the remainder of the memory address bits identifying the specific location in main memory from which the data in the cache block was obtained. These latter bits are collectively called a tag field.
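The decomposition of an address into tag, index, and offset fields is simple bit manipulation, as the C sketch below shows; the field widths (4 offset bits, 6 index bits) are assumed for illustration and are not specified by the patent.

#include <stdint.h>
#include <stdio.h>

/* Assumed geometry: 16-byte blocks (4 offset bits) and 64 cache
 * lines (6 index bits); the remaining high-order bits form the tag. */
#define OFFSET_BITS 4
#define INDEX_BITS  6

int main(void)
{
    uint32_t addr   = 0x0001A2B7;
    uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);
    uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);

    printf("addr 0x%08X -> tag 0x%X, index %u, offset %u\n",
           addr, tag, index, offset);
    return 0;
}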
Each node, including the router, is functionally organized by an operating system comprising a collection of software modules that control the execution of computer programs and manage the transfer of data among its subsystems. The processor subsystem executes the programs by fetching and interpreting instructions and processing network data in accordance with the instructions. Program-generated addresses are called virtual addresses because they refer to the contiguous logical, i.e., virtual, address space referenced by the computer program. In contrast, the physical address space consists of the actual locations where data is stored in main memory. A computer with a virtual memory allows programs to address more memory than is physically available. The operating system manages the virtual memory so that the program operates as if it is loaded into contiguous physical locations. A common process for managing virtual memory is to divide the program and main memory into equal-sized blocks or pages so that each program page fits into a memory page. A system disk participates in the implementation of virtual memory by storing pages of the program not currently in memory. The loading of pages from the disk to host memory is managed by the operating system.
When a program references an address in virtual memory, the processor calculates the corresponding main memory physical address in order to access the data. The processor typically includes memory management hardware to hasten the translation of the virtual address to a physical address. Specifically, for each program there is a page table containing a list of mapping entries, i.e., page table entries (PTEs), which, in turn, contain the physical address of each page of the program. Each PTE also indicates whether the program page is in main memory. If not, the PTE specifies where to find a copy of the page on the disk. Because of its large size, the page table is generally stored in main memory; accordingly, an additional memory access is required to obtain the physical address, which increases the time to perform the address translation.
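A minimal C sketch of this translation step appears below: the virtual page number selects a PTE, whose present bit distinguishes an in-memory frame from a page that must first be fetched from disk. The struct pte layout and the translate helper are illustrative assumptions, not the patent's design.

#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12    /* assumed: 4 KB pages      */
#define NUM_PAGES 256   /* illustrative table size  */

/* A simplified PTE: presence bit plus either a physical frame
 * number or the disk block holding the swapped-out page. */
struct pte {
    int      present;
    uint32_t frame;       /* valid when present == 1 */
    uint32_t disk_block;  /* valid when present == 0 */
};

static struct pte page_table[NUM_PAGES];

/* Translate a virtual address; returns 0 and sets *paddr on success,
 * -1 on a page fault (the page must first be loaded from disk). */
int translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn    = vaddr >> PAGE_BITS;
    uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1);

    if (!page_table[vpn].present)
        return -1;                       /* page fault */
    *paddr = (page_table[vpn].frame << PAGE_BITS) | offset;
    return 0;
}

int main(void)
{
    page_table[3] = (struct pte){ .present = 1, .frame = 0x42 };

    uint32_t paddr;
    if (translate(0x3ABC, &paddr) == 0)
        printf("0x3ABC -> 0x%X\n", paddr);   /* prints 0x42ABC */
    return 0;
}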
To reduce address translation time, another cache dedicated to address translations, called a translation-lookaside buffer (TLB), may be used. The TLB contains entries for storing translations of recently accessed virtual addresses. A TLB entry is similar to a cache entry in that the tag holds portions of the virtual address and the data portion holds a physical page-frame number. When used in conjunction with a cache, the TLB is accessed first with the program-generated virtual address before the resulting physical address is applied to the cache.
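The TLB-first lookup order can be sketched as follows: on a hit the frame number comes straight from the TLB, and only on a miss must the page table in main memory be walked. The fully associative, 8-entry organization below is an assumption for illustration; the patent does not fix a TLB geometry.

#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS   12   /* assumed: 4 KB pages  */
#define TLB_ENTRIES 8    /* illustrative size    */

/* One TLB entry: the tag holds the virtual page number and the
 * data portion holds the physical frame number, as in the text. */
struct tlb_entry {
    int      valid;
    uint32_t vpn;    /* tag  */
    uint32_t frame;  /* data */
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* Consult the TLB first; returns 1 on a hit (sets *frame) and 0 on
 * a miss, in which case the page table must be walked instead. */
int tlb_lookup(uint32_t vaddr, uint32_t *frame)
{
    uint32_t vpn = vaddr >> PAGE_BITS;
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *frame = tlb[i].frame;
            return 1;
        }
    }
    return 0;
}

int main(void)
{
    tlb[0] = (struct tlb_entry){ .valid = 1, .vpn = 3, .frame = 0x42 };

    uint32_t frame;
    if (tlb_lookup(0x3ABC, &frame))
        printf("TLB hit: frame 0x%X\n", frame);
    return 0;
}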
Accordingly, when the processor requires data, the virtual address is passed to the TLB, where it is translated into a physical address that is used to access the cache.
