Memory cache with sequential page indicators

Classification: Electrical computers and digital processing systems: memory – Address formation – Address mapping
Type: Reexamination Certificate
Filed: 1999-08-26
Issued: 2001-12-11
Examiner: Ellis, Kevin L. (Department: 2185)
Status: active
Patent number: 06330654
BACKGROUND
The invention relates generally to computer system memory architectures and more particularly, but not by way of limitation, to a translation-lookaside buffer incorporating sequential physical memory page indications.
Referring to FIG. 1, conventional computer system 100 providing accelerated graphics port (AGP) capability includes graphics accelerator 102 coupled to graphics device 104, local frame buffer memory 106, and bridge circuit 108. Bridge circuit 108, in turn, provides electrical and functional coupling between graphics accelerator 102, system memory 110, processor 112, and system bus 114. For example, computer system 100 may be a special-purpose graphics workstation, a desktop personal computer, or a portable personal computer; graphics device 104 may be a display monitor; processor 112 may be a PENTIUM® processor; system memory 110 may be synchronous dynamic random access memory (SDRAM); and system bus 114 may operate in conformance with the Peripheral Component Interconnect (PCI) specification.
In accordance with the AGP specification, graphics accelerator 102 may use both local frame buffer 106 and system memory 110 as primary graphics memory. (See the Accelerated Graphics Port Interface Specification, revision 2.0, 1998, available from Intel Corporation.) As a consequence, AGP bus 116 operations tend to be short, random accesses. Because graphics accelerator 102 may generate direct references into system memory 110, a contiguous view of system memory is needed. However, since system memory 110 is dynamically allocated (typically in 4-kilobyte pages), it is generally not possible to provide graphics accelerator 102 with a single contiguous memory region within system memory 110. Thus, it is necessary to provide an address remapping mechanism that ensures graphics accelerator 102 will have a contiguous view of graphics data structures dynamically allocated and stored in system memory 110.
Address remapping is accomplished through Graphics Address Remapping Table (GART) 118. Referring now to FIG. 2, a contiguous range of addresses 200 (referred to as logical addresses) is mapped 202 by GART 118 to a series of typically discontinuous pages in physical memory 110 (referred to as physical addresses). Each open page of physical memory within GART range 200 has a GART entry (referred to as a page table entry).
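The mapping of FIG. 2 can be sketched as a simple table from logical page numbers to typically discontiguous physical page numbers. This is an illustrative sketch only; the table contents, page size split, and function name are ours, not taken from the patent.

```python
PAGE_SIZE = 4 * 1024  # 4-kilobyte pages, as described above

# Hypothetical GART: consecutive logical pages map to scattered physical pages.
gart = {0: 17, 1: 4, 2: 91, 3: 12}

def translate(logical_addr: int) -> int:
    """Translate a logical address to a physical address via the GART."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    return gart[page] * PAGE_SIZE + offset

# Logical page 1 resides at physical page 4, so the low 12 bits pass through:
translate(1 * PAGE_SIZE + 0x10)  # -> 4 * 4096 + 0x10 = 16400
```

Note that only the page number is remapped; the offset within the page is carried through unchanged, which is what lets the accelerator see a contiguous range while the physical pages remain scattered.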
To speed memory access operations, bridge circuit 108 commonly caches up to a specified maximum number (e.g., 32) of GART page table entries in translation-lookaside buffer 120 (TLB, see FIG. 1). Once TLB 120 is fully populated, if graphics accelerator 102 attempts to access a page not identified by a TLB entry, a cache miss occurs. When a cache miss occurs, the page table entry in GART 118 providing the necessary address remapping information is identified, retrieved by bridge circuit 108, used to obtain the requested data, and installed in place of a selected entry in TLB 120. The specific page table entry in TLB 120 to replace may be determined by any desired replacement algorithm; for example, least recently used or working set cache replacement algorithms may be used. Each TLB cache miss may cause graphics accelerator 102 to temporarily slow or stop processing. Thus, it would be beneficial to provide a mechanism to reduce the number of TLB cache miss operations.
SUMMARY
In one embodiment, the invention provides a memory (having a plurality of page table entry (PTE) data structures) for storing address translation data. Each PTE data structure includes a base address field to identify an allocated page of memory, a prior page field to identify zero or more allocated pages of memory that are sequential to and before that page of memory identified by the base address field, and a subsequent page field to identify zero or more allocated pages of memory that are sequential to and after that page identified by the base address field. In another embodiment, the invention provides a computer system bridge circuit incorporating an address translation memory as described above. In yet another embodiment, the invention provides a computer system incorporating an address translation memory as described above.
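The three-field PTE described in the summary can be sketched as follows. The field names, the `covers` helper, and the particular counts are our illustrative assumptions; the sketch only shows the idea that one cached entry, annotated with how many physically sequential pages precede and follow it, can translate logically adjacent pages without a further miss.

```python
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    """Sketch of the PTE data structure described in the summary."""
    base_address: int      # allocated physical page identified by this entry
    prior_pages: int       # allocated pages sequential to and before base_address
    subsequent_pages: int  # allocated pages sequential to and after base_address

def covers(pte: PageTableEntry, base_logical_page: int, wanted_logical_page: int) -> bool:
    """True if a cached PTE for base_logical_page can also translate
    wanted_logical_page, because the pages are physically sequential."""
    delta = wanted_logical_page - base_logical_page
    return -pte.prior_pages <= delta <= pte.subsequent_pages

# Entry for logical page 10: one sequential page before it, two after it.
pte = PageTableEntry(base_address=40, prior_pages=1, subsequent_pages=2)
covers(pte, 10, 12)  # True: within the run of sequential pages
covers(pte, 10, 13)  # False: beyond the run, so a TLB miss would occur
```

Under this sketch, a hit on the cached entry for logical page 10 would translate logical page 12 to physical page 42 directly, where a conventional single-page PTE would force a miss.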
Inventors: LaBerge, Paul A.; Larson, Douglas A.
Examiner: Ellis, Kevin L.
Assignee: Micron Technology, Inc.
Attorney/Agent: Trop, Pruner & Hu, P.C.