Memory cache with sequential page indicators

Electrical computers and digital processing systems: memory – Address formation – Address mapping

Details

Type: Reexamination Certificate
Status: active
Patent number: 06526497

ABSTRACT:

BACKGROUND
The invention relates generally to computer system memory architectures and more particularly, but not by way of limitation, to a translation-lookaside buffer incorporating sequential physical memory page indications.
Referring to FIG. 1, conventional computer system 100 providing accelerated graphics port (AGP) capability includes graphics accelerator 102 coupled to graphics device 104, local frame buffer memory 106, and bridge circuit 108. Bridge circuit 108, in turn, provides electrical and functional coupling between graphics accelerator 102, system memory 110, processor 112, and system bus 114. For example, computer system 100 may be a special purpose graphics workstation, a desktop personal computer, or a portable personal computer; graphics device 104 may be a display monitor; processor 112 may be a PENTIUM® processor; system memory 110 may be synchronous dynamic random access memory (SDRAM); and system bus 114 may operate in conformance with the Peripheral Component Interconnect (PCI) specification.
In accordance with the AGP specification, graphics accelerator 102 may use both local frame buffer 106 and system memory 110 as primary graphics memory. (See the Accelerated Graphics Port Interface Specification, revision 2.0, 1998, available from Intel Corporation.) As a consequence, AGP bus 116 operations tend to be short, random accesses. Because graphics accelerator 102 may generate direct references into system memory 110, a contiguous view of system memory is needed. However, since system memory 110 is dynamically allocated (typically in 4 kilobyte pages), it is generally not possible to provide graphics accelerator 102 with a single continuous memory region within system memory 110. Thus, it is necessary to provide an address remapping mechanism which ensures graphics accelerator 102 will have a contiguous view of graphics data structures dynamically allocated and stored in system memory 110.
Address remapping is accomplished through Graphics Address Remapping Table (GART) 118. Referring now to FIG. 2, a contiguous range of addresses 200 (referred to as logical addresses) is mapped 202 by GART 118 to a series of typically discontinuous pages in physical memory 110 (referred to as physical addresses). Each open page of physical memory within GART range 200 has a GART entry (referred to as a page table entry).
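
As a rough illustration of this kind of table lookup (the sketch is not taken from the patent; the names gart_entry, gart_translate, and aperture_base are assumptions made for the example), a logical address in the GART aperture can be split into a 4 kilobyte page index and a page offset, with the index selecting the page table entry that supplies the physical page base:

#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT 12u                     /* 4 kilobyte pages */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1u)

/* One GART page table entry: the physical base address of a 4 KB page.
   (Hypothetical layout; the patent does not define the fields.) */
struct gart_entry {
    uint32_t phys_page_base;
};

/* Translate a logical (aperture) address to a physical address by using
   the logical page index to select a GART entry and re-attaching the
   page offset. Returns 0 if the address is outside the mapped range. */
static uint32_t gart_translate(const struct gart_entry *gart, size_t n_entries,
                               uint32_t logical_addr, uint32_t aperture_base)
{
    uint32_t rel    = logical_addr - aperture_base;
    uint32_t index  = rel >> PAGE_SHIFT;
    uint32_t offset = rel & PAGE_MASK;

    if (index >= n_entries)
        return 0;
    return gart[index].phys_page_base + offset;
}

For instance, gart_translate(gart, 32, aperture_base + 0x1345, aperture_base) would return byte offset 0x345 within whichever physical page is recorded in entry 1 of the table.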
To speed memory access operations, bridge circuit 108 commonly caches up to a specified maximum number (e.g., 32) of GART page table entries in translation-lookaside buffer 120 (TLB, see FIG. 1). Once TLB 120 is fully populated, if graphics accelerator 102 attempts to access a page not identified by a TLB entry, a cache miss occurs. When a cache miss occurs, the page table entry in GART 118 that provides the necessary address remapping information is identified, retrieved by bridge circuit 108, used to obtain the requested data, and stored in place of a selected entry in TLB 120. The specific entry in TLB 120 to replace may be determined by any desired replacement algorithm; for example, least recently used or working set cache replacement algorithms may be used. Each TLB cache miss may cause graphics accelerator 102 to temporarily slow or stop processing. Thus, it would be beneficial to provide a mechanism to reduce the number of TLB cache miss operations.
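
A minimal sketch of such a cache is shown below. It assumes a 32-entry fully associative TLB, a timestamp-based least recently used policy, and a caller-supplied gart_fetch routine standing in for the bridge circuit's GART read; none of these specifics come from the patent.

#include <stdint.h>

#define TLB_ENTRIES 32                     /* e.g., 32 cached GART entries */

struct tlb_entry {
    uint32_t logical_page;                 /* logical page number (tag) */
    uint32_t phys_page_base;               /* cached remapping          */
    uint32_t last_used;                    /* timestamp for LRU policy  */
    int      valid;
};

static struct tlb_entry tlb[TLB_ENTRIES];
static uint32_t tick;

/* Look up a logical page in the TLB. On a miss, fetch the remapping from
   the GART via the supplied callback and replace the least recently used
   (or an invalid) slot. */
static uint32_t tlb_lookup(uint32_t logical_page,
                           uint32_t (*gart_fetch)(uint32_t))
{
    int lru = 0;
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].logical_page == logical_page) {
            tlb[i].last_used = ++tick;     /* hit: refresh recency */
            return tlb[i].phys_page_base;
        }
        if (!tlb[i].valid || tlb[i].last_used < tlb[lru].last_used)
            lru = i;
    }
    /* miss: retrieve the page table entry and cache it in the LRU slot */
    tlb[lru].logical_page   = logical_page;
    tlb[lru].phys_page_base = gart_fetch(logical_page);
    tlb[lru].last_used      = ++tick;
    tlb[lru].valid          = 1;
    return tlb[lru].phys_page_base;
}

Reducing how often a lookup has to fall through to the GART fetch is exactly the motivation for the sequential page indicators described in the summary below.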
SUMMARY
In general, according to one embodiment, a method of performing address translation includes storing a portion of an address mapping table, storing a first value in the portion to indicate a base address of a first allocated page of memory, storing a second value in the portion to indicate zero or more allocated pages of memory that are sequential to and before the first page of memory, and storing a third value in the portion to indicate zero or more allocated pages of memory that are sequential to and after the first page of memory. An address is translated based on the portion of the address mapping table.
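
A minimal sketch of how a cached entry carrying these three values might serve neighboring requests is given below. The seq_pte structure, its field names, and the assumption that logical pages adjacent to the cached page map to the correspondingly sequential physical pages are illustrative only and not taken from the patent.

#include <stdint.h>

/* Hypothetical cached entry holding the three values described above:
   a base physical page plus counts of sequentially allocated pages
   located before and after it. */
struct seq_pte {
    uint32_t logical_page;  /* logical page the entry was loaded for        */
    uint32_t phys_page;     /* first value: physical page of the base page  */
    uint32_t seq_before;    /* second value: sequential pages before base   */
    uint32_t seq_after;     /* third value: sequential pages after base     */
};

/* If the requested logical page falls within the sequential run the entry
   describes, derive its physical page directly instead of walking the
   full address mapping table. Returns 1 on success, 0 on a miss. */
static int seq_translate(const struct seq_pte *e, uint32_t logical_page,
                         uint32_t *phys_page_out)
{
    int32_t delta = (int32_t)(logical_page - e->logical_page);

    if (delta < -(int32_t)e->seq_before || delta > (int32_t)e->seq_after)
        return 0;                          /* outside the run: ordinary miss */

    *phys_page_out = (uint32_t)((int32_t)e->phys_page + delta);
    return 1;                              /* served from the single entry   */
}

With seq_before = 0 and seq_after = 3, for instance, one cached entry covers four consecutive pages, so accesses that would each miss in a conventional TLB can all be satisfied from the single entry.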
Other or alternative features will become apparent from the following description, from the drawings, or from the claims.


REFERENCES:
patent: 5598553 (1997-01-01), Richter et al.
patent: 5940089 (1999-08-01), Dilliplane et al.
patent: 6069638 (2000-05-01), Porterfield
patent: 6157398 (2000-12-01), Jeddeloh
