Method and apparatus for processing memory accesses...

Electrical computers and digital processing systems: memory – Address formation – Address mapping

Reexamination Certificate


Details

C712S244000


active

06301648


TECHNICAL FIELD OF THE INVENTION
This invention relates generally to computer system architectures and, more particularly, to memory access processes utilizing a translation look-aside table (TLB).
BACKGROUND OF THE INVENTION
FIG. 1 illustrates a schematic block diagram of a portion of a computing system. The computing system includes a central processing unit (CPU), cache memory, a memory manager (e.g., a chipset), and memory. The CPU is shown to include a processor and a translation look-aside table (“TLB”). The system of FIG. 1 further illustrates cache hardware that interfaces with the cache memory. Typically, the cache hardware is included in the central processing unit.
Referring to FIGS. 1 and 2 simultaneously, the numbers included in parentheticals of FIG. 1 correspond to processing steps of FIG. 2. FIG. 2 illustrates a logic diagram of a read memory access request. The read memory access request begins when the processor generates a linear address (1). As is known, the central processing unit utilizes linear addressing, or virtual addressing, to internally process memory access requests. The memory, however, stores data utilizing physical addresses; thus, the linear addresses need to be translated to physical addresses for the memory.
Having generated the linear address, the central processing unit determines whether a TLB entry exists for the linear address (2). If page translation techniques are used, a TLB entry will typically include a valid indicator, a dirty-bit indicator, a page directory entry, a page table entry, and the most significant bits of the physical address. These entries are derived from the linear address utilizing known paging techniques, which essentially use a first portion of the linear address to obtain the page directory entry, and a second portion of the linear address, along with the page directory entry, to obtain the page table entry. Such paging techniques then use the page table entry, the page directory entry, and the most significant bits of the physical address to create the physical address.
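
For illustration only, and not as the patent's own implementation, the two-level translation just described can be sketched in C. The 10/10/12-bit split of the linear address, the structure layout, and every name below (tlb_entry_t, translate_linear, page_directory) are assumptions typical of 32-bit paging rather than details taken from the patent.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical TLB entry mirroring the fields described above. */
    typedef struct {
        bool     valid;        /* valid indicator                               */
        bool     dirty;        /* dirty-bit indicator                           */
        uint32_t pde;          /* page directory entry                          */
        uint32_t pte;          /* page table entry                              */
        uint32_t phys_page;    /* most significant bits of the physical address */
        uint32_t linear_page;  /* tag: which linear page this entry translates  */
    } tlb_entry_t;

    /* Two-level translation: a first portion of the linear address selects the
     * page directory entry, a second portion (together with that entry) selects
     * the page table entry, and the page frame from the page table entry is
     * combined with the page offset to form the physical address. */
    static uint32_t translate_linear(uint32_t linear, const uint32_t *page_directory)
    {
        uint32_t dir_index   = (linear >> 22) & 0x3FFu;  /* first portion  */
        uint32_t table_index = (linear >> 12) & 0x3FFu;  /* second portion */
        uint32_t offset      =  linear        & 0xFFFu;  /* page offset    */

        uint32_t pde = page_directory[dir_index];
        const uint32_t *page_table = (const uint32_t *)(uintptr_t)(pde & ~0xFFFu);
        uint32_t pte = page_table[table_index];

        return (pte & ~0xFFFu) | offset;   /* page frame bits plus offset */
    }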
If a TLB entry does not exist, the central processing unit creates a TLB entry (3). Having generated a TLB entry, the central processing unit then repeats step 2 by determining whether a TLB entry exists for the linear address. Having just created one, the inquiry at step 2 is positive, and thus the process proceeds to step 4. At step 4, the physical address is obtained from the TLB entry and used to determine whether the corresponding data is cached, which is done at step 5. If so, the cache hardware causes the data to be retrieved from the cache, which is shown at step 6. If, however, the data is not cached, the physical address is provided to the memory manager. The memory manager interprets the physical address to determine whether the physical address corresponds to a location in the memory, requires an AGP translation, or is within the PCI memory space. If the physical address is within the PCI memory space, the request is provided to the PCI bus. If the physical address is in the AGP memory space, an AGP translation is performed to obtain a corresponding physical address within the memory.
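
The memory manager's routing decision can be sketched in a similar spirit. The address-range constants, the GART-style remapping table, and the function names (route_physical_address, agp_translate) are hypothetical; the patent does not specify them.

    #include <stdint.h>

    /* Hypothetical address-space boundaries, for illustration only. */
    #define MAIN_MEMORY_TOP    0x08000000u  /* e.g., 128 MB of system memory */
    #define AGP_APERTURE_BASE  0xF0000000u  /* start of the AGP aperture     */
    #define AGP_APERTURE_TOP   0xF4000000u  /* end of the AGP aperture       */

    typedef enum { TARGET_MEMORY, TARGET_AGP, TARGET_PCI } target_t;

    /* The memory manager interprets the physical address to decide whether it
     * maps directly to memory, needs an AGP translation, or lies in the PCI
     * memory space and should be forwarded onto the PCI bus. */
    static target_t route_physical_address(uint32_t phys)
    {
        if (phys < MAIN_MEMORY_TOP)
            return TARGET_MEMORY;
        if (phys >= AGP_APERTURE_BASE && phys < AGP_APERTURE_TOP)
            return TARGET_AGP;   /* remapped through the memory manager's own TLB */
        return TARGET_PCI;
    }

    /* AGP translation: a GART-like table maps an aperture page to a physical
     * page in system memory (hypothetical table layout). */
    static uint32_t agp_translate(uint32_t phys, const uint32_t *gart)
    {
        uint32_t page   = (phys - AGP_APERTURE_BASE) >> 12;
        uint32_t offset =  phys & 0xFFFu;
        return (gart[page] & ~0xFFFu) | offset;
    }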
When the physical address identifies a location within the memory, the memory manager retrieves the data from the identified location, which occurs at step 8. Typically, the memory manager retrieves a full line of data (e.g., 32 bits, 64 bits, or 128 bits). The memory manager then coordinates sending the data, at step 10, to the processor. In addition, a determination is made by the cache hardware at step 9 as to whether the data is to be cached. If not, the process is complete for this read memory access request. If, however, the data is to be cached, the process proceeds to step 11, where the data is cached.
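
The fill decision at steps 9 and 11 might be sketched as follows, using a hypothetical direct-mapped cache; the line size, the cacheability test, and all identifiers are assumptions made for this example only.

    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    #define LINE_BYTES 16u   /* e.g., a 128-bit line; 32- or 64-bit lines work the same way */
    #define NUM_LINES  1024u

    /* Hypothetical direct-mapped cache used only for this sketch. */
    static struct { bool valid; uint32_t tag; uint8_t data[LINE_BYTES]; } cache[NUM_LINES];

    /* Hypothetical cacheability test: treat only low memory as cache-enabled. */
    static bool cacheable(uint32_t phys) { return phys < 0x08000000u; }

    /* After the memory manager retrieves a full line (step 8) and the data is
     * sent to the processor (step 10), the cache hardware decides whether to
     * keep a copy (step 9) and, if so, caches it (step 11). */
    static void handle_line_fill(uint32_t phys, const uint8_t *line, uint8_t *dest)
    {
        memcpy(dest, line, LINE_BYTES);           /* step 10: data to the processor */

        if (cacheable(phys)) {                    /* step 9: should this be cached? */
            uint32_t idx = (phys / LINE_BYTES) % NUM_LINES;
            cache[idx].valid = true;              /* step 11: fill the cache line   */
            cache[idx].tag   = phys / (LINE_BYTES * NUM_LINES);
            memcpy(cache[idx].data, line, LINE_BYTES);
        }
        /* Otherwise the read memory access request is complete. */
    }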
A write memory access request is processed in a manner similar to the described read memory access request; however, the data flow is in the opposite direction. In addition, write requests may write to the cache memory and then subsequently be flushed to main memory.
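
As a rough sketch of that write-back behavior, again with a hypothetical direct-mapped cache and a memory_write_line helper standing in for the memory manager's write path (neither is specified by the patent):

    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    #define LINE_BYTES 16u
    #define NUM_LINES  1024u

    typedef struct { bool valid, dirty; uint32_t tag; uint8_t data[LINE_BYTES]; } line_t;
    static line_t wcache[NUM_LINES];

    /* Stand-in for the memory manager's write path (assumed, not specified). */
    void memory_write_line(uint32_t phys, const uint8_t *line);

    /* A write travels in the opposite direction of a read: the line is written
     * into the cache and marked dirty, and reaches main memory only when it is
     * flushed later (on eviction or an explicit flush). */
    static void write_line(uint32_t phys, const uint8_t *newdata)
    {
        uint32_t idx = (phys / LINE_BYTES) % NUM_LINES;
        uint32_t tag = phys / (LINE_BYTES * NUM_LINES);
        line_t *l = &wcache[idx];

        if (l->valid && l->dirty && l->tag != tag)     /* evict a different dirty line */
            memory_write_line((l->tag * NUM_LINES + idx) * LINE_BYTES, l->data);

        memcpy(l->data, newdata, LINE_BYTES);          /* write lands in the cache...  */
        l->valid = true;
        l->dirty = true;                               /* ...to be flushed to memory   */
        l->tag   = tag;
    }

    /* Explicit flush: push every dirty line out to main memory. */
    static void flush_all(void)
    {
        for (uint32_t i = 0; i < NUM_LINES; i++)
            if (wcache[i].valid && wcache[i].dirty) {
                memory_write_line((wcache[i].tag * NUM_LINES + i) * LINE_BYTES,
                                  wcache[i].data);
                wcache[i].dirty = false;
            }
    }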
While the system of FIG. 1 performs the memory access request as illustrated in FIG. 2, the processing of memory access requests is quite limiting. For instance, the further translations to AGP memory space are done externally to the central processing unit, thus adding extra processing steps and memory requirements, since the memory manager includes its own TLB for performing the AGP translations. In addition, other memory access options, such as restricted memory, cache enabled/disabled, read-only memory, device emulation, and redefining address space, require separate and additional processing.
Therefore, a need exists for a method and apparatus for enhancing memory access requests to perform a plurality of related memory access functions in addition to the basic memory access functions.


