Cache or TLB using a working and auxiliary memory with...

Electrical computers and digital processing systems: memory – Address formation – Address mapping


Details

C711S117000, C711S135000, C711S144000, C711S200000, C711S205000

Reexamination Certificate

active

06260130

ABSTRACT:

The invention relates to a memory device for storing data, in particular to a cache or a translation lookaside buffer (TLB). The innovation is a flush buffer, in particular a flush buffer operating in parallel, which allows larger and smaller portions to be flushed without having to flush the entire TLB or cache.
Modern processors use TLBs for a fast translation of virtual addresses into physical addresses. Typically, the TLB and the cache are provided on the processor chip. In physically indexed caches they are arranged in series (FIG. 4); in virtually indexed and physically tagged caches they are arranged in parallel (FIG. 5).
Here, the TLB is a special cache for address translation. Fully associative TLBs are also employed; as a rule, however, TLBs are n-way set-associative.
FIG. 6 illustrates a direct mapped TLB, i.e., a 1-way set-associative TLB. The higher-value part v′ of the virtual address v is used to index a line (memory entry) of the TLB. Located there are the virtual page address v′_i associated with this entry, the physical page address r′_i, as well as status bits not shown in FIG. 6, which, among other things, indicate whether the entry is valid at all. If it is valid and v′_i matches the present page address v′, there is a TLB hit and the physical address is assembled from the lower-value part v″ of the virtual address and the physical page address r′_i supplied by the TLB.
An n-way set-associative TLB differs from a direct mapped TLB in that a line holds a plurality of entries that are indexed simultaneously and checked in parallel against v′. There is a hit if one of these entries matches; its r′_i and status bits are then used to form the physical address r and to validate the access.
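To make the lookup path concrete, the following C sketch models a direct-mapped TLB along the lines of FIG. 6. The page size (PAGE_SHIFT), the number of lines (TLB_LINES) and all identifiers are illustrative assumptions rather than part of the patent; an n-way variant would simply hold n such entries per line and compare them in parallel.

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12      /* assumed 4 KiB pages */
    #define TLB_LINES  64      /* assumed number of TLB lines */

    /* One direct-mapped TLB line: the virtual page tag v'_i, the physical
       page address r'_i and a valid bit (further status bits omitted). */
    typedef struct {
        uint64_t vpn;          /* v'_i */
        uint64_t pfn;          /* r'_i */
        bool     valid;
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_LINES];

    /* Direct-mapped lookup: v' selects a line, the stored v'_i is compared
       with v'; on a hit the physical address r is assembled from r'_i and
       the page offset v''. */
    bool tlb_lookup(uint64_t vaddr, uint64_t *paddr)
    {
        uint64_t vpn    = vaddr >> PAGE_SHIFT;                /* v'  */
        uint64_t offset = vaddr & ((1ULL << PAGE_SHIFT) - 1); /* v'' */
        tlb_entry_t *e  = &tlb[vpn % TLB_LINES];

        if (e->valid && e->vpn == vpn) {
            *paddr = (e->pfn << PAGE_SHIFT) | offset;
            return true;       /* TLB hit */
        }
        return false;          /* TLB miss: the page tables must be walked */
    }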
TLBs shorten the process of address translation. When the defining address mapping, i.e., one or a plurality of page table entries, is modified, consistency requires the nullification (“flushing”) or a corresponding alteration of those TLB entries that are affected by the modification.
Should the modification of the address map refer to only a single page, it is sufficient to flush or alter the respective entry, provided it is present in the TLB. Since this concerns at most one TLB entry, the effort is comparable to a normal translation step of the TLB: the TLB is addressed using the virtual page address, and in case of a hit, the entry is invalidated or modified.
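Continuing the direct-mapped sketch above (same assumed types and constants), such a single-page flush takes the same form as a normal translation step:

    /* Single-page flush: address the TLB with the virtual page address
       and, in case of a hit, invalidate (or update) the entry. */
    void tlb_flush_page(uint64_t vaddr)
    {
        uint64_t vpn   = vaddr >> PAGE_SHIFT;
        tlb_entry_t *e = &tlb[vpn % TLB_LINES];

        if (e->valid && e->vpn == vpn)
            e->valid = false;
    }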
With modifications that concern larger regions, this method soon becomes too complex: for a virtual region of m pages, m steps would be needed. In fully associative TLBs, the problem could be solved efficiently by a limited parallel flushing of all TLB entries; unfortunately, the hardware requirements for that are generally too demanding.
With n-way set-associative TLBs, scanning the entire TLB is expensive and does not scale: k steps are needed for a TLB with k entries. Therefore, in such cases, the entire TLB is flushed, which can be done in one step. With increasing TLB size, this becomes ever less attractive because of the TLB misses induced thereby.
It is the object of the invention to provide a memory device, in particular an n-way set-associative cache or TLB, which allows the flushing of larger address space regions (virtual regions) in an efficient manner.
In order to solve this object, the invention proposes a memory device with the features of claim 1. The features of advantageous embodiments of the invention are mentioned in the respective dependent claims.
The memory device of the present invention comprises a useful memory containing a plurality of memory entries addressable by means of addresses. The useful memory can be, for example, a cache memory or a translation lookaside buffer, which may be seen as a special cache memory for address translation. Specifically, the useful memory of the present invention has substantially fewer memory entries than the address space has addresses for addressing the useful memory. Each memory entry is provided with a data field in which one or a plurality of data words may be stored. In addition to the data field, each memory entry has a status field that may be transferred into at least one restricting status for restricting a write/read access to the data field and into a non-restricting status for not restricting a write/read access to the data field. Further, each memory entry has further fields, in particular a tag field and a valid/invalid field. When addressing the useful memory, the address or parts thereof are compared with the contents of the tag fields of all addressed entries of the useful memory. In case of a match with a tag field, a write/read access to the data field is permitted if the entry is marked as valid by the valid/invalid field and, in addition, either the status field is in a non-restricting status, or the status field is in a restricting status and the address used in indexing the useful memory is not part of at least one address space region stored in the auxiliary memory.
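As a rough illustration only, the following C sketch shows one possible layout of such a memory entry; the field names and widths are assumptions and not taken from the claims.

    #include <stdint.h>
    #include <stdbool.h>

    /* One useful-memory entry: a tag field, a data field, the valid/invalid
       field and the additional status field that can be put into a
       restricting or a non-restricting status. */
    typedef struct {
        uint64_t tag;          /* compared against (part of) the address */
        uint64_t data;         /* one or a plurality of data words       */
        bool     valid;        /* valid/invalid field                    */
        bool     restricting;  /* status field: true = restricting       */
    } entry_t;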
Moreover, the present memory device comprises an auxiliary memory containing one or a plurality of memory entries, into which data may be stored that describe a region of the address space. Using this auxiliary memory (also referred to as flush lookaside buffer, abbreviated FLB), the useful memory may be flushed efficiently in a region-selective manner, namely in a single operation step.
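One conceivable encoding, used here purely for illustration, stores each address space region in the FLB as a base/limit pair; the sketch shows how the signal indicating that an address lies within a stored region (called Fhit below) could then be derived. The number of FLB entries and all names are assumptions.

    #include <stdint.h>
    #include <stdbool.h>

    #define FLB_ENTRIES 4            /* assumed number of FLB entries */

    /* One auxiliary-memory (FLB) entry describing an address space region. */
    typedef struct {
        uint64_t base;
        uint64_t limit;              /* region covers [base, limit) */
        bool     used;
    } flb_entry_t;

    static flb_entry_t flb[FLB_ENTRIES];

    /* Fhit: true if the address lies within at least one stored region. */
    bool flb_hit(uint64_t addr)
    {
        for (int i = 0; i < FLB_ENTRIES; i++)
            if (flb[i].used && addr >= flb[i].base && addr < flb[i].limit)
                return true;
        return false;
    }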
The method for a region-selective flushing of the useful memory is as follows: upon storing an address space region in the auxiliary memory, the status fields of all useful memory entries are transferred into a restricting status. Upon a subsequent addressing of a useful memory entry, no write/read access to the data field of this useful memory entry is permitted if the address is within at least one address space region stored in the auxiliary memory and the status field is in a restricting status. In order to determine this, it is checked whether the address used for addressing the useful memory entry is comprised by at least one address space region stored in the auxiliary memory. If this is the case, the auxiliary memory outputs a signal (also referred to hereinafter as Fhit) indicating this condition. In other words, after a flushing of the useful memory, access may still be had to the data fields of all useful memory entries that are addressable using an address not comprised by at least one address space region stored in the auxiliary memory. When implementing the useful memory as a TLB or cache, time-consuming table walks or main memory accesses for the useful memory entries not concerned by the selective flushing may be avoided. The hardware and software effort is comparatively low, since all that must be provided besides the useful memory is an auxiliary memory, and the memory entries in the useful memory merely have to be extended by the status field, these status fields being such that, upon a central command, they can be simultaneously transferred into a restricting status and selectively transferred into a non-restricting status.
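Combining the entry layout and the FLB sketches above, the region-selective flush and the subsequent access check might be modeled as follows; in real hardware the status fields would be transferred into the restricting status in a single step rather than in a software loop, and all names remain illustrative.

    #include <stddef.h>

    /* Region-selective flush: store the region in the FLB and transfer the
       status fields of all useful-memory entries into the restricting status. */
    void flush_region(uint64_t base, uint64_t limit, entry_t *mem, size_t n)
    {
        for (int i = 0; i < FLB_ENTRIES; i++) {
            if (!flb[i].used) {
                flb[i] = (flb_entry_t){ .base = base, .limit = limit, .used = true };
                break;
            }
        }
        for (size_t j = 0; j < n; j++)
            mem[j].restricting = true;   /* done in one step in hardware */
    }

    /* Access check: the data field may be accessed if the entry is valid,
       the tag matches, and either the status is non-restricting or the
       address is not covered by any region in the FLB (Fhit == false). */
    bool access_allowed(const entry_t *e, uint64_t tag, uint64_t addr)
    {
        return e->valid && e->tag == tag &&
               (!e->restricting || !flb_hit(addr));
    }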
Advantageously, when a useful memory entry is stored and thereby initialized (as a rule, data are written into the useful memory entry, which is then tagged as valid), the status field of the respective useful memory entry is transferred into a restricting status exactly if the address with which the useful memory entry has been addressed is not part of at least one address space region stored in the auxiliary memory. If, however, the address is part of at least one address space region stored in the auxiliary memory, the status field of the addressed useful memory entry is transferred into a non-restricting status.
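In the same illustrative sketch, that insertion rule reads:

    /* Storing (initializing) a useful-memory entry: the new entry receives
       the restricting status exactly if its address is not covered by any
       region in the FLB, and the non-restricting status otherwise. */
    void store_entry(entry_t *e, uint64_t tag, uint64_t data, uint64_t addr)
    {
        e->tag         = tag;
        e->data        = data;
        e->valid       = true;
        e->restricting = !flb_hit(addr);
    }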
As an alternative, the status field of an addressed useful memory entry may always be transferred into a non-restricting status upon a storing access, independent of whether the address used for the addressing is part of at least one address space region stored in the auxiliary memory.
