Method and apparatus for reducing latency in a memory system

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Status: active

Patent number: 06587920

FIELD OF THE INVENTION
The invention generally relates to a method for transferring data between a central processing unit (CPU) and main memory in a computer system. More specifically, the invention describes various implementations that minimize the latency of main memory accesses by using a latency-hiding mechanism.
BACKGROUND OF THE INVENTION
Microprocessor speed and computing power have increased continuously due to advances in technology. Realizing this computing power, however, depends on transferring data and instructions between the microprocessor and main memory at the processor's speed. Unfortunately, current memory systems cannot supply data to the processor at the required rate.
The processor must therefore wait for the slower memory system, inserting wait states that cause it to run at a much lower speed than its rated speed. This degrades overall system performance. The problem is worsening because of the growing gap between processor speeds and memory speeds, and it may soon reach the point where further performance improvements in the processor produce no significant overall system performance gain. The memory system thus becomes the limiting factor in system performance.
According to Amdahl's law, the performance improvement of a system is limited by the portion of the system that cannot be improved. The following example illustrates this reasoning: if 50% of a processor's time is spent accessing memory and the other 50% is spent in internal computation cycles, then a tenfold increase in processor speed increases system performance by only 1.82 times. Amdahl's law states that the speedup gained by enhancing a portion of a computer system is given by the formula
$$\text{Speedup} = \frac{1}{(1 - \text{Fraction}_\text{enhanced}) + \dfrac{\text{Fraction}_\text{enhanced}}{\text{Speedup}_\text{enhanced}}}$$
where Fraction_enhanced is the proportion of time the enhancement is used, and Speedup_enhanced is the speedup of the enhanced portion compared to the original performance of that portion.
Thus, in the example, since the processor is occupied with internal computation only 50% of the time, the processor's enhanced speed can only be taken advantage of 50% of the time.
Amdahl's Law, using the above numbers, then becomes,
$$\text{Speedup} = \frac{1}{(1 - 0.5) + \dfrac{0.5}{10}} = 1.82$$
This is because the enhancement can be exploited only 50% of the time, while the enhanced processor is ten times the speed of the original. Evaluating the formula yields an overall performance improvement of 1.818 times the original system performance.
If the enhanced processor is 100 times the speed of the original processor, Amdahl's Law becomes
$$\text{Speedup} = \frac{1}{(1 - 0.5) + \dfrac{0.5}{100}} = 1.98$$
This means that system performance is now limited almost entirely by the 50% of time spent on data accesses to and from memory. Clearly, the benefit of a faster processor declines as its speed grows relative to the speed of the main memory system.
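As a quick check on the arithmetic, the following Python sketch evaluates Amdahl's law for both worked cases (the function name amdahl_speedup is introduced here for illustration only):

```python
def amdahl_speedup(fraction_enhanced: float, speedup_enhanced: float) -> float:
    """Overall system speedup predicted by Amdahl's law."""
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced)

# The two cases worked above: a 10x and a 100x faster processor, each
# usable only during the 50% of time spent in internal computation.
print(round(amdahl_speedup(0.5, 10), 2))   # 1.82
print(round(amdahl_speedup(0.5, 100), 2))  # 1.98
```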
The well-known cache memory system has been used to address this problem by moving the data most likely to be accessed by the processor into a fast cache memory that can match the processor speed. Various approaches to creating a cache hierarchy consisting of a first-level cache (L1 cache) and a second-level cache (L2 cache) have been proposed. Ideally, the data most likely to be accessed by the processor should be stored in the fastest cache level. Typically, both Level 1 (L1) and Level 2 (L2) caches are implemented with static random access memory (SRAM) technology due to its speed advantage over dynamic random access memory (DRAM). The most crucial aspect of cache design, and the problem on which cache design has focused, is ensuring that the data next required by the processor has a high probability of being in the cache system. Two main principles operate to increase the probability of finding this required data in the cache, that is, of having a cache "hit": temporal locality and spatial locality. Temporal locality refers to the concept that data accessed by the processor has a high probability of being accessed again soon in typical processor operation. Spatial locality refers to the concept that the data next required by the processor has a high probability of being located next to the currently accessed data. A cache hierarchy therefore takes advantage of these two concepts by transferring from main memory both the data currently being accessed and data physically nearby.
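To make the spatial-locality point concrete, the following minimal Python sketch assumes a 64-byte cache line and a sequential access pattern (both illustrative choices, not figures from the patent). Because each miss brings in an entire line, a sequential scan misses only once per line:

```python
LINE_SIZE = 64          # bytes fetched from main memory on each miss (assumed)
N_ACCESSES = 1024       # sequential one-byte accesses (assumed workload)

cached_lines = set()    # memory lines currently held in the cache
hits = misses = 0
for addr in range(N_ACCESSES):
    line = addr // LINE_SIZE
    if line in cached_lines:
        hits += 1       # spatial locality: a neighbouring access fetched this line
    else:
        misses += 1
        cached_lines.add(line)  # miss: fetch the whole line, not just one byte

print(hits, misses)     # 1008 hits, 16 misses
```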
However, cache memory systems cannot fully isolate a fast processor from the slower main memory. When an address and its associated data requested by the processor are not found in the cache, a cache "miss" is said to occur, and the processor has to access the slower main memory to get the data. These misses represent the portion of processor time that limits overall system performance improvement.
To address this cache miss problem, a Level 2 cache is often included in the overall cache hierarchy. The purpose of the Level 2 cache is to expand the amount of data available to the processor for fast access without enlarging the Level 1 cache, which is typically implemented on the same chip as the processor itself. Since the Level 2 cache is off-chip (i.e., not on the same die as the processor and the Level 1 cache), it can be larger and can run at a speed between that of the Level 1 cache and that of main memory. However, in order to make proper use of the Level 1 and Level 2 caches and maintain data coherency between the cache memory system and the main memory system, both the cache and the main memory must be continually updated so that the latest data is available to the processor. If the processor memory access is a read access, the processor needs to read data or code from memory. If the requested data or code is not found in the cache, the cache contents have to be updated, a process generally requiring that some cache contents be replaced with data or code from main memory.
To ensure coherency between the cache contents and the contents of main memory, two techniques are used: write-through and write-back. The write-through technique writes data both to the cache and to main memory when the processor memory access is a write access and the data being written is found in the cache. This ensures that whichever copy is accessed, cache or main memory, the data is identical. The write-back technique writes data only to the cache on a memory write access. To ensure coherence between the data in the cache and the data in main memory, the contents of a particular cache location are written to main memory when they are about to be overwritten. However, cache contents are not written back to main memory if they were never modified by a memory write access. To determine whether the contents of a particular cache location have been modified, a flag bit is used: if the cache contents have been modified by a memory write access, the flag bit is set and the location is considered "dirty". Thus, if the flag bit of a particular cache location is "dirty", the contents of that location must be written to main memory before being overwritten with new data.
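The write-back mechanism with a dirty flag can be sketched in a few lines of Python. This is a minimal illustration of the policy described above, assuming a direct-mapped cache of whole lines; the class and method names are ours, not the patent's:

```python
class WriteBackCache:
    """Direct-mapped write-back cache sketch; one dirty flag per line."""

    def __init__(self, num_lines, memory):
        self.num_lines = num_lines
        self.memory = memory     # backing store: dict of line address -> data
        self.lines = {}          # cache index -> [line_addr, data, dirty]

    def read(self, line_addr):
        index = line_addr % self.num_lines
        entry = self.lines.get(index)
        if entry is not None and entry[0] == line_addr:
            return entry[1]                      # cache hit
        self._write_back_if_dirty(index)         # evict the current occupant
        data = self.memory.get(line_addr)        # miss: fill from main memory
        self.lines[index] = [line_addr, data, False]
        return data

    def write(self, line_addr, data):
        index = line_addr % self.num_lines
        entry = self.lines.get(index)
        if entry is None or entry[0] != line_addr:
            self._write_back_if_dirty(index)     # evict the current occupant
        # Write only to the cache; the dirty flag records the pending update.
        self.lines[index] = [line_addr, data, True]

    def _write_back_if_dirty(self, index):
        entry = self.lines.get(index)
        if entry is not None and entry[2]:       # dirty flag set?
            self.memory[entry[0]] = entry[1]     # flush to main memory first
```

A write-through variant would instead update the backing store on every write and never need the dirty flag, at the cost of a main-memory access per write.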
Another approach to increasing the cache hit rate is to increase the cache's associativity. Associativity refers to the number of lines in the cache that are searched (i.e., checked for a hit) during a cache access. Generally, the higher the associativity, the higher the cache hit rate. A direct-mapped cache has a 1:1 mapping, whereby only one line is checked for a hit during a cache access. At the other end of the spectrum, a fully associative cache is typically implemented using a content addressable memory (CAM), whereby all cache lines (and therefore all cache locations) are searched and compared simultaneously during a single cache access. Various levels of associativity in between have been implemented.
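The following Python sketch shows an N-way set-associative lookup, the middle ground between the two extremes just described: a direct-mapped cache searches one line per access, while a fully associative cache searches all of them. The parameters and the LRU replacement choice are illustrative assumptions:

```python
NUM_SETS = 4
WAYS = 2                                  # lines checked per cache access

sets = [[] for _ in range(NUM_SETS)]      # each set holds up to WAYS line tags

def access(line_addr):
    """Return True on a hit; on a miss, insert with LRU replacement."""
    s = sets[line_addr % NUM_SETS]
    if line_addr in s:                    # search only this set's WAYS entries
        s.remove(line_addr)
        s.append(line_addr)               # refresh: most recently used at end
        return True
    if len(s) == WAYS:
        s.pop(0)                          # evict the least recently used line
    s.append(line_addr)
    return False
```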
Despite these various approaches to improving cache performance
