Cache with block prefetch and DMA

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

Type: Reexamination Certificate
Status: active
Patent number: 06697916

ABSTRACT:

This application claims priority to European Application Serial No. 00402331.3, filed Aug. 21, 2000 (TI-31366EU) and to European Application Serial No. 01400684.5, filed Mar. 15, 2001 (TI-31350EU). U.S. patent application Ser. No. 09/932,651 (TI-31366US) is incorporated herein by reference.
FIELD OF THE INVENTION
This invention generally relates to microprocessors, and more specifically to improvements in cache memory and access circuits, systems, and methods of making.
BACKGROUND
Microprocessors are general-purpose processors that provide high instruction throughput to execute the software running on them, and they can have a wide range of processing requirements depending on the particular software applications involved. A cache architecture is often used to increase the speed of retrieving information from a main memory. A cache memory is a high-speed memory situated between the processing core of a processing device and the main memory. The main memory is generally much larger than the cache, but also significantly slower. Each time the processing core requests information from the main memory, the cache controller checks the cache memory to determine whether the address being accessed is currently in the cache memory. If so, the information is retrieved from the faster cache memory instead of the slower main memory to service the request. If the information is not in the cache, the main memory is accessed, and the cache memory is updated with the new information.
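The hit/miss check described above can be sketched as a small direct-mapped cache model. This is an illustrative software analogue only; the line size, line count, and names are assumptions, not details taken from the patent.

```python
# Minimal direct-mapped cache model illustrating the hit/miss check
# described above. Sizes and names are illustrative assumptions.
LINE_SIZE = 16   # bytes per cache line
NUM_LINES = 8    # number of lines in the cache

class Cache:
    def __init__(self, main_memory):
        self.main_memory = main_memory           # backing store (list of byte values)
        self.tags = [None] * NUM_LINES           # tag per line; None means invalid
        self.lines = [bytes(LINE_SIZE)] * NUM_LINES

    def read(self, addr):
        index = (addr // LINE_SIZE) % NUM_LINES  # line the address maps to
        tag = addr // (LINE_SIZE * NUM_LINES)    # identifies which memory block
        if self.tags[index] != tag:              # miss: update cache from main memory
            base = (addr // LINE_SIZE) * LINE_SIZE
            self.lines[index] = bytes(self.main_memory[base:base + LINE_SIZE])
            self.tags[index] = tag
        return self.lines[index][addr % LINE_SIZE]
```

A read that misses fills the whole line from main memory; a subsequent read of the same line is then served from the cache.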
Many different types of processors are known, of which microprocessors are but one example. Digital Signal Processors (DSPs), for instance, are widely used, in particular for specific applications such as mobile processing. DSPs are typically configured to optimize the performance of the applications concerned and, to achieve this, they employ more specialized execution units and instruction sets. Particularly, but not exclusively, in applications such as mobile telecommunications, it is desirable to provide ever-increasing DSP performance while keeping power consumption as low as possible.
To further improve performance of a digital system, two or more processors can be interconnected. For example, a DSP may be interconnected with a general purpose processor in a digital system. The DSP performs numeric intensive signal processing algorithms while the general purpose processor manages overall control flow. The two processors communicate and transfer data for signal processing via shared memory. A direct memory access (DMA) controller is often associated with a processor in order to take over the burden of transferring blocks of data from one memory or peripheral resource to another and to thereby improve the performance of the processor.
SUMMARY OF THE INVENTION
Particular and preferred aspects of the invention are set out in the accompanying independent and dependent claims. In accordance with a first aspect of the invention, there is provided a digital system having at least one processor with an associated multi-segment cache memory circuit. Direct memory access (DMA) circuitry is connected to the cache memory for transferring data between the cache memory and a selectable region of a secondary memory. Fetch circuitry associated with the cache memory is operable to transfer data from a pre-selected region of the secondary memory to a first segment of the plurality of segments, and to assert a first valid bit corresponding to the first segment, when miss detection circuitry associated with the cache memory detects a miss in the first segment.
In an embodiment of the invention, there is mode circuitry to select between a cache mode of operation for the memory cache and a RAM mode by disabling miss detection circuitry associated with the memory cache.
In an embodiment of the invention, block circuitry is associated with the cache that has a start register and an end register. The block circuitry is operable to cause fetch circuitry to fetch a plurality of segments in response to a miss. The DMA circuitry makes use of these same start and end registers and further has a third register to specify the selectable region of the secondary memory.
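A rough software analogue of the shared register usage described in this embodiment is sketched below. The register names, segment layout, and flat-list memories are hypothetical; the point is that block fetch and DMA share the same start/end registers, while only DMA uses the additional source register.

```python
# Sketch of the shared start/end/source registers described above.
# All names and data layouts are illustrative assumptions.
class BlockRegisters:
    def __init__(self):
        self.start = 0   # first cache segment of the block
        self.end = 0     # last cache segment of the block (inclusive)
        self.src = 0     # secondary-memory base address (used by DMA only)

def block_fetch(regs, cache_segs, valid, secondary, seg_size):
    """On a miss, fetch every segment in [start, end] from its
    directly corresponding (pre-selected) secondary-memory location."""
    for seg in range(regs.start, regs.end + 1):
        base = seg * seg_size                    # direct, pre-selected mapping
        cache_segs[seg] = secondary[base:base + seg_size]
        valid[seg] = True                        # assert the segment's valid bit

def dma_transfer(regs, cache_segs, valid, secondary, seg_size):
    """Transfer the same [start, end] block, but from a selectable
    region of secondary memory given by the third (src) register."""
    for i, seg in enumerate(range(regs.start, regs.end + 1)):
        base = regs.src + i * seg_size           # selectable source region
        cache_segs[seg] = secondary[base:base + seg_size]
        valid[seg] = True
```

Reusing one register pair for both operations is what lets the same block boundaries drive either a miss-triggered prefetch or a programmed DMA move.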
In an embodiment of the invention, the block circuitry is operable to transfer a block of data to a selected portion of segments in the cache memory in such a manner that a transfer to a first segment holding valid data within the selected portion of segments is inhibited.
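The inhibit behaviour amounts to skipping any segment whose valid bit is already set, so a block transfer never overwrites valid data. The following is a software sketch under that reading; the actual hardware mechanism is not specified here.

```python
def block_fill_inhibit(cache_segs, valid, secondary, start, end, seg_size):
    """Fill segments [start, end] from secondary memory, but inhibit
    the transfer for any segment that already holds valid data."""
    for seg in range(start, end + 1):
        if valid[seg]:            # segment already valid: transfer inhibited
            continue
        base = seg * seg_size
        cache_segs[seg] = secondary[base:base + seg_size]
        valid[seg] = True
```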
Another embodiment of the invention is a method of operating a digital system having a processor and a memory cache. The cache is operated in a first manner such that when a transfer request from the processor requests a first location in the cache memory that does not hold valid data, valid data is transferred from a pre-selected location in a secondary memory that corresponds directly to the first location. The cache is operated in a second manner such that data is transferred between the first location and a selectable location in the secondary memory, wherein the selected location need not directly correspond to the first location.
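A toy model of the two manners of operation described above is given below (mode names and the segment-granular interface are assumptions): in the first manner a miss triggers a fill from the directly corresponding secondary-memory location, while in the second manner miss detection is bypassed and the secondary-memory location is program-selected.

```python
# Toy model of the two operating manners described above.
# CACHE_MODE/RAM_MODE and the per-segment interface are assumptions.
CACHE_MODE, RAM_MODE = 0, 1

class ModalMemory:
    def __init__(self, secondary, num_segs, seg_size):
        self.secondary = secondary
        self.seg_size = seg_size
        self.segs = [None] * num_segs
        self.valid = [False] * num_segs
        self.mode = CACHE_MODE

    def read(self, seg):
        if self.mode == CACHE_MODE and not self.valid[seg]:
            # First manner: miss detected, fill from the directly
            # corresponding, pre-selected secondary-memory location.
            base = seg * self.seg_size
            self.segs[seg] = self.secondary[base:base + self.seg_size]
            self.valid[seg] = True
        return self.segs[seg]

    def dma_load(self, seg, src):
        # Second manner: the source location in secondary memory is
        # selectable and need not correspond to the cache location.
        self.segs[seg] = self.secondary[src:src + self.seg_size]
        self.valid[seg] = True
```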


REFERENCES:
patent: 5313610 (1994-05-01), Oliver et al.
patent: 5586293 (1996-12-01), Baron et al.
patent: 5636364 (1997-06-01), Emma et al.
patent: 5966734 (1999-10-01), Mohamed et al.
patent: 6119167 (2000-09-01), Boyle et al.
patent: 6219759 (2001-04-01), Kumakiri
patent: 0 529 217 (1993-03-01), None
Texas Instruments Incorporated, S/N: 09/591,537, filed Jun. 9, 2000, Smart Cache.
Texas Instruments Incorporated, S/N: 09/187,118, filed Nov. 5, 1998, Computer Circuits, Systems, and Methods Using Partial Cache Cleaning.
Texas Instruments Incorporated, S/N: 09/447,194, filed Nov. 22, 1999, Optimized Hardware Cleaning Function for VIVT Data Cache.
Texas Instruments Incorporated, S/N: 09/591,656, filed Jun. 9, 2000, Cache With Multiple Fill Modes.
Tehranian, Michael M., DMA Cache Speeds Execution in Mixed-Bus Systems, Computer Design, vol. 24, No. 8, Jul. 15, 1985, pp. 85-88.
IBM Technical Disclosure Bulletin, Use of Dirty, Buffered, and Invalidate Bits for Cache Operations, vol. 35, No. 1A, Jun. 1, 1992, 1 pg.
IBM Technical Disclosure Bulletin, Asynchronous Pipeline for Queueing Synchronous DMA Cache Management Requests, vol. 35, No. 6, Nov. 1, 1992, pp. 140-141.


