Electrical computers and digital processing systems: processing – Processing control – Arithmetic operation instruction processing
Reexamination Certificate
1999-08-17
2003-04-22
Banankhah, Majid (Department: 2127)
Electrical computers and digital processing systems: processing
Processing control
Arithmetic operation instruction processing
C712S007000, C712S223000, C712S225000, C709S241000
Reexamination Certificate
active
06553486
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to special purpose memory integrated in general purpose computer systems, and specifically to a memory system for efficient handling of vector data.
2. Description of the Related Art
In the last few years, media processing has had a profound effect on microprocessor architecture design. It is expected that general-purpose processors will be able to process real-time, vectored media data as efficiently as they process scalar data. The recent advancements in hardware and software technologies have allowed designers to introduce fast parallel computational schemes to satisfy the high computational demands of these applications.
Dynamic random access memory (DRAM) provides cost-efficient main memory storage for data and program instructions in computer systems. Static random access memory (SRAM) is faster (and more expensive) than DRAM and is typically used for special purposes such as cache memory and data buffers coupled closely with the processor. In general, only a limited amount of cache memory is available compared to the amount of DRAM.
Cache memory attempts to combine the speed of SRAM with the cost efficiency of DRAM to achieve the most effective memory system. Most successive memory accesses affect only a small address area, so the most frequently addressed data is held in the SRAM cache to provide increased speed over many closely packed memory accesses. Data and code that are accessed less frequently are stored in slower DRAM. Typically, a memory location is accessed using a row and a column within a memory block. A technique known as bursting allows faster memory access when the requested data is stored in a contiguous sequence of addresses. During a typical burst, memory is accessed using the starting address, the width of each data element, and the number of data words to access, also referred to as “the stream length”. Memory access speed is improved because there is no need to supply an individual address for each memory location in order to fetch or store data words. One shortfall of this technique arises when data is not stored contiguously in memory, such as when reading or writing an entire row of a matrix whose data is stored by column and then by row. It is therefore desirable to provide a bursting technique that can accommodate data elements that are not contiguous in memory.
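For illustration only, the following C sketch models the burst parameters named above (starting address, element width, stream length) and adds a stride so that non-contiguous elements, such as a matrix column, can be streamed in one transfer. The burst_desc type and burst_read routine are hypothetical names introduced here, not part of the patent.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical descriptor: a contiguous burst is the special case
 * stride == elem_size. */
typedef struct {
    uintptr_t start;     /* starting address of the first element        */
    size_t    elem_size; /* width of each data element in bytes          */
    size_t    length;    /* number of elements ("the stream length")     */
    size_t    stride;    /* distance between consecutive elements, bytes */
} burst_desc;

/* Software model of a strided burst read: gathers `length` elements,
 * spaced `stride` bytes apart, into a contiguous destination buffer. */
static void burst_read(const burst_desc *d, void *dst)
{
    uint8_t *out = dst;
    for (size_t i = 0; i < d->length; i++) {
        memcpy(out + i * d->elem_size,
               (const void *)(d->start + i * d->stride),
               d->elem_size);
    }
}
```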
Synchronous burst RAM cache uses an internal clock to count up to the next address after each memory operation. The internal clock must stay synchronized with the clock of the rest of the memory system for fast, error-free operation. The tight timing required by synchronous cache memory increases manufacturing difficulty and expense.
Pipelined burst cache alleviates the need for a synchronous internal clock by including an extra register that holds the next piece of information in the access sequence. While the register holds the information ready, the system accesses the next address to load into the pipeline. Since the pipeline keeps a supply of data always ready, this form of memory can run as fast as the host system requests data. The speed of the system is limited only by the access time of the pipeline register.
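As a rough, purely illustrative software model of the pipeline register described above (the pipelined_burst type and its routines are assumptions, not the patent's hardware), each read returns the word already waiting in the register while the following word in the sequence is fetched behind it:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    const uint32_t *mem;  /* backing storage of the burst sequence   */
    size_t          next; /* index that will be prefetched next      */
    uint32_t        reg;  /* pipeline register holding the next word */
} pipelined_burst;

/* Prime the pipeline with the first word of the access sequence. */
static void pb_start(pipelined_burst *p, const uint32_t *mem, size_t start)
{
    p->mem  = mem;
    p->next = start;
    p->reg  = mem[p->next++];
}

/* Return the word already held in the register and immediately
 * prefetch the following word in the sequence. */
static uint32_t pb_read(pipelined_burst *p)
{
    uint32_t ready = p->reg;
    p->reg = p->mem[p->next++];
    return ready;
}
```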
Multimedia applications typically present a very high level of parallelism by performing vector-like operations on large data sets. Although recent architectural extensions have addressed the computational demands of multimedia programs, the memory bandwidth requirements of these applications have generally been ignored. To accommodate the large data sets of these applications, processors must provide high memory bandwidth and a means to tolerate long memory latencies. Data caches in current general-purpose processors are not large enough to hold these vector data sets, which quickly pollute the caches with unnecessary data and consequently degrade the performance of other applications running on the processor.
In addition, multimedia processing often employs program loops that access long arrays without any data-dependent addressing. These programs exhibit high spatial locality and regularity, but low temporal locality. The high spatial locality and regularity arise because, if an array item n is used, then it is highly likely that array item n+s will be used, where “s” is a constant stride between data elements in the array. The term “stride” refers to the distance between two data items in memory. The low temporal locality is due to the fact that an array item n is typically accessed only once, which diminishes the performance benefit of the caches. Further, the small line sizes of typical data caches force cache line transfers to be carried out through short bursts, causing sub-optimal usage of the memory bandwidth. Still further, large vector sizes cause thrashing in the data cache. Thrashing is detrimental to system performance because the vector data spans a space beyond the index space of the cache. Additionally, there is no way to guarantee when specific data will be placed in the cache, which does not meet the predictability requirements of real-time applications. Therefore, there is a need for a memory system that handles multimedia vector data efficiently in modern computer systems.
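The access pattern described above can be made concrete with a short, purely illustrative C loop (array size, stride, and function name are assumptions): each item n is read exactly once, and the next access is always at n+s.

```c
#include <stddef.h>

#define N      (1 << 20) /* long media-style array                     */
#define STRIDE 4         /* constant stride "s" between accessed items */

/* High spatial regularity: the next address is always predictable.
 * Low temporal locality: each element is touched exactly once. */
float accumulate_strided(const float *a)
{
    float sum = 0.0f;
    for (size_t n = 0; n < N; n += STRIDE)
        sum += a[n]; /* item n used once; item n+s used next */
    return sum;
}
```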
SUMMARY OF THE INVENTION
The present invention provides a vector data transfer unit to improve the handling of vector data in application programs. In one embodiment of the present invention, an operating system program performs context switches between application programs and controls access to the vector transfer control unit by the application programs.
One feature of the present invention is one or more vector buffers, which are fixed-size partitions of the vector buffer pool (VBP). The vector buffers are partitioned into variable-sized streams, and each stream corresponds to a vector segment. The operating system allocates a vector buffer to each application program that issues one or more vector transfer instructions.
The application programs transfer data into and out of the VBP using the vector data instructions. One set of instructions performs the transfer of data between memory and the vector buffers. Another pair of instructions moves the data between the vector buffers and the general-purpose registers (both integer and floating-point registers).
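A hedged software model may make the division of labor between these two instruction classes clearer. The names (vld_stream, vst_stream, vmove_to_reg, vmove_from_reg), the pool dimensions, and the byte-level layout below are illustrative assumptions rather than the patent's actual instruction set: one pair streams a vector segment between memory and a stream inside a vector buffer, and the other pair moves single elements between a stream and a general-purpose register.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define VBP_BUFFERS 4    /* fixed-size vector buffers in the pool */
#define VBUF_BYTES  4096 /* capacity of one vector buffer         */

static uint8_t vbp[VBP_BUFFERS][VBUF_BYTES]; /* the vector buffer pool */

/* memory -> vector buffer: copy `count` elements of `elem` bytes, taken
 * every `stride` bytes from `base`, into buffer `buf` starting at byte
 * offset `stream_off` (the start of a variable-sized stream). */
void vld_stream(int buf, size_t stream_off, const uint8_t *base,
                size_t count, size_t elem, size_t stride)
{
    for (size_t i = 0; i < count; i++)
        memcpy(&vbp[buf][stream_off + i * elem], base + i * stride, elem);
}

/* vector buffer -> memory: the reverse transfer. */
void vst_stream(int buf, size_t stream_off, uint8_t *base,
                size_t count, size_t elem, size_t stride)
{
    for (size_t i = 0; i < count; i++)
        memcpy(base + i * stride, &vbp[buf][stream_off + i * elem], elem);
}

/* vector buffer <-> general-purpose register, modeled as a uint64_t. */
uint64_t vmove_to_reg(int buf, size_t byte_off)
{
    uint64_t r;
    memcpy(&r, &vbp[buf][byte_off], sizeof r);
    return r;
}

void vmove_from_reg(int buf, size_t byte_off, uint64_t value)
{
    memcpy(&vbp[buf][byte_off], &value, sizeof value);
}
```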
Another feature of the present invention is a configuration register that contains configuration information corresponding to one of the plurality of vector buffers in the vector buffer pool. The operating system program provides to the configuration register an identification of the vector buffer assigned to each application program.
The configuration register also includes information, provided by the operating system and the application programs, on whether the vector buffer is free or in use. The operating system program allows one application program at a time to access the vector buffer assigned to that application program when the vector buffer pool is available. The operating system program issues an exception when an application program attempts to access the vector buffer assigned to it while the vector buffer free indicator indicates that the vector buffer pool is not available.
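One possible layout for such a configuration register is sketched below; the field names and widths are assumptions made for illustration, and the check routine simply mirrors the exception behavior described above (an access while the buffer pool is unavailable traps).

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical configuration register layout. */
typedef struct {
    unsigned buffer_id   : 4;  /* vector buffer in the pool assigned
                                  to the application program            */
    unsigned buffer_free : 1;  /* 1 = buffer pool available, 0 = not    */
    unsigned reserved    : 27;
} vbp_config_reg;

/* Model of the operating-system check: raise the exception (modeled
 * here as abort) when the program touches its assigned buffer while
 * the free indicator shows the pool is not available. */
static void check_vbp_access(const vbp_config_reg *cfg)
{
    if (!cfg->buffer_free) {
        fprintf(stderr, "vector buffer exception: buffer %u not available\n",
                (unsigned)cfg->buffer_id);
        abort();
    }
}
```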
During a context switch between application programs, the operating system program and the application programs use a synchronization instruction so that the transfer instructions issued by one application program finish before any transfer instructions issued by the second application program may begin.
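The ordering constraint can be sketched as follows; the vsync routine and the buffer bookkeeping helpers are hypothetical stand-ins for the synchronization instruction and the operating system's buffer management, not names taken from the patent.

```c
#include <stdio.h>

/* Stand-in for the synchronization instruction: block until every
 * previously issued vector transfer has completed. */
static void vsync(void) { /* in hardware: drain outstanding transfers */ }

/* Stand-ins for the operating system's buffer bookkeeping. */
static void release_vector_buffer(int buf) { printf("buffer %d freed\n", buf); }
static void assign_vector_buffer(int buf)  { printf("buffer %d assigned\n", buf); }

/* Illustrative OS-side context switch between two application programs. */
void vbp_context_switch(int outgoing_buf, int incoming_buf)
{
    vsync();                             /* outgoing program's transfers finish */
    release_vector_buffer(outgoing_buf); /* mark its vector buffer as free      */
    assign_vector_buffer(incoming_buf);  /* hand the pool to the next program   */
    /* only after this point may the incoming program issue transfers */
}
```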
The foregoing has outlined rather broadly the objects, features, and technical advantages of the present invention so that the detailed description of the invention that follows may be better understood.
REFERENCES:
patent: 4930065 (1990-05-01), McLagan et al.
patent: 5640524 (1997-06-01), Beard et al.
Banankhah Majid
Campbell Stephenson Ascolese LLP
NEC Electronics Inc.