Title: Method of and apparatus for processing information, and...
Classification: Electrical computers and digital processing systems: memory – Address formation – Address multiplexing or address bus manipulation
Type: Reexamination Certificate
Filed: 1999-07-02
Issued: 2003-05-20
Examiner: Kim, Matthew (Department: 2186)
U.S. Classes: C711S212000, C711S215000, C711S001000
Status: active
Patent Number: 06567908
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a method of and an apparatus for processing information, and a providing medium for providing programs and data, and more particularly to a method of and an apparatus for transferring programs and data efficiently in a video entertainment system which executes various programs, and a providing medium for providing programs and data.
2. Description of the Related Art
Generally, a DRAM (Dynamic Random-Access Memory) incorporated in one chip allows the bit width of its data bus to be established freely, and can provide a high bandwidth when it is made compatible with a large bit width.
Conversely, a CPU (Central Processing Unit) cannot be connected to a bus whose bit width exceeds the number of bits handled by the CPU. Usually, the bit width of a CPU is smaller than the bit width of a DRAM.
Therefore, if a DRAM and a CPU are installed together on one chip, then the bit width of the data bus has to match the bit width of the CPU, with the result that the DRAM fails to offer its advantages.
There is a situation where a DSP (Digital Signal Processor) is connected to a data bus and requires a high bandwidth between itself and a DRAM.
FIG. 5 of the accompanying drawings shows a video entertainment system 60 that is designed for use in such a situation. As shown in FIG. 5, the video entertainment system 60 has a CPU bus 12 having a smaller bit width and a system bus (data bus) 13 having a larger bit width, with a DRAM 41 connected to the data bus 13.
For example, the CPU bus 12 is 32 bits wide and the system bus 13 is 128 bits wide, and these buses 12, 13 are used to transfer programs and data between various devices.
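To put these widths in perspective, the following minimal sketch (in C) compares the peak transfer rates of the two buses. The 100 MHz clock rate and the assumption of one transfer per clock cycle are illustrative values chosen only for this sketch; they are not taken from the description.

#include <stdio.h>

/* Peak bandwidth, in bytes per second, of a bus of the given width,
 * assuming (for illustration) one transfer per clock cycle. */
static double peak_bandwidth(unsigned bus_width_bits, double clock_hz)
{
    return (bus_width_bits / 8.0) * clock_hz;
}

int main(void)
{
    const double clock_hz = 100e6;                /* assumed 100 MHz clock */

    printf("32-bit CPU bus 12:     %4.0f MB/s\n",
           peak_bandwidth(32, clock_hz) / 1e6);   /*  400 MB/s */
    printf("128-bit system bus 13: %4.0f MB/s\n",
           peak_bandwidth(128, clock_hz) / 1e6);  /* 1600 MB/s */
    return 0;
}

Under these assumptions the 128-bit system bus 13 moves four times as many bytes per cycle as the 32-bit CPU bus 12, which is why connecting the DRAM 41 to the wider bus preserves its bandwidth advantage.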
To the CPU bus 12, there are connected a CPU 31, a peripheral device 32, and a cache memory 51. To the system bus 13, there are connected the cache memory 51, the DRAM 41, a DMA (Direct Memory Access) controller (DMAC) 42, and a DSP 43.
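These connections can be summarized in a small data structure. The sketch below is only an illustrative model of the FIG. 5 topology; the type and variable names are invented here and do not appear in the patent.

#include <stdio.h>

/* Illustrative model of the FIG. 5 bus topology (names invented for this sketch). */
enum bus_id { CPU_BUS_12 = 12, SYSTEM_BUS_13 = 13 };

struct connection {
    const char *device;
    enum bus_id bus;
};

int main(void)
{
    const struct connection topology[] = {
        { "CPU 31",            CPU_BUS_12    },
        { "peripheral 32",     CPU_BUS_12    },
        { "cache memory 51",   CPU_BUS_12    },  /* the cache sits on both buses */
        { "cache memory 51",   SYSTEM_BUS_13 },
        { "DRAM 41",           SYSTEM_BUS_13 },
        { "DMA controller 42", SYSTEM_BUS_13 },
        { "DSP 43",            SYSTEM_BUS_13 },
    };

    for (unsigned i = 0; i < sizeof topology / sizeof topology[0]; i++)
        printf("bus %d: %s\n", topology[i].bus, topology[i].device);
    return 0;
}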
The CPU 31 is supplied with a program transferred from the DRAM 41 via the cache memory 51, and executes a certain process according to the supplied program. The CPU 31 can also be supplied with data from the DRAM 41 via the cache memory 51.
The peripheral device 32 comprises a timer for performing clock operation and an interrupt controller for generating interrupt pulses at preset periodic intervals.
The DRAM 41 is arranged to store data necessary for the CPU 31 and the DSP 43 to operate and also programs to be executed by the CPU 31 and the DSP 43. The cache memory 51 stores programs and data to be supplied to the CPU 31, which have been read from the DRAM 41.
In response to a request from the CPU 31 or the DSP 43, the DMA controller 42 transfers programs or data from the DRAM 41 to the cache memory 51 or the DSP 43.
The DSP 43 executes programs supplied from the DRAM 41.
The CPU 31 usually accesses the cache memory 51 via the CPU bus 12, reads a necessary program and data from the cache memory 51, and executes the program and processes the data. The DSP 43 executes a program and processes data which have been transferred from the DRAM 41 via the system bus 13 under the control of the DMA controller 42.
Since the video entertainment system 60 has the CPU bus 12 and the system bus 13 that are provided separately from each other, the system bus 13, which is a data bus having a large bit width, can be connected to the DRAM 41 without being limited by the bit width of the CPU 31. As a result, data can be transferred at a high rate between the DRAM 41 and the DSP 43.
The cache memory 51 stores programs and data that are accessed highly frequently by the CPU 31. However, the cache memory 51 occasionally causes a cache error, i.e., fails to store necessary programs and data. While stored programs are sequentially read from the cache memory 51 by the CPU 31, stored data are frequently requested and read randomly from the cache memory 51 by the CPU 31. For this reason, many cache errors occur with respect to the data stored in the cache memory 51. When a cache error takes place, the CPU 31 needs to request the DMA controller 42 to transfer data from the DRAM 41 to the cache memory 51 according to a DMA (Direct Memory Access) data transfer process. This data transfer operation places an extra burden on the CPU 31, which then fails to perform high-speed processing.
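A minimal sketch of the read path just described, showing the extra work that falls on the CPU 31 when a cache error occurs. All names here are invented for illustration, and the cache model is a deliberately simplified toy (direct-mapped, no tag check).

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for the DRAM 41 and the cache memory 51 (sizes invented). */
static uint32_t dram[1024];
static uint32_t cache[64];
static bool     cache_valid[64];

/* Stand-in for the DMA controller 42: copies words from DRAM to the cache. */
static void dmac_transfer(uint32_t dram_addr, uint32_t cache_addr, uint32_t words)
{
    for (uint32_t i = 0; i < words; i++) {
        cache[(cache_addr + i) % 64] = dram[(dram_addr + i) % 1024];
        cache_valid[(cache_addr + i) % 64] = true;
    }
}

/* What the CPU 31 must do for every data read: on a cache error it has to
 * program the DMA controller 42 itself before retrying -- the "extra burden". */
static uint32_t cpu_read(uint32_t addr)
{
    uint32_t line = addr % 64;
    if (!cache_valid[line])            /* cache error */
        dmac_transfer(addr, line, 1);  /* CPU issues the DMA request itself */
    return cache[line];
}

int main(void)
{
    dram[100] = 0xCAFE;
    printf("read 100 -> %u (first read causes a cache error)\n", (unsigned)cpu_read(100));
    printf("read 100 -> %u (second read hits the cache)\n", (unsigned)cpu_read(100));
    return 0;
}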
When the DRAM 41 transfers data to the cache memory 51 according to the DMA data transfer process in the event of a cache error, the DMA controller 42 outputs to the DRAM 41 an address indicative of the storage location in the DRAM 41 of the data to be transferred, and controls the transfer of the data while outputting to the cache memory 51 an address indicative of the storage location in the cache memory 51 into which the data are to be written.
FIG. 6 of the accompanying drawings is a timing chart illustrative of the data transfer from the DRAM 41 to the cache memory 51 according to the DMA data transfer process.
The DMA controller 42 starts transferring data from the DRAM 41 to the cache memory 51 according to the DMA data transfer process at the timing of the nth clock cycle (hereinafter referred to as “nth clock timing”) of a system clock signal. A request from the CPU 31 includes an initial address in the DRAM 41 of data to be transferred to the cache memory 51, the amount (size) of the data to be transferred to the cache memory 51, and an initial address in the cache memory 51 for storing the data to be transferred from the DRAM 41.
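The three items of such a request can be captured in a small structure; the sketch below assumes 32-bit addresses and sizes, and the field names are invented here for illustration.

#include <stdint.h>

/* The CPU 31's request to the DMA controller 42, as described above.
 * Field names and widths are assumptions made for this sketch. */
struct dma_request {
    uint32_t dram_initial_address;   /* where the data start in the DRAM 41        */
    uint32_t transfer_size;          /* amount (size) of data to transfer          */
    uint32_t cache_initial_address;  /* where to store them in the cache memory 51 */
};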
As shown in FIG. 6, at the nth clock timing, the DMA controller 42 outputs the initial address (address D1) in the DRAM 41 via the system bus (address bus) 13 to the DRAM 41, and instructs the DRAM 41 to read data (data A) stored in the address.
The DRAM 41 usually stores pages of programs and data. If the DRAM 41 reads data, for example, within a page, then the DRAM 41 can perform the reading process relatively quickly. However, if the DRAM 41 reads a plurality of pages of data, then since a page break occurs between pages of data, the DRAM 41 virtually stops its operation for a certain period of time, i.e., four clock pulses in the illustrated example, in that page break.
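A rough latency model of that behavior follows: reads within an open page complete quickly, while each page break stalls the DRAM for four clock cycles as in the illustrated example. The one-cycle-per-word figure, the page size, and the assumption that the transfer starts at a page boundary are choices made only for this sketch.

#include <stdio.h>

#define WORDS_PER_PAGE   256   /* assumed page size, in bus words             */
#define CYCLES_PER_WORD    1   /* assumed cost of a read within an open page  */
#define PAGE_BREAK_STALL   4   /* stall at each page break (per FIG. 6)       */

/* Estimated clock cycles to read `words` consecutive words, assuming the
 * transfer starts at a page boundary and the first access already incurs a
 * page break (the usual case at the start of a DMA transfer, as described). */
static unsigned read_cycles(unsigned words)
{
    unsigned page_breaks = 1 + (words ? (words - 1) / WORDS_PER_PAGE : 0);
    return words * CYCLES_PER_WORD + page_breaks * PAGE_BREAK_STALL;
}

int main(void)
{
    printf(" 16 words: %u cycles\n", read_cycles(16));    /*  20 */
    printf("256 words: %u cycles\n", read_cycles(256));   /* 260 */
    printf("512 words: %u cycles\n", read_cycles(512));   /* 520 */
    return 0;
}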
Normally, in the DMA data transfer process, the data that the DRAM 41 initially transfers is often not related to the data that the DRAM 41 previously processed, and the DRAM 41 suffers a page break between these data. Specifically, the DRAM 41 remains inactive up to a point immediately prior to the timing of the (n+4)th clock cycle, and cannot output the data (data A) to the cache memory 51 until the (n+4)th clock timing.
The DMA controller 42 outputs the initial address (address D1) in the DRAM 41 via the system bus (address bus) 13 to the DRAM 41, and waits until the data A is outputted from the DRAM 41 to the system bus (data bus) 13. As shown in FIG. 6, after the DRAM 41 outputs the data A to the system bus (data bus) 13 at the timing of the (n+3)th clock cycle, the DMA controller 42 outputs an initial address (write address) (address C1) in the cache memory 51 via the system bus (address bus) 13 to the cache memory 51 at the timing of the (n+4)th clock cycle. The cache memory 51 now stores the data (data A) on the system bus (data bus) 13 from the DRAM 41 into the address C1 thereof.
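The sequence just described for the first word can be restated as a simple timeline. The sketch below only prints the FIG. 6 events relative to the nth clock timing; the labeling of the intermediate stall cycles is approximate and purely illustrative.

#include <stdio.h>

/* The FIG. 6 sequence for the first transferred word (data A), expressed
 * as clock offsets from the nth clock cycle described above. */
struct event { int offset; const char *what; };

int main(void)
{
    const struct event fig6[] = {
        { 0, "DMAC 42 puts read address D1 on the address bus 13"                     },
        { 1, "DRAM 41 stalled by the page break"                                      },
        { 2, "DRAM 41 stalled by the page break"                                      },
        { 3, "DRAM 41 drives data A onto the data bus 13"                             },
        { 4, "DMAC 42 puts write address C1 on the bus; cache memory 51 stores data A" },
    };

    for (unsigned i = 0; i < sizeof fig6 / sizeof fig6[0]; i++)
        printf("clock n+%d: %s\n", fig6[i].offset, fig6[i].what);
    return 0;
}

The same read-address/write-address alternation then repeats for data B at the (n+5)th and (n+6)th clock timings, as described next, so each word after the initial page break costs roughly two clock cycles.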
Then, at the timing of the (n+5)th clock cycle, the DMA controller 42 outputs an address (address D2) next to the initial address in the DRAM 41 as a read address to the DRAM 41, and instructs the DRAM 41 to read data B stored in the address D2. The DRAM 41 now outputs the data B to the system bus (data bus) 13. At the timing of the (n+6)th clock cycle, the DMA controller 42 outputs an address (address C2) next to the initial address in the cache memory 51 as a write address via the system bus (address bus) 13 to the cache memory 51.
Anderson Matthew D.
Guss Paul A.
Kim Matthew
Sony Computer Entertainment Inc.