Apparatus for and method of architecturally enhancing the...

Electrical computers and digital processing systems: memory – Storage accessing and control – Specific memory composition

Reexamination Certificate


Details

Classification codes: C711S149000, C711S168000, C711S150000, C710S020000, C710S021000, C710S038000

Type: Reexamination Certificate

Status: active

Patent number: 06212597

ABSTRACT:

FIELD OF INVENTION
The present invention relates to dynamic random access memories, known as DRAM structures, being more particularly directed to multi-port internally cached versions thereof that provide very high system bandwidth to memory for a large number of system input/output (I/O) resources by moving large blocks of data internally, as described in copending U.S. patent application Ser. No. 581,467, filed Dec. 29, 1995, for High Performance Universal Multi Port Internally Cached Dynamic Random Access Memory System, Architecture and Method, by Mukesh Chatter, one of the co-inventors herein, now U.S. Pat. No. 5,799,209, and to enhanced architectures and improvements in the operation of the same.
BACKGROUND OF INVENTION
A multi-port internally cached DRAM, termed an AMPIC DRAM, of said copending application, later reviewed in connection with hereinafter described FIG. 1, is designed for high system bandwidth use in a system having a master controller, such as a central processing unit (CPU), having parallel data ports, and a dynamic random access memory, each connected to and competing for access to a common system bus interface. It provides an improved DRAM architecture comprising the multi-port internally cached DRAM that, in turn, encompasses a plurality of independent serial data interfaces, each connected between a separate external I/O resource and internal DRAM memory through corresponding buffers; a switching module interposed between the serial interfaces and the buffers; and switching module logic control for connecting the serial interfaces to the buffers under a dynamic configuration by the bus master controller, such as said CPU, for switching allocation as appropriate for the desired data routability. This technique provides for the transfer of blocks of data internal to the memory chip orders of magnitude faster than traditional approaches, eliminates current system bandwidth limitations and related problems, provides significantly enhanced system performance at reduced cost, and enables substantially universal usage for many applications as a result of providing a unified memory architecture.
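By way of illustration only, the following C sketch models the dynamically configured port-to-buffer switching arrangement described above. The port count, buffer count, buffer width, and all function names are assumptions made for the sketch, not details taken from the patent.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_PORTS    4    /* independent serial data interfaces (assumed count) */
#define NUM_BUFFERS  4    /* internal buffers feeding the DRAM core (assumed count) */
#define BUFFER_BYTES 64   /* assumed buffer width */

/* One AMPIC DRAM unit: buffers sit between the serial interfaces and the
 * DRAM core, and a switching module (modeled here as a routing table)
 * connects each serial interface to a buffer under bus-master control. */
typedef struct {
    uint8_t buffers[NUM_BUFFERS][BUFFER_BYTES];
    int     port_to_buffer[NUM_PORTS];   /* -1 means "not connected" */
} ampic_dram;

static void ampic_init(ampic_dram *d)
{
    memset(d, 0, sizeof *d);
    for (int p = 0; p < NUM_PORTS; p++)
        d->port_to_buffer[p] = -1;
}

/* The bus master (e.g. the CPU) dynamically reconfigures the switching module. */
static int ampic_configure_switch(ampic_dram *d, int port, int buffer)
{
    if (port < 0 || port >= NUM_PORTS || buffer < 0 || buffer >= NUM_BUFFERS)
        return -1;
    d->port_to_buffer[port] = buffer;
    return 0;
}

/* An external I/O resource writes through its serial interface; the data
 * lands in whichever buffer the master currently assigns to that port. */
static int ampic_port_write(ampic_dram *d, int port, const uint8_t *data, size_t len)
{
    if (port < 0 || port >= NUM_PORTS || len > BUFFER_BYTES)
        return -1;
    int buf = d->port_to_buffer[port];
    if (buf < 0)
        return -1;
    memcpy(d->buffers[buf], data, len);
    return 0;
}

int main(void)
{
    ampic_dram d;
    uint8_t msg[4] = { 0xDE, 0xAD, 0xBE, 0xEF };

    ampic_init(&d);
    ampic_configure_switch(&d, 2, 1);       /* CPU routes port 2 to buffer 1 */
    ampic_port_write(&d, 2, msg, sizeof msg);
    printf("buffer 1 starts with 0x%02X\n", (unsigned)d.buffers[1][0]);
    return 0;
}
```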
In said copending application, a large number of system I/O resources may be supported, each with a wide data bus, while still maintaining low pin counts in the AMPIC DRAM device, as by stacking several such devices, later illustrated in connection with hereinafter described FIG. 2, with the number of system I/O resources supported and the width of each system I/O resource bus being limited only by technology limitations.
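A minimal sketch of that stacking idea follows, assuming a hypothetical four-device stack storing 8-bit slices of a 32-bit I/O-resource bus; each device then needs only a narrow slice of pins while the resource still sees the full bus width. The depth and slice widths are illustrative, not figures from the patent.

```c
#include <stdint.h>
#include <stdio.h>

#define STACK_DEPTH 4          /* assumed number of stacked AMPIC DRAM devices */

/* Stripe one 32-bit word across the stack: each device stores an 8-bit
 * slice, keeping the per-device pin count low. */
static void stack_write_word(uint8_t slices[STACK_DEPTH], uint32_t word)
{
    for (int dev = 0; dev < STACK_DEPTH; dev++)
        slices[dev] = (uint8_t)(word >> (8 * dev));
}

/* Reassemble the wide word by gathering every device's slice. */
static uint32_t stack_read_word(const uint8_t slices[STACK_DEPTH])
{
    uint32_t word = 0;
    for (int dev = 0; dev < STACK_DEPTH; dev++)
        word |= (uint32_t)slices[dev] << (8 * dev);
    return word;
}

int main(void)
{
    uint8_t slices[STACK_DEPTH];
    stack_write_word(slices, 0x12345678u);
    printf("reassembled: 0x%08X\n", (unsigned)stack_read_word(slices));
    return 0;
}
```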
While such architectures, as previously stated and as described in said copending application, admirably provide a very large amount of bandwidth for each system I/O resource to access the DRAM, the system does not provide a mechanism by which one system I/O resource may send data to another system I/O resource, an improvement now provided by the present invention. As an example, if system I/O resource m has a multi-bit message that should be sent to system I/O resource n, then once system I/O resource m has written the multi-bit message into the AMPIC DRAM stack or array, the invention now provides a mechanism for informing system I/O resource n of both the existence of such a message and the message location within the AMPIC DRAM array. In addition, upon system I/O resource n being informed of the existence of the message and of its location in the array, a technique is provided in accordance with the present invention for allowing system I/O resource n to extract the message from the array. Because the message data is thus distributed across the entire AMPIC DRAM array, moreover, with each element of the array holding only a portion of the data, the complete signaling information must be sent to each individual element of the AMPIC DRAM array.
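The C sketch below illustrates one way to model that signaling step, under assumed structure names, device and port counts, and a single pending slot per destination: because the message data is striped across every device, the same descriptor (destination resource plus DRAM location and length) is broadcast to each element, and the destination resource later uses it to extract its portion from every device. This is a sketch of the described behavior, not the patent's implementation.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_DEVICES  4    /* devices in the AMPIC DRAM array (assumed) */
#define NUM_PORTS    8    /* system I/O resources (assumed) */

/* Signaling information that must reach every element of the array:
 * which resource the message is for and where in the DRAM it lives. */
typedef struct {
    int      valid;
    int      dst_port;      /* system I/O resource n */
    uint32_t dram_address;  /* message location inside the DRAM */
    uint32_t length;        /* message size in bytes */
} msg_descriptor;

/* One pending-message slot per destination port on every device.
 * (A real design would queue multiple descriptors; one suffices here.) */
static msg_descriptor pending[NUM_DEVICES][NUM_PORTS];

/* After resource m has written the message data into the array, broadcast
 * the same descriptor to every device so each element knows the message
 * exists and where its own slice of the data sits. */
static void signal_message(int dst_port, uint32_t dram_address, uint32_t length)
{
    for (int dev = 0; dev < NUM_DEVICES; dev++) {
        pending[dev][dst_port] = (msg_descriptor){
            .valid = 1, .dst_port = dst_port,
            .dram_address = dram_address, .length = length
        };
    }
}

/* Resource n polls its slot on a device; on a hit it learns the location
 * and can then extract its portion of the message from every device. */
static int poll_message(int dev, int port, msg_descriptor *out)
{
    if (!pending[dev][port].valid)
        return 0;
    *out = pending[dev][port];
    pending[dev][port].valid = 0;
    return 1;
}

int main(void)
{
    msg_descriptor d;
    signal_message(3, 0x0400u, 256u);        /* message destined for resource 3 */
    if (poll_message(0, 3, &d))
        printf("resource 3: message of %u bytes at 0x%04X\n",
               (unsigned)d.length, (unsigned)d.dram_address);
    return 0;
}
```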
The invention, in addition, provides the further improvement of a partitioning technique that allows either several simultaneous small-size transfers or a single very wide transfer, using the wide internal system data bus more efficiently to accommodate both small and large units of data transfer.
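A small C sketch of the partitioning idea follows, assuming a hypothetical four-section internal bus: in a given transfer cycle the bus is used either as one full-width path for a very wide transfer or as several independent narrow paths for simultaneous small transfers. The section count, widths, and scheduling rule are assumptions for illustration only.

```c
#include <stdio.h>

#define BUS_SECTIONS   4     /* assumed number of partitions of the wide internal bus */
#define SECTION_BYTES  64    /* assumed width of each partition, in bytes */

/* Plan one internal transfer cycle: a request larger than one section takes
 * the whole bus as a single very wide transfer; otherwise up to BUS_SECTIONS
 * small transfers proceed simultaneously, one per partition.
 * Returns the number of requests served this cycle. */
static int plan_cycle(const unsigned request_bytes[], int num_requests)
{
    if (num_requests > 0 && request_bytes[0] > SECTION_BYTES) {
        printf("cycle: single wide transfer of %u bytes on the full bus\n",
               request_bytes[0]);
        return 1;
    }
    int served = num_requests < BUS_SECTIONS ? num_requests : BUS_SECTIONS;
    for (int s = 0; s < served; s++)
        printf("cycle: section %d carries a %u-byte transfer\n", s, request_bytes[s]);
    return served;
}

int main(void)
{
    unsigned small[] = { 16, 32, 8, 64, 24 };  /* several small transfers queued */
    unsigned big[]   = { 256 };                /* one very wide transfer */

    plan_cycle(small, 5);   /* fills up to four sections in one cycle */
    plan_cycle(big, 1);     /* takes the entire wide bus */
    return 0;
}
```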
OBJECTS OF INVENTION
A primary object of the present invention, accordingly, is to provide a new and improved apparatus for and method of architecturally enhancing the performance of multi-port internally cached DRAMs and the like by providing a novel mechanism and technique for permitting system I/O resources to send message data to one another, informing both as to the existence of such a message and the message location, and then to enable extraction of the message.
A further object is to provide such an improved system wherein, through a novel partitioning technique, the wide internal system data bus is used more efficiently to accommodate both small and large units of internal data transfer, allowing either several simultaneous small message transfers or a single very wide transfer.
Other and further objects will be explained hereinafter and are more particularly delineated in the appended claims.
SUMMARY OF INVENTION
In summary, from one of its broader aspects, the invention embraces, in a multi-port internally cached array of AMPIC DRAM units in which a plurality of system I/O resources interface along common internal data buses connected to corresponding DRAM cores in each unit of the array, and wherein data from a CPU or similar source is also transferred to each unit along the buses during data transfer cycles, a method of improving performance that comprises, concurrently with the data transfer, enabling the system I/O resources to send multi-bit messages to one another by sending a message from one system I/O resource to all AMPIC DRAM units of the array during said data transfer cycles, together with bit information on the message address location in the DRAM.
Preferred and best mode designs, apparatus, techniques, and alternate structures are hereinafter explained in detail.


REFERENCES:
patent: 5410540 (1995-04-01), Aiki et al.
patent: 5802580 (1998-09-01), McAlpine
patent: 5875470 (1999-02-01), Dreibelbis
