Electrical computers and digital data processing systems: input/output – Input/output data processing
Reexamination Certificate
2000-04-28
2003-08-26
Gaffin, Jeffrey (Department: 2182)
C710S004000, C710S007000, C710S020000, C710S021000, C710S033000, C710S036000, C710S046000, C710S052000
active
06611879
BACKGROUND OF THE INVENTION
This invention relates generally to data storage systems, and more particularly to data storage systems having redundancy arrangements to protect against total system failure in the event of a failure in a component or subassembly of the storage system.
As is known in the art, large host computers and servers (collectively referred to herein as “host computer/servers”) require large capacity data storage systems. These host computer/servers generally include data processors, which perform many operations on data introduced to the host computer/server through peripherals, including the data storage system. The results of these operations are output to peripherals, including the storage system.
One type of data storage system is a magnetic disk storage system. Here a bank of disk drives and the host computer/server are coupled together through an interface. The interface includes “front end” or host computer/server controllers (or directors) and “back-end” or disk controllers (or directors). The interface operates the controllers (or directors) in such a way that they are transparent to the host computer/server. That is, data is stored in, and retrieved from, the bank of disk drives in such a way that the host computer/server merely thinks it is operating with its own local disk drive. One such system is described in U.S. Pat. No. 5,206,939, entitled “System and Method for Disk Mapping and Data Retrieval”, inventors Moshe Yanai, Natan Vishlitzky, Bruno Alterescu and Daniel Castel, issued Apr. 27, 1993, and assigned to the same assignee as the present invention.
As described in such U.S. Patent, the interface may also include, in addition to the host computer/server controllers (or directors) and disk controllers (or directors), addressable cache memories. The cache memory is a semiconductor memory provided to rapidly store data from the host computer/server before it is stored in the disk drives and, conversely, to store data from the disk drives before it is sent to the host computer/server. Because the cache memory is a semiconductor memory, as distinguished from the magnetic memory of the disk drives, it is much faster than the disk drives in reading and writing data.
The host computer/server controllers, disk controllers and cache memory are interconnected through a backplane printed circuit board. More particularly, disk controllers are mounted on disk controller printed circuit boards. The host computer/server controllers are mounted on host computer/server controller printed circuit boards. And, cache memories are mounted on cache memory printed circuit boards. The disk directors, host computer/server directors, and cache memory printed circuit boards plug into the backplane printed circuit board. In order to provide data integrity in case of a failure in a director, the backplane printed circuit board has a pair of buses. One set of the disk directors is connected to one bus and another set of the disk directors is connected to the other bus. Likewise, one set of the host computer/server directors is connected to one bus and another set of the host computer/server directors is connected to the other bus. The cache memories are connected to both buses. Each one of the buses provides data, address and control information.
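For concreteness, the dual-bus topology described above could be modeled roughly as in the following sketch. This is purely illustrative and not taken from the patent; the names (Bus, build_backplane, the director and cache labels) are hypothetical.

```python
# Illustrative sketch of the dual-bus backplane: directors are split between two
# buses for redundancy, while every cache board is reachable from both buses.
from dataclasses import dataclass, field

@dataclass
class Bus:
    name: str
    boards: list = field(default_factory=list)

def build_backplane(front_end, back_end, caches):
    """Split directors across buses B1/B2; attach every cache board to both."""
    b1, b2 = Bus("B1"), Bus("B2")
    for i, director in enumerate(front_end + back_end):
        (b1 if i % 2 == 0 else b2).boards.append(director)  # half the directors per bus
    for cache in caches:
        b1.boards.append(cache)
        b2.boards.append(cache)  # cache stays reachable if either bus fails
    return b1, b2

b1, b2 = build_backplane(["FE0", "FE1"], ["BE0", "BE1"], ["CACHE0"])
print(b1.boards)  # ['FE0', 'BE0', 'CACHE0']
print(b2.boards)  # ['FE1', 'BE1', 'CACHE0']
```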
The arrangement is shown schematically in FIG. 1. Thus, the use of two buses B1, B2 provides a degree of redundancy to protect against a total system failure in the event that the controllers or disk drives connected to one bus fail. Further, the use of two buses increases the data transfer bandwidth of the system compared to a system having a single bus. Thus, in operation, when the host computer/server 12 wishes to store data, the host computer 12 issues a write request to one of the front-end directors 14 (i.e., host computer/server directors) to perform a write command. One of the front-end directors 14 replies to the request and asks the host computer 12 for the data. After the request has passed to the requesting one of the front-end directors 14, the director 14 determines the size of the data and reserves space in the cache memory 18 to store the request. The front-end director 14 then produces control signals on one of the address memory buses B1, B2 connected to such front-end director 14 to enable the transfer to the cache memory 18. The host computer/server 12 then transfers the data to the front-end director 14. The front-end director 14 then advises the host computer/server 12 that the transfer is complete. The front-end director 14 looks up in a Table, not shown, stored in the cache memory 18 to determine which one of the back-end directors 20 (i.e., disk directors) is to handle this request. The Table maps the host computer/server 12 addresses into an address in the bank 22 of disk drives. The front-end director 14 then puts a notification in a “mail box” (not shown, stored in the cache memory 18) for the back-end director 20 which is to handle the request, indicating the amount of the data and the disk address for the data. The back-end directors 20 poll the cache memory 18 when they are idle to check their “mail boxes”. If the polled “mail box” indicates a transfer is to be made, the back-end director 20 processes the request, addresses the disk drive in the bank 22, reads the data from the cache memory 18 and writes it into the addresses of a disk drive in the bank 22.
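The following sketch mirrors the write path just described: the front-end director stages the data in the cache, posts a mailbox entry, and an idle back-end director later polls its mailbox and de-stages the data to disk. It is a minimal, hypothetical model; the cache, mailbox and disk structures are stand-ins, not the patent's actual data layout.

```python
# Hypothetical model of the mailbox-style write path described above.
import queue

cache = {}                           # shared cache memory (slot -> data)
mailboxes = {"BE0": queue.Queue()}   # per back-end-director "mail box" kept in the cache

def front_end_write(host_data, disk_addr, backend_id="BE0"):
    slot = f"slot-{disk_addr}"
    cache[slot] = host_data                       # reserve space and stage data in cache
    mailboxes[backend_id].put((slot, disk_addr))  # notify the chosen back-end director
    return "transfer complete"                    # host is told the write is done here

def back_end_poll(backend_id, disk):
    while True:                                   # idle back-end directors poll their mail box
        try:
            slot, disk_addr = mailboxes[backend_id].get(timeout=1)
        except queue.Empty:
            break
        disk[disk_addr] = cache[slot]             # de-stage the cached data to the disk drive

disk = {}
front_end_write(b"payload", disk_addr=42)
back_end_poll("BE0", disk)
print(disk[42])  # b'payload'
```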
When data is to be read from a disk drive in bank 22 to the host computer/server 12, the system operates in a reciprocal manner. More particularly, during a read operation, a read request is instituted by the host computer/server 12 for data at specified memory locations (i.e., a requested data block). One of the front-end directors 14 receives the read request and examines the cache memory 18 to determine whether the requested data block is stored in the cache memory 18. If the requested data block is in the cache memory 18, the requested data block is read from the cache memory 18 and is sent to the host computer/server 12. If the front-end director 14 determines that the requested data block is not in the cache memory 18 (i.e., a so-called “cache miss”), the director 14 writes a note in the cache memory 18 (i.e., the “mail box”) that it needs to receive the requested data block. The back-end directors 20 poll the cache memory 18 to determine whether there is an action to be taken (i.e., a read operation of the requested block of data). The one of the back-end directors 20 which polls the cache memory 18 “mail box” and detects a read operation reads the requested data block and initiates storage of such requested data block in the cache memory 18. When the data is completely written into the cache memory 18, a read complete indication is placed in the “mail box” in the cache memory 18. It is to be noted that the front-end directors 14 are polling the cache memory 18 for read complete indications. When one of the polling front-end directors 14 detects a read complete indication, such front-end director 14 completes the transfer of the requested data, which is now stored in the cache memory 18, to the host computer/server 12.
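A corresponding sketch of the read path follows, again hypothetical and simplified: the cache-hit case returns directly, the cache-miss case leaves a mailbox note, the back-end director stages the block into the cache and posts a read-complete indication, and the front end polls for that indication before completing the transfer. The polling loops are collapsed into direct calls for brevity.

```python
# Illustrative read-path sketch (cache hit vs. miss) following the description above.
cache = {}          # shared cache memory
read_mailbox = []   # "mail box" entries polled by back-end directors
done_mailbox = []   # read-complete indications polled by front-end directors

def back_end_service(disk):
    while read_mailbox:
        addr = read_mailbox.pop(0)
        cache[addr] = disk[addr]        # stage the block from disk into the cache
        done_mailbox.append(addr)       # post the read-complete indication

def front_end_read(block_addr, disk):
    if block_addr in cache:             # cache hit: serve directly from the cache
        return cache[block_addr]
    read_mailbox.append(block_addr)     # cache miss: leave a note for the back end
    back_end_service(disk)              # stands in for the back-end polling loop
    while block_addr not in done_mailbox:
        pass                            # front end polls for the read-complete flag
    return cache[block_addr]

disk = {7: b"block-7"}
print(front_end_read(7, disk))   # miss: staged via the back end, then returned
print(front_end_read(7, disk))   # hit: served straight from the cache
```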
The use of mailboxes and polling requires time to transfer data between the host computer/server 12 and the bank 22 of disk drives, thus reducing the operating bandwidth of the interface.
SUMMARY OF THE INVENTION
In accordance with the present invention, a system interface is provided. Such interface includes a plurality of first directors, a plurality of second directors, a data transfer section and a message network. The data transfer section includes a cache memory. The cache memory is coupled to the plurality of first and second directors. The message network operates independently of the data transfer section and such network is coupled to the plurality of first directors and the plurality of second directors. The first and second directors control data transfer between the host computer/server and the bank of disk drives in response to messages passing between the directors through the message network.
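To make the claimed separation concrete, the sketch below models control messages traveling over a message network that is independent of the cache through which the data itself moves. It is a hedged illustration under assumed names (MessageNetwork, DataTransferSection, the "destage" message) that do not come from the patent.

```python
# Sketch of a data-transfer section (cache) kept separate from a message network
# that carries director-to-director control traffic.
import queue

class MessageNetwork:
    """Carries control messages between directors, independent of the cache path."""
    def __init__(self, director_ids):
        self.inbox = {d: queue.Queue() for d in director_ids}
    def send(self, to_director, message):
        self.inbox[to_director].put(message)
    def receive(self, director, timeout=1):
        return self.inbox[director].get(timeout=timeout)

class DataTransferSection:
    """Holds the cache memory through which user data moves between directors."""
    def __init__(self):
        self.cache = {}

# A front-end (first) director stages data in the cache, then tells a back-end
# (second) director about it over the message network rather than a polled mail box.
net = MessageNetwork(["FE0", "BE0"])
dts = DataTransferSection()
dts.cache["slot-42"] = b"payload"
net.send("BE0", {"op": "destage", "slot": "slot-42", "disk_addr": 42})
print(net.receive("BE0"))  # {'op': 'destage', 'slot': 'slot-42', 'disk_addr': 42}
```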
EMC Corporation
Farooq Mohammad O.