Method and device for connecting a data processing system...

Error detection/correction and fault detection/recovery – Data processing system error or fault handling – Reliability and availability

Reexamination Certificate (active)

Patent number: 06256748
ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to a process for connecting the central processing unit of a data processing system to a redundant data storage system.
The invention also relates to a device for implementing the process.
More particularly, it relates to a device for connecting the central processing unit of a data processing system to a redundant array of magnetic disks, via controllers.
Within the scope of the invention, the term “controller” should be understood in its broadest sense; for example, it could mean a specialized processor with an internally stored program. Likewise, the term controller will be used for controllers disposed in input-outputs of the central processor or controllers of disks per se, or more generally of data storage units.
BACKGROUND OF THE INVENTION
FIG. 1, attached to the present specification, illustrates in simplified fashion the general architecture of a data processing system. The data processing system 1 comprises a central processing unit CPU, controllers Ctl1 through Ctln, and one or more peripheral subsystems S/SP. These subsystems can comprise, for example, single disks, redundant arrays of disks, magnetic tapes, or even printers.
The redundant arrays of disks can have various architectures, including the architectures known by the well-known acronym “RAID” (for “Redundant Array of Independent Disks”).
The arrays of disks with a “RAID” architecture are in turn subdivided into several subcategories. Among others, it is possible to cite the architectures “RAID-1” and “RAID-5,” though this is not exhaustive.
To begin with, let us briefly summarize the main characteristics of these two architectures, which are used to advantage within the scope of the invention.
To obtain redundancy of the “RAID-1” type, mirrored disks are used. According to this method, the data are recorded normally on a first disk, and redundantly on a second disk, physically distinct from the first one, which represents the “mirror” of the first one. When a “normal” disk is malfunctioning, the data can be read- and/or write-accessed from its “mirror” disk. This naturally requires doubling the storage capacity, and hence the number of physical disks, relative to what is actually needed.
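The mirroring scheme just described can be sketched as follows. This is a minimal illustrative model, not the patent's implementation; the class and method names are hypothetical.

```python
# Illustrative sketch of RAID-1 mirroring: every write is duplicated on a
# second, physically distinct disk, and reads can fall back to the mirror.
class MirroredDisk:
    def __init__(self):
        self.primary = {}  # block number -> data, on the "normal" disk
        self.mirror = {}   # identical copy on the "mirror" disk

    def write(self, block, data):
        # Each write goes to both physical disks, doubling capacity needs.
        self.primary[block] = data
        self.mirror[block] = data

    def read(self, block, primary_ok=True):
        # If the primary disk malfunctions, the mirror serves the request.
        return self.primary[block] if primary_ok else self.mirror[block]
```

The doubled write traffic is the price of being able to serve any request from either physical copy.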
Redundancy of the “RAID-5” type requires less additional storage capacity. The data is divided into segments of several blocks of a given length, which can be called “usable” data blocks. A redundant segment composed of parity blocks is associated with a given number of segments.
According to this method, several physical disks are also used. In general, the disks are partitioned into “slices” and a “rotating parity” data recording schema is used.
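The parity blocks mentioned above are conventionally the byte-wise XOR of the usable data blocks of the associated segments, which is what allows any single lost block to be reconstructed. A minimal sketch, assuming XOR parity (the function names are illustrative, not from the patent):

```python
from functools import reduce

def parity_block(blocks):
    """Byte-wise XOR of equal-length usable data blocks forms the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, t) for t in zip(*blocks))

def recover(surviving_blocks, parity):
    """Any single lost block equals the XOR of the parity with the survivors."""
    return parity_block(surviving_blocks + [parity])
```

Because XOR is its own inverse, reconstruction uses the same operation as parity generation.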
FIG. 2a, attached to the present specification, illustrates this recording method with “RAID-5” type redundancy and rotating parity. By way of example, it is assumed that the data storage subsystem comprises five physical disks D1 through D5 under the control of a single controller Ctl, for example equipped with interfaces of the type known by the name “SCSI” (for “Small Computer System Interface”), SCSI1 through SCSI5. The controller Ctl also comprises an interface of the same type, SCSI0, connected to the central processor (not represented). The bottom part of FIG. 2a represents the logical configuration of the memory, with storage equivalent to the five disks D1 through D5. Each group of disks D1 through D5 is called a physical device PD.
This array is divided into y slices, t1 through ty. It is assumed that only one segment is recorded in any slice of a disk, for example the segment S0 (“Seg. 0”) in the slice t1 of the disk D2. If one parity segment P1 (stored in the slice t1 of the disk D1) is associated with four segments of usable data, S0 through S3, it is easy to see that there is a shift of the storage position of the next parity segment P2: naturally, the latter is stored in the slice t2 (in the example described), but on the disk D2, not the disk D1. A regular shift also exists for the segments P3 through P5, respectively stored on the disks D3 through D5. Once again, a parity segment P6 is stored on the disk D1, in the slice t6. Therefore, there is a shift of modulo 5, and more generally of modulo d, d being the number of physical disks and s=d−1 being the number of segments of usable data associated with a parity segment.
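The rotating placement just described (P1 on D1, P2 on D2, ..., P6 back on D1) reduces to a modulo-d rule. A one-line sketch of that rule, with 1-based slice and disk indices as in the example:

```python
def parity_disk(slice_index, d=5):
    """Disk (1-based) holding the parity segment of a given slice (1-based),
    following the rotating-parity pattern: the parity position shifts by one
    disk per slice and wraps around modulo d, the number of physical disks."""
    return (slice_index - 1) % d + 1
```

Rotating the parity in this way spreads parity-update traffic evenly over all d disks instead of concentrating it on a dedicated parity disk.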
For the example illustrated by FIG. 2a, the map of the distribution among the disks D1 through D5 and the slices t1 through ty of the usable data segments (“Seg. S0” through “Seg. Sx+3”) and the parity segments (“Parity P0” through “Parity Py”) is shown in “TABLE 1,” located at the end of the present specification.
Naturally, other distribution schemata are possible, but if rotating parity recording is desired, the ratio between the accumulated number of usable data segments and the number of associated parity segments and the number of physical disks cannot be just any ratio.
As defined above, the data storage space constitutes a de facto virtual memory space, or logic unit LUN.
This virtual memory space can be further subdivided into several logic units, LUN0 through LUN2, as is illustrated more particularly by FIG. 2b. Each logic unit LUN0 through LUN2 comprises a certain number of slices, the total number of segments (usable data and parity data) being equal to the number of slices multiplied by the number of physical disks. In the example described in FIG. 2b, it is assumed that the virtual disk array has been divided into three logic units LUN0 through LUN2. In other words, a “vertical” partitioning (by physical disks) has been replaced by a “horizontal” partitioning, for a given physical device PD. The number of partitions is chosen so as to obtain optimized performance as a function of the capacity of the elementary disks D1 through D5, and hence of their accumulated capacity (physical device PD). In the example described, each logic unit LUN0 through LUN2 forms a virtual disk with a capacity equal to one third of the accumulated capacity, i.e., of the capacity of the physical device PD.
The use of a redundant disk architecture for data makes it possible to solve only some of the problems caused by hardware failures. In fact, although not represented in the preceding figures, the disks or arrays of disks are placed under the control of at least one disk controller. In the event of a failure of this unit, access to all or part of the information would be seriously compromised. It has been proposed that redundancy also be introduced at this level, as illustrated in detail in FIG. 3a.
According to this architecture, the multiple disk array, with the common reference MD, is placed under the control of two disk controllers, in this case two data storage processors SP-A and SP-B, operating redundantly. The multiple disk array MD can comprise one or more physical devices PD (FIGS. 2a and 2b), and hence, a fortiori, one or more logic units (FIG. 2b: LUN0 through LUN2). Normally, some of the disk space, and hence some of the logic units (a priori half, or a value approximating half, as a function of the redundant architecture adopted), are assigned to one of the data storage processors, for example SP-A, and the rest to the other processor, for example SP-B.
In the normal operating mode, access to the first partition of the total disk space is gained via the processor SP-A, and access to the second partition of the disk space is gained via the processor SP-B. If a logic unit LUN#m (m being arbitrary and falling between 0 and n, n+1 being the maximum number of logic units) is assigned to SP-A, it is necessary to organize a redundancy of access to the resource LUN#m through the processor SP-B in the event of a failure of the processor SP-A. However, many types of processors available on the market would not make it possible to “see” the logic unit LUN#m directly through the processor SP-B.
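The normal-mode assignment and the desired failover behavior can be sketched as follows. This is a hypothetical model of the routing logic only (the class name and half-split ownership rule are assumptions for illustration); it does not reflect how any particular storage processor exposes its logic units:

```python
# Sketch of controller-level redundancy (FIG. 3a): logic units are split
# between two storage processors; on failure, requests reroute to the peer.
class StorageProcessorPair:
    def __init__(self, n_luns):
        # A priori, half the logic units are assigned to each processor.
        self.owner = {m: ("SP-A" if m < n_luns // 2 else "SP-B")
                      for m in range(n_luns)}
        self.failed = set()

    def route(self, lun):
        """Return the processor serving LUN#m, falling back to the peer
        when the assigned processor has failed."""
        sp = self.owner[lun]
        if sp in self.failed:
            sp = "SP-B" if sp == "SP-A" else "SP-A"
        return sp
```

The difficulty raised in the text is precisely that this fallback path presumes the surviving processor can “see” the peer's logic units, which many processors on the market do not allow directly.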
Ordinarily, two methods a