Data storage systems and methods which utilize an on-board...

Electrical computers and digital processing systems: memory – Storage accessing and control – Specific memory composition

Reexamination Certificate


Status: active

Patent number: 06751703

ABSTRACT:

BACKGROUND OF THE INVENTION
In general, a data storage system stores and retrieves data for one or more external hosts. FIG. 1 shows a high-level block diagram of a conventional data storage system 20. The data storage system 20 includes front-end circuitry 22, a cache 24, back-end circuitry 26 and a set of disk drives 28-A, 28-B (collectively, disk drives 28). The cache 24 operates as a buffer for data exchanged between external hosts 30 and the disk drives 28. The front-end circuitry 22 operates as an interface between the hosts 30 and the cache 24. Similarly, the back-end circuitry 26 operates as an interface between the cache 24 and the disk drives 28.
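The buffering relationship just described can be sketched in a few lines of Python. This is an illustrative model only; all class and method names here are my own, not taken from the patent.

```python
# Sketch of the conventional architecture of FIG. 1:
# hosts <-> front-end circuitry (22) <-> cache (24)
#       <-> back-end circuitry (26) <-> disk drives (28).

class Cache:
    """I/O buffer (24) between hosts and disk drives."""
    def __init__(self):
        self.slots = {}              # block address -> data

    def put(self, addr, data):
        self.slots[addr] = data

    def get(self, addr):
        return self.slots.get(addr)

class FrontEnd:
    """Front-end circuitry (22): host-facing interface to the cache."""
    def __init__(self, cache):
        self.cache = cache

    def host_write(self, addr, data):
        self.cache.put(addr, data)   # stage host data in the cache

class BackEnd:
    """Back-end circuitry (26): disk-facing interface to the cache."""
    def __init__(self, cache, disk):
        self.cache, self.disk = cache, disk

    def destage(self, addr):
        self.disk[addr] = self.cache.get(addr)  # move cache data to disk

disk = {}
cache = Cache()
FrontEnd(cache).host_write(0x10, b"payload")
BackEnd(cache, disk).destage(0x10)
assert disk[0x10] == b"payload"      # data reached the disk via the cache
```

Note that the host never touches the disk directly: every transfer is staged through the cache, which is the property the rest of the background section builds on.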
FIG. 1 further shows a conventional implementation 32 of the data storage system 20. In the implementation 32, the front-end circuitry 22 includes multiple front-end circuit boards 34. Each front-end circuit board 34 includes a pair of front-end directors 36-A, 36-B. Each front-end director 36 (e.g., the front-end director 36-A of the front-end circuit board 34-1) is interconnected between a particular host 30 (e.g., the host 30-A) and a set of M buses 38 (M being a positive integer) that lead to the cache 24 (individual memory boards), and operates as an interface between that host 30 and the cache 24. Similarly, the back-end circuitry 26 includes multiple back-end circuit boards 40. Each back-end circuit board 40 includes a pair of back-end directors 42-A, 42-B. Each back-end director 42 is interconnected between a particular disk drive 28 and the M buses 38 (a backplane interconnect) leading to the cache 24, and operates as an interface between that disk drive 28 and the cache 24.
It should be understood that the cache 24 is a buffer for host data exchanged between the hosts 30 and the disk drives 28, i.e., the cache 24 is input/output (I/O) memory. Even though the directors 36, 42 include processors that execute program instructions, the directors 36, 42 do not use the cache 24 as processor address space. Rather, each director 36, 42 includes some memory of its own as processor address space.
Each disk drive 28 of the implementation 32 has multiple connections 44, 46 to the cache 24. For example, the disk drive 28-A has a first connection 44-A that leads to the cache 24 through the back-end director 42-A of the back-end circuit board 40-1, and a second connection 46-A that leads to the cache 24 through another back-end director of another back-end circuit board 40 (e.g., a back-end director of the back-end circuit board 40-2).
It should be understood that the redundant features of the data storage system implementation 32 (e.g., the multiple disk drive connections 44, 46 of each disk drive 28, the M buses 38, the circuit boards 34, 40 having multiple directors 36, 42, etc.) provide fault tolerance and load balancing capabilities to the implementation 32. Further details of how the implementation 32 performs data write and read transactions will now be provided.
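The role of the redundant connections 44, 46 can be illustrated with a small path-selection routine. The routine and its names are hypothetical, offered only to make the fault-tolerance idea concrete.

```python
# Each disk drive has two paths to the cache, through directors on
# different back-end boards. If the primary path fails, I/O fails over
# to the secondary path (fault tolerance); while both are healthy, I/O
# can be spread across them (load balancing).

class Path:
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy

def select_path(paths):
    """Return the first healthy path, or None if every path has failed."""
    for p in paths:
        if p.healthy:
            return p
    return None

primary = Path("connection 44-A via director 42-A on board 40-1")
secondary = Path("connection 46-A via a director on board 40-2")

assert select_path([primary, secondary]) is primary
primary.healthy = False                   # simulate a board failure
assert select_path([primary, secondary]) is secondary
```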
For a host 30 to store data on the disk drives 28, the host 30 provides the data to one of the front-end directors 36, and that front-end director 36 initiates a write transaction on behalf of that host 30. In particular, the front-end director 36 provides the data to the cache 24 through one of the M buses 38. Next, one of the back-end directors 42 reads the data from the cache 24 through one of the M buses 38 and stores the data in one or more of the disk drives 28 to complete the write transaction. To expedite data transfer, the front-end director 36 can place a message for the back-end director 42 in the cache 24 when writing the data to the cache 24. The back-end director 42 can then respond as soon as it detects the message from the front-end director 36. Similar operations occur for a read transaction but in the opposite direction (i.e., data moves from the back-end director 42 to the cache 24, and then from the cache 24 to the front-end director 36).
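The write transaction above — a front-end director stages data in the cache and leaves a message, and a back-end director responds as soon as it sees the message — can be sketched with two threads and a mailbox. The mailbox layout and all names here are illustrative assumptions, not details from the patent.

```python
import queue
import threading

cache_slots = {}              # staged host data: block address -> bytes
mailbox = queue.Queue()       # messages left in the cache for the back end
disk = {}

def front_end_write(addr, data):
    """Front-end director 36: stage the data in the cache, post a message."""
    cache_slots[addr] = data
    mailbox.put(addr)         # tell the back-end director there is work

def back_end_worker():
    """Back-end director 42: respond as soon as a message is detected."""
    addr = mailbox.get()      # blocks until the front end posts a message
    disk[addr] = cache_slots[addr]   # destage cache data to the disk drive
    mailbox.task_done()

t = threading.Thread(target=back_end_worker)
t.start()
front_end_write(0x20, b"host data")
t.join()
assert disk[0x20] == b"host data"
```

A read transaction would run the same machinery in the opposite direction: the back-end director stages disk data into the cache and posts a message for the front-end director.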
SUMMARY OF THE INVENTION
Unfortunately, there are deficiencies to the above-described conventional implementation 32 of the data storage system 20 of FIG. 1. For example, the cache 24 is a highly shared main memory, and the set of M buses 38 is a highly shared interconnection mechanism. As such, arbitration and locking schemes are required to enable the front-end directors 36 and the back-end directors 42 to coordinate use of the cache 24 and the buses 38. These arbitration and locking schemes enable the directors 36, 42 (which equally contend for the highly shared cache 24 and buses 38) to resolve contention issues for memory boards within the cache 24 and for the buses 38. However, in doing so, some directors 36, 42 need to delay their operation (i.e., wait) until they are allocated these highly shared resources. Accordingly, contention for the cache 24 and the buses 38 by the directors 36, 42 is often a source of latency. In some high-traffic situations, the cache 24 and the buses 38 can become such a bottleneck that some external hosts 30 perceive the resulting latencies as unsatisfactory response time delays.
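The latency mechanism described here — directors idling until they are allocated a shared resource — is, in essence, lock contention. A minimal demonstration, with a single lock standing in for arbitration of the M buses (the names and timings are mine, purely illustrative):

```python
import threading
import time

bus_lock = threading.Lock()   # stands in for arbitration of the shared buses
wait_times = []

def director(worker_id):
    """Each director must win arbitration before it may use the bus/cache."""
    t0 = time.perf_counter()
    with bus_lock:            # wait here until the shared resource is granted
        wait_times.append(time.perf_counter() - t0)
        time.sleep(0.01)      # simulated data transfer over the bus

threads = [threading.Thread(target=director, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With four contenders serialized on one lock, the later directors wait
# tens of milliseconds; with no contention, each wait would be near zero.
print(f"max arbitration wait: {max(wait_times) * 1e3:.1f} ms")
```

Adding more directors to this sketch only lengthens the queue for the single lock, which is exactly the scaling problem the next paragraphs describe.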
Additionally, since the directors 36, 42 and the cache 24 reside on separate circuit boards (see FIG. 1), there are latencies resulting from the physical distances between the directors 36, 42 and the cache 24. In particular, latencies are incurred as the electrical signals propagate through transmission circuitry on one circuit board (e.g., a director 36, 42), through a backplane interconnect (e.g., one of the buses 38), and through receiving circuitry on another circuit board (e.g., the cache memory 24). Typically, such latencies are on the order of microseconds, i.e., a relatively large amount of time compared to on-board circuit times of a few hundred nanoseconds.
Furthermore, there are scaling difficulties with the implementation 32 of FIG. 1. In particular, as more front-end and back-end circuit boards 34, 40 are added to the system 20 to increase the capacity of the data storage system implementation 32, the highly shared buses 38 become increasingly congested. Eventually, the addition of further circuit boards 34, 40 results in unsatisfactory delays due to over-utilization of the cache 24 and the buses 38, i.e., the arbitration and locking mechanisms become unable to satisfy the access requirements of each director 36, 42.
One approach to reducing the response time of the implementation 32 of FIG. 1 is to replace the M buses 38 with a point-to-point interconnection topology, i.e., a point-to-point channel between each front-end director 36 and each memory board of the cache 24, and between each back-end director 42 and each memory board of the cache 24. Such a topology would alleviate bus contention latencies since each director 36, 42 would have immediate access to a communications channel with a memory board of the cache 24. Unfortunately, there could still exist contention difficulties between the directors 36, 42 and the cache memory boards (i.e., highly shared memories), as well as additional physical difficulties in deploying such point-to-point channels between the cache memory boards and each of the contending directors 36, 42 (e.g., physical difficulties in providing the memory boards with enough access ports and with circuitry for coordinating the use of such access ports).
In contrast to the above-described conventional data storage system implementation 32 of FIG. 1, which is prone to latency deficiencies due to contention for highly shared resources such as a highly shared cache 24 and highly shared buses 38 leading to the cache 24, the invention is directed to data storage and retrieval techniques that utilize a cache which is preferred to a consumer (e.g., a director) of a data element stored within that cache. Since the cache is preferred to the consumer…
