Methods and apparatus for accessing data within a data...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details: C100S113000; Reexamination Certificate; active; 06516390

ABSTRACT:

BACKGROUND OF THE INVENTION
A typical data storage system stores and retrieves data for external hosts. FIG. 1 shows a high-level block diagram of a conventional data storage system 20. The data storage system 20 includes front-end circuitry 22, a cache 24, back-end circuitry 26 and a set of disk drives 28-A, 28-B (collectively, disk drives 28). The cache 24 operates as a buffer for data exchanged between external hosts 30 and the disk drives 28. The front-end circuitry 22 operates as an interface between the hosts 30 and the cache 24. Similarly, the back-end circuitry 26 operates as an interface between the cache 24 and the disk drives 28.
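The component layout described above can be sketched as a simple object model. The class and attribute names below are illustrative inventions, not identifiers from the patent; only the roles (cache as buffer, drives behind back-end circuitry) come from the text.

```python
from dataclasses import dataclass, field

@dataclass
class Cache:
    """Buffer between hosts and disk drives (the cache 24)."""
    blocks: dict = field(default_factory=dict)  # block address -> data

@dataclass
class DiskDrive:
    """One of the disk drives 28."""
    name: str
    blocks: dict = field(default_factory=dict)

@dataclass
class StorageSystem:
    """High-level model of the data storage system 20."""
    cache: Cache
    drives: list  # disk drives 28, reached through back-end circuitry 26

# Hypothetical instantiation mirroring FIG. 1 (two drives, one shared cache).
system = StorageSystem(cache=Cache(), drives=[DiskDrive("28-A"), DiskDrive("28-B")])
```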
FIG. 1 further shows a particular implementation 32 of the data storage system 20. In the implementation 32, the front-end circuitry 22 includes multiple front-end circuit boards 34. Each front-end circuit board 34 includes a pair of front-end directors 36-A, 36-B. Each front-end director 36 (e.g., the front-end director 36-A of the front-end circuit board 34-1) is interconnected between a particular host 30 (e.g., the host 30-A) and a set of M buses 38 that lead to the cache 24 (M being a positive integer), and operates as an interface between that particular host 30 and the cache 24.
Similarly, the back-end circuitry 26 includes multiple back-end circuit boards 40. Each back-end circuit board 40 includes a pair of back-end directors 42-A, 42-B. Each back-end director 42 is interconnected between a particular disk drive 28 and the M buses 38 leading to the cache 24, and operates as an interface between that disk drive 28 and the cache 24.
Each disk drive 28 has multiple connections 44, 46 to the cache 24. For example, the disk drive 28-A has a first connection 44-A that leads to the cache 24 through the back-end director 42-A of the back-end circuit board 40-1, and a second connection 46-A that leads to the cache 24 through another back-end director of another back-end circuit board 40 (e.g., a back-end director of the back-end circuit board 40-2). An explanation of how the implementation 32 of the data storage system 20 retrieves a block of data (e.g., 512 bytes) for a host 30 will now be provided.
Suppose that the host 30-A submits, to the front-end director 36-A of the front-end circuit board 34-1, a request for a block of data stored on the disk drive 28-A. In response to the request, the front-end director 36-A looks for the block in the cache 24. If the front-end director 36-A finds the block in the cache 24 (i.e., a cache hit), the front-end director 36-A simply transfers a copy of the block from the cache 24 through one of the M buses 38 to the host 30-A. This operation is called a cached read since the front-end director 36-A was able to read a cached block (a block previously existing in the cache 24) on its first attempt.
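The cache-hit decision just described can be illustrated with a short sketch. The function name `read_block` and the dictionary-based cache are assumptions for illustration, not details from the patent.

```python
def read_block(cache: dict, address: str):
    """Front-end director logic for a read: return the block on a
    cache hit, or None to signal that a non-cached read is needed."""
    if address in cache:          # cache hit: serve directly over a bus
        return cache[address]
    return None                   # cache miss

cache = {"block-7": b"\x00" * 512}            # a 512-byte block already cached
assert read_block(cache, "block-7") is not None   # cached read (hit)
assert read_block(cache, "block-9") is None       # cache miss
```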
However, if the front-end director 36-A does not find the block in the cache 24 (i.e., a cache miss), the front-end director 36-A performs a non-cached read operation. Here, the front-end director 36-A places a read message in the cache 24 through one of the M buses 38. The read message directs the back-end director 42-A of the back-end circuit board 40-1 to copy the block from the disk drive 28-A to the cache 24. The back-end director 42-A, which periodically polls the cache 24 for such messages, eventually detects the read message from the front-end director 36-A. In response to such detection, the back-end director 42-A transfers a copy of the block from the disk drive 28-A through one of the M buses 38 to the cache 24. The back-end director 42-A then places a notification message into the cache 24 through one of the M buses 38. The notification message notifies the front-end director 36-A that the requested block now resides in the cache 24. The front-end director 36-A, which periodically polls the cache 24 for such notification messages and for the requested block, eventually detects the notification message or the presence of the requested block in the cache 24. In response to such detection, the front-end director 36-A transfers the copy of the block from the cache 24 through one of the buses 38 to the host 30-A.
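The message exchange above can be modeled as a minimal simulation: the front end posts a read message in the cache, the back end polls for it, copies the block from disk, and posts a notification. All names and data structures here are invented for illustration; the patent describes the protocol, not this code.

```python
# Shared state: the cache holds both in-flight messages and data blocks.
cache = {"messages": [], "blocks": {}}
disk = {"block-7": b"\x00" * 512}            # contents of disk drive 28-A

def front_end_request(address):
    """Front-end director: place a read message in the cache."""
    cache["messages"].append(("read", address))

def back_end_poll():
    """Back-end director: poll for read messages, copy the block
    from disk into the cache, then post a notification message."""
    for msg in list(cache["messages"]):
        kind, address = msg
        if kind == "read":
            cache["blocks"][address] = disk[address]     # disk -> cache
            cache["messages"].remove(msg)
            cache["messages"].append(("done", address))  # notification

def front_end_poll(address):
    """Front-end director: poll for the notification or the block itself."""
    if ("done", address) in cache["messages"] or address in cache["blocks"]:
        return cache["blocks"][address]                  # cache -> host
    return None

front_end_request("block-7")
assert front_end_poll("block-7") is None     # back end has not polled yet
back_end_poll()
assert front_end_poll("block-7") == disk["block-7"]
```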
As described above, the non-cached read operation requires more time to fulfill than the cached read operation. In particular, the extra steps of putting the data block into the cache 24 and then reading the data block from the cache 24 take additional time and add to the latency of the overall operation, thus reducing performance.
It should be understood that the implementation 32 of the data storage system 20 can handle a subsequent request from a host 30 for the block of data by simply transferring the copy of the block residing in the cache 24 to the host 30 (i.e., a cache hit) without having to re-read the block from a disk drive 28. Such operation significantly reduces the block retrieval latency, particularly since retrieval time for a block of data from a disk drive is typically an order of magnitude higher than retrieval time for a block of data from cache memory.
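The order-of-magnitude gap implies that average retrieval latency falls quickly as the cache hit ratio rises. The numbers below are illustrative assumptions (1 ms per cache read, 10 ms per disk read), not figures from the patent; a miss pays both the disk transfer and the subsequent cache read.

```python
def avg_latency_ms(hit_ratio, cache_ms=1.0, disk_ms=10.0):
    """Expected block-retrieval latency for a given cache hit ratio.
    A miss costs the disk transfer plus the follow-up cache read."""
    return hit_ratio * cache_ms + (1 - hit_ratio) * (disk_ms + cache_ms)

assert avg_latency_ms(1.0) == 1.0    # every read is a cache hit
assert avg_latency_ms(0.0) == 11.0   # every read must go to disk first
```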
It should be further understood that the redundant features of the data storage system implementation 32 (e.g., the redundant front-end directors 36, the redundant back-end directors 42, the M buses 38, the multiple disk drive connections 44, 46, etc.) provide fault-tolerance and load-balancing capabilities for the data storage system implementation 32. For example, if the back-end director 42-A fails and is thus unable to retrieve a data block from the disk drive 28-A in response to a request from the host 30-A, another back-end director 42 (e.g., a back-end director 42 residing on the circuit board 40-2) can respond to the request by retrieving the requested block through a redundant path to the disk drive 28-A (see the connection 46-A of FIG. 1).
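The failover behavior amounts to choosing a healthy back-end director among those with a path to the drive. The sketch below is a hypothetical illustration; the director identifiers are borrowed from FIG. 1 but the selection logic is an assumption.

```python
def pick_director(directors, failed):
    """Return the first back-end director with a working path to the
    drive, skipping any that have failed (e.g., director 42-A)."""
    for d in directors:
        if d not in failed:
            return d
    raise RuntimeError("no working path to the disk drive")

# Drive 28-A is reachable via director 42-A (board 40-1, connection 44-A)
# and via a director on board 40-2 (connection 46-A).
paths_to_28A = ["42-A", "director-on-40-2"]
assert pick_director(paths_to_28A, failed=set()) == "42-A"
assert pick_director(paths_to_28A, failed={"42-A"}) == "director-on-40-2"
```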
SUMMARY OF THE INVENTION
Unfortunately, there are deficiencies to the above-described conventional implementation 32 of the data storage system 20 of FIG. 1. For example, for transactions requiring many non-cached read operations, such as media streaming, there is a heavy amount of traffic through the connection infrastructure between the front-end directors 36 and the back-end directors 42 (i.e., the cache 24 and the M buses 38). For such non-cached read operations, the exchanging of data blocks, read messages and notification messages, as well as the polling for such messages, tends to clog this connection infrastructure.
Additionally, there are delays associated with using the M buses 38. In particular, each director 36, 42 must arbitrate for use of the buses 38. A bus controller (not shown) typically grants the directors 36, 42 access to the buses 38 in accordance with a fair arbitration scheme (e.g., round-robin arbitration) to guarantee that none of the directors 36, 42 becomes starved for bus access. Accordingly, some directors 36, 42 may have to wait until it is their turn to use the buses 38, and such waiting is a source of latency. Particularly in times of heavy traffic, some directors 36, 42 may have to wait extended amounts of time before obtaining access to the cache 24 through one of the buses 38, thus significantly increasing data retrieval latencies.
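The round-robin arbitration scheme mentioned above can be sketched as follows. This is a simplified single-bus model under assumed semantics (each call grants one request, and the rotation ensures no requester is starved); the patent does not specify the controller's implementation.

```python
from collections import deque

class RoundRobinArbiter:
    """Grants bus access to requesting directors in round-robin order,
    a simplified model of the bus controller's fair arbitration."""
    def __init__(self, directors):
        self.order = deque(directors)

    def grant(self, requesting):
        """Grant the bus to the next requesting director, or None if idle."""
        for _ in range(len(self.order)):
            candidate = self.order[0]
            self.order.rotate(-1)        # next grant starts after this one
            if candidate in requesting:
                return candidate
        return None                      # no director is requesting

arbiter = RoundRobinArbiter(["36-A", "36-B", "42-A", "42-B"])
assert arbiter.grant({"42-A"}) == "42-A"            # idle directors are skipped
assert arbiter.grant({"36-A", "42-A"}) == "36-A"    # fairness: 42-A just went
```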
In contrast to the above-described conventional data storage system implementation 32, the invention is directed to techniques for accessing data within a data storage system having a circuit board that includes both a front-end circuit for interfacing with a host and a back-end circuit for interfacing with a storage device. To move data between the host and the storage device, an exchange of data between the front-end circuit and the back-end circuit can occur within the circuit board, thus circumventing the cache of the data storage system. Such operation not only reduces traffic through
