Title: Hierarchical file system structure for enhancing disk...
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Specific memory composition
Type: Reexamination Certificate
Filed: 2000-09-20
Issued: 2003-12-30
Examiner: Nguyen, Than (Department: 2187)
U.S. Classes: C711S004000, C711S111000, C711S113000, C711S114000, C707S793000
Status: active
Patent Number: 06671772
ABSTRACT:
The present invention relates generally to computer file systems and their organization and more particularly to file systems for disk data storage.
BACKGROUND OF THE INVENTION
A file system is a hierarchical structure (file tree) of files and directories. File systems are utilized in order to maintain, manage and organize the large amounts of data ordinarily stored in a computer system, either on a physical disk drive, a volatile or non-volatile memory, or any other such storage medium. Depending on the selected storage medium, file system maintenance, management and organization can become very important in the efficient transfer of data.
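A minimal sketch of such a file tree in Python (the `File`/`Directory` classes and the sample paths are hypothetical illustrations, not structures from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class File:
    name: str
    data: bytes = b""

@dataclass
class Directory:
    name: str
    entries: dict = field(default_factory=dict)  # name -> File | Directory

    def lookup(self, path: str):
        """Resolve a slash-separated path relative to this directory."""
        node = self
        for part in path.strip("/").split("/"):
            node = node.entries[part]  # KeyError if the path does not exist
        return node

# Build a tiny hierarchy: /home/readme.txt
root = Directory("/")
home = Directory("home")
root.entries["home"] = home
home.entries["readme.txt"] = File("readme.txt", b"hello")
print(root.lookup("home/readme.txt").data)  # b'hello'
```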
For example, in disk drive systems, conventional file systems typically store information as blocks of data, which are the units of storage allocation. A block of data typically has a size corresponding to some integer power of two bytes. Thus, conventional file systems which employ a block file structure vary in the amount of data that can be stored in a single block. For example, the UNIX System V file system and most MS-DOS floppy disk file systems are capable of storing blocks of data in block sizes of 2^9, or 512, bytes. File systems such as SVR3 (an extension of the UNIX System V file system) and CP/M (Control Program for Microprocessors, developed by Digital Research Corporation) were capable of storing blocks of data in sizes of 2^10, or 1,024, bytes. Similarly, the MS-DOS floppy disk file system can also store blocks of data in this block size. SVR3 also provides an alternate option for storing data in block sizes of 2^11, or 2,048, bytes. Moreover, some BSD (Berkeley Software Distribution, a version of the UNIX operating system developed at and distributed by the University of California at Berkeley) systems and some MS-DOS hard disk file systems provide for data storage in block sizes of 2^12, or 4,096, bytes, while most modern UNIX file systems utilize block sizes of 2^13, or 8,192, bytes. Some MS-DOS systems are capable of incorporating even larger block sizes.
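These block sizes are simply consecutive powers of two; a quick Python check of the arithmetic cited above:

```python
# Block sizes cited above, expressed as powers of two (bytes).
for n in (9, 10, 11, 12, 13):
    print(f"2^{n} = {2**n:>6,} bytes")
# 2^9  =    512 bytes  (UNIX System V, MS-DOS floppy file systems)
# 2^10 =  1,024 bytes  (SVR3, CP/M, MS-DOS floppies)
# 2^11 =  2,048 bytes  (SVR3 alternate option)
# 2^12 =  4,096 bytes  (some BSD and MS-DOS hard disk file systems)
# 2^13 =  8,192 bytes  (most modern UNIX file systems)
```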
Unfortunately, due to the physical constraints of the disk drive (i.e., track length, density, etc.), inefficiencies in data transfer generally result once a significant amount of data is located on the disk drive. The size of a data block has a number of implications for the transfer and storage efficiency of a disk drive. Specifically, a large data block suggests that more disk space will be wasted, since multiple files are not normally written to a single block even if space is available. For example, a small 100-byte data file will consume one entire data block, regardless of whether the block size is 512 bytes or 32 K bytes. Although only a portion of the data block holds the 100-byte file, the remaining space in the data block is unusable and therefore wasted.
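A minimal sketch of this internal-fragmentation arithmetic (the `wasted_bytes` helper is hypothetical; the block sizes are the ones cited above):

```python
def wasted_bytes(file_size: int, block_size: int) -> int:
    """Internal fragmentation: space allocated but unused in the last block."""
    blocks = -(-file_size // block_size)  # ceiling division
    return blocks * block_size - file_size

for block_size in (512, 32 * 1024):
    print(f"100-byte file, {block_size}-byte blocks: "
          f"{wasted_bytes(100, block_size)} bytes wasted")
# 512-byte blocks  ->    412 bytes wasted
# 32 K-byte blocks -> 32,668 bytes wasted
```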
In addition, a large data block means that more bits are transferred to and from the disk during a single read/write operation. Generally, the time required to perform a read or a write operation involves several parameters: (1) the waiting time for an operation in the disk queue to be serviced, (2) the disk latency time, and (3) the data transfer time for the required data (typically, doubling the transfer unit size doubles the data transfer time).
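A toy model of the third parameter, assuming a hypothetical sustained transfer rate of 10 MB/s (a figure chosen for illustration, not taken from the patent):

```python
TRANSFER_RATE = 10 * 1024 * 1024  # hypothetical: 10 MB/s sustained

def transfer_ms(block_bytes: int) -> float:
    """Data transfer time for one block; doubling the block doubles the time."""
    return block_bytes / TRANSFER_RATE * 1000.0

print(transfer_ms(512))      # ~0.049 ms
print(transfer_ms(2 * 512))  # ~0.098 ms: twice the transfer unit, twice the time
```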
In conventional multi-tasking systems it is common for the disk drive to remain busy while the computer performs operations other than reads or writes. Generally, a disk drive can be expected to complete one operation every few disk rotations. Thus, under heavy operation demand, the waiting time for an operation in the disk queue can be predicted as the product of the per-operation service time and the queue length. As such, delays of 100 milliseconds are not uncommon on conventional systems during demand peaks. These delays can become significant and can increase the time for a disk drive to perform an operation, such as a read or a write.
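A sketch of this prediction, assuming roughly one operation per rotation of a 3,600 RPM drive (about 16.7 ms of service time) and a hypothetical queue six operations deep:

```python
def queue_wait_ms(service_ms: float, queue_length: int) -> float:
    """Predicted wait = service time per operation x operations ahead in queue."""
    return service_ms * queue_length

# ~16.7 ms per operation, 6 operations queued (an assumed depth):
print(queue_wait_ms(16.7, 6))  # ~100 ms, matching the peak-demand delay cited above
```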
In addition, due to the mechanical nature of a disk drive, disk latency delays are typically associated with the time for the disk drive read/write head to be positioned over the proper cylinder of the disk and the time for the desired data on that cylinder to rotate under the read/write head. Conventional disk drives also typically incur latency delays for selecting a particular track of the disk within the cylinder. For example, most disk drives have a rotation time of about 16.7 milliseconds (3,600 RPM) and average seek times greater than 13 milliseconds. Some current disk drives rotate once every 11, 8.3, or 5.5 milliseconds (5,400, 7,200, or 10,800 RPM, respectively) and seek in about the same time. That is, disk drives can be viewed as rotating faster than they seek. Long seek delays (latency) can thus affect the transfer efficiency of the disk, and it becomes extremely difficult to achieve an average of approximately one disk operation per rotation. Thus, minimizing seek delays becomes important, such as by optimally placing data on the disk to reduce seek distance and by sorting outstanding read/write requests in the queue to minimize the seek time between requests.
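The rotation times quoted above follow directly from the spindle speed; a quick check:

```python
def rotation_ms(rpm: int) -> float:
    """Time for one full platter rotation, in milliseconds."""
    return 60_000.0 / rpm

for rpm in (3_600, 5_400, 7_200, 10_800):
    print(f"{rpm:>6,} RPM -> {rotation_ms(rpm):.1f} ms per rotation")
# 3,600 -> 16.7 ms; 5,400 -> 11.1 ms; 7,200 -> 8.3 ms; 10,800 -> 5.6 ms
```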
The complexity of the metadata (the on-disk information describing the files stored there) is also affected by the size of the data block. A file system contains information that is required to allow data to be accessed. This information takes the form of both user-visible and user-invisible data structures, such as the various directories which provide a mapping between file names and file numbers. There are also additional characteristics to maintain, such as disk drive free space, which data blocks belong to which files, the order of the data blocks within each file, and any data gaps within the files. Most file systems also track creation, access and/or modification times for each file, along with security and permission information of some type.
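A minimal sketch of the kind of per-file record this paragraph enumerates; the field names are hypothetical and loosely inode-like, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class FileMetadata:
    file_number: int            # target of the directory name -> number mapping
    size: int                   # logical file size in bytes
    block_list: list = field(default_factory=list)  # ordered data block numbers
    created: float = 0.0        # creation time
    accessed: float = 0.0       # last access time
    modified: float = 0.0       # last modification time
    mode: int = 0o644           # permission / security bits
```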
Large data blocks also mean that fewer data blocks exist for a given amount of storage, and fewer data blocks per file on average. This allows the metadata structures of the files to be smaller and less sophisticated. The MS-DOS FAT (file allocation table) is a classic example of a simple metadata structure that does not scale well to large storage architectures. Because it supports at most 64 K blocks per file system, a 2 gigabyte file system must use blocks of 32 K bytes each. This characteristic results in a large amount of wasted disk space and a degradation in the transfer efficiency of the disk.
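The FAT arithmetic above can be checked directly:

```python
MAX_FAT_BLOCKS = 64 * 1024   # at most 64 K blocks addressable per file system
fs_size = 2 * 1024**3        # a 2 gigabyte file system

block_size = fs_size // MAX_FAT_BLOCKS
print(block_size // 1024, "KiB per block")  # 32 KiB: the only block size at which
                                            # 64 K blocks can cover 2 GiB
```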
Physical disk geometry is also affected by the data block size. Traditional disk drives have a known number of sectors per track, a known number of heads per cylinder and a known number of cylinders per device. While these parameters may vary from device to device, these three values are generally sufficient for the system software to select the placement of on-disk data structures so that disk latency is minimized.
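A sketch of how these three geometry values bound capacity, assuming the conventional 512-byte sector; the sample geometry is illustrative, not from the patent:

```python
SECTOR_BYTES = 512  # conventional sector size; an assumption, not from the patent

def disk_capacity(cylinders: int, heads: int, sectors_per_track: int) -> int:
    """Capacity implied by classic cylinder/head/sector geometry."""
    return cylinders * heads * sectors_per_track * SECTOR_BYTES

# Illustrative geometry: 1,024 cylinders, 16 heads, 63 sectors per track.
print(disk_capacity(1024, 16, 63) / 1024**2, "MiB")  # ~504 MiB
```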
For example, many file systems, such as UFS (Unix File System), take this disk geometry into account at a very basic level. UFS is designed around the concept of a “cylinder group” which is simply a collection of adjacent cylinders on the disk drive. This cylinder group contains its own allocation metadata and is managed by the system as autonomously as possible. That is, files tend to be allocated within the same cylinder group as their parent directory in an attempt to minimize disk latency.
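A heavily simplified sketch of this allocation preference; the data structures and fallback policy here are hypothetical, not UFS's actual algorithm:

```python
def pick_cylinder_group(parent_group: int, free_blocks: dict) -> int:
    """Prefer the parent directory's cylinder group to keep related data close."""
    if free_blocks.get(parent_group, 0) > 0:
        return parent_group
    # Fall back to the nearest group with space, keeping seek distance small.
    candidates = [g for g, free in free_blocks.items() if free > 0]
    return min(candidates, key=lambda g: abs(g - parent_group))

free = {0: 0, 1: 12, 2: 300}        # free block counts per cylinder group
print(pick_cylinder_group(0, free))  # group 0 is full -> nearest group, 1
```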
Unfortunately, disk seeking typically involves a non-linear seek process. Disk head movement is generally described by the equation: delay = (number of tracks to be moved) × (per-track delay) + (settling time). To help reduce disk latency, microcontrollers are often utilized to accelerate disk head movement at the beginning of a seek and decelerate it toward the end. This provides dramatically faster performance for longer seeks and renders the above equation incomplete.
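The linear model, as code, with illustrative constants for the per-track and settling delays:

```python
PER_TRACK_MS = 0.05   # illustrative per-track delay, not from the patent
SETTLE_MS = 3.0       # illustrative settling time

def seek_ms_linear(tracks: int) -> float:
    """delay = (tracks moved) * (per-track delay) + (settling time).

    Real drive microcontrollers accelerate the head early in a seek and
    decelerate near the end, so long seeks finish faster than this
    linear model predicts.
    """
    return tracks * PER_TRACK_MS + SETTLE_MS

print(seek_ms_linear(100))    # 8.0 ms
print(seek_ms_linear(1000))   # 53.0 ms (overestimates a real long seek)
```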
Inventor: Alberti David
Law Firm: Gray Cary Ware & Freidenrich LLP