Method and system for mastering locks in a multiple server...
Patent number: 06272491 (active)
Type: Reexamination Certificate
Filed: 1998-08-24
Issued: 2001-08-07
Examiner: Alam, Hosain T. (Department: 2771)
Classification: Data processing: database and file management or data structures – Database design – Data structure types
U.S. Class: C707S793000
FIELD OF THE INVENTION
The present invention relates to lock management, and more specifically, to lock management within a multiple server database system.
BACKGROUND OF THE INVENTION
Database servers use resources while executing transactions. Even though resources may be shared between database servers, many resources may not be accessed in certain ways by more than one process at any given time. For example, resources such as data blocks of a storage medium or tables stored on a storage medium may be concurrently accessed in some ways (e.g. read) by multiple processes, but accessed in other ways (e.g. written to) by only one process at a time. Consequently, mechanisms have been developed which control access to resources.
One such mechanism is referred to as a lock. A lock is a data structure that indicates that a particular process has been granted certain rights with respect to a resource. There are many types of locks. Some types of locks may be shared on the same resource by many processes, while other types of locks prevent any other locks from being granted on the same resource.
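For illustration only (the patent defines no code), a minimal Python sketch of the two lock modes just described and their compatibility rule, with all names hypothetical:

```python
from enum import Enum

class LockMode(Enum):
    SHARED = "shared"        # may coexist with other shared locks on the resource
    EXCLUSIVE = "exclusive"  # prevents any other lock from being granted

def is_compatible(requested: LockMode, granted: LockMode) -> bool:
    """A requested lock is compatible with a granted lock only when both are shared."""
    return requested is LockMode.SHARED and granted is LockMode.SHARED
```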
The entity responsible for granting locks on resources is referred to as a lock manager. In a single node database system, a lock manager will typically consist of one or more processes on the node. In a multiple-node system, such as a multi-processing machine or a local area network, a lock manager may include processes distributed over numerous nodes. A lock manager that includes components that reside on two or more nodes is referred to as a distributed lock manager.
FIG. 1 is a block diagram of a multiple-node computer system 100. Each node has stored therein a database server and a portion of a distributed lock management system 132. Specifically, the illustrated system includes three nodes 102, 112 and 122 on which reside database servers 104, 114 and 124, respectively, and lock manager units 106, 116 and 126, respectively. Database servers 104, 114 and 124 have access to the same database 120. The database 120 resides on a disk 118 that contains multiple blocks of data. Disk 118 generally represents one or more persistent storage devices which may be on any number of machines, including but not limited to the machines that contain nodes 102, 112 and 122.
A communication mechanism allows processes on nodes 102, 112 and 122 to communicate with each other and with the disks that contain portions of database 120. The specific communication mechanism between the nodes and disk 118 will vary based on the nature of system 100. For example, if the nodes 102, 112 and 122 correspond to workstations on a network, the communication mechanism will be different than if the nodes 102, 112 and 122 correspond to clusters of processors and memory within a multi-processing machine.
Before any of database servers 104, 114 and 124 can access a resource shared with the other database servers, it must obtain the appropriate lock on the resource from the distributed lock management system 132. Such a resource may be, for example, one or more blocks of disk 118 on which data from database 120 is stored.
Lock management system 132 stores data structures that indicate the locks held by database servers 104, 114 and 124 on the resources shared by the database servers. If one database server requests a lock on a resource while another database server holds a lock on the resource, the distributed lock management system 132 must determine whether the requested lock is consistent with the granted lock. If the requested lock is not consistent with the granted lock, then the requester must wait until the database server holding the granted lock releases it.
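That grant-or-wait decision can be sketched as follows (a hypothetical structure, reusing LockMode and is_compatible from the sketch above; the patent does not prescribe this code):

```python
from collections import deque

class ResourceLockState:
    """Granted locks and waiting requests for a single resource."""

    def __init__(self) -> None:
        self.granted: list[tuple[str, LockMode]] = []        # (server, mode) pairs
        self.waiting: deque[tuple[str, LockMode]] = deque()  # FIFO wait queue

    def request(self, server: str, mode: LockMode) -> bool:
        """Grant if compatible with every granted lock; otherwise queue the request."""
        if all(is_compatible(mode, held) for _, held in self.granted):
            self.granted.append((server, mode))
            return True
        self.waiting.append((server, mode))
        return False

    def release(self, server: str) -> None:
        """Drop the server's locks, then promote waiters that are now compatible."""
        self.granted = [(s, m) for s, m in self.granted if s != server]
        while self.waiting:
            s, m = self.waiting[0]
            if not all(is_compatible(m, held) for _, held in self.granted):
                break
            self.granted.append(self.waiting.popleft())
```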
According to one approach, lock management system 132 maintains one master resource object for every resource managed by lock management system 132, and includes one lock manager unit for each node that contains a database server. The master resource object for a particular resource stores, among other things, an indication of all locks that have been granted on or requested for the particular resource. The master resource object for each resource resides within only one of the lock manager units 106, 116 and 126.
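As a sketch of that layout (again hypothetical, building on ResourceLockState above), each lock manager unit holds master resource objects only for the resources mastered on its node:

```python
class LockManagerUnit:
    """One lock manager unit per node."""

    def __init__(self, node_id: int) -> None:
        self.node_id = node_id
        # Master resource objects for resources mastered on this node only;
        # across the system, each resource appears in exactly one unit.
        self.masters: dict[str, ResourceLockState] = {}

    def master_object(self, resource_name: str) -> ResourceLockState:
        return self.masters.setdefault(resource_name, ResourceLockState())
```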
The node on which a lock manager unit resides is referred to as the “master node” (or simply “master”) of the resources whose master resource objects are managed by that lock manager unit. Thus, if the master resource object for a resource R1 is managed by lock manager unit 106, then node 102 is the master of resource R1.
In typical systems, a hash function is employed to select the particular node that acts as the master node for a given resource. For example, system 100 includes three nodes, and therefore may employ a hash function that produces three values: 0, 1 and 2. Each value is associated with one of the three nodes. The node that will serve as the master for a particular resource in system 100 is determined by applying the hash function to the name of the resource. All resources that have names that hash to 0 are mastered on node 102. All resources that have names that hash to 1 are mastered on node 112. All resources that have names that hash to 2 are mastered on node 122.
When a process on a node wishes to access a resource, a hash function is applied to the name of the resource to determine the master of the resource, and a lock request is sent to the master node for that resource. The lock manager on the master node for the resource controls the allocation and deallocation of locks for the associated resource.
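A sketch of that name-hashing scheme for the three-node system of FIG. 1 (the particular hash function and helper names are assumptions; the patent only requires that resource names hash to one of three values):

```python
NODES = [102, 112, 122]  # node identifiers from FIG. 1
lock_manager_units = {n: LockManagerUnit(n) for n in NODES}

def master_node(resource_name: str) -> int:
    """Hash a resource name to 0, 1 or 2 and return the corresponding node."""
    bucket = sum(resource_name.encode()) % len(NODES)  # deterministic stand-in hash
    return NODES[bucket]

def request_lock(resource_name: str, server: str, mode: LockMode) -> bool:
    """Route the lock request to the unit on the resource's master node."""
    unit = lock_manager_units[master_node(resource_name)]
    return unit.master_object(resource_name).request(server, mode)
```

Because the master is a pure function of the resource name in this scheme, related resources generally scatter across nodes, which is precisely the drawback discussed below.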
While the hashing technique described above tends to distribute the resource mastering responsibility evenly among existing nodes, it has some significant drawbacks. For example, it is sometimes desirable to be able to select the exact node that will function as master node for a lock resource. Consider the situation when a particular lock resource is to be accessed exclusively by processes residing on node 102. In this situation, it would be inefficient to have the lock resource and the request queue for that resource located on any node in the network other than node 102. However, the relatively random distribution of lock resource management responsibilities that results from the hash function assignment technique makes it unlikely that resources will be mastered at the most efficient locations.
Further, lock resources that cover different resources often relate to the same overall object on the system. For example, a tablespace is a storage area that may contain a plurality of rows. Each of the rows in the tablespace may be associated with a separate lock resource. However, each of those separate lock resources also relates to the same object (i.e. the tablespace). In operation, it may improve efficiency if the related lock resources are all located on the same node for easy access by any process that needs to work with the object as a whole (as opposed to the individual resources). However, using the hashing assignment technique, the related lock resources may end up being mastered on multiple nodes in the distributed system.
Changing the master of a lock resource from one node to another is referred to as “remastering” the lock resource. A lock resource may be remastered, for example, prior to a shutdown of the node currently mastering the lock resource. Using resource name hashing techniques, lock resources are remastered individually on a per-lock resource basis, and cannot be remastered as a group to the same node.
In addition, under certain circumstances, a process may wish to perform an operation that affects an entire group of lock resources. Using the resource name hashing approach, the operation would have to be performed on each individual lock resource.
Based on the foregoing, there is a need for a method and system that allows a particular node to be selected as master node for a lock resource, and more particularly, that allows groups of associated lock resources to be mastered, and remastered, together on a selected node.
Inventors: Chan, Wilson Wai Shun; Grewell, Patricia; Wang, Tak Fung
Examiners: Alam, Hosain T.; Corrielus, Jean M.
Attorney/Agent: Hickman, Brian D.; Hickman Palermo Truong & Becker LLP
Assignee: Oracle Corporation