Electrical computers and digital processing systems: multicomputer data transferring – Computer-to-computer data routing – Least weight routing
Reexamination Certificate
Filed: 1996-09-09
Issued: 2001-04-03
Examiner: Oberley, Alvin E. (Department: 2755)
Status: active
Patent number: 06212578
FIELD OF THE INVENTION
The present invention relates to distributed computing environments, and more specifically, to a method and apparatus for ensuring the safety of remote procedure calls in a distributed computing environment.
BACKGROUND OF THE INVENTION
In distributed computing environments where procedure calls are being made between remote sites, software compatibility problems can arise when the software at one or more of the sites is changed without a corresponding change in the software at the other sites. This typically occurs when procedures are changed and recompiled at one site but applications at other sites which depend upon those procedures are not recompiled. When this occurs, the application which depends upon the now updated procedure will often not work correctly or, under some circumstances, not at all. Throughout this application, the term “procedure” is used generically to refer to any form of software, such as units, modules, functions, subroutines or any other type of software, and is not meant to be limiting.
For example, consider FIG. 1, which illustrates a simple distributed environment with two remote sites, SITE 1 and SITE 2. CALLER 1 (100) is a procedure at SITE 1 which calls TARGET 1 (102), a procedure at SITE 2. Since this is a distributed environment, CALLER 1 or TARGET 1 may change at any time without a corresponding change to the other procedure, creating software incompatibilities. For example, TARGET 1 may have been written to receive two integer values and CALLER 1 written to pass two integer values. If TARGET 1 is later changed to receive two character strings and then recompiled, the next time CALLER 1 calls TARGET 1, TARGET 1 will most likely execute incorrectly or not at all.
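The mismatch can be made concrete with a short sketch. The following Python fragment is purely illustrative and not part of the patent's disclosure; it assumes a trivial RPC layer in which the caller marshals its arguments with struct and ships the raw bytes to the target, and the names caller1_marshal and target1_unmarshal are hypothetical. CALLER 1 still packs two integers while the recompiled TARGET 1 expects two character strings, so the call fails at run time.

```python
import struct

def caller1_marshal():
    # CALLER 1 was compiled against the old interface: two 32-bit integers.
    return struct.pack("!ii", 7, 42)

def target1_unmarshal(payload):
    # TARGET 1 has since been changed to receive two 8-byte character strings.
    a, b = struct.unpack("!8s8s", payload)
    return a.decode(), b.decode()

try:
    print(target1_unmarshal(caller1_marshal()))
except struct.error as exc:
    # Neither side carries any description of what it sent or expects,
    # so the incompatibility only surfaces as a runtime failure.
    print("remote call failed:", exc)
```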
Although the present problem has been presented in the context of software tasks running on different or remote physical sites, software incompatibilities also occur when remote procedure calls are made between different tasks running at the same site and even on the same processor.
Historically, several approaches have been used to manage dependencies in distributed computing environments and ensure compatibility during remote procedure calls. Four such approaches are (1) synchronized installation; (2) time stamps; (3) self-describing data; and (4) data type encoding.
The synchronized installation approach involves simultaneously installing software at all sites to ensure that all software dependencies between sites match. Typically, this is achieved through strict software configuration management, and of the four approaches it is the most likely to ensure software compatibility between the sites. Moreover, synchronized installation does not adversely affect software performance, since it adds no overhead to the calls themselves. However, as the number of sites increases, it becomes significantly more difficult to maintain up-to-date object level compatibility between software at multiple sites. Consequently, this approach is only suited for a distributed environment with a limited number of sites.
The use of time stamps involves recording the actual creation time, or “actual time stamp,” of each top-level compilation unit and then checking that actual time stamp against an expected time stamp maintained by the calling procedure. Each time a compilation unit is compiled, its new creation time is stored as the actual time stamp. Similarly, each time a calling procedure is recompiled, the creation times of the compilation units it calls are recorded by the calling procedure as expected time stamps. Upon execution of a remote procedure call, the expected time stamp is passed, usually in the parameter list, from the calling procedure to the target procedure and compared by the target procedure to the actual time stamp it previously stored.
For example, referring again to FIG. 1, CALLER 1 maintains an expected time stamp and passes it to TARGET 1 each time it calls TARGET 1. The expected time stamp is compared to the actual time stamp previously stored by TARGET 1. If the expected time stamp passed by CALLER 1 does not match the actual time stamp maintained by TARGET 1, the call fails and CALLER 1 is marked as needing to be recompiled. The next time an execution of CALLER 1 is attempted, CALLER 1 will automatically be recompiled if a compiler is available at SITE 1. If a compiler is not available, then the execution of CALLER 1 will be prohibited.
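A minimal sketch of this check, in illustrative Python and with hypothetical names (CompilationUnit, remote_call, StaleCallerError), might look like the following: each unit records a stamp on every (re)compilation, and a call whose expected stamp no longer matches is rejected.

```python
import time

class CompilationUnit:
    """Illustrative stand-in for a compiled unit that records its compile time."""
    def __init__(self, name):
        self.name = name
        self.actual_time_stamp = time.time_ns()   # stored on every (re)compilation

    def recompile(self):
        self.actual_time_stamp = time.time_ns()

class StaleCallerError(Exception):
    pass

def remote_call(expected_time_stamp, target):
    # The expected stamp travels with the parameter list; the target compares
    # it against the actual stamp stored when the target was last compiled.
    if expected_time_stamp != target.actual_time_stamp:
        raise StaleCallerError(f"recompile the caller against {target.name}")
    return "call completed"

target1 = CompilationUnit("TARGET1")
expected = target1.actual_time_stamp      # captured when CALLER1 was compiled

print(remote_call(expected, target1))     # stamps match: the call proceeds
target1.recompile()                       # TARGET1 changes and is recompiled
try:
    remote_call(expected, target1)
except StaleCallerError as exc:
    print("call rejected:", exc)          # CALLER1 is flagged for recompilation
```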
The time stamp approach has a high probability of detecting software incompatibilities because of the very low probability that two incompatible versions of a particular compilation unit would be compiled at exactly the same time. Moreover, the time stamp approach has little adverse effect on performance, since the time stamp adds only a few bytes (typically 8) to each remote procedure call.
However, the use of time stamps has several disadvantages. First, the time stamp approach requires that a compiler be available at each site. Second, the time stamp approach is very strict and inflexible and does not readily provide for evolutionary software development. This is because any change to a compilation unit upon which a calling procedure depends will require the recompilation of every calling procedure which depends on that compilation unit, even if the change occurred in target procedures within the compilation unit that are not called by the calling procedure. These problems are exacerbated as the number of sites increases, making this approach frustrating and cumbersome to use. Consequently, as with the synchronized installation approach, the time stamp approach is only practical for distributed environments having a limited number of sites.
Perhaps the most widely used approach for ensuring the safety of remote procedure calls in distributed computing environments is the use of self-describing data. With this approach, additional data which fully describes each parameter is included in each remote procedure call. This data typically includes the type, mode, constraints, and any other meta-data required to fully describe the parameter and ensure correctness.
During execution, the self-describing data is compared to the target procedure's data types; if either the calling procedure or the target procedure has changed, the call will only be completed if the change is compatible or if a valid conversion can be applied by the target procedure. This inherent flexibility is particularly helpful when the change is limited in scope and relatively benign. For example, a parameter in a target procedure may have changed from an integer value to a double precision real value. Although the calling procedure continues to pass an integer, the target procedure can easily convert the integer to a double precision real and then continue executing. In addition to providing the flexibility of data conversion, the self-describing data approach does not require a compiler at each site and is well suited for distributed environments with a large number of sites.
However, the self-describing data approach adversely affects performance in two ways. First, the self-describing data greatly increases the amount of data passed in each remote procedure call. Second, the data type information is typically interleaved with the parameters, requiring that all of the parameters be checked before compatibility can be confirmed; a difference in the last parameter is therefore not detected until all of the other parameters have been checked.
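The following Python sketch is illustrative only (the describe helper, the target1 signature, and the expected-type list are assumptions, not the patent's format). It shows both properties discussed above: each parameter travels as a (type, value) pair, a benign int-to-float change is converted on the fly, and a mismatch in the last parameter would only be discovered after every earlier pair had been examined.

```python
def describe(*args):
    # Interleave a type descriptor with every parameter value.
    return [(type(a).__name__, a) for a in args]

def target1(payload):
    expected = ["float", "float"]          # TARGET1's current formal parameters
    converted = []
    for (sent_type, value), wanted in zip(payload, expected):
        if sent_type == wanted:
            converted.append(value)
        elif sent_type == "int" and wanted == "float":
            converted.append(float(value)) # compatible change: convert and continue
        else:
            raise TypeError(f"got {sent_type}, expected {wanted}")
    return sum(converted)

# CALLER1 still passes two integers; TARGET1 converts them and keeps executing.
print(target1(describe(7, 42)))            # -> 49.0
# An incompatible *last* parameter is only detected after every earlier
# (type, value) pair has already been examined.
```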
The last approach involves encoding the data type information of the formal parameters into a number which is then included in each remote procedure call as an additional parameter. The effectiveness of this approach depends upon the encoding scheme selected and how many bytes are used for the encoding. The encoding approach does not add much remote procedure call overhead and provides for data type conversions between compatible data types if the encoding scheme is
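As an illustration of the encoding idea only, the Python sketch below packs a 4-bit code per formal parameter into a single integer that is passed as one extra argument; the TYPE_CODES table, the encode_signature helper, and the packing scheme are assumptions for the sketch, not the encoding scheme described here.

```python
TYPE_CODES = {"int": 0x1, "float": 0x2, "str": 0x3}   # illustrative type codes

def encode_signature(type_names):
    # Pack one 4-bit code per formal parameter into a single integer.
    code = 0
    for name in type_names:
        code = (code << 4) | TYPE_CODES[name]
    return code

def target1(signature_code, a, b):
    expected = encode_signature(["int", "int"])   # TARGET1's formal parameters
    if signature_code != expected:
        raise TypeError("parameter types do not match TARGET1's signature")
    return a + b

# The caller computes the code once (conceptually at compile time) and sends it
# as one extra parameter; a single integer comparison replaces per-parameter checks.
sig = encode_signature(["int", "int"])
print(target1(sig, 7, 42))                 # -> 49
```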
Inventors: Racicot, Christopher; Sundaram, Usha
Attorneys/Agents: Becker, Edward A.; Courtenay III, St. John; Hickman Palermo & Truong & Becker LLP
Assignee: Oracle Corporation