Vertical fault isolation in a computer system

Error detection/correction and fault detection/recovery – Data processing system error or fault handling – Reliability and availability


Details

US Class: C714S043000
Type: Reexamination Certificate
Status: active
Patent number: 06654903

ABSTRACT:

BACKGROUND
The majority of Internet outages are directly attributable to software upgrade issues and to software quality in general. Mitigating network downtime is a constant battle for service providers. In pursuit of “five 9's availability,” or 99.999% network up time, service providers must minimize network outages due to equipment (i.e., hardware) failures and all-too-common software failures. Service providers incur downtime not only from failures but also from upgrades that deploy new or improved software or hardware, or fixes and patches needed to address current network problems. A network outage can also occur after an upgrade has been installed, if the upgrade itself includes undetected problems (i.e., bugs) or causes other software or hardware to misbehave. Data merging, data conversion, and untested compatibilities all contribute to downtime, and upgrades often result in data loss due to incompatible data file formats. Downtime may also occur unexpectedly days after an upgrade because of lurking software or hardware incompatibilities. Often, the upgrade of one process results in the failure of another process; this is referred to as regression. When one change causes several other components to fail, it is often called the “ripple” effect. To avoid such compatibility problems, multiple versions (upgraded and non-upgraded) of the same software are not executed at the same time.
Most computer systems are based on inflexible, monolithic software architectures that consist of one massive program or single image. Though the program includes many sub-programs or applications, when the program is linked, all of the subprograms are resolved into one image. Monolithic software architectures are chosen because they simplify writing subprograms: the locations of all other subprograms are known, and straightforward function calls between subprograms can be used. Unfortunately, the data and code within the image are static and cannot be changed without changing the entire image. Such a change is termed an upgrade and requires creating a new monolithic image that includes the changes and then rebooting the computer to make it use the new image. Thus, upgrading, patching, or modifying the program requires shutting down and rebooting the entire computer system. Shutting down a network router or switch immediately affects network up time, or “availability.” To minimize the number of reboots required for software upgrades, and consequently the amount of network down time, new software releases to customers are often limited to a few times a year at best; in some cases, only a single release per year is feasible. New releases are also limited by the amount of testing required to release a new monolithic software program: as the size and complexity of the program grow, so do the time required to test it and the size of the regression matrix used in testing. Forcing more releases each year may degrade software quality, since not all bugs may be detected. If a bug escapes testing, whether because testing was incomplete or because even extensive testing failed to uncover it, and the network device is rebooted with the new software, still more network down time may follow: the device may crash due to the bug, or it may cause problems for other devices on the network, and the affected devices must be brought down again for repair or a further upgrade. In addition, after each software release the size of the monolithic image increases, leading to a longer reboot time. Moreover, a monolithic image requires contiguous memory space, so the computer system's finite memory resources limit the size of the image.
Unfortunately, limiting the number of software releases also delays the release of new hardware. New hardware modules, usually ready to ship between “major” software releases, cannot be shipped more than a few times a year since the release of the hardware must be coordinated with the release of new software designed to upgrade the monolithic software architecture to run the new hardware.
An additional and perhaps less obvious issue arises when customers need to scale and enhance their networks. Typically, new and faster hardware is added to increase bandwidth or add computing power to an existing network. Under a monolithic software model, since customers are often unwilling to run different software revisions in each network element, they are forced to upgrade the entire network, which may require shutting down and rebooting each network device.
“Dynamic loading” is one method used to address some of the problems encountered when upgrading monolithic software. The core, or kernel, software is loaded on power-up, but the dynamic loading architecture allows each application to be loaded only when requested. In some situations, instances of these software applications may be upgraded without upgrading the kernel and without rebooting the system (a “hot upgrade”). Unfortunately, much of the data and code required to support basic system services, for example event logging and configuration, remains static in the kernel. Application program interface (API) dependencies between dynamically loaded software applications and kernel-resident software further complicate upgrade operations. Consequently, many application fixes or improvements, as well as new hardware releases, require changes to the kernel code, which, as with monolithic software changes, means updating the kernel and shutting down and rebooting the computer.
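As a rough illustration of the dynamic loading model described above, the following C sketch loads an application module on demand with the POSIX dlopen() interface and can later swap in an upgraded shared object without restarting the host process (a “hot upgrade”). It is not taken from the patent; the module paths and the app_main entry point are hypothetical names chosen for this example.

/* Minimal sketch of on-demand (dynamic) loading with POSIX dlopen().
 * Module paths and the entry-point name are illustrative assumptions.
 * Build with: cc host.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

typedef int (*app_entry_t)(void);

/* Load the shared object at `path` and run its entry point. Returns the
 * module handle so the caller can dlclose() it before a hot upgrade
 * loads a newer version of the same application. */
static void *load_and_run(const char *path)
{
    void *handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return NULL;
    }
    app_entry_t entry = (app_entry_t)dlsym(handle, "app_main");
    if (!entry) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return NULL;
    }
    entry();    /* run the application module */
    return handle;
}

int main(void)
{
    /* Load version 1, then hot-upgrade: unload it and load version 2
     * without rebooting this process. */
    void *v1 = load_and_run("./app_v1.so");
    if (v1) dlclose(v1);
    void *v2 = load_and_run("./app_v2.so");
    if (v2) dlclose(v2);
    return 0;
}

Note that this only upgrades the application module itself; as the paragraph above observes, data and code resident in the kernel are untouched, which is exactly the limitation that still forces kernel updates and reboots.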
In addition, processes in monolithic images and those which are dynamically loadable typically use a flat (shared) memory space programming model. If a process fails, it may corrupt memory used by other processes. Detecting and fixing corrupt memory is difficult and, in many instances, impossible. As a result, to avoid the potential for memory corruption errors, when a single process fails, the computer system is often rebooted.
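To make the hazard concrete, here is a small, purely illustrative C fragment (not from the patent) showing how, in a flat shared address space, one component's stray write can land in another component's data without any immediate crash:

/* Illustrative only: a stray write in a flat (shared) memory model.
 * Both "modules" live in one address space, so a bug in module A can
 * silently corrupt state owned by module B. Whether the overrun
 * actually hits module_b_state depends on the linker's layout. */
#include <stdio.h>

static int module_a_buf[4];                  /* owned by module A */
static int module_b_state[4] = {1, 2, 3, 4}; /* owned by module B */

static void module_a_fault(void)
{
    /* Off-by-N bug: writes past the end of module_a_buf into whatever
     * the linker placed next to it (undefined behavior). */
    for (int i = 0; i < 8; i++)
        module_a_buf[i] = -1;
}

int main(void)
{
    module_a_fault();
    /* Nothing crashed, so the corruption goes undetected until
     * module B misbehaves later. */
    printf("module_b_state[0] = %d\n", module_b_state[0]);
    return 0;
}

Because no fault is raised at the moment of corruption, the failure surfaces later and far from its cause, which is why rebooting the whole system is often the only safe recovery.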
All of these problems impede the advancement of networks, a situation completely at odds with the accelerating need for, and growth of, networks today.
SUMMARY OF THE INVENTION
The present invention provides a method for fault isolation in a computer system which includes the steps of providing a plurality of modular processes, and forming groups, based on hardware in the computer system, of one or more of the plurality of modular processes.
The method can further include the steps of detecting a fault within a group, and recovering from the detected fault without affecting processes or hardware in other groups. Detecting a fault within a group can include detecting a fault in one of the plurality of modular processes within a group. In addition, recovering from the detected fault can include terminating and restarting the process that experienced the fault. Alternatively, recovering from the detected fault can include terminating and restarting multiple processes within the group including the process that experienced the fault.
In one aspect, detecting a fault within a group can include detecting a hardware fault in the hardware within the group, and recovering from the detected fault includes terminating and restarting one or more processes within the group that experienced the fault.
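The following C sketch is written against the description above rather than any code from the patent: a supervisor groups modular processes by the hardware they serve (here, network ports), detects the death of a member process, and terminates and restarts processes only within the affected group. All names (port_group, spawn_member, and so on) are hypothetical.

/* Hedged sketch of group-based fault recovery: processes are grouped by
 * the hardware (port) they serve, and a fault in one group triggers
 * termination and restart only within that group. */
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define MAX_MEMBERS 4
#define NUM_PORTS   2

struct port_group {               /* one group per hardware port */
    int   port;
    pid_t member[MAX_MEMBERS];    /* modular processes serving this port */
    int   nmembers;
};

static struct port_group groups[NUM_PORTS];

/* Spawn one modular process (e.g., a device driver) for a port. */
static void spawn_member(int port, int idx)
{
    pid_t pid = fork();
    if (pid == 0)                 /* child: stands in for a real module */
        for (;;)
            pause();              /* a real module would service the port */
    groups[port].member[idx] = pid;
}

/* Find which group, if any, a dead process belonged to. */
static struct port_group *group_of(pid_t pid)
{
    for (int p = 0; p < NUM_PORTS; p++)
        for (int i = 0; i < groups[p].nmembers; i++)
            if (groups[p].member[i] == pid)
                return &groups[p];
    return NULL;
}

int main(void)
{
    for (int p = 0; p < NUM_PORTS; p++) {
        groups[p].port = p;
        groups[p].nmembers = MAX_MEMBERS;
        for (int i = 0; i < MAX_MEMBERS; i++)
            spawn_member(p, i);
    }

    /* Fault injection for the demo: kill one member of port 0's group. */
    kill(groups[0].member[1], SIGKILL);

    /* Supervisor: detect the fault, then recover only within its group. */
    int status;
    pid_t dead = wait(&status);
    struct port_group *g = group_of(dead);
    if (g) {
        printf("fault in group for port %d; restarting its members\n",
               g->port);
        for (int i = 0; i < g->nmembers; i++) {
            if (g->member[i] != dead) {
                kill(g->member[i], SIGKILL);   /* terminate group member */
                waitpid(g->member[i], NULL, 0);
            }
            spawn_member(g->port, i);          /* restart within the group */
        }
        /* Processes and hardware in all other groups are untouched. */
    }
    return 0;
}

Depending on policy, the supervisor could instead restart only the single failed member, matching the alternative recovery described above.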
In another aspect, a protected memory block is assigned to each of the plurality of modular processes.
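One plausible reading of this aspect, for the case where modules share an address space, is sketched below in C: each modular process is assigned a page-aligned block from mmap(), and mprotect() revokes access to blocks it does not own, so a stray write faults immediately instead of silently corrupting a neighbor. Where the modules are true operating-system processes, the virtual memory system provides this isolation automatically. The structure and function names here are hypothetical.

/* Illustrative sketch: per-module protected memory blocks built from
 * mmap()/mprotect(). A stray write to a protected block raises SIGSEGV
 * at the moment of the fault instead of silently corrupting data. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

/* Allocate one page-aligned protected block for a modular process. */
static void *alloc_block(size_t size)
{
    void *blk = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (blk == MAP_FAILED) {
        perror("mmap");
        exit(1);
    }
    return blk;
}

int main(void)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);

    void *block_a = alloc_block(page);   /* belongs to module A */
    void *block_b = alloc_block(page);   /* belongs to module B */

    /* While module A runs, revoke all access to B's block: any stray
     * write by A into block_b now faults immediately (SIGSEGV) rather
     * than corrupting B's state undetected. */
    if (mprotect(block_b, page, PROT_NONE) != 0)
        perror("mprotect");

    *(int *)block_a = 42;                /* legitimate write succeeds */
    printf("A wrote %d to its own block\n", *(int *)block_a);

    /* Restore access before module B runs again. */
    if (mprotect(block_b, page, PROT_READ | PROT_WRITE) != 0)
        perror("mprotect");
    return 0;
}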
In another aspect, the method of fault isolation of the invention is implemented in a network device, and includes detecting a fault associated with a network port, and recovering from the detected fault without affecting processes or hardware associated with other network ports. In addition, recovery from the detected fault can include terminating and restarting one or more of the plurality of modular processes associated with the network port that experienced the fault.
The plurality of modular processes can include, for example, a device driver process, a

