Distributed process redundancy

Error detection/correction and fault detection/recovery – Data processing system error or fault handling – Reliability and availability

Reexamination Certificate

Details

C714S013000, C707S793000

Reexamination Certificate

active

06694450

ABSTRACT:

BACKGROUND
The majority of Internet outages are directly attributable to software upgrade issues and to software quality in general. Mitigation of network downtime is a constant battle for service providers. In pursuit of “five 9's availability,” or 99.999% network uptime, service providers must minimize network outages due to equipment (i.e., hardware) failures and all-too-common software failures. Service providers incur downtime not only due to failures, but also for upgrades that deploy new or improved software or hardware, or software or hardware fixes and patches needed to address current network problems. A network outage can also occur after an upgrade has been installed if the upgrade itself includes undetected problems (i.e., bugs) or if the upgrade causes other software or hardware to have problems. Data merging, data conversion and untested compatibilities contribute to downtime. Upgrades often result in data loss due to incompatibilities with data file formats. Downtime may occur unexpectedly days after an upgrade due to lurking software or hardware incompatibilities. Often, the upgrade of one process results in the failure of another process; this is referred to as regression. Sometimes one change causes several other components to fail; this is often called the “ripple” effect. To avoid such compatibility problems, multiple versions (upgraded and non-upgraded) of the same software are not executed at the same time.
Most computer systems are based on inflexible, monolithic software architectures that consist of one massive program or a single image. Though the program includes many sub-programs or applications, when the program is linked, all the subprograms are resolved into one image. Monolithic software architectures are chosen because writing subprograms is simplified: the locations of all other subprograms are known, and straightforward function calls between subprograms can be used. Unfortunately, the data and code within the image are static and cannot be changed without changing the entire image. Such a change is termed an upgrade and requires creating a new monolithic image that includes the changes and then rebooting the computer to cause it to use the new image. Thus, upgrading, patching or modifying the program requires that the entire computer system be shut down and rebooted. Shutting down a network router or switch immediately affects the network uptime or “availability.” To minimize the number of reboots required for software upgrades and, consequently, the amount of network downtime, new software releases to customers are often limited to a few times a year at best. In some cases, only a single release per year is feasible. New software releases are also limited to a few times a year by the amount of testing required to release a new monolithic software program. As the size and complexity of the program grow, the time required to test it and the size of the regression matrix used to test the software also grow. Forcing more releases each year may degrade software quality, as not all bugs may be detected. If a bug escapes detection, whether because the software was not fully tested or because even extensive testing did not uncover it, and the network device is rebooted with the new software, still more network downtime may result: the device may crash due to the bug, or it may cause problems for other devices on the network, and the affected devices must then be brought down again for repair or for another upgrade that fixes the bug. In addition, after each software release the size of the monolithic image increases, leading to a longer reboot time. Moreover, a monolithic image requires contiguous memory space, and thus the computer system's finite memory resources limit the size of the image.
Unfortunately, limiting the number of software releases also delays the release of new hardware. New hardware modules, usually ready to ship between “major” software releases, cannot be shipped more than a few times a year since the release of the hardware must be coordinated with the release of new software designed to upgrade the monolithic software architecture to run the new hardware.
An additional, and perhaps less obvious, issue arises when customers need to scale and enhance their networks. Typically, new and faster hardware is added to an existing network to increase bandwidth or add computing power. Under a monolithic software model, because customers are often unwilling to run different software revisions in each network element, they are forced to upgrade the entire network. This may require shutting down and rebooting each network device.
“Dynamic loading” is one method used to address some of the problems encountered in upgrading monolithic software. The core or kernel software is loaded on power-up, but the dynamic loading architecture allows each application to be loaded only when requested. In some situations, instances of these software applications may be upgraded without having to upgrade the kernel and without having to reboot the system (a “hot upgrade”). Unfortunately, much of the data and code required to support basic system services, for example event logging and configuration, remains static in the kernel. Application program interface (API) dependencies between dynamically loaded software applications and kernel-resident software further complicate upgrade operations. Consequently, many application fixes or improvements, as well as new hardware releases, require changes to the kernel code, which, similar to monolithic software changes, requires updating the kernel and shutting down and rebooting the computer.
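As a rough illustration of the dynamic loading idea described above (not of any particular system in this document), the following C sketch uses the POSIX dlopen/dlsym interface to load an application module on demand instead of linking it into one monolithic image; the shared object name libeventlog_app.so and the app_start entry point are invented for the example.

    /* Illustrative only: generic POSIX dynamic loading with dlopen/dlsym.
     * The plugin path and the "app_start" entry point are hypothetical. */
    #include <dlfcn.h>
    #include <stdio.h>

    typedef int (*app_entry_fn)(void);

    int main(void)
    {
        /* Load an application module on demand rather than building it
         * into the kernel/core image at link time. */
        void *handle = dlopen("./libeventlog_app.so", RTLD_NOW | RTLD_LOCAL);
        if (handle == NULL) {
            fprintf(stderr, "load failed: %s\n", dlerror());
            return 1;
        }

        /* Resolve the application's entry point by name. */
        app_entry_fn app_start = (app_entry_fn)dlsym(handle, "app_start");
        if (app_start == NULL) {
            fprintf(stderr, "symbol lookup failed: %s\n", dlerror());
            dlclose(handle);
            return 1;
        }

        int rc = app_start();   /* run the dynamically loaded application */

        /* A "hot upgrade" of this application amounts to unloading the old
         * shared object and loading a newer one, without rebooting. */
        dlclose(handle);
        return rc;
    }

On Linux such a sketch would typically be built with cc main.c -ldl; the kernel-resident services and API dependencies discussed above are exactly the parts that this mechanism, by itself, does not make upgradable.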
In addition, processes in monolithic images and those that are dynamically loadable typically use a flat (shared) memory space programming model. If a process fails, it may corrupt memory used by other processes. Detecting and fixing corrupted memory is difficult and, in many instances, impossible. As a result, to avoid the potential for memory corruption errors, the computer system is often rebooted when even a single process fails.
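The following C sketch illustrates the flat shared-memory problem described above. The component names and memory layout are hypothetical, and the "fault" is a deliberately missing bounds check kept inside a single array so the example itself remains well defined.

    /* Illustrative only: in a flat (shared) memory model, component boundaries
     * exist only by convention, so a faulty writer in one component can corrupt
     * state that belongs to another. Names and layout here are hypothetical. */
    #include <stdio.h>
    #include <string.h>

    /* One shared memory image, partitioned only by convention:
     * component A owns bytes [0, 8), component B owns bytes [8, 16). */
    static unsigned char shared_image[16];
    #define B_OFFSET 8

    static void component_a_store(const unsigned char *data, size_t len)
    {
        /* Fault being illustrated: no check that len <= 8, so a long write
         * spills past component A's region into component B's region. */
        memcpy(shared_image, data, len);
    }

    int main(void)
    {
        shared_image[B_OFFSET] = 42;          /* component B's state */
        printf("B before: %d\n", shared_image[B_OFFSET]);

        unsigned char payload[12] = {0};      /* 12 > 8: overruns A's region */
        component_a_store(payload, sizeof payload);

        /* B's state is now silently corrupted, and nothing records which
         * component performed the offending write. */
        printf("B after:  %d\n", shared_image[B_OFFSET]);
        return 0;
    }

Nothing in the faulty component fails visibly here, which is why such corruption is hard to detect and why a full reboot is often the only safe recovery.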
All of these problems impede the advancement of networks—a situation that is completely incongruous with the accelerated need and growth of networks today.
SUMMARY
In one aspect of the invention, a distributed redundancy design is disclosed to minimize network outages and other problems associated with component/process failures by spreading software backup (in the so-called “hot state”) across multiple elements. In one embodiment, a 1:N redundancy design can be implemented in which a single backup process is used to backup multiple (N) primary processes.
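A minimal sketch of the 1:N idea follows, assuming a checkpoint-style exchange of hot state; the structure names, the state format, and the takeover logic are illustrative only and are not taken from the disclosure.

    /* A minimal sketch (not the disclosed implementation) of 1:N redundancy:
     * one backup retains the latest checkpointed "hot state" of N primaries
     * and can take over for whichever primary fails. Names are illustrative. */
    #include <stdio.h>

    #define N_PRIMARIES 4
    #define STATE_BYTES 64

    struct hot_state {
        int  valid;                    /* has this primary checkpointed yet?  */
        char state[STATE_BYTES];       /* opaque application state snapshot   */
    };

    /* The single backup's view of all N primaries. */
    static struct hot_state backup_table[N_PRIMARIES];

    /* Called (e.g., over IPC or the network) whenever a primary checkpoints. */
    static void backup_checkpoint(int primary_id, const char *state)
    {
        snprintf(backup_table[primary_id].state, STATE_BYTES, "%s", state);
        backup_table[primary_id].valid = 1;
    }

    /* Called when a failure of primary_id is detected: the backup resumes
     * that primary's work from its last checkpointed hot state. */
    static void backup_take_over(int primary_id)
    {
        if (!backup_table[primary_id].valid) {
            printf("primary %d: no hot state, performing cold start\n", primary_id);
            return;
        }
        printf("primary %d failed; backup resuming from state \"%s\"\n",
               primary_id, backup_table[primary_id].state);
    }

    int main(void)
    {
        backup_checkpoint(0, "sessions=17");
        backup_checkpoint(2, "sessions=5");

        backup_take_over(0);   /* warm takeover from checkpointed state */
        backup_take_over(1);   /* never checkpointed: falls back to a cold start */
        return 0;
    }

In a real system the checkpoint calls would arrive over an inter-process or network channel and failures would be reported by separate fault-detection machinery; both are outside this sketch.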
For software backup alone, the distributed redundancy architecture of the present invention eliminates the need for hardware backup elements (e.g., spare hardware). Where hardware backup is also provided, spreading resource demands across multiple elements makes it possible to have significant (perhaps full) hot state backup without the need for a “mega spare” (e.g., a more powerful processor and additional memory). Using identical backup (spare) and primary hardware provides manufacturing and customer inventory advantages. The distributed redundancy architecture of the present invention also permits the location of the hardware backup element to float: if a primary element fails, its functions can be transferred to the backup element, and when the failed primary element is replaced, the replacement hardware can serve as the new hardware backup.
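A similarly simplified sketch of the floating hardware backup role follows, assuming a fixed set of slots whose roles are tracked in software; the slot count, role names, and promotion logic are assumptions made for illustration.

    /* A minimal sketch (illustrative, not the disclosed design) of a "floating"
     * hardware backup: the spare role is not pinned to one slot. When a primary
     * fails, the current spare takes over its function, and the replacement
     * card inserted later becomes the new spare. */
    #include <stdio.h>

    #define N_SLOTS 5

    enum role { ROLE_EMPTY, ROLE_PRIMARY, ROLE_SPARE };

    static enum role slots[N_SLOTS] = {
        ROLE_PRIMARY, ROLE_PRIMARY, ROLE_PRIMARY, ROLE_PRIMARY, ROLE_SPARE
    };

    static int find_spare(void)
    {
        for (int i = 0; i < N_SLOTS; i++)
            if (slots[i] == ROLE_SPARE)
                return i;
        return -1;
    }

    /* A primary in failed_slot dies: transfer its function to the spare. */
    static void fail_over(int failed_slot)
    {
        int spare = find_spare();
        if (spare < 0) {
            printf("slot %d failed and no spare is available\n", failed_slot);
            return;
        }
        slots[spare] = ROLE_PRIMARY;     /* spare now runs the failed function */
        slots[failed_slot] = ROLE_EMPTY; /* failed card awaits replacement     */
        printf("slot %d failed; slot %d promoted to primary\n", failed_slot, spare);
    }

    /* The replacement card for an empty slot simply becomes the new spare. */
    static void replace_card(int slot)
    {
        slots[slot] = ROLE_SPARE;
        printf("slot %d replaced; it is now the floating spare\n", slot);
    }

    int main(void)
    {
        fail_over(1);      /* the spare in slot 4 takes over slot 1's function */
        replace_card(1);   /* the new card in slot 1 becomes the spare         */
        return 0;
    }

Because every slot holds identical hardware, the spare role can move freely, which is the property that allows the replacement card, rather than the original slot, to become the new backup.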
In one embodiment, a distributed redundancy system is disclosed that provides software redundancy (backup) with or without redundant (backup) hardware, for example, with or without using a backup line card. In one embodiment, the computer system can employ additional primary line cards. In order to load instances of software applications, the Network Management Software (NMS) creates software load records (SLRs) in a configuration database. A typical SLR includes the name of a control shim executable file and a logical identification (LID) associat
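Although the summary text above is truncated, a rough sketch of a software load record can still be given. Only the control shim executable file name and the logical identification (LID) come from the text; the struct layout, the in-memory stand-in for the configuration database, and the example values are assumptions.

    /* A hypothetical sketch of a software load record (SLR): only the control
     * shim executable name and the LID fields come from the text above; the
     * record layout and example values are assumptions. */
    #include <stdio.h>

    #define MAX_PATH 64

    struct software_load_record {
        unsigned int lid;                  /* logical identification (LID)      */
        char control_shim[MAX_PATH];       /* control shim executable file name */
    };

    int main(void)
    {
        /* NMS-style creation of SLRs in a (here, in-memory) configuration table. */
        struct software_load_record config_db[] = {
            { 9000, "atm_cntrl.exe" },     /* example values are hypothetical   */
            { 9001, "ip_cntrl.exe"  },
        };

        for (size_t i = 0; i < sizeof config_db / sizeof config_db[0]; i++)
            printf("SLR: lid=%u shim=%s\n",
                   config_db[i].lid, config_db[i].control_shim);
        return 0;
    }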
