Process for determining competing cause event probability...

Data processing: measuring, calibrating, or testing – Measurement system – Statistical measurement

Details

Patent class: C714S047300
Type: Reexamination Certificate
Status: active
Patent number: 06816813

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to a method for determining system availability and, more particularly, to a method which does not rely upon system simulation and which can consider the simultaneous failure of multiple components. The invention may also be used, more generally, to determine the probability of observing an event under various circumstances.
BACKGROUND OF THE INVENTION
Complex systems may be modeled as having uptimes when the system is running and downtimes when it is not. When the system is running it is assumed to be performing its intended function at full efficiency. When the system is down it is assumed to not be performing its function at all. The system is assumed to be repairable, with the components functionally connected in series and without redundant components or surge capability. In addition to downtime caused by component failures, which may occur as a function of wearout or may be catastrophic, downtime may occur as a repeating function of time, e.g., due to exhaustion of batch raw materials, routine equipment shutdowns, etc. Downtimes may be caused by components, failure modes, and other causes; these terms shall be used interchangeably.
The goal is to plan carefully and minimize the downtimes so that uptime is maximized. One also wishes to avoid intermediate states in which the system operates at partial efficiency.
For the purpose of the present invention, systems can only be in one of two states. Either the system is fully operational, running or producing product at a given rate, which is known as uptime, or it is non-operational as a result of a failure or planned stop event, which is known as downtime. Uptime is defined as the time that the system is operating; downtime is defined as the time that the system is non-operational or stopped. Transitional states between downtime and uptime, or vice versa, are modeled as very brief relative to typical uptimes; otherwise the transition period is converted to an equivalent production-loss downtime.
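The patent text above does not spell out how a transition period is converted to an equivalent production-loss downtime. One plausible conversion, shown in the Python sketch below purely as an assumption (not taken from the patent), charges a ramp-up running below the target rate as the amount of full stoppage that would lose the same production.

# Hedged sketch (assumed conversion, not from the patent): a transition
# period running below the target rate is charged as the full-stoppage
# downtime that would lose an equal amount of production.

def equivalent_downtime(transition_hours, actual_rate, target_rate):
    """Hours of full stoppage that lose the same production as the transition."""
    lost_fraction = 1.0 - (actual_rate / target_rate)
    return transition_hours * lost_fraction

# Example: a 2-hour ramp-up at 50% of the target rate counts as 1 hour of downtime.
print(equivalent_downtime(2.0, actual_rate=50.0, target_rate=100.0))  # prints 1.0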
The mean time between failures in a given time period (MTBF) is the ratio of the total system uptime to the number of failures which occur during that period. The mean time to repair in a given time period (MTTR) is the ratio of the total system downtime to the number of failures which occur during that period. Availability is the ratio of total system uptime to the total time (uptime plus downtime) the system is under study. Availability may therefore be quantified as
Availability = MTBF_sys/(MTBF_sys + MTTR_sys)  (1)
where MTBF is the average uptime, MTTR is the average downtime, and the subscript “sys” refers to the overall system, as opposed to an individual component.
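As a minimal illustration of Equation 1 (not part of the patent; the duration values are hypothetical), the following Python sketch computes MTBF_sys, MTTR_sys, and availability from observed uptime and downtime durations.

# Minimal sketch: MTBF, MTTR, and availability per Equation 1.
# The duration values are hypothetical and for illustration only.

def availability(uptimes_hours, downtimes_hours):
    """Return (MTBF_sys, MTTR_sys, availability) for a repairable system."""
    n_failures = len(downtimes_hours)           # each downtime follows a stop
    mtbf = sum(uptimes_hours) / n_failures      # total uptime / number of failures
    mttr = sum(downtimes_hours) / n_failures    # total downtime / number of failures
    return mtbf, mttr, mtbf / (mtbf + mttr)     # Equation 1

ups = [40.0, 55.0, 32.0, 61.0]      # hypothetical uptimes (hours)
downs = [2.0, 1.5, 3.0, 2.5]        # hypothetical downtimes (hours)
mtbf, mttr, a = availability(ups, downs)
print(f"MTBF={mtbf:.1f} h  MTTR={mttr:.1f} h  Availability={a:.3f}")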
One of skill analyzing a system is interested in availability because it is a measure of system performance relative to asset utilization. Other measures of availability may be employed by one of skill, as discussed in the Handbook of Reliability Engineering and Management, Chap. 15, Ireson and Coombs, Jr., Editors in Chief, copyright 1998.
The uptimes and downtimes occur alternately. Downtimes can occur due to planned stop events such as scheduled maintenance and product changes, or due to a failure of one or more components within the system. Once the system is down, repair or maintenance action can be taken to restore components back to an operational state. Repair actions are classified by the condition of the component after the repair or maintenance. If the repair action restores the component back to its original condition it is called same as new (SAN). If the repair action restores the component back to the state of that component just prior to failing, it is referred to as same as old (SAO). The repair action may also restore the component to a state that is between SAN and SAO. If the repair is SAN, then the component will be in the same state that it was at the beginning of the system mission at time 0, just before the system started for the first time, thus making the component again subject to premature or burn-in failures.
In repairable production systems, it is possible that during an attempt to restart the equipment after all repairs are completed, the system will run only briefly without reaching its target rate. This failed attempt to restart the equipment is called a false start; its likelihood is referred to in the literature as the failure-on-demand probability. For the process described and claimed hereunder, the uptime during a false start is considered to be zero.
A false start, as discussed above, may be caused by the occurrence of a single failure mode. Alternatively, a false start may be caused by the simultaneous occurrence of two or more failure modes. If two or more failure modes simultaneously occur and cause the false start, this is considered a tie between the failure modes which caused the false start. However, to improve the system, one may wish to know which failure mode is associated with the false start, or to assign a particular failure mode to this type of false start event. In order to determine which failure mode, among multiple failure modes, caused the false start, at least two illustrative and non-limiting methods may be used.
In systems with multiple components, failures can be categorized as competing or non-competing. In a competing failure mode system, the components are assumed to be in series such that if one component fails, the entire system has to stop. Every time the system stops, all components subject to failure in a competing system are automatically reset to a SAN state, including components that were not the original cause of the system stoppage. Because competing components are in series and reset to SAN on every stop, the uptimes between successive failures of a competing failure component are not independent of other failures in the system. For competing components, the time to next failure is measured from the last repair/replacement to occur in the system, without regard to which component has failed.
A non-competing component is either not in series with the other components in the system, or does not reset to SAN every time the system stops. Non-competing failures occur with an uptime that is independent of other failures in the system. Non-competing failures are typically caused by the wear or use of one or more non-competing components. A non-competing component is not repaired or reset to SAN unless it fails or is close to failing. Time to next failure for a non-competing component begins with the last repair/replacement of only that component, as opposed to the last system failure. In the literature known to the applicants for repairable production systems, failure modes are typically considered to be non-competing for the purpose of estimating availability. We have found that the use of competing failure models increases the accuracy of model predictions of availability. Therefore there is a need to model competing failure mode systems and mixed competing and non-competing failure mode systems. In the following analysis each mission, or use of the system, must start with components that are competing, except as set forth below for some systems that combine competing and some types of non-competing components.
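As an illustrative sketch (not taken from the patent; the event log, times, and component names are assumptions), the following Python fragment contrasts how time-to-failure samples are extracted for a competing component, whose clock restarts at every system stop, and for a non-competing component, whose clock restarts only at its own repair. Downtimes are ignored here for simplicity, i.e. stops are treated as instantaneous.

# Illustrative sketch (hypothetical event log): competing vs. non-competing
# time-to-failure extraction.

# (time_of_stop_in_hours, component_that_caused_the_stop)
stops = [(40.0, "valve"), (95.0, "pump"), (150.0, "valve"), (230.0, "pump")]

# Competing: every stop resets the component to SAN, so each time to failure
# is measured from the previous system stop, whichever component caused it.
competing_ttf = []
prev_stop = 0.0
for t, comp in stops:
    if comp == "valve":
        competing_ttf.append(t - prev_stop)
    prev_stop = t

# Non-competing: the component is restored only when it fails itself, so each
# time to failure is measured from that component's own previous repair.
noncompeting_ttf = []
prev_repair = 0.0
for t, comp in stops:
    if comp == "valve":
        noncompeting_ttf.append(t - prev_repair)
        prev_repair = t

print("competing    :", competing_ttf)      # [40.0, 55.0]
print("non-competing:", noncompeting_ttf)   # [40.0, 110.0]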
Uptime can be characterized by a probability density function (PDF) of the times between failures that do not include false starts, i.e. t > 0. Similarly, downtime can be characterized by a PDF of the times it takes to restore the system back to operation. The area under a PDF curve between t1 and t2, for t2 > t1, is the probability of observing values of t that are greater than t1 but smaller than t2, out of a large number of observed times. Given the uptime PDF, one of skill in the art can derive other important reliability expressions.
The reliability of a competing system at time t, R(t), is the probability that the system will successfully run for a period of duration t without stopping. The reliability distribution R(t) can be derived from the uptime PDF by subtracting from one the integral of the PDF from time zero to time t, i.e. R(t) = 1 - F(t), where F(t) is the cumulative distribution function of the uptime.
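As a hedged sketch (not from the patent), if the uptime PDF is assumed to be a Weibull distribution with shape beta and scale eta, then the cumulative distribution F(t), the reliability R(t) = 1 - F(t), and the area under the PDF between t1 and t2 can be computed as follows; the parameter values are hypothetical.

import math

# Hedged sketch: Weibull uptime model assumed purely for illustration.

def weibull_cdf(t, beta, eta):
    """F(t): probability that an uptime is no longer than t."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def reliability(t, beta, eta):
    """R(t): probability the system runs for duration t without stopping."""
    return 1.0 - weibull_cdf(t, beta, eta)    # equivalently exp(-(t/eta)**beta)

def prob_between(t1, t2, beta, eta):
    """Area under the uptime PDF between t1 and t2 (t2 > t1 > 0)."""
    return weibull_cdf(t2, beta, eta) - weibull_cdf(t1, beta, eta)

beta, eta = 1.8, 50.0                         # hypothetical fitted parameters
print("R(24 h)            =", round(reliability(24.0, beta, eta), 3))
print("P(10 h < t < 40 h) =", round(prob_between(10.0, 40.0, beta, eta), 3))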
