In: Civil Engineering
Question:
Assuming a constant failure rate/intensity is justified by the following points:
− early failures, which are usually caused by the manufacturing process, are only observed in a small fraction of components; these failures can be contained by ESS (environmental stress screening);
− failures due to overstresses or overloads are modeled by an exponential distribution for non-repairable systems and a homogeneous Poisson process for repairable systems;
− premature aging due to batches of “defective” or “weak” components is likewise only observed in a small fraction of components;
− for the components of repairable systems for which aging is unavoidable, renewal theory and maintenance activities lead to a ROCOF that is constant in time [RIG 00];
− Drenick’s theorem [DRE 60] shows that the constant failure rate/intensity hypothesis holds when a system composed of a large number of components in series is considered (see the simulation sketch after this list);
− according to Occam’s razor, when the number of observed failures is small, a simple model (exponential distribution) gives more precise estimates than a more complex model (Weibull distribution) [JED 11].
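As an illustrative aside on Drenick’s theorem, the short Python simulation below (not from the source; the component count, Weibull lifetime parameters, burn-in and horizon are all assumed values) superposes the renewal processes of many series components with non-exponential lifetimes. After an initial transient, the inter-failure times of the system are approximately exponential, i.e. the ROCOF is roughly constant.

```python
# Monte Carlo sketch of Drenick's theorem [DRE 60]: the superposition of the
# failure/renewal processes of many independent components in series tends
# toward a homogeneous Poisson process (constant ROCOF), even though each
# individual lifetime is non-exponential (Weibull here). All numerical values
# are assumptions chosen for illustration.
import random
import statistics

def series_system_failures(n_components: int, shape: float,
                           scale: float, horizon: float) -> list:
    """Failure times of a series system whose components are renewed on failure."""
    times = []
    for _ in range(n_components):
        t = 0.0
        while True:
            t += random.weibullvariate(scale, shape)  # scale first, then shape
            if t > horizon:
                break
            times.append(t)
    return sorted(times)

random.seed(1)
horizon, burn_in = 50_000.0, 20_000.0
all_times = series_system_failures(n_components=500, shape=3.0,
                                   scale=10_000.0, horizon=horizon)
steady = [t for t in all_times if t > burn_in]      # discard the transient
gaps = [b - a for a, b in zip(steady, steady[1:])]  # inter-failure times
cv = statistics.stdev(gaps) / statistics.mean(gaps)
# An exponential inter-failure distribution has a coefficient of variation of 1.
print(f"{len(steady)} failures after burn-in, inter-failure CV = {cv:.2f}")
```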
The physical failure rate/intensity (λPhysical) is based on the Cox model. It is obtained by the following relation:
[8.6] λPhysical(X) = λ0 ⋅ AF(X)
where λ0 is the failure rate/intensity in reference conditions and AF(X) is the acceleration factor corresponding to the random variable X, which expresses the failure factor. The unit of failure rate/intensity is the FIT (1 FIT = 10⁻⁹ failures per hour).
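As a numerical illustration of relation [8.6], the sketch below computes λPhysical from a reference failure rate expressed in FIT and an acceleration factor. The Arrhenius (temperature-driven) form of AF(X) and all numerical values (reference rate, temperatures, activation energy) are assumptions chosen for the example, not values taken from the text.

```python
# Sketch of equation [8.6]: lambda_Physical(X) = lambda_0 * AF(X).
# The Arrhenius acceleration factor and all numerical values below are
# illustrative assumptions, not data from the source text.
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_c: float, t_ref_c: float, ea_ev: float) -> float:
    """Acceleration factor between use and reference temperatures (in degrees C)."""
    t_use_k = t_use_c + 273.15
    t_ref_k = t_ref_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_ref_k - 1.0 / t_use_k))

def physical_failure_rate(lambda_0_fit: float, af: float) -> float:
    """Equation [8.6]: physical failure rate in FIT (1 FIT = 1e-9 failures/hour)."""
    return lambda_0_fit * af

# Example: a component rated 20 FIT at a 40 degree C reference, operated at
# 85 degrees C, with an assumed activation energy of 0.7 eV.
af = arrhenius_af(t_use_c=85.0, t_ref_c=40.0, ea_ev=0.7)
lam = physical_failure_rate(lambda_0_fit=20.0, af=af)
print(f"AF = {af:.1f}, lambda_Physical = {lam:.0f} FIT")
```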
Owing to the ease of dealing with a constant failure rate, the exponential distribution function has proven popular as the traditional basis for reliability modeling. For the reasons enumerated below, some of which are historical in nature, it is not difficult to see why the constant failure rate model has been so widely used [1].
1. The first generalized reliability models of the 1950s were based on electron vacuum tubes, and these exhibited constant failure rates.
2. Failure data acquired several decades ago were “tainted by equipment accidents, repair blunders, inadequate failure reporting, reporting of mixed age equipment, defective records of equipment operating times, mixed operational environmental conditions …” [9]. The net effect was to produce what appeared to be a random constant failure rate.
3. Early generations of electronic devices contained many intrinsically high-failure-rate mechanisms. This was reflected in different infant mortality and wear-out failure rates in subpopulations, and contributed to the appearance of a constant failure rate for products in service.
4. Even in the absence of significant intrinsic failure mechanisms, early fragile devices responded to random environmental overstressing by failing at a roughly constant rate.
5. It often happens that equipment repeatedly overhauled or repaired contains a variety of components in a variable state of wear. Even though each of the components probably obeys a time-dependent failure distribution, e.g., lognormal or Weibull, the admixture of varying projected lifetimes may conspire to yield a roughly time-independent rate of failure.
6. The simple addition of a decreasing infant mortality rate and an increasing wear-out failure rate results in a roughly constant failure rate over a limited time span, as the sketch after this list illustrates.
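As a numerical sketch of point 6, the Python snippet below adds a decreasing infant-mortality hazard and an increasing wear-out hazard, both Weibull; the shape and scale parameters are illustrative assumptions, not values from the source. Over the middle of the time range the total hazard is roughly flat even though each term varies strongly.

```python
# Sketch for point 6: a decreasing Weibull hazard (beta < 1, infant mortality)
# plus an increasing Weibull hazard (beta > 1, wear-out) gives a roughly
# constant total hazard over a limited time span. Parameters are assumed.

def weibull_hazard(t: float, beta: float, eta: float) -> float:
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)

infant = dict(beta=0.5, eta=20_000.0)    # decreasing hazard (infant mortality)
wearout = dict(beta=3.0, eta=60_000.0)   # increasing hazard (wear-out)

print(f"{'t (h)':>8} {'infant':>12} {'wear-out':>12} {'total':>12}")
for t in range(5_000, 50_001, 5_000):
    h_i = weibull_hazard(t, **infant)
    h_w = weibull_hazard(t, **wearout)
    print(f"{t:>8} {h_i:>12.2e} {h_w:>12.2e} {h_i + h_w:>12.2e}")
# Between roughly 15,000 h and 30,000 h the total varies by less than 10%,
# which is what appears as a "constant" failure rate in service.
```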