
Organizational factors and safety culture

Barbara G. Kanki, ... Gregory Alston, in Space Safety and Human Performance, 2018

14.1.2.2 Latent Conditions

Latent conditions, whether present in the immediate environment or incorporated into organizational systems and processes, may also contribute to an accident sequence. Latent conditions that are present in the immediate environment in which an active failure occurs are considered local factors. These factors may include the physical environment in which the task is performed, task procedures, and tooling, as well as a person's condition as affected by emotional factors, fatigue, and stress. Latent conditions that are a part of the organization's systems and processes are often organizational controls, and may take a large variety of forms; they may be the methods used to develop task procedures and tooling, the way a person is trained and supported through team/supervisory support and quality control, systems to protect the operator from dangers in the physical environment, or indeed, the culture of the organization.


URL: https://www.sciencedirect.com/science/article/pii/B9780081018699000145

Deepwater Programs, Safety, and Loss Control

Peter Aird, in Deepwater Drilling, 2019

Human Factors View of Deepwater Accident Causation

Deepwater accidents and incidents, as illustrated in Figs. 6.9–6.11, are invariably caused by active failures or latent conditions arising from human error or from violations of physical (parts) or paperwork standards and systems.

Active failures are acts or conditions that precipitate the incident situation. They usually involve the front-line workers. The consequences are immediate and can often be prevented by design, training, or operating standards, systems, and best practice.

Latent conditions are the managerial influences and social pressures that make up the culture, that influence the design of equipment standards or systems, and define supervisory inadequacies. They tend to be hidden until triggered by an event. Latent conditions can lead to latent failures, human error, or violations. Latent failures may occur when several latent conditions combine in an unforeseen way.

Note: People make errors irrespective of how much training and experience they possess or how motivated they are to do things right.


URL: https://www.sciencedirect.com/science/article/pii/B9780081022825000065

Continuing Airworthiness and Air Operator’s Certification

Filippo De Florio, in Airworthiness (Third Edition), 2016

10.9.2.3 Accident causation

The «Swiss-Cheese» Model, developed by Professor James Reason, illustrated that accidents involve successive breaches of multiple system defences. These breaches can be triggered by a number of enabling factors such as equipment failures or operational errors.

Breaches in safety defences can be a delayed consequence of decisions made at the highest levels of the system, which may remain dormant until their effects or damaging potential are activated by specific operational circumstances.

Under such specific circumstances, human failures or active failures at the operational level act to breach the system’s inherent safety defences. The Reason Model proposes that all accidents include a combination of both active and latent conditions.

Active failures are actions or inactions, including errors and violations, which have an immediate adverse effect. They are generally viewed, with the benefit of hindsight, as unsafe acts.

Active failures are generally associated with front-line personnel (pilots, air traffic controllers, aircraft mechanical engineers, etc.) and may result in a harmful outcome.

The difference between errors and violations is the motivational component. A person who tries to do the best possible job to accomplish a task, following the rules and procedures as per the training received, but fails to meet the objective of the task at hand, commits an error.

A person who, while accomplishing a task, willingly deviates from rules, procedures, or training received commits a violation. Thus, the basic difference between errors and violations is intent.

Latent conditions are those that exist in the aviation system well before a damaging outcome is experienced. The consequences of latent conditions may remain dormant for a long time.

Initially, these latent conditions are not perceived as harmful, but will become evident once the system’s defences have been breached. Latent conditions in the system may include those created by a lack of safety culture; poor equipment or procedural design; conflicting organisational goals; defective organisational systems or management decisions.

The perspective underlying the organisational accident aims to identify and mitigate these latent conditions on a system-wide basis, rather than through localised efforts to minimise active failures by individuals. Latent conditions have all the potential to breach aviation system defences. Typically, defences in aviation can be grouped under three large headings: technology, training, and regulations.

Defences are usually the last safety net to contain latent conditions, as well as the consequences of lapses in human performance. Most, if not all, mitigation strategies against the safety risks of the consequences of hazards are based upon the strengthening of existing defences or the development of new ones.

Active failures can be considered as either errors or violations. The difference between errors and violations is the motivational component, as we have mentioned.

From the perspective of the organisational accident, safety endeavours should monitor organisational processes in order to identify latent conditions and thus reinforce defences. Safety endeavours should also improve workplace conditions to contain active failures because it is the combination of all these factors that produces safety breakdowns.

Errors and violations. As indicated previously, an error is defined as an action or inaction by an operational person that leads to deviations from organisational or the operational person’s intentions or expectations. In the context of an SMS, both the State and the product or service provider must understand and expect that humans will commit errors regardless of the level of technology used, the level of training, or the existence of regulations, processes, and procedures.

An important goal then is to set and maintain defences to reduce the likelihood of errors and, just as importantly, reduce the consequences of errors when they do occur. To effectively accomplish this task, errors must be identified, reported, and analysed so that appropriate remedial action can be taken.

A violation is defined as a deliberate act of wilful misconduct or omission resulting in a deviation from established regulations, procedures, norms, or practices. Nonetheless, noncompliance is not necessarily the result of a violation because deviations from regulatory requirements or operating procedures may be the result of an error. To further complicate the issue, while violations are intentional acts, they are not always acts of malicious intent.

Individuals may knowingly deviate from norms, in the belief that the violation facilitates mission achievement without creating adverse consequences. Violations of this nature are errors in judgement and may not automatically result in disciplinary measures depending on the policies in place.


URL: https://www.sciencedirect.com/science/article/pii/B9780081008881000100

Addressing adverse events in health care through a safety science lens

Jessica L. Howe, ... Kathryn M. Kellogg, in Clinical Engineering Handbook (Second Edition), 2020

Background

The late 1980s and early 1990s marked the start of an emerging paradigm shift in safety science. Researchers and engineers working in high-risk industries began to develop models integrating both the person and system approaches. Over the following years, two key frameworks, the Human Factors Analysis and Classification System (HFACS) and the Systems Engineering Initiative for Patient Safety (SEIPS) model, emerged in safety science and were adopted for use in health care, specifically for understanding hazards and adverse events.

The dual approach to human error, recognizing contributions from both the person and the system, was formalized by psychologist James Reason, who attributed adverse safety events to the interplay of active failures and latent conditions. Reason described active failures as distinct actions by an individual that result in immediate consequences, and latent conditions as less apparent circumstantial errors that are attributed to the design of a system (Smith and Sainfort, 1989; Reason, 1990, 2000). Reason also developed the well-known Swiss Cheese Model, in which he described the latent conditions of a system aligning like holes in Swiss cheese to allow an event to occur. In this model he identified four domains of failure within a work system: Unsafe Acts of Operators, Preconditions for Unsafe Acts, Unsafe Supervision, and Organizational Influences. Today, these four domains serve as a backbone for one of the two major frameworks used in safety science.

The HFACS framework, like the Swiss Cheese Model, emphasizes the combination of active and latent failures in the presence of an adverse event across multiple levels of a working system (Shappell and Wiegmann, 1997, 2000; Wiegmann and Shappell, 2003). The four domains included in HFACS represent the structural levels within an organization. For instance, the first two levels, Unsafe Acts of Operators and Preconditions for Unsafe Acts, focus on various aspects of active failures by frontline staff. Unsafe Acts includes two categories: errors and violations. Violations reflect events in which the frontline staff knowingly did not follow policy or procedure. These violations are common in health care and are often workarounds in response to other constraints of the system. For instance, a nurse deliberately disregarding the need to verify a patient’s allergy list due to time constraints would be a routine violation. Errors are those that a frontline worker makes unknowingly, most often in health care because she or he has been working in a mode of automaticity and has missed a step; for example, a physician placing a central line forgets to remove the guidewire. This is not an error due to lack of knowledge, but rather an error of automaticity, or a skill-based error (Embrey, 1996). Preconditions for Unsafe Acts reflects various potential system factors that can increase the chances that an error will occur. These can be environmental and related to either the physical environment or the tools and technology used in the environment, such as an electronic health record with a confusing vital signs entry screen where nurses often enter pediatric patients’ weights in pounds instead of kilograms. This technological issue increases the likelihood that the patient will suffer a medication dosing error. Conditions of Operators reminds the user of HFACS to consider all of the ways in which the environment is not optimized for the worker. For instance, if a hospital does not have proper equipment for moving bariatric patients, a petite nurse trying to move an obese patient without lifting-assist equipment would have difficulty, and the chance of the patient falling increases significantly. Finally, Personnel Factors include both the frontline worker’s personal readiness for work and teamwork issues. Frequent hazards in health care arise around teamwork, such as inadequate transitions of care, increasing the chance that a patient could suffer from an error.

Continuing up the structural levels within an organization, the top two categories of HFACS, Unsafe Supervision and Organizational Influences, emphasize system design-related and operational errors. Unsafe Supervision reflects latent conditions such as the inadequate promotion of policies or an inefficient process of pairing a novice nurse with a preceptor. Finally, Organizational Influences point to important factors such as organizational culture. If an organization has a very hierarchical culture and a punitive response to error, frontline workers will be less likely to point out potential hazards to their managers, thereby leaving hazards uncorrected and increasing the likelihood that those hazards could result in an error.

The HFACS framework provides a data-driven approach to developing innovative investigation techniques and effective intervention strategies to mitigate the occurrence of human error-related adverse events. It can also be used to optimize operational processes such as corporate decisions for everyday activities or standardized procedures.
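To make the four HFACS levels concrete, the following minimal Python sketch tags contributing factors from a hypothetical adverse event with a level and separates active failures from latent conditions. The class names, fields, and example factors are illustrative assumptions, not part of the HFACS publications.

```python
from dataclasses import dataclass
from enum import Enum


class HfacsLevel(Enum):
    """The four structural levels described above."""
    UNSAFE_ACTS = "Unsafe Acts of Operators"                 # active failures
    PRECONDITIONS = "Preconditions for Unsafe Acts"          # conditions that make error more likely
    UNSAFE_SUPERVISION = "Unsafe Supervision"                # latent, supervisory
    ORGANIZATIONAL_INFLUENCES = "Organizational Influences"  # latent, organizational


@dataclass
class ContributingFactor:
    description: str
    level: HfacsLevel
    is_violation: bool = False  # meaningful only for unsafe acts (intentional deviation)

    @property
    def is_active_failure(self) -> bool:
        # Unsafe acts are active failures; the other three levels hold latent conditions.
        return self.level is HfacsLevel.UNSAFE_ACTS


# Example classification of factors from a hypothetical medication-dosing event.
factors = [
    ContributingFactor("Nurse skipped the allergy check under time pressure",
                       HfacsLevel.UNSAFE_ACTS, is_violation=True),
    ContributingFactor("Confusing weight-entry screen (pounds vs kilograms)",
                       HfacsLevel.PRECONDITIONS),
    ContributingFactor("Novice nurse paired with no preceptor",
                       HfacsLevel.UNSAFE_SUPERVISION),
    ContributingFactor("Punitive culture discourages hazard reporting",
                       HfacsLevel.ORGANIZATIONAL_INFLUENCES),
]

for factor in factors:
    kind = "active failure" if factor.is_active_failure else "latent condition"
    print(f"{factor.level.value}: {factor.description} ({kind})")
```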

Subsequently in the continued development of safety science, industrial engineers also examined safety with a person and system approach, focusing on work system design to optimize performance and prevent adverse events (Smith and Sainfort, 1989; Carayon and Smith, 2000; Donabedian, 1988). Through the balance theory and structure-process-outcome (SPO) models, Carayon, Smith, and Donabedian laid the foundation for the development of a second major framework: a new model, funded by the Agency for Healthcare Research and Quality (AHRQ), that applies human factors and systems engineering to patient safety, called the SEIPS model (Carayon et al., 2006; Holden et al., 2013). This model is unique in that it uses a person-centered approach to assess and evaluate the design of the work system and its interactions with various healthcare processes to improve patient, employee, and organizational outcomes. The work system consists of five components (the environment, organization, tasks, technology, and tools) centered around the individual, who may be the patient, the healthcare provider, or the healthcare team. The individual most central to the work sits at the center of the model. For instance, if the work is administering a medication, the nurse would be at the center of the model, his or her tasks would be those related to medication administration, the technology might be a bar code scanner or an electronic health record, and so on. Again, features of the environment and the organization are highlighted, as both have important influences on the work being done. The SEIPS model focuses on looking for ways to make it easy for the person at the center of the system to do the right thing, which in turn decreases the chances of error (Leape, 1994). Additionally, while the SEIPS model recognizes that the person is central to the system, it also highlights the importance of a balanced work system. The performance of the person at the center of the system depends heavily on the functioning of the other parts of the system. If a physician has adequate tools and technology and a strong team supporting her, she will likely deliver more efficient and accurate care, improving safety. Importantly, when considering system design, all potential persons who could be at the center of the work should be considered; for example, changing the system to yield better balance for a physician might tip the balance of workload for the nurse.
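As a rough illustration of the SEIPS work-system structure described above, the sketch below places a person at the center of a record of work-system components for a single piece of work (medication administration). The field names and the toy "balance check" are assumptions made for this example, not definitions from the SEIPS literature.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class SeipsWorkSystem:
    """Person-centered work system: the individual doing the work sits at the center,
    surrounded by tasks, tools and technology, organization, and environment."""
    person: str
    tasks: List[str] = field(default_factory=list)
    tools_and_technology: List[str] = field(default_factory=list)
    organization: List[str] = field(default_factory=list)
    environment: List[str] = field(default_factory=list)

    def missing_components(self) -> List[str]:
        """Very rough balance check: flag empty components, since the person's
        performance depends on every other part of the work system."""
        names = ("tasks", "tools_and_technology", "organization", "environment")
        return [name for name in names if not getattr(self, name)]


# Medication administration with the nurse at the center of the work system.
med_admin = SeipsWorkSystem(
    person="nurse",
    tasks=["verify the order", "scan the patient wristband", "administer the medication"],
    tools_and_technology=["bar code scanner", "electronic health record"],
    organization=["double-check policy", "staffing levels"],
    environment=["lighting", "interruptions at the medication cart"],
)

print(med_admin.missing_components())  # [] -> no obviously missing component in this sketch
```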

Because of the detailed analysis of the interactions and work system of a healthcare organization that it provides, the SEIPS model has been effectively applied to system design, proactive hazard analysis, accident investigation, and patient safety research. It is the most widely used healthcare human factors system model and has been extended by a 2013 study that incorporated additional concepts to consider: configuration, engagement, and adaptation (Holden et al., 2013). This recent paradigm shift in safety science, focusing on person-centered and systems approaches, has made it possible for human factors experts and safety scientists to identify more specific factors contributing to human error and to develop sustainable intervention and prevention techniques.


URL: https://www.sciencedirect.com/science/article/pii/B9780128134672001243

Developing the Job Hazard Analysis

Nathan Crutchfield CSP, James Roughton CSP, Six-Sigma Black Belt, in Safety Culture, 2014

Building the Case for a JHA Process

The JHA is a core process element of the safety management system and offers benefits in multiple areas. As mentioned above, if you have a JHA process, it becomes part of the network of communications that spreads job information and influences safe behaviors throughout the organization. It covers both the job design, for potential latent (built-in, hidden) errors, and potential active errors by the employee (Job Safety Analysis: A Fundamental Tool for Safety, n.d.; Human Performance Improvement Handbook, Volume 1: Concepts and Principles, 2009).

Latent errors result in hidden organization-related weaknesses or equipment flaws that lie dormant. Such errors go unnoticed at the time they occur and have no immediate apparent outcome to the facility or to personnel. Latent conditions include actions, directives, and decisions that either create the preconditions for error or fail to prevent, catch, or mitigate the effects of error on the physical facility … Managers, supervisors, and technical staff, as well as front-line workers, are capable of creating latent conditions. Inaccuracies become embedded in paper-based directives, such as procedures, policies, drawings, and design bases documentation.

Active errors are those errors that have immediate, observable, undesirable outcomes and can be either acts of commission or omission. The majority of initiating actions are active errors. Therefore, a strategic approach to preventing events should include the anticipation and prevention of active errors.

(Human Performance Improvement Handbook, Volume 1: Concepts and Principles, 2009)

The JHA can be used as a problem-solving tool for assessing the potential for loss-producing events. Each job element has inherent hazards that carry varying levels of risk (severity and probability). The JHA provides a structure that allows these elements to be analyzed and the proper controls better defined.

Possible issues for which a JHA can provide further insight or explanation include:

Job-related hazards and associated risk.

Lack of knowledge of proper procedures.

Lack of physical ability to complete specific steps and tasks.

Improper use of “on-the-job training” where a new employee is trained by a “seasoned” (long-time) employee.

Reducing or eliminating the perception that associated risks are acceptable by clearly identifying the potential severity of loss-producing events (Job Safety Analysis: A Fundamental Tool for Safety, n.d.).

Does an employee’s perception of existing and/or potential hazards and the consequences of exposure differ from that of the employer? The employee sees a hazard and wants it fixed immediately. The employer may respond to this issue quickly and address the hazard but is often slowed down by internal structures, budget constraints, proper corrective actions, priorities, etc. Answers to critical questions must be clearly defined: Is the safety issue real? How big is the risk? What are the options? What is the best way to correct the identified risk? Who is going to correct the hazard? How long will it take to develop and implement preventative measures? How much will it cost? Is there need for additional training?

One must remember that risk is based on probability; even with a blatant hazard, no loss-producing event may yet have occurred.

(Roughton & Crutchfield, 2008)

JHA is a technique that can help an organization focus on specific tasks as a way to identify hazards before they occur. It focuses on the relationship between the employee, the task, specific tools, material, and equipment, and the work environment. Ideally, after uncontrolled hazards are identified, steps can be taken to eliminate or reduce the risk of hazards to an acceptable level.

(Roughton & Crutchfield, 2008)

The Corps of Engineers uses a form of the JHA process called Activity Hazard Analysis (AHA) on construction sites. The AHA is developed prior to performing any new task.

Before beginning each work activity involving a type of work presenting hazards not experienced in previous project operations or where a new work crew or sub-contractor is to perform the work, the Contractor(s) performing that work activity shall prepare an AHA.

(Army System Safety Management Guide, 2008)

Table 12.2 provides an overview of “JHA Basic Terms” that are used in developing the JHA.

Table 12.2. JHA Basic Terms

Job: A job can be defined as a sequence of steps with specific tasks that are designed to accomplish a desired goal (Managing Worker Safety and Health, n.d.; Managing Worker Safety and Health, Appendix 9-4 Hazard Analysis Flow Charts, n.d.).
Steps: Steps are defined as a series of actions necessary to complete the job.
Tasks: Tasks are detailed actions taken to complete a step.
Analysis: Analysis is the art of breaking down a job into its basic steps and their tasks and evaluating each step/task for specific inherent hazards and associated risk. Each hazard or associated risk is evaluated for methods of control (avoidance, engineering controls, administrative practices, PPE, etc.) that are implemented as part of the standard operating procedures.

Source: Adapted from Roughton, J. E., & Crutchfield, N. (2008). Job hazard analysis: A guide for voluntary compliance and beyond. Chemical, Petrochemical & Process series. Elsevier/Butterworth-Heinemann. Retrieved from http://amzn.to/VrSAq5.
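The terms in Table 12.2 map naturally onto a simple hierarchy of job, steps, tasks, and hazards. The sketch below is one possible way to structure a JHA in code; the field names and the severity-times-probability risk score are illustrative assumptions, not taken from the cited sources.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Hazard:
    description: str
    severity: int      # e.g., 1 (minor) to 5 (catastrophic)
    probability: int   # e.g., 1 (rare) to 5 (frequent)
    controls: List[str] = field(default_factory=list)  # avoidance, engineering, administrative, PPE

    @property
    def risk_score(self) -> int:
        # Simple illustrative ranking; a real JHA may use a formal risk matrix instead.
        return self.severity * self.probability


@dataclass
class Task:
    description: str
    hazards: List[Hazard] = field(default_factory=list)


@dataclass
class Step:
    description: str
    tasks: List[Task] = field(default_factory=list)


@dataclass
class Job:
    name: str
    steps: List[Step] = field(default_factory=list)

    def highest_risk_hazards(self, top_n: int = 3) -> List[Hazard]:
        """Collect every hazard in the job and rank by the illustrative risk score."""
        all_hazards = [h for s in self.steps for t in s.tasks for h in t.hazards]
        return sorted(all_hazards, key=lambda h: h.risk_score, reverse=True)[:top_n]


job = Job("Replace pump seal", steps=[
    Step("Isolate the equipment", tasks=[
        Task("Lock out / tag out the power supply", hazards=[
            Hazard("Unexpected energization", severity=5, probability=2,
                   controls=["LOTO procedure", "verification of zero energy"]),
        ]),
    ]),
])

for hazard in job.highest_risk_hazards():
    print(hazard.description, hazard.risk_score, hazard.controls)
```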


URL: https://www.sciencedirect.com/science/article/pii/B9780123964960000124

Disputes and claims

Sidney M. Levy, in Construction Process Planning and Management, 2010

Site conditions that differ materially from the contract documents

Site matters are some of the most frequent and complex disputes that can surface on a construction project, and they require that all parties view the matter not only from their own perspective but also from that of others.

Contractors base their site work estimates on information provided by the owner’s civil engineer and geotechnical technician, referred to as the geotech. The report compiled by the geotech includes a host of information. Soil samples taken from test borings not only provide the structural engineer with soil-bearing capacity information upon which to design the project’s foundation, but they also provide the contractor with reasonable expectations of what he will find when he excavates for those foundations and other site work.

The geotechnical report is in narrative form, and it also diagrammatically provides a contractor with a view of the soil sample, its composition, an indication of underground springs and water levels, the location of rock formations and the type of rock encountered, and the elevation (depth) where these soils, rocks, and water were found.

As you recall, in the bid documents the contractor is invited to inspect the site to become familiar with the conditions. This is a critical prebid requirement that has many manifestations if and when a site problem is encountered. The test borings are often qualified with a statement such as “The conditions found in the test borings are indicative of conditions found solely at that location and may vary widely in adjacent areas of the site.” Once construction begins, the contractor may find that the composition of the soil differs “materially” from that represented in the contract documents and the accompanying geotech report and test borings because he or she has excavated in a location where borings were not taken. This not only applies to soil composition—gravel, clay, humus, decomposed vegetation, sand, and silt—but also to rock and the presence of water. Not infrequently, what was anticipated by the soil-boring analysis and what was actually uncovered vary considerably.

Government contracting issues and resolutions are probably the standard for determining the meaning of “differing materially.” Several government agencies use the “15 percent rule”: if the actual quantity of material encountered exceeds, by more than 15 percent, what a reasonable interpretation of the geotechnical report would indicate, a differing or changed condition has been established. On that basis, the contractor may issue a request for a cost proposal to cover the added costs for the work. But for the contractor to make that claim, he must present the following documentation:

1. The time, date, and condition when these differing conditions were observed

2. The project superintendent’s entry of this event in the daily log or daily report

3. Photos, still or video

4. A statement from the field personnel involved in the discovery, such as the excavating contractor, the contractor’s foreman, or equipment operator

5. A statement explaining the operation that was taking place at the time (for example, excavating for a water line or building foundations) and the exact location

The contractor should alert the owner’s representative or architect immediately so he or she can observe these conditions and comment on them as they are occurring rather than after the removal of the differing materials has taken place.

The two types of differing-conditions claims, established in the public sector and acknowledged by the private sector, are Type I and Type II claims. For a Type I claim, the contractor must prove the following:

The contract documents include the subsurface or latent conditions that form the basis of the claim.

The contractor’s interpretation of the contract documents is reasonable.

The contractor relied on these interpretations when preparing the estimate of the work.

The subsurface or latent conditions actually encountered were materially different from those represented in the contract documents.

The actual conditions discovered were reasonably unforeseen.

The costs included in the claim are solely representative of the costs to correct the materially differing conditions.

For a Type II claim the contractor must prove the following:

The usual conditions the contractor would have encountered, based on the information included in the contract documents.

The actual conditions that were encountered.

The physical conditions encountered differed materially from the known or usual conditions.

The encountered conditions created an increase in the cost of the work.

To counter these claims, the owner and architect will present the following as their case:

The conditions encountered by the contractor were not really different from those included in the contract documents.

The encountered conditions should have reasonably been anticipated by the contractor.

The project was not managed properly, and the claim was an attempt to recoup unrelated costs due to the mismanagement.

The contractor should have conducted a more thorough site investigation, even requesting, during the bid process, permission to dig some pits to discover the real nature of the site (a rather weak owner argument but one that can be presented).

The excavating equipment was the wrong type or wrong size and could not cope with the conditions uncovered. (This would mainly apply to loose or deteriorated rock that could have been removed with a large excavator and did not require blasting.)

The contractor’s operators or supervisors were inexperienced.

Various court cases have somewhat quantified what is meant by a “changed” condition or a “materially differing” condition, though court interpretations of what constitutes “materially differing conditions” vary. We have already discussed the 15 percent rule, but there are others.

In the late 1990s, the Federal Highway Administration responded to a query from U.S. Senator Frank Lautenberg of New Jersey concerning what constituted “differing” conditions. This FHWA interpretation may be the clearest way to quantify this difficult topic:

We recognize that after a contract gets underway, conditions may change or circumstances may exist that were not anticipated during preparation of the plans, specifications, and estimates. Our governing legislation and our implementing regulations allow for change orders within the scope of work covered by the contract. In awarding contracts for federal-aid highway projects, the State transportation department must include a standardized clause for changed conditions to provide for an adjustment of contract terms if the altered character of the work differs materially from the original contract or if a major item of work is increased or decreased by more than 25 percent of the original contract value.
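As a purely numerical illustration of the two thresholds mentioned in this section (the informal 15 percent rule for quantities interpreted from the geotechnical report, and the FHWA clause triggered when a major item of work changes by more than 25 percent), the sketch below shows how such screening checks might be expressed; the function names and figures are hypothetical.

```python
def exceeds_15_percent_rule(interpreted_quantity: float, actual_quantity: float) -> bool:
    """Informal '15 percent rule': the actual quantity encountered exceeds, by more
    than 15 percent, what a reasonable interpretation of the geotechnical report
    would indicate."""
    return actual_quantity > interpreted_quantity * 1.15


def triggers_changed_conditions_clause(original_item_value: float, revised_item_value: float) -> bool:
    """FHWA-style clause: a major item of work is increased or decreased by more
    than 25 percent of its original contract value."""
    if original_item_value == 0:
        return revised_item_value != 0
    change = abs(revised_item_value - original_item_value) / original_item_value
    return change > 0.25


# Rock excavation interpreted at 1,000 cubic yards; 1,300 cubic yards encountered.
print(exceeds_15_percent_rule(1000, 1300))                   # True -> potential differing condition
# A major item originally valued at $200,000 revised to $230,000 (a 15 percent change).
print(triggers_changed_conditions_clause(200_000, 230_000))  # False -> below the 25 percent trigger
```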


URL: https://www.sciencedirect.com/science/article/pii/B9781856175487000100

Human performance tools as a part of programmatic human performance improvement

Kaupo Viitanen, in Human Factors in the Nuclear Industry, 2021

6.2.6 Independent verification

Independent verification utilizes third-party verifiers who have not been involved with the task to check that the task output is as expected. INPO (2006a) recommends that independent verification is used in situations where the (potentially adverse) condition resulting from performing a task can remain undetected. In essence, independent verification is used to verify that the task does not leave latent conditions in the system.

One of the fundamental principles of independent verification is the way in which information feedback between performers and verifiers is handled. DoE (2009b) proposes that independent verification is more effective in catching errors when compared to peer checking, because there is no information feedback from task performers toward the verifier. That is, verifiers are independent from performers. In practice, this independence is achieved by temporal separation (verification takes place after performance) and spatial separation (verifier is not able to receive auditory or visual information on the task performance) (DoE, 2009b).
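A minimal sketch of how the independence, temporal separation, and spatial separation described above might be checked against a work-management record follows; the record fields and rules are assumptions made for illustration, not a description of any plant's actual system.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class TaskRecord:
    task_id: str
    performer: str
    completed_at: datetime
    verifier: Optional[str] = None
    verified_at: Optional[datetime] = None
    verifier_observed_performance: bool = False  # rough proxy for spatial separation


def is_independent_verification(record: TaskRecord) -> bool:
    """True only if a different person verified the output, after the task was
    completed, without observing (or hearing) the task being performed."""
    if record.verifier is None or record.verified_at is None:
        return False
    return (record.verifier != record.performer              # independence of person
            and record.verified_at > record.completed_at     # temporal separation
            and not record.verifier_observed_performance)    # spatial separation


record = TaskRecord("valve-lineup-042", performer="technician_A",
                    completed_at=datetime(2021, 5, 3, 10, 0),
                    verifier="technician_B", verified_at=datetime(2021, 5, 3, 11, 30))
print(is_independent_verification(record))  # True
```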

However, if the performers know that the task will be independently verified, there is a certain type of information feedback from the verifier toward the performers that can affect the way work is performed. For instance, maintenance personnel may carry out their tasks more carefully because they do not want verifiers to discover errors, or they may perform tasks less rigorously if they know that there will be a quality check of their work afterward (Oedewald et al., 2014; Skjerve and Axelsson, 2014). Both of these effects have a certain face validity and suggest that independent verification can have an unexpected impact on performers’ activities if they are aware that it is used. Generally, however, maintenance personnel appear to embrace independent verification and consider it an integral part of working at nuclear power plants (Oedewald et al., 2014). It is also worth noting that in the nuclear industry, independent verification is widely institutionalized in the form of organizational entities or processes (e.g., quality control, inspections, and tests) that verify equipment status after work completion.


URL: https://www.sciencedirect.com/science/article/pii/B9780081028452000065

Accidents in Other Industries

Trevor Kletz, in What Went Wrong? (Fifth Edition), 2009

37.3 Human Error

On ships, as on land, there is a readiness to blame human error—poor maintenance, watch-keepers falling asleep, errors in navigation—instead of looking for underlying causes such as poor training or supervision, error-prone designs, lack of protective features, overlong hours of work, and so on. In the following extract from a report [9] on marine accidents, I have changed seamen to operators and made other similar changes:

There is an abundance of academic literature on human error which quickly lapses into language that leaves the average operator [and engineer] totally bewildered, and few will have the foggiest ideas what is meant by “visual/tactile dissimilarity,” “cognitive aspects of safety,” “rule-based behaviour,” “latent conditions and pathogens,” or “non-optimised performance related factors.” What the operator [and the engineer] needs is a simple explanation about what is meant by human factors so he or she can better understand why it matters and what needs to be done to improve safety and conditions of service.

I have tried to provide such a guide in An Engineer's Guide to Human Error [10] (see the introduction and Section 38.3).

The following are two more adopted quotations from a marine report [11], this time without change:

[When a ship has run aground] giving orders calmly will ensure success. It is not the moment to give the unfortunate helmsman his or her annual appraisal.

When the draught of your vessel exceeds the depth of water available … you can always consider the delights of gardening.

Section 26.1 drew attention to the reluctance of some operators to go and look at the plant when it is not operating correctly. The following extract from a letter by a deep-sea pilot [12] describes a view shared by many chemical engineers:

Modern watch-keepers tend to be wonderful at operating computers and twiddling radars, but abysmal in the basics, such as keeping a visual lookout and correctly applying the collision regulations. Lack of a grounding in mental arithmetic also means that they often cannot roughly estimate their computerized information and realize when it is wrong.

There are other marine accident reports in Sections 27.5 and 29.3.


URL: https://www.sciencedirect.com/science/article/pii/B9781856175319000378

Machinery Failure Analysis and Troubleshooting

In Practical Machinery Management for Process Plants, 1999

Sneak Circuits and Their Analysis

A sneak circuit is an unexpected path or logic flow within a system which, under certain conditions, can initiate an undesired function or inhibit a desired function. The path may consist of hardware, software, operator actions, or combinations of these elements. Sneak circuits are not the result of hardware failure but are latent conditions, inadvertently designed into the system or coded into the software program, which can cause it to malfunction under certain conditions. Categories of sneak circuits are:

1. Sneak paths, which cause current, energy, or logical sequence to flow along an unexpected path or in an unintended direction.

2. Sneak timing, in which events occur in an unexpected or conflicting sequence.

3. Sneak indications, which cause an ambiguous or false display of system operating conditions and thus may result in an undesired action taken by an operator.

4. Sneak labels, which incorrectly or imprecisely label system functions, e.g., system inputs, controls, displays, buses, etc., and thus may mislead an operator into applying an incorrect stimulus to the system.

Figure 8-1 depicts a simple sneak circuit example. With the ignition off, the radio turned to the on position, the brake pedal depressed, and the hazard switch engaged, the radio will power on with the flash of the brake lights. Although we can be reasonably certain that this automotive sneak circuit does not exist in many automobiles today, it was fairly common in certain types of late-1960s models. The radio should only be powered by current flowing through the ignition switch, but it may also be powered by current that flows along the indicated path, totally bypassing the ignition switch. The flasher module drives the radio in sync with the tail lights.


Figure 8-1. The automotive sneak circuit (sneak path), common in the 1960s. The radio should be powered by current flowing through the ignition switch; however, the electric current flows along an unintended sneak path (→), causing the radio to “blink” with the tail lights.

This type of sneak circuit is basically a stray current path. The path was designed into the system, and no hardware failures are required to activate it. Rather, a peculiar circumstance of combined operating conditions is the trigger. Until these circumstances are met, the sneak will lie dormant through testing and operation. Formal sneak circuit analysis can now uncover such potential “glitches” before they occur.
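The automotive example can be captured in a few lines of logic. The sketch below is a simplified boolean model (not an electrical simulation) showing how the radio can be energized through the unintended path even though the ignition switch is open.

```python
def radio_powered(ignition_on: bool, radio_switch_on: bool,
                  brake_pressed: bool, hazard_switch_on: bool) -> bool:
    """Simplified boolean model of the Figure 8-1 circuit.
    Intended path: battery -> ignition switch -> radio.
    Sneak path: battery -> hazard/brake-light wiring -> flasher module -> radio."""
    intended_path = ignition_on and radio_switch_on
    sneak_path = brake_pressed and hazard_switch_on and radio_switch_on
    return intended_path or sneak_path


# No hardware failure is involved: with the ignition off, a peculiar combination of
# normal operating conditions is enough to energize the radio through the sneak path.
print(radio_powered(ignition_on=False, radio_switch_on=True,
                    brake_pressed=True, hazard_switch_on=True))   # True (radio blinks with the flasher)
print(radio_powered(ignition_on=False, radio_switch_on=True,
                    brake_pressed=True, hazard_switch_on=False))  # False
```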

Sneak circuit analysis (SCA) is the term applied to a group of analytical techniques intended to methodically identify sneak circuits in systems. SCA techniques may be either manual or computer-assisted, depending on system complexity. Current SCA techniques which have proven useful in identifying sneak circuits in systems include:

1. Sneak path analysis. This method of SCA attempts to discover sneak-circuit conditions by means of methodical analysis of all possible electrical paths through a network (a minimal path-search sketch follows this list). Because of the large volume of data involved, sneak path analysis normally mandates computer data processing. It has been found that sneak-circuit conditions generally have certain common characteristics which are directly related to topological patterns within the network. Sneak path analysis uses these common characteristics as “clues” to look for sneak circuits in the system being analyzed.

2. Digital sneak circuit analysis. This SCA method is intended to discover sneak-circuit conditions in digital systems. Digital SCA may involve some features of sneak path analysis, but it may also involve additional computer-assisted techniques such as computerized logic simulation, timing analysis, etc., to handle the multiplicity of system states encountered in modern digital designs. In general, digital SCA will identify the following types of anomalies:

a. Logic inconsistencies and errors.

b. Sneak timing, that is, a convergence of signals which causes an erroneous output due to differing time delays along different signal paths through a digital network.

c. Excessive signal loading or fan-out.

d. Power supply cross-ties, grounding, or other misconnections of signal pins.

3. Software sneak path analysis. Software sneak path analysis was adapted from hardware sneak path analysis. It was found that computer program flow diagrams which contained known sneak paths were most often associated with certain common flow diagram topologies and had other common characteristics. These common characteristics served as a basis for establishing “clues” which could be used to analyze new computer program flow diagrams. Computerized path search programs developed to do SCA on hardware were adapted or rewritten to accept software logical flows, and new clues were developed to analyze them. Software SCA can be done either manually or with computer assistance, depending primarily on the size and complexity of the software. It may be combined with the hardware SCA and is most often used on embedded software in a complete minicomputer or microcomputer-controlled hardware system. It has been used on both assembly language and higher-order language programs.

4. Other sneak circuit analysis techniques. Variations of SCA have been developed to analyze particular types or combinations of systems, such as hardware/software interfaces. The application of any new SCA procedure to a particular system or situation must be judged on its demonstrated effectiveness in detecting sneak circuits in similar cases or on its anticipated benefit in the specific situation being considered. It is difficult to predefine a course of action for handling new SCA types which may emerge, but the general ground rules for evaluating applicability hold true also.
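To illustrate the idea behind sneak path analysis (item 1 above), the sketch below enumerates every path through a small circuit graph from the battery to the radio and flags any path that bypasses the ignition switch. The graph, node names, and the single "clue" used (a load reachable without passing through its intended switch) are simplified assumptions, not an actual SCA algorithm.

```python
from typing import Dict, List

# Simplified circuit topology for the Figure 8-1 example: an edge means a conductive
# connection exists when the relevant switches are closed.
circuit: Dict[str, List[str]] = {
    "battery": ["ignition_switch", "hazard_switch"],
    "ignition_switch": ["radio"],
    "hazard_switch": ["brake_light_circuit"],
    "brake_light_circuit": ["flasher_module"],
    "flasher_module": ["radio"],
    "radio": [],
}


def all_paths(graph: Dict[str, List[str]], start: str, goal: str) -> List[List[str]]:
    """Depth-first enumeration of every simple path from start to goal."""
    paths, stack = [], [[start]]
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:  # keep paths simple (no revisits)
                stack.append(path + [nxt])
    return paths


# "Clue": any battery-to-radio path that does not pass through the ignition switch
# is a candidate sneak path.
for path in all_paths(circuit, "battery", "radio"):
    label = "intended path" if "ignition_switch" in path else "candidate sneak path"
    print(label, "->", " -> ".join(path))
```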

All of the SCA types given above, i.e., sneak path analysis, digital SCA, and software sneak path analysis, may be performed manually under some limited conditions. Computer-assisted SCA data processing is also possible with each of these types and is absolutely essential for more complex systems. The availability and thoroughness of computer aids strongly influence the selection of a contractor to perform SCA on a complex system.

A few additional words relating to complexity are in order. Sneak conditions result from the following three primary sources: (1) system complexity, (2) system changes, and (3) user operations. Hardware or software system complexity results in numerous subsystem interfaces that may obscure the intended functions or produce unintended functions. The effects of even minor wiring or software changes to specific subsystems may result in undesired system operations. A system that is relatively sneak free can be made to circumvent desired functions or generate undesired functions as a consequence of improper user operations or procedures. As systems become more complex, the number of human interfaces multiplies because of the involvement of more design groups, subcontractors, and suppliers. Hence, the probability of overlooking potentially undesirable conditions is increased proportionately.


URL: https://www.sciencedirect.com/science/article/pii/S1874694299800108

Managing error on the open road: The contribution of human error models and methods

Paul M. Salmon, ... Guy H. Walker, in Safety Science, 2010

Latent condition identification methods are a general class of error management approaches used to identify and eradicate error-causing conditions within safety-critical domains. Inspired by Reason’s Swiss cheese model, methods such as TRIPOD DELTA (Reason, 1997) and REVIEW (Reason, 1997) are used in their respective domains to identify the extent to which latent or error-causing conditions are a cause for concern, and to inform the development of countermeasures designed to remove latent conditions. The process typically involves safety managers using checklists of failure/error types and latent conditions to assess the risks associated with a particular system. Upon identification of areas of concern, remedial measures are proposed and implemented.
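As a rough illustration of the checklist-based approach these methods take, the sketch below tallies a safety manager's answers by category; the categories, items, and scoring are invented for this example and are not taken from TRIPOD DELTA or REVIEW.

```python
# Hypothetical checklist of latent-condition categories; each item is answered True
# if the error-causing condition is judged to be present in the system under review.
checklist = {
    "procedures": {"out-of-date work instructions": True,
                   "conflicting task procedures": False},
    "training": {"novices working unsupervised": True,
                 "no refresher training for rare tasks": True},
    "equipment": {"poorly labelled controls": False,
                  "tools unsuited to the task": False},
    "organization": {"production pressure overrides safety checks": True,
                     "no channel for reporting hazards": False},
}


def category_scores(items: dict) -> dict:
    """Fraction of latent conditions judged present in each category (higher is worse)."""
    return {category: sum(answers.values()) / len(answers)
            for category, answers in items.items()}


def areas_of_concern(items: dict, threshold: float = 0.5) -> list:
    """Categories whose score meets or exceeds the threshold; these would be the
    targets of the remedial measures mentioned above."""
    return [category for category, score in category_scores(items).items()
            if score >= threshold]


print(category_scores(checklist))
print(areas_of_concern(checklist))  # ['procedures', 'training', 'organization']
```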


URL: https://www.sciencedirect.com/science/article/pii/S0925753510001037
