Understanding, Analysing, Preventing and Learning from Security Failure


Security fails all the time. Some security failures involve the human element, in the form of error or procedural violation. Others are more complex, involving grievance, malice, criminal intent, process and design flaws, and technology malfunction. Many organisations apply safety and security risk management to deal with and learn from this wide spectrum of failures. However, differing definitions of the term security failure, and divergent understandings of the concept, can create confusion and misunderstanding as to what the issue actually is and which form of response should address it. Clarification is therefore needed. This article seeks to better define and conceptualise what is meant by security failure, and to inform the discussion about how organisations analyse, prevent and learn from security failures.

Contents:

  1. Introduction
  2. Clarifying the term security failure
  3. Analysing security failures
  4. Explaining and tackling security failures
  5. Learning from security failures
  6. References



This article will examine and critically assess the state of knowledge concerning the conceptual understanding of security failure and outline a hypothetical strategic solution aimed at reducing its incidence. First, the term security failure will be clarified. Next, the various means of analysing security failures will form the basis of the second section. The abstract structuring of security failure will then be explained, together with some of the preventative means associated with it. The different ways in which organisations learn from security failure will then be assessed. Finally, the idea of making the reporting of security failures compulsory in the UK will be explored.

Clarifying the term security failure

To begin with, in order to better grasp the term security failure, it is important to comprehend what is actually meant by security, and then to consider its lack of success or dysfunction as what might be termed a security failure. It is also important to note that the term security failure, like the term security itself, will be defined differently from place to place and from person to person, and will vary in both scope and definition depending upon specific viewpoints, conceptualisations and experiences.

An example of this is the contrast between the definition of security presented by Zedner (2003), which appears rather broad, and that given by Manunta and Manunta (2006), which by contrast appears rather narrow. According to Zedner (2003, p55), security is a dual concept. It encompasses both a state of being (i.e. something is secure) and a means to that end (i.e. things are done to secure something). In turn, she argued (ibid), something would be 'security' if and only if (a) no threat exists, (b) it is protected from threat and (c) it avoids threat. With that in mind, a security failure would exist if and only if (d) a threat exists, (e) the asset is unprotected from that threat and (f) the threat is not avoided. Zedner also recognised security as being subjective (ibid). To that end, something would be 'security' if and only if (g) it is felt to be such and (h) it is not insecure in essence. In that respect, a security failure would exist if and only if (i) it is felt to be such and (j) it is insecure in essence.

Although relevant to comprehending what is meant by security, such an abstract and broad definition of the term is perhaps problematic, at least for the security practitioner, as it omits several variables inherent to the functional understanding of security. This has been partly addressed by Manunta and Manunta (2006, p641), who recognised security (S) as being a function (f) of the presence and interaction of a threat (T), a given asset (A) and a protector (P) within a set of other structuring variables which they call a situation (Si). In turn, they argued (ibid), security could perhaps be understood as follows: S = f(A, P, T) Si

This definition is interesting because it encapsulates the concept of a security system, whose function is primarily to protect assets against intelligent actors (as opposed to a safety system, concerned with protection against non-intelligent agents such as fire, water, wind, bacteria or viruses). To that end, a security system unable to protect an asset would be understood as a security failure, principally because of its dysfunction and lack of success in achieving its main purpose. Arguably, it could then be extrapolated that: Sf = f(A, T) Si; where Sf is security failure; f the function of; A an asset; T a threat; and Si any given situation.
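The functional reading of Manunta and Manunta's formula can be illustrated in code. The following is a minimal sketch, assuming a simple boolean interpretation in which the situation (Si) determines whether the protector is effective; all names and the boolean simplification are illustrative assumptions, not part of the original formulation.

```python
from dataclasses import dataclass

# Illustrative boolean reading of S = f(A, P, T) Si and Sf = f(A, T) Si.
# The "situation" is reduced here to a single flag: does the protector
# neutralise the threat in this context?

@dataclass
class Situation:
    protector_effective: bool

def security(asset_present: bool, protector_present: bool,
             threat_present: bool, situation: Situation) -> bool:
    """S = f(A, P, T) Si: security holds when the protector counters the threat."""
    if not asset_present:
        return True   # nothing to protect, nothing to fail
    if not threat_present:
        return True   # no threat, the asset is secure by default
    return protector_present and situation.protector_effective

def security_failure(asset_present: bool, threat_present: bool,
                     situation: Situation) -> bool:
    """Sf = f(A, T) Si: failure is an asset facing a threat unprotected.
    Note that P drops out, as in the extrapolated formula; here the
    (in)effectiveness of protection is carried by the situation Si."""
    return asset_present and threat_present and not situation.protector_effective

# A threatened asset in a situation with no effective protection fails.
si = Situation(protector_effective=False)
assert not security(True, True, True, si)
assert security_failure(True, True, si)
```

The sketch makes visible what the prose argues: in this formulation, failure is simply the state in which the security function does not hold for a given asset, threat and situation.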

However original, such a definition of the term security also appears problematic, mainly because it fails to address the causality and consequentiality of security failure. This has been justifiably addressed by Button (2008, p29), who recognised that 'Security failure enables an act that breaches what the security system is designed to prevent' (emphasis added). Such a view encapsulates the idea that security failures are consequences, the resultants of complex chains of events (Reason, 1997) converging towards failure. For example, because of multiple factors, a security guard was asleep while on duty in a factory and, because the back door of the building had been left open, a thief came in and stole an asset. Arguably, it could be hypothesised that if and only if N1, N2, …, Nn occur, then Sf occurs, that is: Sf ↔ (N1 ∧ N2 ∧ … ∧ Nn) over T; where N1, N2 and Nn are the events converging towards the security failure Sf; ↔ the biconditional logical connective if and only if; ∧ the logical conjunction and; and T the time over which the events accumulate.
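This chain-of-events reading can be sketched as a simple conjunction over a failure chain. The event names below are illustrative, taken from the worked example of the sleeping guard; the dictionary model is an assumption made for the sketch, not part of Button's or Reason's formulations.

```python
# Sketch of the biconditional reading: the security failure Sf occurs
# if and only if every event N1..Nn in the chain occurs within the
# observed time window.

def failure_occurred(events_occurred: dict[str, bool]) -> bool:
    """Sf <-> (N1 and N2 and ... and Nn): failure iff all events converge."""
    return all(events_occurred.values())

chain = {
    "guard_asleep_on_duty": True,
    "back_door_left_open": True,
    "thief_enters_building": True,
    "asset_stolen": True,
}
assert failure_occurred(chain)      # all events converged: failure

# Breaking any single link in the chain prevents the failure.
chain["back_door_left_open"] = False
assert not failure_occurred(chain)
```

The second assertion is the preventative corollary drawn in the text: if any one event in the chain is prevented from occurring, the failure itself is prevented.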

This definition is interesting because it evidences the rationale behind security failure analysis, a theme that the next section will explore, and also removes the perceptual variable of threat, placing the focus instead upon factual events, and thereby upon causality and consequentiality. In that respect, a security failure can be prevented if and only if specific events are prevented from occurring. Nevertheless, as the above has demonstrated, the definition of security failure is subject to much debate and controversy. Central to this assertion is perhaps the idea that security, in essence, is neither objective nor quantifiable (Wood and Shearing, 2007, cited in Button, 2008, p3) and that it does indeed fail all the time (Button, 2008, p29).

Indeed, the term security failure appears all too often to be misinterpreted. For instance, some authors understand it as a security breach, emphasising the act which breaches the security system, whereas others understand it as a security incident, emphasising the consequence of the act which breached the security system. This practice creates confusion and misunderstanding as to what the issue actually is and which form of response should address it. To that end, security and security failure should not necessarily be dissociated because, after all, they are one concept. Security is a risk in itself, being subject to uncertainty (security is or is not), likelihood (when it is and when it is not) and consequence (what results from either state).

Analysing security failures

Exploring the literature surrounding the topic of security failure, notably the works of Lam (2003), Borodzicz (2005), Toft and Reynolds (2005), Briggs and Edwards (2006), Graham and Kaye (2006), Gill (2006; 2014), Pettinger (2007), Talbot and Jakeman (2009), Carrel (2010), Hopkin (2010), Boyle (2012) and Speight (2012), two broad categories of analysis become apparent, namely (a) proactive analysis and (b) retrospective analysis. Recognising that security will fail at some point in time, the former aims to assess the likelihood and consequences of security failure in order to better control the future behaviour of a security-enabled environment.

To that end, when a proactive analysis is made, security is, and is calculated to remain, as such. This sort of analysis is prevalent in the field of risk management and is now applied in most business practices. Retrospective analysis, on the other hand, examines security failure once it has manifested, and thereby investigates the past. In that respect, when a retrospective analysis is made, security could have been but was not. This sort of analysis prevails in the fields of security management and of disaster recovery and accident management. Either kind of analysis can, in turn, be of two types, namely (c) objective analysis and (d) subjective analysis.

Objective analysis is formal, scientific and quantitative in approach. By contrast, subjective analysis tends to involve value judgement and heuristics, and is therefore pseudo-scientific and qualitative in approach. Both types of analysis have advantages and drawbacks. They serve different security objectives and allow the analysis of different security problems. Nonetheless, it must be acknowledged that they are complementary.

Reviewing the literature further, notably the works of Garcia (2006, 2006b, 2008), reveals that either kind of analysis can be shaped by three sorts of logical reasoning, namely (e) inductive reasoning, (f) deductive reasoning and (g) abductive reasoning. In inductive reasoning, the truth of the conclusions of the failure analysis is merely a probability based upon the evidence given (Copi, Cohen and Flage, 2007). It uses a bottom-up approach in which risks are identified at the beginning of the analysis (Garcia, 2006, p518). For example, given the proposition that 'if a security failure is true then X, Y and Z are true', inductive reasoning would suggest that, given that X, Y and Z are observed to be true, the security failure should probably be true too. Deductive reasoning, on the other hand, links premises with conclusions in order to ascertain that the latter are true (Eysenck and Keane, 2015, p595).

To that end, risks are identified as the result of a systematic, deductive, top-down approach. Considering the previous proposition, deductive reasoning would suggest that because a security failure is true, X, Y and Z are therefore true too. Lastly, abductive reasoning is a process of deriving conclusions from premises known or assumed to be true (often via theorisation), ideally seeking the simplest and most likely explanation(s) of a security failure (Tavory and Timmermans, 2014). To that end, abductive reasoning is a heuristic that eases the cognitive load of making a decision. Examples of such reasoning include using a rule of thumb, an educated guess or an intuitive judgement based upon an observation of patterns. To make sense of the above information, Table 1 outlines some of the means identified as relevant to security failure analysis.
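The two directions of reasoning over the proposition 'if a security failure is true then X, Y and Z are true' can be contrasted in a small sketch. The symptom names X, Y and Z and the crude likelihood measure are illustrative assumptions; the point is only the direction of inference, not a real analysis method.

```python
# Contrast of deductive (top-down) and inductive (bottom-up) reasoning
# over the proposition: "if a security failure is true, then X, Y and Z
# are true". Symptom names are hypothetical placeholders.

SYMPTOMS = ("X", "Y", "Z")

def deduce_symptoms(failure: bool) -> set[str]:
    """Deductive: from the premise 'failure is true', the symptoms
    X, Y and Z follow with certainty."""
    return set(SYMPTOMS) if failure else set()

def induce_failure(observed: set[str]) -> float:
    """Inductive: observed symptoms make failure merely probable.
    Returns a crude likelihood in [0, 1], not a truth value."""
    return len(observed & set(SYMPTOMS)) / len(SYMPTOMS)

assert deduce_symptoms(True) == {"X", "Y", "Z"}   # certain conclusion
assert induce_failure({"X", "Y"}) == 2 / 3        # probable, not proven
```

The asymmetry of the two return types (a definite set versus a probability) mirrors the distinction drawn in the text: deduction ascertains its conclusions, while induction only grades their likelihood.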

Table 1: Security failure analysis

Explaining and tackling security failures

Having defined the term security failure and explored the means of analysing it, this section will first explain the structuring of security failure, and thereby the consequentiality and causality related to it, and then outline various strategic means of preventing it. In that respect, two questions will be answered, namely (a) how does security fail? and (b) how can security be prevented from failing?

How security fails

A security failure, like a criminal act or any other event, does not occur randomly, spontaneously or uniformly in time and space. Its construction follows a set of distinctive patterns (a failure script, or chain of correlated events) and its substance is conditioned by various converging but distinct causal factors (i.e. individual, organisational, technological and socio-political – see Borodzicz, 2005; Button, 2008) over a certain period of time (i.e. incubation phase, precipitation, event – see Toft and Reynolds, 2005). For example, security can fail because a security guard is asleep while on duty (script element one, individual factor, incubation phase element one), thus allowing a malefactor to break in (precipitation, script element two) through a door left open due to staff complacency towards security (script element three, organisational factor, incubation phase element two) and steal an asset (script element four, event phase).
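The worked example above can be represented as a small data structure: an ordered list of script elements, each tagged with its causal factor and phase. This is a hedged sketch; the labels follow the example in the text, but the tagging of the malefactor's acts as 'individual' is an illustrative assumption.

```python
# A failure script as an ordered list of (element, causal factor, phase)
# tuples, following the sleeping-guard example in the text.

script = [
    ("guard asleep on duty",               "individual",     "incubation"),
    ("malefactor breaks in",               "individual",     "precipitation"),
    ("door left open through complacency", "organisational", "incubation"),
    ("asset stolen",                       "individual",     "event"),
]

# Grouping the elements by phase recovers the incubation -> precipitation
# -> event pattern described by Toft and Reynolds (2005).
phases: dict[str, list[str]] = {}
for element, factor, phase in script:
    phases.setdefault(phase, []).append(element)

assert list(phases) == ["incubation", "precipitation", "event"]
assert len(phases["incubation"]) == 2   # two incubating conditions
```

Structuring a script this way makes the analytical point concrete: each tagged element is a candidate intervention point, since removing any one of them breaks the chain.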

It can also fail because a CCTV camera is faulty (technological factor), thus preventing the camera operator from detecting a crime in progress, or because of weak security industry regulation (socio-political factor – see George and Button, 2000; Button, 2002, 2008), which could, for example, allow criminals to run a security company and infiltrate legitimate businesses in order to carry out their activities. Furthermore, it will be noted that any security failure script element can be shaped by the consequences emerging from four sorts of process, namely (c) intentional acts, (d) unintentional acts, (e) inaction and (f) malfunction. By intentional acts are to be understood acts which purposefully serve a given security or security failure objective, for example when a security guard decides to follow, or not to follow, a security procedure.

These kinds of acts can be either legitimate or criminal, depending upon the frame of reference and the objective of the act. For instance, a security guard deciding not to follow a security procedure in order to steal an asset would be committing an intentional criminal act, whereas an employee purposefully bypassing a security protocol in order to become more efficient and productive at work would be committing an intentional legitimate act. Unintentional acts, on the other hand, are, as their name suggests, acts which are not done on purpose and are committed through inadvertence or error. Such erroneous acts can be due either to mistakes or to skill-based slips and lapses (Reason, 1997, p72; 2008).

In that sense, the case of a security guard who forgets to close the back door of a warehouse, thus allowing a thief to come in and steal an asset, would be considered an unintentional act caused by a lapse of memory, due for example to stress, tiredness or lack of focus. Thirdly, the inaction of an employee or of security personnel can also lead to security failure (BBC, 2013). This is what could be termed being complacent vis-à-vis security. For example, a security manager who, feeling satisfied with the relative performance of the security system s/he manages, does 'nothing' or very little to improve it could be considered a case of security complacency.

Finally, by malfunction is to be understood the failure of a piece of security equipment to function normally. As the above has demonstrated, security can fail in many ways. Indeed, it sometimes fails without the incident even being noticed. As demonstrated earlier, security failures can be either anticipated (proactive analysis) or remembered (retrospective analysis). Table 2 summarises the findings of this section.

Table 2: How security fails

How to prevent security from failing

There are many ways of preventing security failures (Gill, 2006, 2014; Button, 2008; Talbot and Jakeman, 2009), and theories abound (Zimring and Hawkins, 1973; Cornish and Clarke, 1986; Wortley and Mazerolle, 2008; Hopkins Burke, 2009; Tilley, 2009). However, because of its relevance to this paper (it is holistic in approach and includes learning from failure in its design), it has been felt important to focus upon the model developed by Button (2008, p224) and to adapt it so as to include elements addressing the three broad security failure script components. Adapted from Button's model and grounded in the findings of the first section, Figure 1 outlines a strategic approach to security failure prevention.

Figure 1: A strategic approach to security failure prevention

As the figure reveals, seven elements have been added to the model developed by Button, namely (g) understand benefactors' likely errors; (h) understand technical security components' likely malfunctions; (i) analyse security failures; (j) develop a converging security system; (k) minimise security complacency; (l) learn proactively; and (m) improve benefactors' reliability. This section has explained the causality, shaping processes and consequentiality relative to the structuring of security failures. It has also explained how perhaps to better prevent such incidents from manifesting. This has been done by complementing the model developed by Button (2008) with relevant safety-related features and by increasing the focus upon security failure analysis and learning, the latter being the theme of the next section.

Learning from security failures

Learning is a process seeking to improve organisational behaviour (Argyris and Schon, 1998; Revans, 1980; Wenger, 1998) via deliberate efforts of adaptation in the face of uncertainty (Rousseau, 1991, originally 1762). To this end, it could be argued that learning is about self-preservation. Consequently, and according to Toft and Reynolds (2005), Borodzicz (2005) and Button (2008), learning from security failure appears of vital importance to the organisation, because it represents the very foundation of security.

In general terms, there are four broad ways of learning from security failures, namely (a) cognitive learning (see Riding and Rayner, 1998; Myers-Briggs and McCaulley, 1985), (b) behaviourist learning (see Pavlov, 1927), (c) experiential learning (see Kolb, 1984) and (d) meta-cognition (see Flavell, 1979).

Cognitive learning is concerned with the development of problem-solving abilities and conscious thought. For example, an organisation learns from security failures because it has decided to do so. According to Button (2008, p138), such a learning process can occur either through (e) cross-organisational isomorphism, where similar organisations learn from one another's experience; (f) common mode isomorphism, where organisations belonging to different sectors learn from one another's failures because they share common techniques, materials and procedures; or (g) self-isomorphism, where an organisation learns from security failure via its own constituents. Cognitive learning is an active, and thereby planned, learning process.

Behaviourism is concerned with the development of new behaviours in response to external stimuli. An example of this would be an organisation adapting its security behaviour temporarily following a security failure, without analysing the failure itself, under specific conditions and based upon feelings. This is an unplanned and passive learning process, to the extent that it merely reacts to environmental conditions.

Experiential learning, on the other hand, is a process whereby knowledge is created through the transformation of experience. An example of this would be an organisation learning how to better prevent security failure through its own experience, rather than by merely hearing or reading about the experiences of others. This is what Button terms event isomorphism (2008, p137). Experiential learning is an active process which can be either planned or unplanned.

Lastly, meta-cognition is concerned with cognition about cognition, for example when an organisation decides to develop knowledge about when, where and how to use specific learning strategies to better prevent security failures. It presupposes that an organisation is conscious of having learning difficulties with regard to security failure and decides to engineer a learning process to tackle its own learning deficiencies. To that end, it is about learning to learn. Meta-cognition is proactive and planned. Table 4 summarises the findings of this section.

Table 4: Learning from security failures