Abstract: Vulnerability to deception is part of human nature, owing to fundamental limitations of the human mind. This vulnerability is exploited by ...
A CIA deception study quotes General Dudley Clarke, who led British deception operations in WWII: “all cover plans should be based on what the enemy himself not only believes but hopes for” [CIA80]. Cover plans are deceptions that hide true operations. Further, an authority on WWII British intelligence states that British deceptions “found their best targets in the obsessions of the enemy” [Wha69]. Also, in a paper on strategic military deception, Daniel and Herbig cite a study that found policy makers were vulnerable to “seeing what they devoutly wished to see, rather than what was there” [DH82b].
For deception operations to be successful, they must be received by the target and interpreted as intended. Then they must induce the desired action in the target. An effective way to accomplish this is to offer the target what he most desires. In Cliff Stoll’s investigation of hackers who had penetrated a server at Lawrence Berkeley Labs, he discovered that the hackers were seeking information on nuclear weapons [Sto89]. So Stoll ran a sting operation, posting a deceptive file that stated where one could write to obtain such information. The hackers took the bait, and the sting operation’s success revitalized the stalled investigation.
Although desires offer a valuable avenue for deception, they may play a less important role than expectations. According to Heuer, perception is influenced more by expectations than what one wants [Heu81].
3 Cognitive biases

We consider four types of cognitive biases. The first three are specific ways that people “jump to conclusions”: the bias toward causal explanations, oversensitivity to consistency, and biases in estimating probabilities. The fourth bias relates to difficulties in detecting missing evidence. Psychology researchers have identified many other cognitive biases, but they are beyond the scope of this research.1
3.1 Bias toward causal explanations

There is a strong human tendency to seek causal explanations [Heu81]. However, causation is often not seen directly. Rather, it is perceived via a complex process of inference.
In general, the process of forming causal explanations is subject to bias. The human desire to understand causation, for example, leads us to see order where it does not exist. Random things or events may wrongly be attributed to a non-existent cause, e.g. to purpose, design, or the effect of some orderly process. In addition, when observing the behavior of an organization, people tend to see the organization as more centralized, disciplined, and coordinated than it truly is [Jer68]. When people see only the outward actions of an organization, they tend to underestimate effects from internal problems and non-optimal processes.
1 Dozens of biases are described in the Wikipedia entry “List of cognitive biases” (http://en.wikipedia.org/wiki/List_of_cognitive_biases).
DoD Cyber Crime Conference 2007 (c) 2007, by the authors

Conspiracy theories typically exploit the bias toward causal explanations. In any large organization, there will be random mistakes, bad outcomes, and misbehavior among its members. The promoters of conspiracy theories can attribute these actions to the sinister schemes of the organization’s leaders. Further, any missing evidence can be attributed to the conspirators’ cleverness in hiding their schemes [Sch93].
In the domain of computer security, the deception planner can exploit the power of fallacious causal explanations and conspiracy theories by portraying fake security indicators. For example, a server can randomly generate ambiguous console messages that a suspicious hacker will attribute to detection of his activity. Legitimate users are instructed to simply ignore the messages. An example of such a message is:

[DEBUG #11] anomalous shell activity, generating IDS record at 13:43:02.36

The message is meant to be interpreted, by the hacker, as a debug statement that a developer accidentally left in an intrusion detection program. As another example, real systems can be given honeypot indicators, such as firewall rules that limit outgoing network traffic.
From the hacker’s perspective, the bogus indicators will be seen as confirming evidence of a honeypot. In both cases, missing indicators can be attributed to the network defender’s stealthiness. The deceptive indicators also take advantage of hackers’ hypersensitivity to detection, as described in Section 4.3 below. By exploiting these hacker vulnerabilities to deception, the false indicators can be random and ambiguous, and still be effective. This makes the deception easier to implement.
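Messages of this kind can be generated with a few lines of code. The following is a minimal sketch: the message fragments and format imitate the example above and are assumptions for illustration, not output of any real intrusion-detection system.

```python
import random
import time

# Illustrative fragments; the wording mimics the example message above
# and is not taken from any real IDS product.
EVENTS = [
    "anomalous shell activity",
    "unexpected outbound connection",
    "remote session flagged for review",
]

def fake_ids_message(rng=random):
    """Return a random, deliberately ambiguous console message that a
    suspicious hacker may read as accidental evidence of monitoring."""
    stamp = time.strftime("%H:%M:%S")
    return "[DEBUG #%d] %s, generating IDS record at %s" % (
        rng.randint(1, 99), rng.choice(EVENTS), stamp)
```

Because the bias supplies the causal story, the generator need not be sophisticated; random selection from a small pool of ambiguous phrases is enough.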
There are other ways that the bias toward causal explanations can be used to advantage.
In situations where the deception target knows that deception is being used, this bias can cause him to see deception where it does not exist [Heu81]. When the target suspects deception, deception will be attractive as a causal explanation. If the evidence of deception is incomplete, the target can attribute the missing evidence to the deceiver’s cleverness. For example, in World War II, there were a number of instances in which Allied plans fell into German hands [CIA80]. However, the Germans often disregarded the plans because they were thought to be deceptions. The Germans wrongly chose a causal explanation of deception over the true explanation of Allied mistakes. Similarly, if hackers know a network uses deceptive security measures, then they will likely view anomalous security mistakes as deceptive traps.
3.2 Oversensitivity to consistency

When evaluating information, people reasonably look for trends, patterns, or other forms of consistency. However, when there is consistency in small samples, there is a strong tendency to overestimate the relevance of that consistency [CIA80, Heu81]. The error lies in overlooking the inherent uncertainty of conclusions based on small samples. For example, in a study of psychology researchers, the researchers were observed to have “seriously incorrect notions about the amount of error and unreliability inherent in small samples of data” [TK71]. This bias is referred to as “the law of small numbers.” For deception, a useful effect of the bias is that trends or patterns may be deceptively portrayed via a small amount of consistency, e.g., in operations or systems [Heu81].
Conditioning is a well-known deception technique, and it can take advantage of a target’s oversensitivity to consistency. Conditioning works by deceptively portraying a particular pattern of operations, so that the target comes to expect that pattern [DH82b, JDD96, USA88]. Often, conditioning is used to create the comforting illusion that a standard operating procedure is being followed, so that the target will come to expect, and disregard, that operation. The ultimate purpose of conditioning is to exploit the false expectations that are induced in the target. The bias of oversensitivity to consistency can make it possible to condition targets quickly.
To illustrate the application of conditioning to computer security, consider a network that hides three valuable computers from hackers’ scans by making the computers appear to be printers; that is, the computers’ operating-system signatures are made to look like those of printers. To further enhance the impersonation, the network’s printers are all named after cities, e.g., Boston, so that after discovering a few printers, hackers become conditioned to associate city names with printers. Naming the valuable computers after cities as well hides them further from conditioned hackers.
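The naming scheme can be sketched in a few lines. The city names and host labels below are hypothetical, and a real deployment would also need the operating-system-signature spoofing described above.

```python
# Hypothetical city-name pool for the conditioning scheme described above.
CITY_NAMES = ["boston", "denver", "austin", "tucson", "madison", "raleigh"]

def assign_hostnames(printers, hidden_servers):
    """Name real printers and disguised servers from the same pool, so a
    hacker who learns 'city name means printer' misclassifies the servers."""
    pool = iter(CITY_NAMES)
    return {host: next(pool) for host in printers + hidden_servers}
```

After scanning a few genuine printers, a conditioned hacker is likely to classify every city-named host as a printer, including the disguised computers.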
In addition to conditioning, there are other deceptions that can exploit a target’s oversensitivity to consistency. One such deception is the exaggeration of computer security capabilities. For instance, over a short period of time, an organization publicly announces three incidents in which hackers were caught and prosecuted. The small sample would likely induce an exaggerated expectation of prosecution among hackers.
3.3 Biases in estimating probabilities

Adversarial relationships are characterized by uncertainty. To cope with this uncertainty, opponents rely on probability estimates to aid decision making. These estimates, however, are vulnerable to the availability bias: the human tendency to overestimate the likelihood of things that can easily be imagined or recalled and, conversely, to underestimate things that are not as easily imagined or recalled [Heu81]. How easily a thing can be imagined is influenced by many factors, such as how complex it is, and one’s personal interests and degree of understanding. For example, it is relatively difficult to imagine things that are complex or foreign to our thinking, but they are not necessarily less likely. Also, how easily a thing can be remembered is influenced by factors such as how recently one has been exposed to it and how vivid the memories are. However, if something occurred recently, it does not necessarily mean it is more likely to occur in the future.
When a deception story is portrayed to a target who uses probability estimates to interpret the story, it may be possible to put the availability bias to work. For example, suppose a hacker makes probability estimates regarding the computers found during network scans. A honeynet can exploit the availability bias by portraying computers that the hacker can easily imagine or recall, such as a web server rather than a special-purpose machine.
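One way to act on this is to weight a honeynet’s advertised machine types toward those a typical intruder can most easily imagine. The persona names and weights below are illustrative assumptions, not measured data.

```python
import random

# Hypothetical personas, weighted toward easily imagined machine types.
PERSONAS = [
    ("apache-web-server", 6),
    ("windows-file-server", 3),
    ("custom-telemetry-node", 1),  # realistic but hard to imagine, so rarely shown
]

def pick_persona(rng=random):
    """Choose a persona for a new honeypot, favoring familiar machine
    types so the availability bias works in the defender's favor."""
    expanded = [name for name, weight in PERSONAS for _ in range(weight)]
    return rng.choice(expanded)
```

Under these weights, the easily imagined web server is shown six times as often as the special-purpose machine, so a scanning hacker’s probability estimates are met with what he expects to find.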
During his famous hacking case, Cliff Stoll had to stop a hacker from downloading a particular file. However, he could not do this by unplugging the network cable, as that would alert the hacker to the surveillance [Sto89]. Instead, Stoll deceptively thwarted the download by jingling his keys across the communication line, thereby creating line noise that sporadically corrupted the data transfer. The hacker could only make speculative probability estimates about the communication problems. The deception would be aided by availability bias if it were consistent with normal network problems that the hacker had recently seen.
Since deception operations are hidden, the hacker who suspects deception must constantly assess the things he sees to determine if they are real. Such assessments typically involve probability estimates, e.g., “it is most probably a deception.” The availability bias can help in exaggerating the use of deception when the target suspects deception. For example, to exaggerate a network’s use of honeypots, deceptive honeypot indicators are placed on real computers. To help make the indicators believable, the network’s real honeypots are widely publicized. The publicity places honeypots at the forefront of hackers’ minds, and thereby induces availability bias.
Humans are particularly vulnerable to availability bias when they conduct intelligence collection and analysis [Heu81]. They are looking for specific things and have rehearsed various scenarios in their minds. Having these things at the forefront of their mind is likely to bias their probability estimates when they encounter indicators of the things they seek.
3.4 Difficulties in detecting missing evidence

Investigation involves collecting evidence and forming hypotheses. Investigative abilities are a part of human nature, and investigation is an essential means of learning, from diagnosing health problems to evaluating products. However, people tend to be weak at recognizing missing evidence, and consequently, at adjusting the certainty of their hypotheses to the realities of incomplete data [Heu81].
In deception, one way to falsely portray something is to create fake evidence that implies the thing’s existence. For example, a honeypot can use this technique to falsely portray a firewall and its protected subnet. The honeypot just needs to return the packets (i.e., evidence) that hackers expect when scanning such a firewall.
The bias of not recognizing missing evidence can aid the deception planner when he deceptively portrays something by creating fake evidence of it. If the planner overlooks particular types of evidence, the target may likewise overlook the omission. In the example, if the honeypot does not return all the packets that hackers’ scans should receive, some hackers may simply overlook that missing evidence.
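The firewall illusion reduces to answering probes the way a default-deny firewall would. In the sketch below, the subnet, the permitted port, and the string reply labels are illustrative assumptions; a real honeypot would emit actual packets rather than strings.

```python
FAKE_SUBNET = "10.9.8."   # nonexistent subnet the honeypot pretends to protect
ALLOWED_PORT = 443        # the one service the fake firewall appears to permit

def scan_reply(dst_ip, dst_port):
    """Return what a port scanner would observe: a SYN-ACK for the
    permitted port, silence (a 'filtered' result) for everything else."""
    if not dst_ip.startswith(FAKE_SUBNET):
        return None        # not part of the fiction; the real stack answers
    if dst_port == ALLOWED_PORT:
        return "SYN-ACK"   # fabricated evidence of a live host behind the firewall
    return "filtered"      # dropped probe, consistent with default-deny rules
```

The caveat about missing evidence applies here: if the sketch omitted replies that real firewalls send in some configurations (for example, ICMP unreachable messages for rejected probes), an observant hacker could notice the gap, though many will not.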
4 Impaired thinking

“The fraud specialist is expert at taking advantage of our weaknesses. He knows how to ‘read’ a person and assess vulnerabilities.” from The Rip-Off Book: The Complete Guide to Frauds1

To carry out their deceptions, con-men often exploit some form of impaired thinking in their victims. This section presents four of the forms they use: time limitations, false expectations, cravings and compulsions, and limitations in critical thinking. A fifth type of impaired thinking that is commonly exploited in physical security, a guilty conscience, is also presented. The section shows how the five forms can be applied to computer security.
4.1 Time limitations

Frauds are often “limited time” offers [San94]. Con-men create scenarios that require urgent action, so the victim does not have time to think critically about the deception or investigate it. Typically, the victim is presented with the apparent dilemma of hastily choosing now or forever losing the opportunity. This ploy can also be used in computer security deceptions. An example is Cliff Stoll’s sting operation (see Section 2.3), which involved an offer for information on nuclear weapons. The deception was strengthened by giving the offer a soon-approaching deadline.
[San84]

Limited-time offers can also be used to advantage in honeypot design. When hackers discover a new network-server vulnerability, they have a small window of time to exploit it, as it will be promptly fixed in well-maintained networks. Hackers’ haste to exploit new server vulnerabilities can make the hackers themselves vulnerable to deception. Honeypots can take advantage of this vulnerability by using servers with recently announced vulnerabilities.
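For instance, a honeypot could choose which service banner to advertise by disclosure date, preferring the vulnerability announced most recently so the window of opportunity appears freshest. The service names and dates below are fabricated for illustration.

```python
import datetime

# Hypothetical services paired with made-up vulnerability-disclosure dates.
DISCLOSURES = {
    "ExampleFTPd 2.1": datetime.date(2006, 11, 2),
    "ExampleHTTPd 1.4": datetime.date(2007, 1, 15),
    "ExampleSSHd 0.9": datetime.date(2006, 6, 30),
}

def freshest_banner(disclosures=DISCLOSURES):
    """Advertise the service whose vulnerability is newest, exploiting
    hackers' haste to attack before the hole is patched."""
    return max(disclosures, key=disclosures.get)
```

A hacker who believes well-maintained networks patch quickly will hurry to exploit the freshly announced hole, and haste leaves less time for the critical scrutiny that might reveal the honeypot.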
4.2 False expectations

Section 2 described how expectations influence perception and how deceptions can exploit expectations, including erroneous expectations. This section describes two specific types of false expectations that con-men exploit. Their application to computer security is also described.