Rationality & Privacy: How People Make Decisions About Confidentiality

“In everything one thing is impossible: rationality.” –Friedrich Nietzsche

Unfortunately, Nietzsche’s assessment seems to hold true with regard to people’s attitudes about privacy. The New York Times published an article called “Our Paradoxical Attitudes Toward Privacy” that discusses how people make decisions regarding privacy.


The article refers to a study conducted by a behavioral economist from Carnegie Mellon, whose findings indicate that human logic about privacy practices is deeply flawed: “our privacy principles are wobbly. We are more or less likely to open up depending on who is asking, how they ask and in what context.”  The article should appeal to anyone interested in the human aspect of security, particularly those looking to better understand why humans are often regarded as the least secure part of a system.

A new study from McAfee estimates that corporations lose $1 trillion annually “in lost intellectual property and expenditures for repairing the damage” (CNET).  What is particularly concerning is that displaced workers were cited as the “biggest threat to sensitive information” by 42% of the 800 corporate CIOs surveyed for the study.  While most people are not as disgruntled as displaced employees, the survey demonstrates the severe financial consequences that can arise when humans fail to adequately safeguard the information they’re entrusted with.  Given the financial costs associated with human breaches of security, one would expect people to be very careful about their decisions regarding confidentiality.  However, the Carnegie Mellon study demonstrated that people often do not make secure decisions about privacy.  Even without the malicious intent of disgruntled employees, those who make privacy decisions unthinkingly could unintentionally release information with severe financial consequences.


The Carnegie Mellon study revealed that people were more apt to answer honestly on a survey where confidentiality was not mentioned than on a survey where confidentiality was guaranteed.  The researchers suspect that mentioning confidentiality brings up “issues of privacy that might not otherwise figure prominently in people’s minds,” which suggests that people will be much more candid until the question of privacy is raised.

In the same counterintuitive vein, people were more likely to answer honestly on an unprofessional (and sketchy-looking) website than they were on an official university site: “creating an informal online atmosphere, it seems, encourages self-revelation, even though an unprofessional site is probably more likely to pose a privacy problem than an elaborate, professional one.”  What I find most unsettling about the article is that it points to the short memory and narrow sight people often have when giving away information; considering how frequently that happens, there is an abundance of vulnerabilities waiting to be attacked.  Without the “constant vigilance” prescribed by Harry Potter’s Mad-Eye Moody, people will keep putting their information at risk, often without realizing it.  It may not be realistic or practical to keep our information completely secure, but we should all be cognizant of whom we’re giving our information to.

While the study was conducted on students, a relatively isolated population, students are also usually fairly technology-savvy and aware of the security threats that come with technology.  Yet somehow, these subjects were least guarded in the situations that posed the most risk.  One of the author’s most poignant lines is the assessment that “normally sane people have inconsistent and contradictory impulses and opinions when it comes to safeguarding their own private information.”

One writer from The New School of Information Security suggests that fear of breached privacy may not be what ultimately motivates people to make more secure decisions about privacy.  The author suggests “that fear doesn’t get people to act. A belief in the efficacy of your action gets people to act.”  This raises a valid point: perhaps the problem with these privacy decisions is that people often see precautions as an unnecessary nuisance with no tangible benefit, at least until after they’ve been attacked.  Only once they see the results of their privacy being violated do they understand the importance of making rational decisions.  Although the author discards fear as a motivator, until people truly understand the potential consequences of a privacy breach, they won’t have the motivation to take security precautions; and understanding the potential consequences of an attack goes hand in hand with the fear that it could happen.

The problem is that no matter how secure technology becomes, the weakest link will be the un-programmable human with often-flawed logic about security.  While there are security-savvy people who make the logical choices with regard to privacy, it seems that the vast majority of the population will not be as logical, which is a difficult, but important, issue to handle when designing security systems and educating others about information security.