Where Everybody Knows Your Name: #FacialRecognition Technology & Society

posted Nov 6, 2013, 3:47 AM by Peter Joseph Moons

Where Everybody Knows Your Name:

Facial Recognition Technology and Its Applications in Society


“You have zero privacy.  Get over it.”

- Scott McNealy, CEO of Sun Microsystems[1]


 By Peter Joseph Moons



How does the development of Facial Recognition Technology (FRT) affect what it means to be human?  Humans crave individuality, even when wanting to appear exactly like their peers.  They also seek to be known for who they are, so they develop an identity.  In the first instance of identity, parents bestow on their child a name, a lifelong moniker that begins to establish the sense of self.  FRT identifies that person to others.  At its simplest, FRT combines image capture, comparison of the image against a database, and identification of a person, performed by cameras, computers, and software.  Today, FRT is a ‘system of systems’: it is embedded into cameras, which are only one of multiple surveillance and biometric-capturing devices available to governments, allowing for the identification of observed persons by name.  Thus, while FRT removes anonymity, the technology raises privacy and social issues.




A definition of facial recognition is in order to understand fully how the technology functions in two steps.  First, “[f]acial recognition tools identify a person by analyzing dozens of features, such as the length of a forehead and the distance between the eyes and the nose.”[2]  Algorithms analyze all of these measurements, which allows for the unique identification of each individual.  Second, the FRT can then “automatically articulate digitized visual images of the face to documentary information about individuals.”[3]  In non-tech speak, the face is matched to a file, and how the data fields are categorized and labeled allows for highlighting of specific attributes.
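The two-step process can be sketched in a few lines of code.  Everything here is illustrative: the three measurements, the enrolled identities, and the match threshold are hypothetical stand-ins for the dozens of features and large databases a real system would use.

```python
import math

# Hypothetical facial measurements (arbitrary units): forehead length,
# eye-to-eye distance, eye-to-nose distance. Real systems use dozens of
# such features; three are enough to illustrate the matching step.
ENROLLED = {
    "alice": (5.1, 6.3, 4.8),
    "bob":   (6.0, 5.9, 5.2),
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database, threshold=1.0):
    """Step 2: match the measured face against the enrolled database.
    Returns the closest enrolled identity, or None if nothing is close."""
    name, best = min(database.items(), key=lambda kv: distance(probe, kv[1]))
    return name if distance(probe, best) <= threshold else None

# A probe image whose measurements nearly match Alice's file:
print(identify((5.0, 6.4, 4.7), ENROLLED))  # → alice
```

The threshold is the policy lever: loosen it and more faces are ‘identified’ (with more false positives); tighten it and more faces go unmatched.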


The history of FRT begins with physiognomic study in the 1800s.  This work became well known when psychiatrist Hugh Welch Diamond “used photography to analyze and document the alleged facial indicators of insanity.”[4]  Later, “eugenicist Francis Galton” attempted to build a criminal classification system of faces using “composite photography.”[5]  The technological predecessors to FRT are actually few.  In the evolution of image-capturing technology, the daguerreotype was followed by the still camera, then the video camera, the “video recorder,”[6] and eventually facial mapping.  Along a similar line as the video camera came Closed Circuit Television (CCTV), which is also a precursor technology to FRT.  Once the software challenge of linking FRT cameras to databases of photographs was solved, the issue was merely putting cameras in the right locations and fine-tuning the hardware and software.


The digital component of FRT development began in the 1960s when “the Department of Defense and various intelligence agencies” funded research in early FRT as part of the technological competition of the Cold War.[7]  In the 1970s, one step toward FRT was the Facial Action Coding System (FACS), a “widely used and versatile method for measuring and describing facial behaviors.”[8]  An early experimenter with FRT was Japanese scientist Takeo Kanade.  During the 1970 World’s Fair in Osaka, Japan, he ran an exhibit in which a TV camera scanned people’s faces, ran them through a computer, and was able to “classify the faces into one of seven categories.”[9]  This was a giant step in the FRT development process.


Next, in the 1980s, researchers used the BASIC computer language in an attempt to identify electronically the “essential muscular actions and combinations of actions identified with an emotional expression”[10] on the human face.  As computer technology improved, so did the accuracy of the identification process.[11]  In fact, the more characteristics about facial expression coders manually added to a database, the better FACS became at analyzing faces.[12]  Eventually, a commercial use was found: in the 1980s, the US banking industry became interested in FRT as a way to decrease fraud during financial transactions.[13]


Like many nascent technologies, early government funding catalyzed research.  The US Government conducted FRT tests in 1996 and 2000 with various commercial vendors, seeking to improve “algorithms,” image quality, and identification accuracy rates, and to use live video feeds to capture images.[14]  However, in the early 2000s, video surveillance systems suffered from inadequate computer processing power, expensive servers, and a lack of scalability in real-world deployments.[15]  Those problems were solved with more money and better computing power.[16]  Now the surveillance systems and FRT “are massively scalable, easier to configure, and less expensive to implement.”[17]


Example: The United Kingdom and CCTV


The early use of CCTVs in the United Kingdom began benignly enough: they were used to monitor transportation systems and demonstrations.[18]  The deployment of these systems stayed small:


·      “In 1956, the police started to use cameras in one-man operations at traffic lights, in order to catch drivers running red lights.”[19]

·      “In 1960, the Metropolitan Police temporarily erected two cameras in Trafalgar Square to monitor the crowds during a public appearance by the Queen.”[20]

·      “By 1969, 14 police forces around the country were using CCTV, but there were still only 67 cameras in total.”[21]


Even in the early 1980s, the quantity of CCTVs and their employment was not on a grand scale.[22]  What changed the whole mentality about CCTVs was a shocking event: an assassination attempt on Prime Minister Margaret Thatcher in 1984 prompted the development of a large, urban-based surveillance system.[23]  Further, footage of a horrific crime catalyzed more CCTV use: “the 1993 abduction and murder of the toddler James Bulger by a pair of ten-year-old boys.”[24]  After this case, funding increased dramatically, and so did public acceptance of CCTV.[25]  Terrorist bombings in the late 1990s and the 2000s increased the public’s appreciation for CCTV as a tool for the apprehension of criminals.[26]  The UK public now apparently expects to see footage of criminal activity captured via CCTV on their nightly news broadcasts.[27]  Further, “the absence of meaningful privacy protections” allowed CCTV use to spread without much pushback from the population.[28]


The UK is known as a country with many CCTVs and growing use of FRT.  The statistics on CCTVs in that relatively small nation are phenomenal:


·      “Throughout the country are an estimated five million CCTV cameras; that's one for every 12 citizens.”[29]

·      The UK has “20 per cent of the world's CCTV cameras,” despite being only “0.2 per cent of the world's inhabitable land mass.”[30]

·      “The average Londoner…may be monitored by 300 CCTV cameras a day.”[31]

·      “Roughly 1,800 cameras watch over London's railway stations.”[32]

·      “6,000 [cameras] permanently peer at commuters on the Underground and London buses.”[33]


Of course, FRT incorporated into a normal CCTV will very rapidly identify troublemakers when they enter an area known for problems and then automatically alert camera operators or security, thereby creating a tangible ‘pre-crime’ identification scenario.  The UK government’s goal is to focus on “troubled spots” to get ahead of any crime.[34]  Currently, however, the UK’s success in reducing crime through FRT is not high.  In a study of “14 surveillance systems” in the UK, in “only one of the 14 areas could a drop in crime levels be linked to CCTV.”[35]  FRT will have to prove its value to any population that already feels under surveillance from all quarters.  Still, identifying someone at specific locations at certain times, for example 300 times daily in London, will allow for a great catalog of each identifiable person’s activities.  Imaginably, that kind of data, when aggregated for a whole population, will allow for some fairly accurate predictive analysis.
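The ‘pre-crime’ alerting loop described here can be sketched in a few lines; the camera names, watchlist, and trouble spots below are invented for illustration.

```python
# Hypothetical sketch of FRT-on-CCTV alerting: a camera feed yields
# (camera_id, identified_name) events; any watch-listed name seen in a
# flagged trouble spot raises an operator alert.
WATCHLIST = {"mallory"}
TROUBLE_SPOTS = {"cam_trafalgar_3"}

def alerts(sightings):
    """Return operator alerts for watch-listed persons in flagged areas."""
    return [
        f"ALERT: {name} seen on {camera}"
        for camera, name in sightings
        if name in WATCHLIST and camera in TROUBLE_SPOTS
    ]

feed = [("cam_kings_cross_1", "alice"),
        ("cam_trafalgar_3", "mallory"),
        ("cam_trafalgar_3", "bob")]
print(alerts(feed))  # → ['ALERT: mallory seen on cam_trafalgar_3']
```

Note that the policy questions live entirely outside the loop: who maintains the watchlist, and which areas count as ‘trouble spots.’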


Example: Mass Events


Large sports arenas have hosted several FRT tests in the past, and such tests are likely to expand.  Two FRT tests have occurred at soccer games in France and Germany,[36] while a third was enormous.  At the Super Bowl in 2001, FRT scanned all the spectators passing through the turnstiles and matched their faces against a criminal database.[37]  While “the American Civil Liberties Union condemned [that test] as privacy-invasive,” a constitutional law scholar noted that the authorities can observe and even record images and still not violate the Fourth Amendment to the US Constitution.[38]


In the US, the Department of Homeland Security (DHS) planned to test “its crowd-scanning facial recognition system, known as the Biometric Optical Surveillance System, or BOSS, at a junior hockey game” in September 2013.[39]  Cameras were to follow volunteers and comb through data to try to identify them.[40]  At the stadium, “[t]he plan…was to use 30 volunteers whose facial data would be mingled in a database among 1,000.”[41] There will likely be many more of these proof-of-concept tests and they will increase in size and complexity to validate the technology.


Governments realize that capturing and analyzing the images of a large number of people can lead to more positive ‘hits’ from their databases.  Moreover, the government acknowledges that the venue-goers could be scanned during such public tests at large events, wittingly or not, as an incidental cost to attendance.[42]  The BOSS, which uses three-dimensional modeling, “is capable of capturing images of an individual at 50-100 meters in distance.”[43]  Clearly, in five to ten years, the government and its private sector contractors will perfect BOSS; neither the ACLU nor any other privacy rights organization will stop its employment, either.


The 2014 World Cup in Brazil will also employ FRT, as “police officers plan to use wearable-computer-based face recognition to look for wanted criminals in the crowds.”[44]  Apparently, those wanted by the law attend soccer games in Brazil.  Again, large-scale venues allow for the scanning and identification of large numbers of people in a short period of time.  Eventually, as FRT use expands, there will be no place to hide, as FRT will be everywhere, perhaps on personal phones,[45] on Google Glass, or even on its Japanese version, Intelligent Glass, which does have integrated FRT.[46]



Example: Ohio


A very recent example from Ohio in the summer of 2013 shows how far FRT has gone as a tool for law enforcement.  The FRT system was easily activated, without the public’s knowledge and even without sufficient oversight from a court system or an “elected official.”[47]  The Information Technology office of the state government in Ohio “secretly implemented a face recognition program using the drivers’ license database…or police mug shot.”[48]  These photos are automatically compared against “a snapshot…or a security camera image,” with the result that anyone who is imaged and in a state database, at least in this case Ohio’s, is identifiable.[49]


Surprisingly, Ohio, which is only one of 38 states already using FRT, allows “tens of thousands of law enforcement officers…and court employees” to conduct FRT searches, all without any court oversight.[50]  Lack of elected officials’ oversight, public discourse, or public approval appears to be the norm for FRT.  The use of this technology has not undergone the scrutiny that a US national identification card has gone through,[51] even though the result -- identification of individuals within US borders -- is the same.


Theoretically, the success of FRT is limited only by the quantity and quality of the databases the system accesses.  The State of Ohio could just as easily have added photos from state universities, libraries, hunting/fishing licenses, welfare cards, bicycle permits, airplane or boat pilot licenses, etc.  A government can, essentially, regulate any industry and require licensing of that enterprise, including a photographic image of the license holder.  At that point, when the image is captured, another person is added to the database.  In the Ohio system, like any other FRT image-capture/database pairing, the system would benefit from multiple ‘hits’: a person can be identified by matching their picture across various licenses or permits, thereby increasing both the accuracy of the system and knowledge of who is in a captured image.  In Ohio, as elsewhere, “privacy violation fears” only surface post facto, when the public realizes what their government has done.[52]
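The value of multiple ‘hits’ across databases can be sketched as a simple tally; the database names and the matched identity below are hypothetical.

```python
# Hypothetical sketch of cross-database matching: the same probe face is
# matched against several state photo databases, and an identity confirmed
# in more databases earns higher confidence.
from collections import Counter

def aggregate_hits(per_database_matches):
    """per_database_matches maps database name -> identity matched (or None).
    Returns identities ranked by how many databases they matched in."""
    hits = Counter(m for m in per_database_matches.values() if m is not None)
    return hits.most_common()

matches = {
    "drivers_licenses": "c_danvers",
    "hunting_permits":  "c_danvers",
    "mug_shots":        None,          # no record in this database
}
print(aggregate_hits(matches))  # → [('c_danvers', 2)]
```

Each new licensing regime a state photographs is, in effect, one more key in this dictionary.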


Legal and Regulatory Aspects


A preponderance of jurisprudence already indicates that the Fourth Amendment to the US Constitution, protecting against illegal search and seizure, will have zero effect on preventing any level of government from conducting FRT.[53]  Even more so, a government can surveil activities and people from outside the suspected private property, i.e., adjacent lands, public areas, or even the air.[54]  Based on case history, images obtained via FRT would be admissible in courts, likely even the US Supreme Court.[55]


The Fourth Amendment will likewise offer no protection against FRT in so-called public space.  The “Coverage Test” of this amendment for determining the intrusiveness of a technology holds that the amendment “provides protection whenever government information gathering causes a problem of reasonable significance.”[56]  Governments, courts, and security officials would default to this formula and claim FRT causes no “problem,” since the persons whose faces are captured do not sense anything: the data capture is completely physically harmless to humans.


The question of whether FRT violates the Wiretap Act has also arisen.  The answer, based on other technology uses,[57] will certainly be no, as this Act pertains to the recording of the human voice, which FRT does not do.  There may already be cases where Voice Recognition Technology has been challenged vis-à-vis the Wiretap Act; if not, they will likely occur in the future.  Noticeably, limitations on the use of FRT will be few.  For example, no warrant from the Foreign Intelligence Surveillance Court would be necessary for any FRT use in the public arena, because of the lack of an expectation of privacy for citizens there.[58]


One recommended guideline for video surveillance and FRT is the “Deletion of old data,”[59] though this recommendation is unlikely to be followed by any government.  The accumulation of data on a person may add to the body of evidence against them.  So visualize that the federal government begins logging FRT data on all citizens nationwide when a person is a teenager; ten years later, that person engages in criminal activity.  The government will want to examine with whom the person did business, their friends, the places they frequented, etc.  For this reason, governments are unlikely to voluntarily remove any data, especially geo-location data and FRT documentation of a person.  Of course, with any universal FRT, a government will easily conduct network analysis and see who knows whom, as well as how, where, and how frequently they communicate or meet.  The US Government’s defunct “Total Information Awareness” program will then be resurrected in fact and deed.[60]
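The network analysis described here needs nothing more than FRT sighting logs; the locations, time windows, and names below are invented for illustration.

```python
# Hypothetical sketch of 'who knows whom' analysis: FRT sighting logs
# (location, time window -> persons identified there) are turned into
# 'seen together' edges, revealing who meets whom and how often.
from collections import Counter
from itertools import combinations

def cooccurrence_edges(sightings):
    """sightings maps (location, time_window) -> set of identified persons.
    Returns a Counter of unordered person pairs seen in the same place/time."""
    edges = Counter()
    for people in sightings.values():
        for pair in combinations(sorted(people), 2):
            edges[pair] += 1
    return edges

log = {
    ("victoria_station", "08:00"): {"alice", "bob"},
    ("victoria_station", "17:30"): {"alice", "bob", "carol"},
}
print(cooccurrence_edges(log)[("alice", "bob")])  # → 2
```

This is why the retention question matters: the edges only grow stronger the longer the log is kept.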


Accountability, control, oversight, and review of surveillance collection are measures that governments, and their citizens, can perform to ensure that security services do not overstep their legal mandate, and to correct them when they do.[61]  With widespread use of CCTVs, and soon of FRT, the safeguards on the “acquisition,” “retention,” and “dissemination” of collected data on people will come under pressure.[62]  There is hope from the legislative branch of the federal government for those who seek relief from some aspects of surveillance.  A bill in the House of Representatives, the “Surveillance State Repeal Act” (H.R. 2818), was introduced by Rep. Rush Holt, (D) New Jersey, and, “if enacted into law, the bill would entirely repeal both the USA PATRIOT Act and the FISA Amendments Act of 2008.”[63]  Without these Acts, FRT will become harder for the government to pursue.


In surveillance, “the use of ethnicity as a basis for profiling imposes a cost on innocent members of the targeted group.”[64]  Separate from any profiling, some biometric scanning systems have had difficulty identifying “people of color.”[65]  Unfortunately, this can lead to increased scrutiny of non-white populations when CCTVs with FRT are in use.  Realistically, populations will only tolerate so many ‘false positives’ before seeking out elected officials or ombudsmen to promote changes in the system.  If this situation were ever to come to pass, governments are more likely to improve the technology vice abandon such systems altogether.  Since one biometric system may be less than accurate, advocates often promote the use of “multimodal biometric” systems, designed to increase the precision of identification.[66]
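One common way to combine modalities in a “multimodal biometric” system is a weighted score average, so that a weak face match alone cannot trigger an identification.  The modalities, weights, and scores below are assumptions for illustration, not any vendor’s actual parameters.

```python
# Hypothetical sketch of multimodal score fusion: match scores in [0, 1]
# from independent modalities (face, iris, gait) are combined with weights.
def fused_score(scores, weights):
    """Weighted average of per-modality match scores."""
    total = sum(weights.values())
    return sum(scores[m] * w for m, w in weights.items()) / total

weights = {"face": 0.5, "iris": 0.3, "gait": 0.2}
scores = {"face": 0.55, "iris": 0.90, "gait": 0.80}  # face alone is weak

# 0.5*0.55 + 0.3*0.90 + 0.2*0.80 = 0.705
print(round(fused_score(scores, weights), 3))
```

A decision threshold on the fused score then trades false positives against false negatives across all modalities at once.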


This is the risk that some sectors of the population face with FRT, as algorithms will set off alerts on them more often simply because of their identity factors: “the poor, communities of color, immigrants, and indigenous groups—may be at greater risk of data-driven discrimination.”[67]  Past digital surveillance may thus mean that marginalized groups in civil society are susceptible to more scrutiny under massive FRT-enabled CCTV surveillance.[68]  There is a partial solution in automated FRT, especially on the CCTVs on which the systems reside.  The reason is simple: some studies have shown that human CCTV operators have a tendency to follow “young people and people of color.”[69]  If humans are removed from the circuit, then the monitoring of humans becomes more efficient, though by implementing this procedure, society will lose some of its humanity.


Misuse of FRT is going to happen, just as with any major surveillance technology.  The very recent scandal showing that employees of the “National Security Agency intentionally misused the government's secret surveillance systems at least 12 times over the past decade, including instances when they spied on spouses, boyfriends or girlfriends”[70] is egregious but apparently not uncommon.  While state and national government agencies have “insisted that willful abuse of surveillance data by officials is almost non-existent,”[71] when humans are involved there will be deviation from the intended use of the technology, and ultimately a loss of privacy for some individuals.




Some people interested in freedom are beginning to develop countermeasures to FRT.  These include CV Dazzle, a technique of using face paint and hair design to foil FRT.  CV Dazzle is a reference to the dazzle-painted ships of the US Navy, particularly those of the World Wars, whose geometric paint designs obscured them from enemy observation.  Now transferred to the human body, dazzle paint, along with other alterations, is a defensive measure against FRT.  (See Figure 1)  Another counter-FRT capability is the “Glamoflage” t-shirt, “covered with the faces of celebrities, which would cause any facial recognition program to tag them, too...[and] add more noise to the signal.”[72] (See Figure 2)  Even to the human eye, this shirt causes dissonance.

 Figure 1 CV Dazzle[73]


Figure 2 GLAMOFLAGE T-Shirt[74]


There is also an anti-drone surveillance body shroud made of metallized fabric that traps body heat, which is particularly beneficial if FRT cameras are mounted on Remotely Piloted Vehicles overhead.[75] (See Figure 3)  The user would, of course, have to know what type of camera they are facing in order to choose the right countermeasure.  Realistically, when humans seek to disguise themselves, they seek to protect their identity and to reveal it only when they choose, not when surveilled.  One could say such people are protecting their human individuality.


Figure 3 Anti-Surveillance Shroud (Left to right: Normal, thermal, and infrared image)[76]


Another anti-surveillance technology, developed for military vehicles, helicopters, and ships, is BAE Systems ADAPTIV, which uses “cells in a honeycomb…that can be cooled or heated up very quickly as well as controlled individually, allowing different patterns to be created,” thereby defeating infrared surveillance.[77]  This type of technology has the potential to be miniaturized into a cloak, or clothes, for people who want to become invisible to infrared sensors on CCTVs.  If the sensor does not detect a human, then the FRT will also not identify him or her.


Commercial Applications


Commercial applications of FRT have actually been around for over half a decade.  In one example, Sony cameras have two FRT applications built into them.  The first is Smile Shutter, in which the camera detects when a person is smiling and automatically takes a picture.[78]  The second is one that most digital camera users are already familiar with: Sony’s Face Detection setting can “recognise [sic] faces and automatically adjust the camera settings to ensure the best shots.”[79]  The key here is the automatic nature of these applications: the user simply turns on the features, and the action is done without any further interaction.  The technology is now moving toward determining the emotions and psychological state of the subject in the camera’s lens; to get the best reading, more knowledge of faces and better algorithms will be developed.[80]




Commercial uses for FRT are already available: they are a bonanza of information for companies and, depending on the perspective of the shopper, either good or bad.  FRT will be able to scan shoppers’ faces, identify them,[81] note their enrollment in a store’s affinity credit card or frequent-shopper program, and then offer them information or specials on their most-purchased products.  Or an FRT system can do what Amazon.com does and offer suggestions based on what was observed previously.  The hook for sellers is the same as for governments: match a face to an identity, then build a body of data about that identity.  Of course, while governments seek stability and control, stores seek profits, so the latter will want to match goods to price points to environmental factors in order to sell more.  There may come a time when opting out of such FRT shopping programs is not an option.[82]


Of course, when there is a lack of transparency about surveillance practices, customers become wary.  If consumers and citizens know of the usage of surveillance technology, then they can make wise, personal choices about their activities, including shopping.  For this reason, laws may require stores to post notices or End User License Agreements about FRT and other surveillance systems.[83]  The retail surveillance industry is already engaging in this arena: one company is “RetailNext,” which “enables retailers and manufacturers to collect, analyze, and visualize in-store data.”[84]


As we shop, data is collected, and then “stores can rearrange fixtures and fittings in order to entice shoppers to spend more”;[85] this procedure, when first heard of, will make some consumers cringe, for we do not like to be considered a mere ‘wallet-with-legs.’  However, eBay and Amazon already perform these surveillance activities ‘virtually.’  Indeed, social network sites like Facebook are already data-mining with FRT in pursuit of advertising dollars.[86]  It is then only a small step to identify users of social media sites like Facebook, Twitter, YouTube, etc., and link their activities on those sites to more, if not all, of their online activity via FRT.  When that occurs, users’ lives will be an open book for all to observe.  Perhaps humans do not feel they are losing their humanity when such circumstances occur online; when FRT happens in stores or outdoors, the process becomes much more personal and may affect how we feel about shopping and our humanity.


The Future


The future of FRT will likely go in several directions simultaneously.  First, systems will become more mobile, so that security officials are not tethered to any fixed site to access them.  Second, private individuals will also have access to mobile FRT and other, open-source-linked databases.  Someday, regular citizens will wear Google Glass-type devices to recognize who-is-who, everywhere.  Third, FRT will expand to include “automated facial expression analysis…as a means of determining what people are feeling and thinking.”[87]  So the technology has gone from still cameras that record a moment in time, through video that captures moments, to FRT and identification, to emotion recognition.  Perhaps no one will have any feelings they can keep to themselves in the future.


Fourth, FRT-enabled CCTVs will begin appearing on government buildings and corporate headquarters first, then expand to transportation nodes and public venues like stadiums or concert halls.  Then they will filter down from universities to lower levels of schools, and eventually be embedded on street lamps, mailboxes, bus stops, fire hydrants, etc.  There will likely come a time when buildings, urban blocks, and cities themselves are optimized for surveillance of all types.  Just as electrical cables and telephone lines were strung on poles in front of homes and businesses but today are unobtrusively buried underground, the cities of the future may have all kinds of surveillance technology embedded everywhere and always on, with passersby barely able to recognize its existence.


Additionally, governments and elites in a society harbor a “fear of other’s panic, especially of those lower down”[88] the social pecking order, and thus want the ability to identify in order to keep boundaries, both social and physical, in place.  With surveillance technology, governments will claim that losing a little freedom will allow citizens to gain more security; unfortunately, this “zero-sum relationship” is always a “paradox.”[89]  Surveillance systems will have to function in a manner that does not cause citizens to become “desperate or resentful.”[90]  So how would a government convince people of a new surveillance system like FRT?  ‘Selling fear’[91] is how insurance companies push their products and how governments pass legislation or impose regulations.  The rubric of ‘protection’[92] is likely the method any organization will use to sell FRT to its constituents, especially when people know they are going to lose some of their freedom and anonymity.


On the personal side of technology, one emerging trend may backfire.  Some parents are now trying to limit their newborn children’s web presence, and thus their identification, by not posting any pictures of them on the web.[93]  This self-limitation applies to both photos and video and is an attempt to prevent identification by FRT, as well as to protect the ‘brand’ of the child’s image, name, even essence.[94]  While well intentioned, this effort will fail for at least two reasons.  First, at some point the child will enter school, which will issue them an identification card or put their picture in a yearbook.  Once that is done, their image is no longer under the parents’ control.  Second, anyone without some digital record of their identity will become a non-person, an oddity; the desire for anonymity will only make them stand out in a crowd and so become suspect.


The youth perspective of FRT is also important.  If FRT becomes ubiquitous, self-reinforcing standards of conduct could become the norm, with a result being “coercing young people to grow up too fast” and not act, well, like youth, because of future deleterious effects on employment.[95]  Since young people have typically “only infrequently considered the potential career-related implications of their social media use,” there is also the potential that youth will pay less heed to standards of behavior in public.[96]  FRT, and the repeating of videos of identified people behaving badly across social media, will eventually have a normative effect on certain segments of the population, resulting in self-censorship.  Such a norm may, though, affect creativity, or even a sense of adventure, for fear of failure or ridicule may preempt attempts at art, or sports, or music, which would not bode well for any culture.  No one will want to fail when such an event is recorded for eternity and accessible by everyone, everywhere.


Conceivably, social media video companies like YouTube and Vine are already working on embedding FRT into their videos.  At some point in the future, people will be able to search for their own images, or those of others, in companies’ online videos, on broadcasts of sports events, or at political rallies, even live ones.  Likewise, there will certainly be businesses that capitalize on finding people in recorded videos via FRT, just as there will be companies that seek to remove individuals’ images or blot them out…for a fee, of course.  Additionally, with active, omnipresent FRT, a government could potentially know where a person is at any time in the public sphere, given enough CCTV coverage and bandwidth.


Politics is also very much at stake in a future saturated with technologies designed to identify who-is-who.  Once governments can identify and label those who express political viewpoints contrary to those who govern, the ability to “repress dissidents”[97] will become even easier.  Imagine the scenes at the infamous Democratic Party Convention in 1968 or at the World Trade Organization protest in Seattle in 1999.  If the government knew the identity of every one of the protestors via FRT, then as they got within five blocks of a downtown area, the authorities could flash their names and photos on big electronic billboards.  The signs would inform the individuals and their cohorts that the government knows who they are.  Certainly, the concepts of anonymity and safety in numbers allow protests to swell and allow for the commission of violence.  CCTVs with FRT, as well as multiple layers of other biometric surveillance, will decrease the obscurity of crowds.


The result may be a decrease in violent protests as well as peaceful political participation, because being labeled as a ‘protestor’ or ‘dissident’ may mean the citizen is targeted individually in other ways, such as increased government scrutiny.  Indeed, in Jiangsu, China, the government’s Golden Shield surveillance system employed “artificial intelligence to extend and improve the existing monitoring system.”[98]  The result was phenomenal: “protests and riots” decreased by almost half, with other provinces noticing large drops in demonstrations.[99]


One question stands out for societies and human behavior in a world of FRT and knowing who-is-who: Will citizens act differently when they are constantly under surveillance?  The answer is yes.  On the fiction side, Orwell’s Nineteen Eighty-Four depicted citizens who knew they could be watched anytime, anywhere.  In reality, Cuban citizens participated in Committees for the Defense of the Revolution and knew their neighbors were watching them...as well as did the watching themselves.  East Germans, too, knew that the government monitored the citizens, neighbors watched neighbors, and citizens had a duty to report on one another.  Being under surveillance will likely cause cautious manners of behavior and make populations less trustful of their governments as well as of each other.  Maybe the only way people will escape identification via FRT is through surgery.  However, facial surgery may become heavily regulated by governments; authorities will want to know what a person looks like before and after surgery to ensure the FRT continues to work correctly.


The internationalization of FRT will only continue.  With multiple countries using biometrics for their national identification systems, the commingling of data from country to country will continue.  In this vein, travel is one area already affected by FRT: “In 2004, the International Civil Aviation Organization decided that facial recognition would be used as the globally interoperable biometric on all international travel documents.”[100]  One can imagine, as global governance increases, that countries may someday sign bilateral or multilateral treaties granting access to their erstwhile proprietary face databases.  As biometrics scale to worldwide acceptance and deployment, there may come a time when physical passports are no longer required: FRT will allow people to travel, purchase anything, drive a vehicle, access a computer, or enter a building[101] just by peering into a soulless camera.


FRT research is, of course, still ongoing in both the public and private sectors.  The US Government’s Intelligence Advanced Research Projects Activity continues to fund such projects.  One, called JANUS, will merge images from various sources to create a composite picture of an individual, thereby increasing the accuracy of the system.[102]  Since there are “flaws” in any data-driven system, the goal will be to reduce the error rate.[103]  Essentially, the misidentification rate in FRT will have to be infinitesimally small.  Otherwise, good people will mistrust the government and the surveillance system even more, while the actual targets of surveillance may slip silently through society.
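Why the error rate must be infinitesimally small is easy to illustrate with simple base-rate arithmetic: when millions of innocent faces are scanned against a watchlist, even a tiny false-positive rate produces a flood of wrong matches. A minimal sketch, using hypothetical figures (the function and numbers below are illustrative, not drawn from JANUS or any real system):

```python
def expected_false_matches(population, watchlist_fraction, false_positive_rate):
    """Expected number of innocent people wrongly flagged when everyone
    in `population` is scanned against a watchlist (hypothetical model)."""
    innocent = population * (1 - watchlist_fraction)
    return innocent * false_positive_rate

# Scanning one million faces with a seemingly excellent 0.1% false-positive
# rate still wrongly flags roughly a thousand innocent people.
flags = expected_false_matches(population=1_000_000,
                               watchlist_fraction=0.0001,
                               false_positive_rate=0.001)
print(round(flags))  # prints 1000
```

The arithmetic is the point: because almost everyone scanned is innocent, the number of false alarms is driven by the size of the crowd, not the size of the watchlist, which is why public trust demands near-zero error rates.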




The overarching question to ask of FRT is: what is gained?  Governments and people want to take away the unknown, which brings fear, and instead illuminate the world around them, for their security and safety.  This aspect of humanity and political structures drives the urge to ‘know’ others, and it may also be the good that comes from this technology.  With FRT, though, there is a potential that no one in the future will be unknown.  Others may remain strangers, but their names and other biographical data will be known to everyone else.  A contrarian viewpoint is that FRT may make us more human, as each person is identified to everyone else.  However, to be human is to have empathy and compassion, which knowledge of a name and a biography does not provide.


Eventually, people may come to expect to know the name and biography of everyone around them, all the time.  This is the dark side of FRT: humans may not get to know someone by conversing and sharing experiences; instead, that knowledge may come virtually, which would affect how we value each other.  Thus, the moral quandary: FRT will increase our knowledge of others but not necessarily aid in understanding them better.  FRT was unlikely to have developed differently or gone a different route, for the identification of others as ‘friend or foe’ occurs everywhere in the animal kingdom; FRT merely provides a heuristic shortcut.

[1] Polly Sprenger, “Sun on Privacy: ‘Get Over It’,” Wired.com, 26 January 1999, http://www.wired.com/politics/law/news/1999/01/17538 (Accessed 29 September 2013).

[2] Kashmir Hill, “Hello, Stranger,” Forbes, (Accessed 8 October 2013), 24 August 2011, www.forbes.com/forbes/2011/0912/technology-facebook-cameras-huxley-acquisti-hello-stranger.html

[3] Kelly A. Gates, “Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance,” New York: New York University Press, 2011, 47.

[4] Ibid., 19.

[5] Ibid.

[6] Simon Chesterman, One Nation Under Surveillance: A New Social Contract To Defend Freedom Without Sacrificing Liberty, New York: Oxford University Press, 2011, 145. The author adds: “The history of CCTV as a technology of surveillance really began with the commercial availability of the video recorder in the 1960s.”

[7] Gates, 28-29.

[8] Facial Action Coding System. (Accessed 2 October 2013) http://face-and-emotion.com/dataface/facs/description.jsp Scientists “Paul Ekman and W.V. Friesen developed the original FACS…by determining how the contraction of each facial muscle (singly and in combination with other muscles) changes the appearance of the face.”

[9] Gates, 25.

[10] Facial Action Coding System Affect Interpretation Dictionary http://face-and-emotion.com/dataface/facsaid/history.jsp (Accessed 2 October 2013)

[11] Ibid.

[12] Gates, 171.

[13] Ibid., 37-38.

[14] Ibid., 71. The government called these tests “FERET,” for Facial Recognition Technology.

[15] John Convy, “Ten-year retrospective on video surveillance,” Government Security News, (Accessed 3 October 2013), 1 September 2013, http://www.gsnmagazine.com/node/32866?c=video_surveillance_cctv

[16] Ibid. Convy also notes: “More powerful chips mean more compute power in the servers and cameras, allowing video management systems to be faster and more efficient to deliver better video quality, greater quantities of video data and more intelligent analytical processing of digital video streams.”

[17] Ibid.

[18] Chesterman, 145.  Chesterman writes: “The early growth in Britain…was largely confined to the retail sector with vocational experiments in using CCTV for security on underground railway stations, to monitor traffic flow, or to capture images of groups such as political demonstrators and football hooligans.”

[19] Brendan O'Neill, “Watching you watching me” NewStatesman, (Accessed 2 October 2013), 2 October 2006 www.newstatesman.com/node/154435

[20] Ibid.

[21] Ibid.

[22] Chesterman, 145. The author adds: “By 1981, only 10 British cities had open street systems in operation, mostly small-scale and locally-funded.”

[23] Ibid., “The first large-scale public system was erected in Bournemouth in 1985…”

[24] Ibid., 146.

[25] Ibid.

[26] Ibid.

[27] Ibid., 147.

[28] Ibid., 155.

[29] O'Neill.

[30] Ibid.

[31] Ibid.

[32] Ibid.

[33] Ibid.

[34] Ibid. The author notes: “This is London reduced to an A to Z of suspicion, a collection of red-squared troubled spots where men, women and children must be monitored morning to night, every day of the year.”

[35] Ibid.

[36] Tekla Perry, “Hockey Fans to Test Facial Recognition Technology,” Institute of Electrical and Electronics Engineers, (Accessed 8 October 2013) 20 September 2013. http://spectrum.ieee.org/tech-talk/computing/software/hockey-fans-to-test-facial-recognition-technology

[37] Declan McCullagh, “Call It Super Bowl Face Scan I,” Wired, (Accessed 8 October 2013), 2 February 2001,


[38] Ibid. McCullagh writes: “Eugene Volokh, a law professor at UCLA, thinks the practice is constitutional when it takes place at a public event. ‘There's no Fourth Amendment problem if the government is simply observing -- or even recording -- what goes on in public,’ Volokh says. ’For constitutional purposes, that's just not a 'search,' because there's no legitimate expectation of privacy.’”

[39] Rawlson King, “Homeland Security to test BOSS facial recognition at junior hockey game,” (Accessed 3 October 2013), 20 September 2013. http://www.biometricupdate.com/201309/u-s-testing-crowd-scanning-facial-recognition-system

[40] Perry.

[41] Charlie Savage, “Facial Scanning Is Making Gains in Surveillance,” New York Times, (Accessed 8 October 2013), 21 August 2013, http://www.nytimes.com/2013/08/21/us/facial-scanning-is-making-gains-in-surveillance.html. The amount of media coverage of this relatively minor event is substantial: a Google search for “BOSS facial recognition system at a junior hockey game” returned 92,500 hits.

[42] Department of Homeland Security, Privacy Impact Assessment Update for the Standoff Technology Integration and Demonstration Program: Biometric Optical Surveillance System Tests, (Accessed 3 October 2013), 17 December 2012,  http://www.dhs.gov/sites/default/files/publications/privacy/PIAs/privacy_pia_st_stidpboss_dec2012.pdf.  “Because testing is being conducted at a public venue, members of the public may be present during the tests, and their images may incidentally be captured as they walk past the cameras.”

[43] Rawlson King, “Homeland Security to test BOSS facial recognition at junior hockey game,” (Accessed 3 October 2013), 20 September 2013. http://www.biometricupdate.com/201309/u-s-testing-crowd-scanning-facial-recognition-system

[44] Perry.

[45] Hill. In 2010, Hill writes: “BI2 Technologies has developed a $3,000 iPhone add-on called MORIS (Mobile Offender Recognition and Identification System)...[an application that can] take photos of suspects, and…using iris scans and facial recognition, will identify anyone who already has a photo in a criminal database.”

[46] Alexandria Ingham, "Japan’s Version of Google Glass: How Does It Work?," (Accessed 8 October 2013) 3 October  2013, www.decodedscience.com/japans-version-google-glass-work/37662 Intelligent Glass “is connected to a smartphone directory, and will look up the information you need on any person in front of you. The Glass uses facial recognition software to compare features against the pictures in the smartphone to bring up the name and company that the person works for.”  More interesting will be to bring up a face, find the name, job, education, and career history, as well as who you have in common with the person in front of you.  This scenario will be an instantaneous version of playing Six Degrees from Bacon.

[47] Kade Crockford, “Ready, fire, aim: Ohio officials implement statewide face recognition program without a whiff of public debate,” 3 September 2013, American Civil Liberties Union, (Accessed 1 October 2013) https://www.aclu.org/blog/technology-and-liberty-national-security/ready-fire-aim-ohio-officials-implement-statewide-face

[48] Ibid.

[49] Ibid.

[50] Editorial, “DeWine must limit access to facial ID data,” Marionstar.com, (Accessed 1 October 2013) 28 September 2013, http://www.marionstar.com/article/20130928/OPINION01/309280006?gcheck=1

[51] David Bier, “The New National Identification System Is Coming,” Openmarket.org (Accessed 12 October 2013) 1 February 2013 http://www.openmarket.org/2013/02/01/the-new-national-identification-system-is-coming/

[52] Editorial, “DeWine must limit access to facial ID data.”

[53] Daniel J. Solove, Nothing to Hide: The False Tradeoff between Privacy and Security, Yale University Press, 2011, 93-100 passim.

[54] Richard D. Desmond, “Big Brother Is Watching: Reasonable Expectations Of Privacy In The Technological Age,” Reporter, June 2002, Vol. 29 Issue 2. The Court concluded that aerial photography of outdoor areas merely enhanced the natural senses and considered it beyond the scope of Fourth Amendment protections. Citing 476 U.S. 227 (1986).

[55] Daniel J. Solove, Nothing to Hide: The False Tradeoff between Privacy and Security, Yale University Press, 2011, 100. US Supreme Court case Florida v. Riley, 488 US 445 (1989).

[56] Daniel J. Solove, Nothing to Hide: The False Tradeoff between Privacy and Security, Yale University Press, 2011, 116.

[57] Solove, 175. US v. Falls, 34 F.3d 674, 680 (8th Cir. 1994).

[58] Chesterman, 67.

[59] Solove, 181.

[60] Ibid. 183-185.

[61] Chesterman, 214-217.

[62] Ibid., 239.

[63] Shahid Buttar, “NY Times endorses Surveillance State Repeal Act, joining BORDC,” People’s Blog For the Constitution, (Accessed 2 October 2013), 25 September 2013, http://www.constitutioncampaign.org/blog/?p=14922#.UkqYXhaxNgk

[64] Chesterman, 256.

[65] Shoshana Amielle Magnet, When Biometrics Fail: Gender, Race, and the Technology of Identity, Durham: Duke University Press, 2011, 28.

[66] Ibid.

[67] Seeta Peña Gangadharan, “Joining the Surveillance Society? New Internet Users in an Age of Tracking,” Open Technology Institute, September 2013 (Accessed 3 October 2013) 1, http://newamerica.net/sites/newamerica.net/files/policydocs/JoiningtheSurveillanceSociety_1.pdf

[68] Ibid., 13. The author notes that the poor, minorities, etc., “carry existing inequalities with them into digital environments, including a past history of being surveilled.”

[69] John Gilliom, SuperVision: An Introduction to the Surveillance Society, University of Chicago Press: Chicago, 2013, 120-121.

[70] Stephen Braun, “NSA watchdog details surveillance misuse,” Associated Press, (Accessed 1 October 2013) 27 September 2013, http://m.startribune.com/?id=225516402

[71] Ibid.

[72] Tim Barribeau, “A T-Shirt That Tricks Facial Recognition Software: The REALFACE Glamoflage,” Popphoto.com (Accessed 11 October 2013.) 7 October 2013 http://www.popphoto.com/news/2013/10/t-shirt-tricks-facial-recognition-software-realface-glamoflage

[73] CV Dazzle, (Accessed 11 October 2013.) http://cvdazzle.com

[74] Barribeau.

[75] Dana Priest, “Government surveillance spurs Americans to fight back,” Washington Post, (Accessed 29 September 2013) 14 August 2013. http://www.washingtonpost.com/lifestyle/style/government-surveillance-spurs-americans-to-fight-back/2013/08/14/edea430a-0522-11e3-a07f-49ddc7417125_story.html?wpmk=MK0000200

[76] “Designers trying to help people fight government surveillance,” Washington Post, (Accessed 11 October 2013) http://www.washingtonpost.com/lifestyle/style/designers-trying-to-help-people-fight-government-surveillance/2013/08/15/824faf84-0533-11e3-88d6-d5795fab4637_gallery.html#photo=1

[77] BAE Systems ADAPTIV technology,  http://www.baesystems.com/magazine/BAES_019786/adaptiv--a-cloak-of-invisibility?_afrLoop=468192820105000&_afrWindowMode=0&_afrWindowId=null&baeSessionId=Lf8rSJyTydDV8kybTLGpGphgRNhPfKVSb6PwDrN2Xtjnf2R3pvft!180347335#%40%3F_afrWindowId%3Dnull%26baeSessionId%3DLf8rSJyTydDV8kybTLGpGphgRNhPfKVSb6PwDrN2Xtjnf2R3pvft%2521180347335%26_afrLoop%3D468192820105000%26_afrWindowMode%3D0%26_adf.ctrl-state%3Dhwoamgt55_4 (Accessed 8 October 2013)

[78] Sony Corporation, http://www.sony.co.uk/hub/learnandenjoy/2/2 (Accessed 8 October 2013.)

[79] Sony Corporation, http://www.sony.co.uk/hub/learnandenjoy/2/1 (Accessed 8 October 2013.)

[80] Gates, 153-155.

[81] Danny Lee, “In-store surveillance systems raise privacy fears in Hong Kong,” 28 July 2013, (Accessed 24 September 2013) http://www.scmp.com/news/hong-kong/article/1292261/store-surveillance-systems-raise-privacy-fears-hong-kong “Hi-tech cameras work with multiple lenses positioned to cover the entire floor space. Within minutes, the video footage is converted into a computer file format containing data on customer movements, broken down by sex.”

[82] Ibid. The author notes: “Customers cannot opt-out and shoppers have no say in how their data is used.”

[83] Ibid. Lee writes: “Lawmaker Charles Mok, an information technology expert, said retailers needed to be open and transparent about their snooping practices.”

[84] RetailNext, (Accessed September 24, 2013) http://www.retailnext.net/analytics-products/retail-system-platform

[85] Lee.

[86] Teodor Reljic, “Facial recognition in social media: a step too far?,” Maltatoday.com, (Accessed 2 October 2013), 25 September 2013 http://www.maltatoday.com.mt/en/businessdetails/business/technology/Facial-recognition-in-social-media-a-step-too-far-20130925

[87] Gates, 21-22.

[88] Molotch, 14.

[89] Chesterman, 260.

[90] Molotch, 221.

[91] Ibid., 9.  There is an “irrationality” over the fear that a lack of security may induce; this influences people’s judgment over security.

[92] Ibid., 8.  Regarding security, Molotch says that “[n]o permanent solution exists, making the war on terror ongoing…”  Selling fear, though, allows companies and businesses to continue marketing their wares to each other and to governments.

[93] Amy Webb, “We Post Nothing About Our Daughter Online,” Slate, 4 September 2013, (Accessed 1 October 2013), http://www.slate.com/articles/technology/data_mine_1/2013/09/facebook_privacy_and_kids_don_t_post_photos_of_your_kids_online.html

[94] Ibid.

[95] Chris James Carter, “Your Future Boss Knows Everything About You,” RealClearTechnology, (Accessed 1 October 2013) http://www.realcleartechnology.com/articles/2013/09/09/your_future_boss_knows_everything_about_you_696.html. In the UK, one 17 year old’s tweets from three years prior were deemed inappropriate, thus preventing her appointment as a “Youth Police Crime Commissioner.”

[96] Ibid.

[97] Gilliom, 125.

[98] Naomi Klein, “China Unveils Frightening Futuristic Police State at Olympics,” Huffington Post,  http://www.alternet.org/story/94278/china_unveils_frightening_futuristic_police_state_at_olympics (Accessed 8 October 2013) 7 August 2008.

[99] Ibid.

[100] Magnet, 138.

[101] Anonymous, “Getting to Know You,” Machine Design, 9 August 2001, Vol. 73, Issue 15.

[102] John Keller, “Intelligence researchers seek to make big improvements in biometric facial recognition,” Military & Aerospace Electronics, (Accessed 8 October 2013), 4 June 2013, http://www.militaryaerospace.com/articles/2013/06/IARPA-Janus-biometrics.html JANUS “seeks to develop representational models able to encode the shape, texture, and dynamics of a face, rather than rely on one posed photo, which address the challenges of aging, pose, illumination, and expression (A-PIE) by exploiting all available imagery…[including] videos, camera stills, and scanned photos taken in real-world conditions.”

[103] Molotch, 198.

Selected Bibliography


BAE Systems ADAPTIV technology.


Barribeau, Tim. “A T-Shirt That Tricks Facial Recognition Software: The REALFACE 
     Glamoflage.” Popphoto.com. 7 October 2013.


Braun, Stephen. “NSA watchdog details surveillance misuse.” Associated Press.
     27 September 2013. http://m.startribune.com/?id=225516402.


Buttar, Shahid. “NY Times endorses Surveillance State Repeal Act, joining BORDC.”
     People’s Blog For the Constitution. 25 September 2013.


Carter, Chris James. “Your Future Boss Knows Everything About You.”
     RealClearTechnology. http://www.realcleartechnology.com/articles/2013/09/09/
     your_future_boss_knows_everything_about_you_696.html.


Chesterman, Simon. One Nation Under Surveillance: A New Social Contract To Defend
     Freedom Without Sacrificing Liberty. New York: Oxford University Press, 2011.


Convy, John. “Ten-year retrospective on video surveillance.” Government Security
     News. 1 September 2013. http://www.gsnmagazine.com/node/32866?c=video_surveillance_cctv.


Crockford, Kade. “Ready, fire, aim: Ohio officials implement statewide face recognition
     program without a whiff of public debate.” American Civil Liberties Union.
     3 September 2013. https://www.aclu.org/blog/technology-and-liberty-national-security/
     ready-fire-aim-ohio-officials-implement-statewide-face.


CV Dazzle. http://cvdazzle.com.


“Designers trying to help people fight government surveillance.” Washington Post.


Desmond, Richard D. “Big Brother Is Watching: Reasonable Expectations Of Privacy In
     The Technological Age.” Reporter. June 2002, Vol. 29 Issue 2.


“DeWine must limit access to facial ID data.” Marionstar.com. 28 September 2013.


Facial Action Coding System. http://face-and-emotion.com/dataface/facs/description.jsp.


Facial Action Coding System Affect Interpretation Dictionary.
     http://face-and-emotion.com/dataface/facsaid/history.jsp.


Gangadharan, Seeta Peña. “Joining the Surveillance Society? New Internet Users in an
     Age of Tracking.” Open Technology Institute. September 2013.


Gates, Kelly A. “Our Biometric Future: Facial Recognition Technology and the Culture of
     Surveillance.” New York: New York University Press, 2011.


“Getting to Know You.” Machine Design. 9 August 2001. Vol. 73, Issue 15.


Gilliom, John. SuperVision: An Introduction to the Surveillance Society. Chicago: 
     University of Chicago Press, 2013.


Hill, Kashmir. “Hello, Stranger.” Forbes. 24 August 2011. www.forbes.com/forbes/
     2011/0912/technology-facebook-cameras-huxley-acquisti-hello-stranger.html.


Ingham, Alexandria. "Japan’s Version of Google Glass: How Does It Work?."
     3 October 2013. www.decodedscience.com/japans-version-google-glass-work/37662.


Keller, John. “Intelligence researchers seek to make big improvements in biometric
     facial recognition,” Military & Aerospace Electronics. 4 June 2013.


King, Rawlson. “Homeland Security to test BOSS facial recognition at junior hockey
     game.” 20 September 2013. http://www.biometricupdate.com/201309/u-s-testing-
     crowd-scanning-facial-recognition-system.


Klein, Naomi. “China Unveils Frightening Futuristic Police State at Olympics.” Huffington
     Post. 7 August 2008. http://www.alternet.org/story/94278/china_unveils_frightening
     _futuristic_police_state_at_olympics.


Lee, Danny. “In-store surveillance systems raise privacy fears in Hong Kong.” 28 July
     2013. http://www.scmp.com/news/hong-kong/article/1292261/store-surveillance-
     systems-raise-privacy-fears-hong-kong.


Magnet, Shoshana Amielle. When Biometrics Fail: Gender, Race, and the Technology
     of Identity. Durham: Duke University Press, 2011.


McCullagh, Declan. “Call It Super Bowl Face Scan I.” Wired. 2 February 2001.


Molotch, Harvey. Against Security: How We Go Wrong at Airports, Subways, and
     Other Sites of Ambiguous Danger. Princeton: Princeton University Press, 2012.


O'Neill, Brendan. “Watching you watching me.” NewStatesman. 2 October 2006.
     www.newstatesman.com/node/154435.


Perry, Tekla. “Hockey Fans to Test Facial Recognition Technology.” Institute of
     Electrical and Electronics Engineers. 20 September 2013.


Priest, Dana. “Government surveillance spurs Americans to fight back.” Washington
     Post. 14 August 2013. http://www.washingtonpost.com/lifestyle/style/government-
     surveillance-spurs-americans-to-fight-back/2013/08/14/edea430a-0522-11e3-a07f-
     49ddc7417125_story.html.


Reljic, Teodor. “Facial recognition in social media: a step too far?.” Maltatoday.com.
     25 September 2013. http://www.maltatoday.com.mt/en/businessdetails/business/
     technology/Facial-recognition-in-social-media-a-step-too-far-20130925.


RetailNext. http://www.retailnext.net/analytics-products/retail-system-platform.


Savage, Charlie. “Facial Scanning Is Making Gains in Surveillance.” New York Times.
     21 August 2013. http://www.nytimes.com/2013/08/21/us/facial-scanning-is-making-
     gains-in-surveillance.html.


Solove, Daniel J. Nothing to Hide: The False Tradeoff between Privacy and Security.
     Yale University Press, 2011.


Sony Corporation. http://www.sony.co.uk/hub/learnandenjoy/2/1.


Sprenger, Polly. “Sun on Privacy: ‘Get Over It’.” Wired.com. 26 January 1999.


Webb, Amy. “We Post Nothing About Our Daughter Online.” Slate. 4 September 2013.