Are you aware that your face could be part of a vast facial recognition network used by police and others to help identify individuals, many of them suspected criminals?
Collections of faces are being drawn from public record databases like driver’s license photos, state IDs, visa applications, surveillance footage, news photographs, Google and Facebook images, university studies, arrest shots and real-time scans from cameras in public locations. The scanning technology is said to be so precise that it can differentiate between identical twins.
According to research at Georgetown Law School in Washington, D.C., where a think tank has extensively studied the phenomenon, it is estimated that about half of all adults in America (more than 117 million people) are in at least one law enforcement facial recognition database. Facial recognition is part of the field of biometrics, the measurement and analysis of unique physical or behavioral characteristics, especially as a means of verifying personal identity.
Reported uses of facial recognition
In a country of about 247 million adults, the FBI is said to have access to more than 400 million images. It claims to use facial recognition only as an investigative tool, but there is evidence it has been used in other ways, raising major privacy concerns among the general population.
In 2001, local police used face recognition techniques at Super Bowl XXXV in Tampa, Fla., to identify attendees who had outstanding criminal warrants. Nineteen people wanted for minor crimes were arrested.
In the United States, facial recognition has also been used to identify fake ID cards and false driver’s licenses. Use of the system came under fire in the state of Maryland in 2015 when it was used to arrest those with outstanding warrants who took to the streets to protest the death of Freddie Gray, a 25-year-old black man who died in the custody of Baltimore police.
Facial recognition has been used in Mexico to apprehend people attempting to register to vote multiple times and is routinely used in China in office buildings and at ATMs. High-end European hotels and retailers use it with celebrity customers and it is used to automate border crossings in Australia, New Zealand, Panama and Canada.
Customs and border officers at New York airports have scanned people’s faces to track those overstaying their visas. It is projected the technology will be used more widely in the future by businesses to monitor employees, tracking attendance and recording hours worked.
Television advertisements for the newly released iPhone X feature the human face being used as a password to unlock the phone, buy merchandise and pay bills.
How facial recognition works
The automated system operates using “algorithms,” series of instructions followed step by step to solve a problem, somewhat like cooking from a recipe. In traditional facial recognition, the process compares two images of faces to determine whether they represent the same individual.
Algorithms identify facial features by extracting landmarks from an image of the subject’s face. An algorithm may analyze the relative position, size and/or shape of the eyes, nose, cheekbones and jaw and make measurements of the distance between certain features. Researchers upload a probe image of an unknown person and search for matching images in a large database such as passport pictures, state driver’s license photos or all pictures stored in a social network application. If the system locates a match, it returns a probability score indicating the likelihood the match is accurate.
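The landmark-comparison idea described above can be illustrated with a toy sketch. This is not any vendor’s or agency’s actual algorithm, and the landmark coordinates below are made up for illustration; real systems extract dozens of landmarks and use far more sophisticated scoring.

```python
import math

# Hypothetical (x, y) landmark coordinates extracted from two face images.
# Only five landmarks are used here; real systems use many more.
probe = {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 60),
         "left_jaw": (25, 90), "right_jaw": (75, 90)}
candidate = {"left_eye": (31, 41), "right_eye": (69, 40), "nose": (50, 62),
             "left_jaw": (26, 89), "right_jaw": (74, 91)}

def match_score(a, b):
    """Return a rough similarity score in (0, 1]; 1.0 means identical landmarks."""
    # Sum the Euclidean distances between corresponding landmarks...
    total = sum(math.dist(a[key], b[key]) for key in a)
    # ...then squash into a score: smaller total distance -> score closer to 1.
    return 1 / (1 + total / len(a))

score = match_score(probe, candidate)
print(f"match score: {score:.2f}")
# A real system would report a probable match above some tuned threshold.
print("possible match" if score > 0.3 else "no match")
```

A search against a large database would simply compute this kind of score between the probe image and every stored image, returning the highest-scoring candidates, which is why the result is a probability, not a certainty.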
Three-dimensional facial recognition and skin texture analysis are among the more sophisticated techniques used to enhance facial recognition. Full frontal images of the face give the best results; low-resolution or thermal images are generally poorly suited for use. It is also important that uniform standards for lighting and focus are employed when taking the photos entered into the database. People must be trained to perform facial recognition, but the program is neither highly costly nor one that requires extensive experience.
One flaw with the system is that it allows officers to track large groups of people who aren’t suspected of committing a crime and have never been in trouble with the law. Courts haven’t determined whether facial recognition constitutes a “search”; if it does, the Fourth Amendment may limit its use.
Who has access to facial recognition databases?
Many local and state police departments have access to facial databases but face few restrictions on how they use them. Only five states (Illinois, Texas, California, New York and Washington) have laws that even mention how law enforcement can use facial recognition, although a few other states have tried to legislate the issue.
According to The Verge (theverge.com), a website that does in-depth reporting on technology, science, art and culture, at least 43 states have used some form of facial recognition technology, mostly drawing on driver’s license photos without the license holder’s consent. Seven of those states adopted such a system in the last three years, indicating that facial recognition use is on the rise.
Only California, Missouri, Louisiana, Mississippi, Maine, New Hampshire and Vermont do not permit use of driver’s licenses for this purpose. Washington, Montana, New Hampshire, Connecticut and Alaska have considered consumer protection laws concerning facial recognition, but only Washington has passed one, a law experts say is weak compared with Illinois’ regulations.
As recently as December 26, 2017, two United States senators, Mike Lee of Utah and Edward J. Markey of Massachusetts, questioned the Department of Homeland Security (DHS) about its biometric exit program, now operating at nine U.S. international airports, in which citizens departing on certain international flights are required to submit to a face scan. The senators point out that even if DHS meets its accept-rate goal, one in 25 travelers will receive a false denial, which could total in the thousands daily.
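The scale the senators describe follows from simple arithmetic. The one-in-25 figure comes from their letter; the daily traveler volume below is a hypothetical round number for illustration, not a DHS statistic.

```python
# One in 25 scanned travelers falsely denied, per the senators' letter.
false_denial_rate = 1 / 25           # i.e., 4 percent of travelers

# Hypothetical daily volume of scanned travelers across the nine airports.
daily_scanned_travelers = 100_000

false_denials_per_day = false_denial_rate * daily_scanned_travelers
print(false_denials_per_day)  # 4000.0 -> "in the thousands daily"
```

Even at a fraction of that assumed volume, a 4 percent error rate still produces thousands of false denials per day, which is the senators’ point.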
The legislators are requesting that the program not be expanded until Congress has provided explicit statutory authority for use of the program on U.S. citizens. Several times in the past, Congress has called upon DHS to establish a biometric exit program to verify the identities of foreign nationals leaving the country, but it has never authorized face scans of American citizens.
In a letter to DHS Secretary Kirstjen Nielsen, the senators also questioned the “accuracy, efficacy and transparency” of the unauthorized scans and requested more information on how the program will avoid unduly burdening certain races and genders.
As the senators mentioned, face recognition software generally does not perform as well at identifying minorities, another major issue with its widespread use.
History of biometric use in law enforcement
Biometrics first came into use in 1892, when fingerprints became a criterion of identification. Fingerprints remained a critical component in law enforcement until the late 20th-century shift to DNA analysis. In the 1960s, three American computer scientists pioneered the use of computers to recognize faces.
By the late 1990s, the University of Bochum in Germany, the University of Southern California, the Massachusetts Institute of Technology and the University of Maryland were further exploring the field. By 2006, face recognition algorithms were 10 times more accurate than those used in 2002 and 100 times more accurate than those of 1995.
In 2010, the FBI launched its biometric database, Next Generation Identification, enlarging the old fingerprint database with additional capabilities including facial recognition. At that time the FBI made arrangements with 18 different states to gain access to their databases of driver’s license photos. According to the Guardian (March 17, 2017), the public was never told about Next Generation until 2015 although the FBI was required by law to publish a privacy impact statement.
In 2015, according to the Guardian, the Government Accountability Office (GAO) analyzed the FBI’s use of facial recognition technology, found it lacking in accountability and accuracy, and made recommendations on how to address its problems. Key concerns were that the FBI did not test for false positives and did not acknowledge that inaccurate matching disproportionately affects people of color, the GAO reported.
Racial bias in facial recognition
Critics of facial recognition technology claim African-Americans are disproportionately overrepresented in databases because they have higher arrest rates. (In Minnesota, blacks accounted for nearly 25 percent of arrests in 2014, although they represented only five percent of the population. In states like Michigan, NBC News reports, blacks are arrested at a rate 136 percent higher than their share of the state population.)
More than 50 civil rights organizations, including the American Civil Liberties Union and the Leadership Conference for Civil and Human Rights, have asked the U.S. Justice Department’s Civil Rights Division to investigate the impact of this virtually unregulated technology on minority communities.
“Face recognition technology lets police recognize you from far away and in secret,” claims Alvaro Bedoya, executive director of Georgetown University Law School’s Center on Privacy and Technology. He maintains that a law-abiding citizen can be investigated for a crime he didn’t commit because his face resembles a suspect’s.
“If you are black, you are more likely to be subjected to this technology and the technology is more likely to be wrong,” says Maryland Congressman Elijah Cummings, who has called for the FBI to test facial recognition for racial bias. The FBI claims the system is “race-blind,” but even representatives of companies developing facial recognition techniques believe the technology needs to be more tightly controlled.
Brian Brackeen, CEO of Kairos, whose business is facial recognition, told the Guardian he is “not comfortable” with the lack of regulation in government use of facial recognition and noted that “algorithms used commercially are five years ahead of what the FBI is doing and far more accurate.”
Face recognition has also been found in some studies to be less accurate on women and young people.
The Georgetown study
The Georgetown University Law School report, published October 18, 2016, The Perpetual Line-Up: Unregulated Police Face Recognition in America (available at perpetuallineup.org), was so titled because facial recognition programs function like a vast digital police lineup, bringing many innocent people to the attention of law enforcement.
The 150-page report concluded that one in four law enforcement agencies can access face recognition software that is almost completely unregulated. The University’s Center on Privacy and Technology obtained over 17,000 pages of official documents in the study which the public can access on the perpetual line-up website.
The Center requested records from more than 100 law enforcement agencies. Of the 52 acknowledging that they used face recognition, only one stated it had legislative approval for its use, and only one agency provided evidence that it audited the searches for misuse. Many agencies did not even require an officer to suspect someone of committing a crime before using face recognition as an identifying tool.
Social media’s role in face recognition
Facebook and other social media platforms, according to slate.com, are another substantial source of biometric data, and subjects are often automatically “opted in” to a database. When you tag a friend in a photo, that action feeds a massive face recognition dataset. There’s no way of telling how this biometric data could be used in the future. Although Facebook claims to keep the data locked in encrypted storage, it has patented technologies that can deliver ads based on perceived emotions and can identify a user from unique identifiers such as body shape, hair, posture and clothing.
An Illinois man, Carlo Licata, sued Facebook in 2015 under his state’s Biometric Information Privacy Act, a law that says no private company can collect or store a person’s biometric information without prior notification or consent. The still-unresolved civil case was transferred to the Northern District of California after Facebook unsuccessfully tried to have it dismissed. According to a report from the Center for Public Integrity, Facebook is quietly trying to quash numerous similar state laws that have cropped up around the country.
Google claims it is not using facial recognition pending determination of privacy issues.
How does Ohio fare with this powerful identification tool?
The Perpetual Line-Up report names Ohio as one of the four states where “police agencies can scan photos for a wide variety of reasons.” (Also named were Arizona, Florida and Virginia.)
In 2008, the Ohio Organized Crime Investigations Commission piloted a facial recognition program and used federal grants to build a photo repository to make the system available to all Ohio law enforcement agencies. By 2014, the software was drawing from a base of 24 million photos from the Ohio Department of Rehabilitation and Correction, the sex offender registry, the Bureau of Motor Vehicles, the Ohio Courts Network and various law enforcement agencies.
Detectives in various Ohio police departments, including Akron, have reported using the technology to apprehend felony suspects.
Ohio Attorney General Mike DeWine came under fire in 2013 for using driver’s license photos in facial recognition technology to identify crime suspects without informing the public and without any review by his office of those using the system. DeWine defended the practice by saying Ohio’s system was similar to those of many other states, but a probe by the Cincinnati Enquirer revealed Ohio’s protocols for accessing the database were much looser than those of other states.
For example, in Kentucky about three dozen officials were permitted to run facial recognition searches, while in Ohio 30,000 police and court employees were allowed to run them.
The number of personnel who have access to this information has since been reduced. In fact, the Georgetown facial recognition report cited the Ohio Bureau of Criminal Investigation as “the only one of 52 agencies studied whose face recognition policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious or other protected free speech.”