Policy Archives | ROC https://roc.ai/category/policy/ Rank One develops industry-leading, American-made computer vision solutions that leverage Artificial Intelligence and make the world safer and more convenient.

ROC at ISC East, WV Small Communities, and Indian Gaming events Nov 14-17 https://roc.ai/2022/11/07/roc-stars-around-town-november-14/ Mon, 07 Nov 2022 16:42:21 +0000


Interested in meeting with our ROC stars? Well, you’re in luck! The week of November 14, 2022 offers many opportunities to meet with us and check out our technology.

We will be participating in the following events:

ISC East – New York, NY – Nov 15-17
WV Small Communities Big Success – Charleston, WV – Nov 15-17

ROC will showcase our tailored computer vision solutions for Safe Schools, FinTech, and Healthcare:

  • ROC Watch – for live video alerting. Read more here.
  • ROC SDK – the most trusted, accurate, and efficient AI/ML computer vision algorithm to recognize faces, vehicles, license plates, and more. Read more here.
  • ROC Enroll – for omni-channel Customer Enrollment, Visitor Management, & Identity Proofing

VP of Customer Success Blake Moore will highlight ROC’s latest and greatest during the sponsor spotlight on November 17 at 9:50 am.

You will also have the chance to meet with Jessica Sell, VP – Congressional Affairs & Community Outreach.

Indian Gaming Assoc. Mid-Year Conf – Fort McDowell, AZ – Nov 14-16

Meet with Eric Hess, VP – Sales, and stop by Table 12 to see:

  • ROC Watch – for live video alerting. Read more here. 
  • ROC SDK – the most trusted, accurate, and efficient AI/ML computer vision algorithm to recognize faces, vehicles, license plates, and more. Read more here. 
  • ROC Enroll – Enroll customers remotely, build loyalty, and prevent fraud

Want to schedule a meeting with us at any of these events? Let us know using the form below.


On Veteran’s Day, Rank One Computing (ROC) advocates to strengthen the Buy American Act to incorporate sensitive IT systems https://roc.ai/2021/11/10/on-veterans-day-rank-one-computing-roc-advocates-to-strengthen-the-buy-american-act-to-incorporate-sensitive-it-systems/ Wed, 10 Nov 2021 21:49:03 +0000


Honoring our Veterans who protect our Nation’s security, ROC is raising awareness about the importance of securing the overall supply chain for our nation’s most sensitive IT systems. As the most trusted provider of facial recognition algorithms to U.S. military, law enforcement, and commercial organizations, ROC points to the fact that the U.S. is quickly developing trusted, domestic suppliers of Machine Learning algorithms. This comes as the U.S. is facing increased foreign attacks that exploit many of the legacy, foreign-developed algorithms. To mitigate these threats, the U.S. needs to support domestic suppliers of software that has “American nascency.”

 

Earlier this year, President Biden issued the Executive Order “Ensuring the Future is Made in All of America by All of America’s Workers,” which questions whether the current economic and national security environment calls for an end to the 15-year exemption to the Buy American Act for commercial IT products. Removing this exemption would send a clear message that our national security infrastructure takes these foreign threats seriously and is committed to developing our domestic AI/ML capabilities.

 

“Due to the gravity of potential harm that could arise from Artificial Intelligence-based attacks by foreign adversaries, trusted U.S. technology providers should be strongly preferred by federal government customers and domestic companies who take IT security risks seriously,” according to ROC General Counsel and Chief Operating Officer David Ray. “We frame the situation as follows and invite anyone interested in joining this conversation to contact us.”

 

  • Technology in today’s world is taking on a strategic focus due to IT security risks posed by foreign adversaries.
  • Machine Learning algorithms are particularly prone to risks from “Poison AI” models that intentionally introduce untraceable security vulnerabilities into critical government systems.
  • Billions of dollars and unrestricted access to data are being provided by foreign adversaries to foreign companies with a goal of winning the technology race and thereby strategically positioning security vulnerabilities at the heart of the American economy and federal government.
  • To the extent that the exemption to the Buy American Act for commercial information technology was justified when introduced 15 years ago, that justification no longer fits today’s national interests and global technology landscape.
  • The exemption should be removed at minimum with respect to software such as facial recognition that powers critical government applications, and federal procurement should emphasize a strong bias in favor of using trusted, U.S.-made software solutions.

Press Release 

Rank One Takes a Stand on Ethical Face Recognition https://roc.ai/2021/09/02/rank-one-takes-a-stand-on-ethical-face-recognition/ Thu, 02 Sep 2021 19:44:10 +0000

Rank One Computing is taking a strong stand – in coordination with the Security Industry Association (SIA) – to encourage the Face Recognition (FR) industry to embrace a code of ethics that incorporates privacy guidelines. To be sure, FR raises complex legal and privacy issues, especially in video surveillance, law enforcement and access control.

In the US, there is no nationwide privacy framework for FR, so end users and integrators must navigate a patchwork of state and municipal regulations. In the absence of federal regulation, states such as Illinois, Texas, Washington, and California have passed individual laws that govern the use of biometrics in private and/or public applications, and a number of cities have passed local ordinances.

Taking a “Privacy-first” stance, ROC Chief Operating Officer and General Counsel David Ray is leading efforts to create an industry code of ethics. According to Mr. Ray,

Our industry has an obligation to meet the public’s valid concerns about privacy. FR technology can and does play a significant role in protecting our citizens and our way of life. At Rank One, we are committed to ensuring that FR deployments respect civil liberties, and we encourage our industry partners to join us in that commitment.

While surveys show strong public support for government use of FR in support of public safety, national security, border security, and crime investigation, there is a significant minority view which erroneously conflates FR with surveillance and warns of a dystopian future. It is important for industry providers of FR technology to proactively support our customers in their ethical uses of the technology while avoiding aggressive efforts to extend the technology into new and controversial arenas.

Use of FR in the law enforcement domain – with platforms such as dashcams and body-worn cameras – is increasing at the same time as our nation faces increased sensitivities around privacy and racial equity. At the same time, commercial uses of FR – Apple’s Face ID, Facebook, etc. – have gained significant public acceptance but have raised concerns about data ownership and protection. ROC is taking a proactive stand to communicate that public concerns should focus on applications and policies rather than the technology itself. In fact, machine learning has led to significant improvements in FR accuracy and has demonstrated that the technology itself does not introduce any significant racial or other bias in top-tier algorithms. The primary concerns arise from how the technology is deployed and how the data is managed.

Mr. Ray will publicly launch Rank One’s campaign for an industry-wide commitment to ethical Facial Recognition at this year’s Connect-ID conference in Washington, DC, October 5-6. This will build on SIA’s core principles – to which Rank One was a significant contributor – regarding the responsible and effective use of facial recognition technology. Mr. Ray will present the ideas during a plenary session and then host a roundtable discussion to engage and build support.

We invite broad collaborative participation and engagement as we seek to advance this important initiative. To learn more about Rank One’s products and services, please visit us at rankone.io.

Press Release 

Rank One participates in face recognition policy panel in Washington, D.C. https://roc.ai/2019/12/12/rank-one-participates-in-face-recognition-policy-panel-in-washington-d-c/ Fri, 13 Dec 2019 02:07:05 +0000

On Dec. 5, the Information Technology and Innovation Foundation (ITIF) hosted a briefing in Washington, D.C., exploring emerging uses of facial recognition in the private sector. The discussion, moderated by Daniel Castro, vice president of ITIF and director of the Center for Data Innovation, featured panelists Jamie Boone, vice president of government affairs at the Consumer Technology Association; Matt Furlow, director of policy, Chamber Technology Engagement Center at the U.S. Chamber of Commerce; Jake Parker, senior director of government relations at the Security Industry Association (SIA); David Ray, chief operating officer and general counsel at Rank One Computing Corporation; and Mel Schantz, supervisor, engineering at Panasonic.

The expert panel explored how industry leaders and policymakers can promote the adoption and use of facial recognition technology in responsible ways, the potential applications of the technology, industry best practices in its use, and policies that spur innovation and deployment while preserving privacy and security.

Additionally, the event highlighted the U.S. Chamber of Commerce’s newly released facial recognition policy principles, which are designed to encourage policymakers to weigh any risks associated with the technology against the benefits it provides to consumers and the public, and featured a facial recognition technology demonstration. Check out key takeaways, photos and video from the event in this recap from SIA.

Facial Recognition Code of Ethics https://roc.ai/2019/11/22/facial-recognition-code-of-ethics/ Fri, 22 Nov 2019 20:01:15 +0000

Rank One Computing believes in a just, non-violent world of equality and fairness. We prize democratic values, civil liberties and open and informed debate. When used to further these values, automated face recognition can continue to make the world a safer, better place for everyone. And, in the absence of regulatory guidance, we wish to advance limitations that we believe are appropriate in how face recognition should be utilized.

The following set of ethics serve as a guideline for how we will develop face recognition systems and how we will expect our integration partners and end-users to develop and utilize face recognition systems based on our algorithms.


First Principle

Facial recognition should be used to make the world safer, more secure and more convenient while minimizing harm through proper workflows that identify and mitigate sources of error.

Commercial Use

  • Facial recognition should not be used to track private details about a person without opt-in consent, except when used for security and safety purposes.

Law Enforcement Use

  • Facial recognition should not be used for real-time mass surveillance of lawful activity.  Any targeted surveillance of an individual should require a court-ordered warrant.
  • Facial recognition should not support probable cause for arrest, search or seizure.
  • Facial recognition should utilize best practices and workflows established by the Facial Identification Scientific Working Group (FISWG) and the Organization of Scientific Area Committees for Forensic Science (OSAC) Facial Identification Subcommittee, which require a trained human facial examiner to make final determinations based on morphological matching guidelines.
  • Facial recognition should be used to solve violent crimes and felonies, but not victimless misdemeanors.
  • Any use of facial recognition should be discoverable in criminal proceedings.
  • Facial recognition use must be in compliance with police policies and procedures, all statutes and regulations and the Constitutional limits that protect civil liberties.


Rank One joins ITIF’s “Open Letter to Congress on Facial Recognition” https://roc.ai/2019/09/30/rank-one-joins-itifs-open-letter-to-congress-on-facial-recognition/ Tue, 01 Oct 2019 01:13:01 +0000

Rank One Computing joined prominent research organizations, law enforcement groups, and technology companies last week in sending an open letter to Congress that corrects misinformation being circulated about the technology and pledges support for properly informed safeguards that enable law enforcement to use facial recognition technology safely, accurately, and effectively.

Please consider joining this important effort to preserve advancements that continue to improve both public security and law enforcement oversight.

To read the open letter and for more information on how to join this coalition and effort, please click here.

Race and Face Recognition Accuracy: Common Misconceptions https://roc.ai/2019/09/12/race-and-face-recognition-accuracy-common-misconceptions/ Thu, 12 Sep 2019 21:41:42 +0000

There is a misconception that face recognition algorithms do not work on persons of color, or are otherwise inaccurate in general. This is not true. 

The truth is that across a wide range of applications, modern face recognition algorithms achieve remarkably high accuracy on all races, and accuracy continues to improve at an exponential rate.

The most comprehensive industry standard source for validating a face recognition algorithm is the U.S. National Institute of Standards and Technology (NIST) Face Recognition Vendor Test (FRVT). For two decades, this program has benchmarked the accuracy of the leading commercially-available face recognition algorithms. 

Due to the rapid progression of face recognition technology in recent years, NIST FRVT introduced the “Ongoing” benchmark program, which is performed every few months on a rolling basis. FRVT Ongoing measures identity verification accuracy across millions of people and images, with wide variations in image capture conditions (constrained vs. unconstrained), and person demographics (age, gender, race, national origin). 

One dataset analyzed in depth in the NIST FRVT Ongoing benchmark is the Mugshot dataset. Performance reported on this dataset includes accuracy measurements on over one million images and persons, including accuracy breakouts across the following four demographic cohorts: Male Black, Male White, Female Black, and Female White.

In terms of overall face recognition accuracy, leading algorithms are extremely accurate on the Mugshot dataset. The top-performing algorithm was 99.64% accurate on True Positive / Same Person comparisons and 99.999% accurate on False Positive / Different Person comparisons. Another 50 benchmarked algorithms were at least 98.75% accurate on Same Person comparisons and 99.999% accurate on Different Person comparisons.
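To make those two figures concrete, here is a minimal sketch of how a verification benchmark tallies same-person and different-person comparisons separately. The scores and the 0.6 threshold below are invented for illustration; NIST FRVT uses millions of real image comparisons.

```python
# Sketch: scoring a face verification benchmark. All similarity scores
# and the threshold are made up for illustration.

def verification_accuracy(same_person_scores, diff_person_scores, threshold):
    # Same-person accuracy: fraction of genuine pairs at or above the threshold.
    tp = sum(s >= threshold for s in same_person_scores) / len(same_person_scores)
    # Different-person accuracy: fraction of impostor pairs below the threshold.
    tn = sum(s < threshold for s in diff_person_scores) / len(diff_person_scores)
    return tp, tn

same = [0.91, 0.88, 0.95, 0.45, 0.97]  # genuine (same-person) similarity scores
diff = [0.12, 0.08, 0.33, 0.71, 0.05]  # impostor (different-person) scores

tp, tn = verification_accuracy(same, diff, threshold=0.6)
print(f"same-person accuracy: {tp:.0%}, different-person accuracy: {tn:.0%}")
# prints: same-person accuracy: 80%, different-person accuracy: 80%
```

Roughly speaking, FRVT-style reporting fixes the different-person accuracy at a very high operating point (e.g., 99.999%) and then measures same-person accuracy at that threshold.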

In terms of accuracy breakouts across the four race and gender cohorts, all of the top 20 algorithms are found to be the most accurate on Male Black subjects. 

The following score chart breaks down the accuracy rank for each of the four cohorts across the top 20 algorithms: 

[Chart: accuracy rank of each of the four demographic cohorts across the top 20 algorithms]

As shown in the above tally, Male Black was the most accurate demographic cohort for the top 20 most accurate algorithms analyzed by NIST. While this is counter to conventional wisdom and the media’s narrative, the results are not particularly surprising. Here is why: 

Face recognition algorithms are highly accurate on all races.

For the above 20 algorithms, the median difference between the most accurate and least accurate cohort for a given algorithm was only 0.3%.
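As a sketch of how that median gap is computed: for each algorithm, take the spread between its most and least accurate cohort, then take the median of those spreads across algorithms. The cohort accuracies below are invented for illustration and are not NIST's measurements.

```python
from statistics import median

# Sketch: per-algorithm spread between the most and least accurate
# demographic cohort, and the median spread across algorithms.
# All accuracy values are hypothetical.

cohort_accuracy = {
    "algo_a": {"male_black": 99.6, "male_white": 99.5, "female_black": 99.4, "female_white": 99.3},
    "algo_b": {"male_black": 99.2, "male_white": 99.1, "female_black": 98.9, "female_white": 99.0},
    "algo_c": {"male_black": 99.8, "male_white": 99.7, "female_black": 99.5, "female_white": 99.6},
}

# Spread = best cohort accuracy minus worst cohort accuracy, per algorithm.
gaps = [max(a.values()) - min(a.values()) for a in cohort_accuracy.values()]
print(f"median cohort gap: {median(gaps):.1f} percentage points")
```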

Academic institutions have been publicizing non-academic research.

The widespread belief that race significantly biases face recognition accuracy stems from a non-peer-reviewed investigative journalism report from Georgetown Law titled “The Perpetual Line-Up: Unregulated Police Face Recognition in America”. The sole source for its claim of racial bias was a peer-reviewed article that my colleagues and I wrote in 2012. That article was incomplete on the subject and not sufficient to serve as the sole cited source, as it indirectly has since the Georgetown report was published.

Another common source cited for the inaccuracy on persons of color is from a study performed by the MIT Media Lab, which did not measure face recognition accuracy. In the “Gender Shades project”, the accuracy of detecting a person’s gender (as opposed to recognizing their identity) was measured. Two of the three algorithms studied were developed in China, and had poor accuracy at predicting the gender of the Female Black cohort. Still, this study has been widely cited as an example of face recognition being inaccurate on persons of color. Again, this study did not measure face recognition accuracy.

Fast forward, and there has been a recent campaign to ban face recognition applications outright, regardless of their purpose and societal value. These initiatives are often premised on the claim that the algorithms are inaccurate on persons of color, which, as we have shown, is not true.

A path forward

Given the disproportionate impact of the criminal justice system on Black persons in the United States, the concern regarding whether a person’s race could impact a technology’s ability to function properly is valid and important. However, the current dialogue and public perception have left out a lot of key factual information and have further confused the public regarding how face recognition is used by law enforcement.

In addition to the public being misled on the accuracy of FR algorithms with respect to a person’s race, the public has even been misled as to how law enforcement uses face recognition technology, as clarified in a recent article. In turn, cities like San Francisco have used such misinformation to compromise the safety of their constituents.

It is not in our nation’s interest to decide public policy based on politically motivated articles with weak scientific underpinnings. The benchmarks provided by NIST FRVT are currently the only reliable public source on face recognition accuracy as a function of race, and according to these benchmarks, all top-tier face recognition algorithms operating under certain conditions are highly accurate on both Black and White subjects (as well as male and female subjects).

Like this article? Subscribe to our blog or follow us on LinkedIn or Twitter to stay up to date on future articles.


How Forensic Face Recognition Works https://roc.ai/2019/06/12/how-forensic-face-recognition-works/ Wed, 12 Jun 2019 21:57:52 +0000

There is a misconception that law enforcement agencies in the U.S. use automated face recognition to actively surveil public spaces. Such a dragnet of mass real-time identification and surveillance would be a violation of the Fourth Amendment to the United States Constitution. While autocratic countries may intend to use face recognition technology for nefarious purposes, in the United States, and other nations with inalienable human rights, there is no systematic intent or process designed to exploit facial recognition technology in this manner.

Concerns that face recognition could be used to invade privacy are valid; however, this is not how it is being used by U.S. law enforcement.  To the contrary, law enforcement primarily uses face recognition as a post-incident forensic tool to enable detectives and analysts to generate investigative leads in violent and harmful crimes.

In this article we explain both how forensic face recognition works, and how it is used by law enforcement in this country.  

[Diagram: How forensic face recognition works]

Step 1: A violent or harmful crime occurs

While modern societies have become safer with each passing decade, there were still over a million incidents of violent crime in the U.S. in 2017. These incidents range from murder (17,284 incidents), to rape (135,755 incidents), to aggravated assault (810,825 incidents), amongst many other crimes. Similarly, cases of burglary, larceny, arson, and fraud take a tremendous toll on victims.

Step 2: An image of a perpetrator (or victim) is available

This image of the perpetrator could come from a number of different sources. For example:

  • The victim of a sexual assault could have the perpetrator’s image from the online dating site where they met.
  • A store owner who was the victim of an armed robbery could have a camera system installed that captured the robber’s face.
  • A high density tourist area may have recorded footage of a terrorist leaving a bomb.
  • A video of an unidentified adult engaging in inappropriate acts with a child may emerge while a warrant is being served for a related crime.
  • A homeowner’s doorbell may capture a picture of a burglar.
  • A traffic camera may have captured a person’s face before a violent act of road-rage.

In certain cases it is instead a victim who needs to be identified. This could be a deceased person without identification, or a victim filmed in a child exploitation case.

Step 3: An investigator or analyst searches the image against a database

The photograph or video frame image of the unidentified person of interest, often referred to as a probe image, is sent to a detective, analyst, or operator who manages digital forensic evidence. This human operator in turn uses automated face recognition to search the probe image against an available database of face images (often referred to as the gallery).

The galleries that are available for this search-and-compare process will vary, depending on the agency and jurisdiction. For most law enforcement agencies this will include mugshot arrest images. For certain law enforcement agencies, depending on state and local laws, the database may also include images from other Government agencies that grant identification cards (e.g., DMVs), criminal watch lists, or data otherwise meaningful to share.

Step 4: A candidate list is returned that contains the closest matching faces

Once this automated search is complete, the operator of the system will receive a rank-ordered list of the top matches, where the first result is the gallery image with the highest similarity score to the probe image, the second result is the gallery image with the second-highest similarity score, and so on.

The number of candidate matches returned will vary depending on the configuration of the system. For example, in some configurations only images that exceed a certain similarity score are presented. In other systems, the top N results are returned, regardless of similarity score, where N may be 20, 50, or 100.
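The two configurations described above can be sketched as follows. The function name, gallery IDs, and scores are hypothetical, not any specific vendor's API.

```python
# Sketch of the two candidate-list policies described above: return only
# matches above a similarity threshold, or the top N regardless of score.

def candidate_list(search_results, threshold=None, top_n=None):
    """search_results: (gallery_id, similarity) pairs from a face search."""
    # Rank candidates by similarity score, highest first.
    ranked = sorted(search_results, key=lambda pair: pair[1], reverse=True)
    if threshold is not None:
        # Keep only candidates at or above the similarity threshold.
        ranked = [(gid, score) for gid, score in ranked if score >= threshold]
    if top_n is not None:
        # Cap the list at the top N results.
        ranked = ranked[:top_n]
    return ranked

results = [("G101", 0.42), ("G205", 0.87), ("G330", 0.65), ("G412", 0.91)]

print(candidate_list(results, threshold=0.8))  # score-gated candidates
print(candidate_list(results, top_n=2))        # fixed-size top-N list
```

A deployment might also combine the two policies, returning at most N candidates that all exceed a minimum similarity score.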

Step 5: Candidate list is examined by the analyst

The operator of the system, who has often been trained in facial comparison, will examine the returned candidate list to determine if any of the candidate images match the person of interest in the probe image.

When performing comparisons, the analyst will examine the various morphological features of the face and document the entire comparison process.  Most forensic search systems have automated tools that significantly improve an analyst’s ability to compare the two faces and document the process.

Step 6: If the analyst determines there is a high likelihood of a match, then an investigative lead report is generated

If the probe image from the person of interest has facial characteristics that indicate a strong match to a person in the gallery, then an investigative lead report is generated.

An investigative lead is not probable cause for arrest. The detective investigating the crime will use the investigative lead generated from face recognition technology as a potential clue, one that could lead to solving the case.

Public safety benefits, without harm

An investigative lead generated by the forensic face recognition process could be the difference between whether or not a person who inflicts harm upon others is identified. This investigative method has tremendous benefits in terms of public safety. When performed under proper standards, this forensic procedure does not have the propensity for harm mistakenly claimed by those who think this use is akin to active real-time surveillance.

Given the percolating misunderstanding of how law enforcement agencies use automated face recognition technology, we will summarize certain points covered in this article as they relate to myths around use of the technology in the U.S.:

  • There is not a mass network of cameras that are identifying persons in real-time across public spaces.
  • Automated face recognition in law enforcement is predominantly a post-incident method used when a harmful crime occurs, and is a key forensic crime-solving tool.
  • The results from an automated face recognition search are carefully examined by an analyst or operator. If an analyst finds a strong likelihood of a match, this information is considered an investigative lead, and is not probable cause for arrest.
  • There are no documented cases in the U.S. of invasion of privacy or wrongful arrest due to forensic face recognition despite over a decade of successful use.

Governing the use of face recognition technology is a good idea, but it must come from an informed point of view. When legislators make decisions based on campaigns of misinformation, the safety of their constituents suffers. Face recognition technology, used as a forensic process, provides incredible benefits that greatly enhance the safety of our citizens without compromising guaranteed civil liberties or privacy rights.

Like this article? Subscribe to our blog or follow us on LinkedIn or Twitter to stay up to date on future articles.


When Misinformation Endangers Lives https://roc.ai/2019/05/16/when-misinformation-endangers-lives/ Thu, 16 May 2019 16:52:37 +0000

The use of automated face recognition in law enforcement is not a binary discussion point. Yet the political leaders of San Francisco have turned it into just that with recent legislation that bans local government agencies’ use of the technology. In turn, the safety of their citizens is being put at risk without any rational justification.

How law enforcement uses face recognition

At the core of the issue is a misunderstanding regarding how face recognition technology is used in law enforcement.

Heavy-handed authoritarian regimes in China, Russia, and elsewhere may seek to use face recognition as a surveillance tool that would suppress the same civil liberties guaranteed to United States citizens by the constitution. However, in the U.S. the overwhelming majority use of face recognition technology by law enforcement is as a forensic identification tool used to help identify perpetrators of violent and harmful crimes.

When used properly, face recognition technology is one of the most powerful tools available in today’s law enforcement investigations. For example, if someone robbed a bank and their face was captured on camera, trained facial examiners using face recognition technology could search a surveillance video frame against a database of mugshot arrest photos in order to generate an investigative lead. Or, as happened recently, a high-quality face image was captured of a person in an elevator moments before he broke into a young woman’s apartment and assaulted her. Or, when police collect child pornography evidence, face recognition technology is used to identify and rescue the victims, as well as identify any perpetrators in the imagery. There are thousands of other cases over the years where grotesque or systematic crimes have occurred and face recognition was an invaluable tool for identifying the culprit.

As a frame of reference, here are the 2017 Crime Statistics in the City of San Francisco:

  • Population: 881,255
  • Cases of Violent Crime: 6,301
  • Cases of Murder: 56
  • Cases of Rape: 367
  • Cases of Robbery: 3,220
  • Cases of Aggravated Assault: 2,658
  • Cases of Property Crime: 54,356
  • Cases of Burglary: 4,935
  • Cases of Larceny-Theft: 44,587
  • Cases of Motor Vehicle Theft: 4,834

Unfortunately, due to a blanket overreaction by government leaders, face recognition technology cannot be used when investigating any of these harmful crimes in San Francisco, even if a high quality face image of a perpetrator is available.

The Board of Supervisors of San Francisco does not seem to understand these clearly positive and potentially life-saving uses. As the bill states:

The propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits

That is, they believe their law enforcement officers are more likely to use face recognition technology to violate the Constitution than to help the lives of their citizens. This is despite the sworn oath every officer takes to uphold the Constitution and the lack of any precedent of face recognition misuse. Why does San Francisco even have law enforcement agencies if they cannot be trusted to protect and uphold constitutional rights?

Face recognition is an accurate identification method

As stated in the bill, there is a belief that face recognition is an inaccurate technology. Aside from the obvious contradiction that the technology cannot simultaneously be inaccurate and an effective tool for tracking, surveilling, and oppressing citizens, automated face recognition is often more accurate than humans.

Automated face recognition accuracy is also improving at a staggering rate. Over the last several years, error rates of face recognition algorithms have fallen by more than 1,000x. Even before these tremendous improvements, automated face recognition was already an invaluable law enforcement tool.

With over a decade of use in U.S. policing, there have been zero documented cases of automated face recognition technology resulting in a wrongful arrest. One reason is that face recognition search results are not probable cause for arrest. Another is that law enforcement agencies do not use face recognition as a purely automated technology; human analysts examine the search results during forensic identification, and law enforcement officials generally follow strict standards and guidelines.

There are inaccuracies in every form of identification technology, whether it is latent fingerprint analysis, DNA, ballistic identification, witness testimony, or the countless other tools used in law enforcement investigations. The key is to understand the limits of the technology, and employ workflows that are informed by the strengths and weaknesses of various identification methods. This is already the case with automated face recognition as extensive guidelines exist for proper usage.

Factors that influence face recognition accuracy

There is a belief that face recognition algorithms are “racist” and “sexist”. This claim is misleading, and it is amplified by non-scientific research articles that generate inaccurate media headlines.

While the accuracy of face recognition can vary with race, the choice of algorithm and camera type influences accuracy more than race (or skin phenotype) does. Differences in the angle of the face relative to the camera have a stronger influence on accuracy as well, as do many other environmental factors. Face recognition accuracy has been shown to be lower for women than for men, but this appears largely attributable to the cultural use of makeup.

Biases in the way humans recognize faces

While there are various biases in face recognition algorithms, humans are unfortunately rather terrible at face recognition when they lack familiarity with the person they are viewing. Humans are also quite poor at recognizing persons of races other than their own.

The cognitive limitations of human facial identification have resulted in a large number of false arrests and convictions due to mistaken witness testimony. The problem is so severe that in roughly 70% of DNA exonerations, eyewitness testimony was one of the pieces of evidence that resulted in the false conviction.

The importance of facial identification

Our facial appearance is arguably the single most public piece of information about us. We will hesitate to provide our name to a stranger when asked, but we will readily let everyone in public view our face. Socially progressive countries such as France and Denmark have even banned the concealment of one’s face when in public.

While the linking of our facial appearance to other personal information needs to be regulated, our facial appearance is the first piece of information provided in nearly every public engagement we have. We simply cannot conceal our faces and still be accepted in society.

This is not by mistake either. Facial identification is so important to the human race that we have evolved an entire region of our brain dedicated to the task.

Being misinformed is not an excuse

Face recognition has demonstrated clear benefits in enabling law enforcement to solve thousands of serious crimes over the last decade without a single example of an innocent person being falsely arrested due to a misidentification. This is an astounding improvement over the legacy method of eyewitness identification.

The Board of Supervisors of San Francisco seems to be unaware of these law enforcement use cases, and has instead determined that the propensity for facial recognition to endanger civil rights and civil liberties (which would already be constitutional violations) substantially outweighs its purported benefits.

The bipartisan legislation recently proposed in the U.S. Senate is a great example of an effective approach to regulating face recognition technology. While there are a few areas where the bill needs stronger regulations and restrictions, as we will highlight in a forthcoming article, it is crafted in a manner that will limit the technology’s use for harmful purposes while still allowing all the overwhelmingly positive uses.

Being misinformed is not an excuse for endangering citizens, and San Francisco's city lawmakers need to properly justify their decisions. It is hoped that this article can serve as a rational discussion point amid misinformation that has now confused a growing number of elected officials.

Like this article? Subscribe to our blog or follow us on LinkedIn or Twitter to stay up to date on future articles.
