Axon, formerly known as Taser, has launched a new “AI ethics board” to guide its use of artificial intelligence. The board will meet twice a year to discuss the ethical implications of upcoming Axon products, particularly how their use might affect community policing. Privacy groups responded to the news by urging the board to pay close attention to Axon’s development of facial recognition technology.
The use of real-time facial recognition in policing has become a contentious topic, as police forces in the UK and China begin testing the technology in public. The UK has installed CCTV cameras with facial recognition to scan for hooligans at soccer games, while Chinese police have integrated the technology into sunglasses to scan travelers at train stations.
The hope is that the new board will help Axon navigate the more troubling possibilities of facial recognition. “They can hold us publicly accountable,” Axon spokesperson Steve Tuttle told The Verge, “and help us define a set of AI ethics principles within law enforcement.”
While Axon says it is not currently developing real-time facial recognition tech for law enforcement, such a feature would fit extremely well with the company’s new focus on body cameras and video analytics. Axon CEO Rick Smith has said in the past that real-time recognition might be useful for extreme cases like child abductions or terrorist manhunts.
Until last year, Axon was known as Taser International, but took its new name from its cloud platform, which stores videos and photos taken from police body cameras. The platform contains more than 20 petabytes (20 million gigabytes) of data, and Axon says it makes the company “the largest custodian of public safety data in the US, and likely the world.”
A coalition of 41 civil rights organizations has already responded to the new board, urging Axon to establish up front that products like real-time facial recognition are inherently unethical to deploy. “No policy or safeguard can mitigate these risks sufficiently well for real-time face recognition ever to be marketable,” the letter reads.
Facial recognition algorithms have struggled with both racial and gender biases, exhibiting higher error rates for both women and non-white subjects. While some products have managed to achieve equitable error rates across demographic groups, many algorithms still struggle with the issue. An MIT study earlier this year found significant racial discrepancies in algorithms offered by IBM, Microsoft, and China’s Megvii.
In a law enforcement context, those error rates would have a serious human cost. Higher false-positive rates for African Americans would lead to more police stops and more arrests. “There’s a real concern that it could exacerbate the risk of police use of force,” Laura Moy of Georgetown Law’s Center for Privacy and Technology told The Washington Post. “In a real-time scenario where a police officer is likely armed, the risks associated with potential misidentification are always going to exceed any possible benefits.”
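The arithmetic behind that concern can be sketched briefly. The numbers below are entirely hypothetical (neither the error rates nor the population sizes come from any real system or from the article); they only show how a gap in false-positive rates scales into a gap in wrongful matches:

```python
# Hypothetical illustration: how unequal false-positive rates translate
# into unequal numbers of wrongful matches when a population is scanned
# against a watchlist. All figures below are made up for illustration.

def expected_false_matches(population: int, false_positive_rate: float) -> int:
    """Expected wrongful matches when each person in `population`
    is scanned against a watchlist once."""
    return round(population * false_positive_rate)

# Suppose (hypothetically) a system misidentifies one group's faces
# 1% of the time and another group's faces 3% of the time.
group_a = expected_false_matches(100_000, 0.01)  # 1,000 wrongful matches
group_b = expected_false_matches(100_000, 0.03)  # 3,000 wrongful matches

# Equal-sized groups, same technology: three times as many
# wrongful stops fall on the group with the higher error rate.
print(group_a, group_b)
```

The point of the sketch is that even a seemingly small gap in error rates compounds linearly with the number of scans, which is why real-time deployment at scale magnifies the disparity.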
Speaking to The Verge, Tuttle emphasized the company’s desire to stay ahead of public concerns over AI. “Clearly there are AI algorithms that are going to have tremendous abilities coming up in the next five years, and we want to start thinking about that now,” Tuttle said. “The overarching goal here is to develop public trust.”
Tuttle said there would be a “flow of communication” between the board and Axon, which would include not only its biannual reports, but also “phone calls, emails, all types of exchanges.”
“[The board] is going to be privy to a lot of confidential information,” Tuttle said, “and we want to make sure that they’re aware of upcoming ideas [so] we can discuss them at the policy level and during the design phase.”