Invisible Policing: Smart Technology in Law Enforcement
by Smart City Expo Atlanta
July 07, 2020
The police killings of George Floyd, Breonna Taylor, and Rayshard Brooks have sparked outrage around the world. Thousands have protested in every U.S. state and in more than 60 countries in support of the Black Lives Matter movement against police violence. Demonstrators are demanding policy changes related to police misconduct, excessive force, and racial bias. And as part of the global movement, activists have also turned their attention toward the technology tools that police use to fight crime.
As cities become smarter, what are the consequences for policing? More cameras, sensors, and data mean more surveillance of our communities. License plate readers scan thousands of plates per minute; predictive policing software estimates where future crimes could happen; and facial recognition technology can identify a single person in a crowd.
Law enforcement agencies say these technologies streamline and enhance police work to help keep us safe. But social justice activists, citizens, computer scientists, and even companies that develop surveillance tech worry about invasion of privacy and wrongful arrests.
Ruha Benjamin, associate professor of African American studies at Princeton University and author of “Race After Technology: Abolitionist Tools for the New Jim Code,” says with technology, law enforcement agencies can “police without the police.” She warns that calls to defund police departments could lead to more surveillance through the use of technology.
ShotSpotter has provided crime-fighting technology to law enforcement for 24 years with the mission of reducing gun violence; most incidents of gunfire are never reported to police. The company strategically places microphones in high-crime areas to detect the sound of gunfire and alert police to the location.
ShotSpotter has deployed its gunfire detection technology in more than 100 cities and counties in the U.S. and internationally, and some communities report a significant drop in crime. Oakland, California, has seen a 66% reduction in shootings per square mile over several years, and in the Cincinnati neighborhood of Avondale, numbers dropped by half.
“Less than 1% of the population is responsible for two-thirds of the shootings nationally,” said Sam Klepper, ShotSpotter’s Senior Vice President of Marketing and Product Strategy. “So if you can get to the scene, find evidence — or the perpetrator — and get them off the streets, in some cases, it only takes a handful of key arrests to see a dramatic reduction in gun violence.”
Activists argue, though, that certain technologies increase police interactions with Black residents and their risk of police brutality and harassment. In Sacramento, they recently protested ShotSpotter’s five-year contract renewal with the city’s police department. And in Canada, the City of Toronto halted plans to install the technology after councilmembers raised privacy concerns — mainly about whether microphones would continuously record private conversations.
ShotSpotter says its technology is not a cure-all, but rather another tool that police departments can use as part of a comprehensive strategy to curb gun violence. In spring 2019, the Policing Project conducted an independent audit of ShotSpotter’s gunfire detection technology and found that “the risk of voice surveillance was extremely low.” The company then worked further with the nonprofit to make its technology even more privacy-protective.
While some technologies are shown to preserve citizen privacy, there’s growing apprehension about other tools like facial recognition software, which law enforcement uses to identify criminal suspects. Evidence is mounting that despite the technology’s appearance of neutrality, racial bias is baked into the system. In separate studies, MIT researcher Joy Buolamwini and the National Institute of Standards and Technology (NIST) found that facial recognition systems often misidentify Black and female faces, in some cases at rates 10 to 100 times higher than for white male faces, according to NIST.
In recent weeks, three major tech companies have made significant changes to the use of their facial recognition systems. IBM has stopped offering the technology altogether, a decision that may have been in the works long before the Black Lives Matter protests. Microsoft will no longer sell its facial recognition technology to law enforcement, while Amazon has placed a one-year moratorium on police use of its own system.
All three companies have raised concerns about the misuse of facial recognition, and are advocating for federal law to regulate it. “We will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology,” Microsoft President Brad Smith said during an online event.
Democratic lawmakers have introduced a bill to ban facial recognition technology and other biometric surveillance systems at the federal level. The bill would also withhold federal funding from state and local governments, including law enforcement agencies, that continue to use facial recognition.
While advocates of ethical and equitable tech support the legislation, some say law enforcement bans — whether temporary or permanent — don’t go far enough. They want tech companies to fully recall all facial recognition systems for all uses and say that technology cannot solve complex and deeply embedded social issues.
“We know that even if tech companies were to fix the algorithm, the injustice that happens within the current criminal justice system is still going to have a greater detriment to Black and brown bodies,” said Tawana Petty, Director of the Detroit Community Technology Project (DCTP) Data Justice Program. “We shouldn’t add a flawed technology or even a fixed algorithmic technology on top of an already flawed criminal justice system.”
Detroit, which has an 80% Black population, has been a focal point in the debate over surveillance technology. In 2016, the city’s police department rolled out a network of high-definition surveillance cameras that stream live video from local businesses to police headquarters. Over the years, the crime-fighting effort, called Project Green Light, has grown to over 700 locations and now includes facial recognition.
DCTP and other groups are fighting to ban the program, and have been able to limit its scope. Officers cannot track criminal suspects in real time and instead must compare still images against their database of photos.
Even with restrictions in place, flawed facial recognition may have nearly cost a Michigan man his freedom. Police arrested Robert Williams, claiming he stole watches from a jewelry store. But the department’s facial recognition system pointed officers to the wrong man. The ACLU has now filed a complaint against the Detroit Police Department in what may be the country’s first known case of a false arrest using artificial intelligence.
In the wake of killings at the hands of police and increased scrutiny of their practices, cities are requiring more oversight of surveillance technology. In June, the New York City Council passed legislation that requires the city’s police department to disclose the technology it uses, how it uses it, and the processes in place to prevent racial, religious, and other forms of discrimination. New York joins other cities like San Francisco and Seattle that mandate transparency in police technology.
Technology companies are also driving change in the criminal justice system. In 2019, Microsoft launched its Criminal Justice Reform Initiative, which invests in partnerships and programs that support reform. In the company’s home state of Washington, Microsoft has developed technology solutions to help court judges better understand the impact of fines and other legal financial obligations and hand them down more fairly. Research shows these debts disproportionately impact the poor and people of color.
Phaedra Ellis-Lamkins is CEO and Co-Founder of Promise, a California-based company that’s also working to reform the criminal justice system through digital technology. “Most technology in the criminal justice system is built for the police, to incarcerate more folks or to figure out how to make policing more effective,” said Ellis-Lamkins, who’s also a social justice advocate. “We want to understand how to build technology for the people we grew up with, the people that we saw sometimes harmed by the criminal justice system.”
Promise’s mobile application helps people who are navigating the criminal justice system to meet court-ordered obligations. They receive reminders about upcoming court dates and can upload documents, contact case managers, and pay fines from their smartphone.
We don’t yet know the full scope of how recent events have changed the public’s view of law enforcement and its use of surveillance technology. But a 2019 study found that more than half of Americans say it’s acceptable for law enforcement agencies to use facial recognition tools to assess public security threats, and they trust agencies to deploy this technology responsibly. Black, Hispanic, and younger Americans have less confidence in it than older white people.
The police themselves welcome new technology and, according to one study, anticipate relying on it more in the future. Over the next five years, they expect to expand their use of body-worn cameras, biometrics, video analytics, and predictive policing tools.
As cities become smarter and more connected, leaders face the difficult challenge of overseeing this powerful new form of policing: a careful balance between adopting new tools to keep communities safe and avoiding violations of citizens’ civil rights. Meanwhile, if recent protests are any indication, the people most affected by discrimination have successfully amplified a national conversation about racism in tech. They’re also prepared to turn the digital eye outward, demanding that technology be used to examine the powerful instead of the powerless.