Fortune
Jacob Carpenter

The public isn’t ready to trust police with killer robots—but that shouldn’t stop conversations about the technology

(Credit: Tony Gutierrez—AP Images)

Before jumping into this week’s The Trust Factor, I wanted to provide a brief update about the newsletter. Following today’s edition, my Fortune colleague Eamon Barrett will be taking over The Trust Factor.

No, I’m not the victim of trust-busting layoffs, as I explored last week. Fortune has been incredibly generous and supportive as we’ve labored to illuminate the vital work done to build trust in business. Rather, I’ve accepted a job at a local news startup in Houston, my adopted hometown.

It’s been my privilege to share this space with you, and I look forward to seeing how Eamon and the rest of the Fortune crew carry it forward. Finally, a heartfelt thank you to the many readers and sources who supported our work. 

Now, on to this week’s edition.


For the latest issue of Fortune magazine, I wrote about the ongoing discussion over whether American police should be trusted to deploy one of the most powerful tools potentially at their disposal: weaponized robots.

To date, there’s only been one widely reported instance of local law enforcement using a remote-controlled robot to maim or kill a suspect. (Dallas police used one in 2016 to take out an assailant who murdered five officers in a savage ambush.) Still, rapid advances in engineering and technology have made weaponized robots into a modern-day reality, with some police chiefs arguing they could be useful tools for stopping mass casualty events and other violent situations.

For today, I want to set aside the merits of police using weaponized robots—a prospect that looks even dimmer after the beating death of Tyre Nichols last month at the hands of police in Memphis—and focus on how trust played into the debate over their use in one city: San Francisco.

Several months ago, San Francisco’s police department and its city council, known as the Board of Supervisors, started discussing whether to allow the use of such robots. The conversation stemmed from a law enacted in California the prior year, which stated that municipalities must set policies related to their use of military-grade equipment.

City police told board members that they wanted the authority to deploy a killer robot in extreme circumstances, but the conversation didn’t end there. 

To earn the trust of the San Francisco community and ensure residents felt heard, Supervisor Aaron Peskin, who shepherded the weaponized robot policy as chair of the board’s rules committee, took multiple steps to facilitate debate. In a recent interview, Peskin told me that he reached out to several advocacy organizations, including staunch opponents like the American Friends Service Committee, and held committee meetings at which members of the public could provide feedback. He also traveled to police facilities to inspect the robot himself.

Despite negative feedback from some advocacy groups, Peskin and the Board of Supervisors pressed ahead in late November with the first of two votes needed to enact the policy preferred by police, ultimately supporting it by an 8-3 margin.

In the intervening week, however, a media firestorm erupted over the decision. Local, state, and national outlets seized on the vote, often quoting critics of weaponized robots. While San Francisco police stood by the policy, few proponents emerged from the broader public.

At the second vote, held in early December, board members bowed to the public pressure. They voted to send the policy back to the rules committee, where it’s not expected to be revived anytime soon.

“We stood our ground, but when the time came for a second reading the week later, a political pragmatist like myself said, ‘I’d not only touched a nerve across San Francisco, but the entire country,’” Peskin said. (San Francisco police subsequently complained that the debate had been “distorted” and was “a distraction from the real issue” of preventing the loss of innocent lives in extreme cases of violence.)

Peskin said the episode became a useful lesson about the limits of trust in law enforcement, as well as the potential to gain the public’s trust by admitting a mistake.

“Were it not for that (California) law, we never would have had this conversation, and we never would have realized how triggering this dystopian potential future might be,” Peskin said. “I think it was a very healthy, albeit mildly painful, exercise for our city to go through.”

Jacob Carpenter
