Reason
Fiona Harrigan

Be Wary of the Department of Homeland Security's AI Ambitions

On Tuesday, the Department of Homeland Security (DHS) announced the hiring of 10 artificial intelligence experts as the first members of its new "AI Corps," which will eventually become a 50-member advisory group. The new hires "will help DHS responsibly leverage new technology and mitigate risks across the homeland security enterprise," a press release explained.

AI shows promise for "strategic mission areas," DHS says, such as "countering fentanyl trafficking, combatting online child sexual exploitation and abuse, delivering immigration services, fortifying critical infrastructure, and enhancing cybersecurity."

DHS Secretary Alejandro Mayorkas has gone so far as to say that "there is not a domain" of the department that "could not use AI, if in fact we learn how to use it responsibly" and understand the implications for civil liberties. But Americans shouldn't count on the government to stick to that standard and shouldn't assume that they'll be immune to AI-related harms just because DHS says it'll use the technology in specific ways.

DHS "has regularly rolled out unproven programs that rely on algorithms and risk the rights of the tens of millions of Americans," wrote the Brennan Center for Justice's Faiza Patel and Spencer Reynolds last month. For one, "the screening, vetting, and watchlisting regimes that are supposed to keep tabs on potential terrorism appear never to have been tested." DHS also "runs sweeping social media monitoring programs that collect information on Americans' political views and activities" despite having "demonstrated no security value," argued Patel and Reynolds.

Earlier this year, the Government Accountability Office (GAO) pointed out that DHS, though required "to maintain an inventory of AI use cases," did not publish an accurate one. "Although DHS has a process to review use cases before they are added to the AI inventory, the agency acknowledges that it does not confirm whether uses are correctly characterized as AI," the GAO found.

The AI Corps is one of many recent pushes to incorporate AI in more DHS activities. Earlier this year, the department launched a $5 million set of pilot programs that would "use A.I. models like ChatGPT to help investigations of child abuse materials, human and drug trafficking," "use chatbots to train immigration officials," and help create disaster relief plans, The New York Times reported. In April, Reps. Lou Correa (D–Calif.) and Morgan Luttrell (R–Texas) introduced a bill calling on DHS to develop a plan to implement AI and other new technologies at the U.S.-Mexico border. And an extensive AI executive order issued by President Joe Biden in October repeatedly mentions DHS as a key player in researching and applying AI tools.

"The DHS doubling down on machine learning and automated decision making is a troubling but expected turn considering federal law enforcement's insistence on spending money on the newest shiniest toy regardless of whether or not it has compelling use cases, proves ineffective, or threatens civil liberties," argues Matthew Guariglia, senior policy analyst at the Electronic Frontier Foundation, a digital rights group.

DHS is already using AI tools in broad, public-facing ways, such as for flight check-ins. At Transportation Security Administration lines, airports carry out touchless check-in "by taking just a photograph" of the traveler, says DHS. That may sound innocuous enough, but it amounts to mass data collection and the mass surveillance of travelers across the country (more than usual, that is). And there's always potential for mission creep, like the collection of other biometric data.

DHS' increased AI adoption will have deeper consequences for specific groups, including migrants. In the immigration space, Guariglia warns, "more and more decisions, potentially decisions as pivotal or life-and-death as who gets to immigrate to the United States and who gets asylum, will be decided by computers." (Plus, border AI almost certainly won't be limited to just the border region, if other border surveillance methods are any indication.)

"Automated decision making also means having to collect a tremendous amount of information and data on people—which might be collected in invasive ways or from unreliable sources. This also brings up the concern of transparency," continues Guariglia. If a migrant is turned away at the border or singled out for interrogation "because an algorithm cited them as a risk, how would the public know when or if officers are being directed by machines and where the data the decision was based on came from?"

AI tools show great promise, but it's important to remember that they're works in progress. Immigration processing, fentanyl interception, and disaster relief might all benefit from them eventually. But is it wise for those high-stakes activities to be left in the hands of a federal government using AI tools whose ethical weight and operations it doesn't fully understand?
