TechRadar
Benedict Collins

Google says it is ‘proud’ to serve the Pentagon – new DoD contract expansion says Gemini will only be used for ‘any lawful purpose’, but what happened to 'Don’t Be Evil'?

(Image: the Google sign at Googleplex, the corporate headquarters of Google and its parent company Alphabet, in Mountain View, California)
  • Google is edging into the military/government market
  • New Pentagon contract allows Gemini use for 'any lawful purpose'
  • Google employees are not happy with the new contract

Google recently expanded its contract with the US Department of Defense (DoD) to provide Gemini for use in classified operations, or for “any lawful purpose”, and has also pulled out of a $100 million Pentagon challenge to build autonomous voice-controlled drone swarms.

At the same time, the company is facing internal dissatisfaction over its decision to provide the Pentagon with Gemini for classified projects; Google has responded by telling staff it is ‘proud’ of the Pentagon AI contract.

So how have Google’s ethics and policies evolved over time? And are they changing to allow the company to edge into a highly lucrative - although ethically dubious - slice of government pie?

Grounding the drones

Google’s pivot away from its once widely recognized motto of “Don’t Be Evil” may, in the eyes of some employees, now be complete, but this is not the first time the company has shifted its policy. Google’s AI principles once stated that it would not deploy its AI tools where they were “likely to cause harm,” and would not “design or deploy” AI tools for surveillance or weapons.

Google said it pulled out of the Pentagon competition, which sought technology capable of turning spoken instructions into commands for an autonomous drone swarm, due to a lack of resources. However, Bloomberg reports the actual cause was an internal ethics review.

This suggests, at least, that the internal ethics board is still functioning and not entirely toothless.

On the other hand, with the company expanding its Gemini availability into classified networks, the Pentagon is free to use Gemini for “any lawful purpose”. This clause is more bark than bite.

Back before the turn of the century, it was illegal for communications providers to install backdoors for law enforcement purposes - but CALEA and the Patriot Act changed all that. Federal law enforcement was also previously prevented from legally seizing data stored on servers in foreign countries - but the CLOUD Act changed that too.

Things are only illegal until they’re legal, and vice versa, effectively giving the Pentagon a future-proof loophole should an intended use case suddenly be legalized.

Therefore, the “any lawful purpose” clause offers no significant protection against using AI for autonomous weapons systems or mass domestic surveillance, as Anthropic protested. It is weakened further by a clause within the Google-DoD contract stating that the company does not have “any right to… veto lawful government operational decision-making” - something OpenAI also encountered in its Pentagon deal.

This gives the Pentagon near-free rein over the direction it takes with Gemini in its classified projects. Mass surveillance has been happening for decades; AI’s role is simply to make it smarter, more targeted, and more efficient.

A slice of Pentagon pie

The appeal of working as a government and military contractor is simple: there is a lot of money involved. Before the ink had fully dried on Anthropic's severance from government use, OpenAI had a shiny expanded contract to fill exactly the role Anthropic was looking to avoid.

Similarly, Microsoft and Amazon have already won numerous contracts involving cloud, AI, and cybersecurity tools, and it appears Google is playing catch-up.

Google’s employees have long challenged the company over the ethics of working with the government. In 2018, employee protests led Google to drop out of Project Maven, which used Google technology to analyze drone strike footage. Those protests also gave rise to Google’s now-missing ‘do no harm’ AI principles.

Google also faced similar dissent when employees opposed the company's potential involvement in providing technology to Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP).

As is tradition, Google’s employees are once again forming digital picket lines, with over 600 signing a letter to CEO Sundar Pichai asking him to reject any use of Google’s AI technology for military purposes.

In response, Kent Walker, Google’s president of global affairs, wrote in an internal memo on Tuesday seen by The Information, “We have proudly worked with defense departments since Google’s earliest days, and we continue to believe that it’s important to support national security in a thoughtful and responsible way.”
