What you need to know
- Google just released image generation for its AI tool, Bard.
- Bard competes with Microsoft Copilot, which already had an image creation tool.
- AI image generation has been in the headlines recently after the technology was used to make fake sexually explicit images of Taylor Swift, which were then widely shared on X (formerly Twitter).
Microsoft Copilot has a new competitor in the image creation game. Google just announced that Bard can now generate images. Powered by Google's Imagen 2 model, the feature lets you type a description and have AI create a matching image. The functionality is similar to DALL-E 3, but Google's new option is free.
Google also announced other upgrades to Bard and the expansion of Gemini Pro to more regions. In total, Bard is available in 40 languages across 230 countries and territories. Our colleagues over at Android Central covered today's announcement from Google, so I'll point you in that direction for general news and a breakdown of all of today's updates. I'll focus on how Bard competes with Microsoft's tools and the timing of Google's announcement.
Google Bard vs Microsoft Copilot
When Microsoft announced Bing Chat (now called Copilot), it caused a stir over at Google. While Google dominates the search game, the company didn't want to lose out on the AI race. Thus, Google allegedly rushed Bard, announcing the product in February 2023. A $100 billion blunder later, Google was on its way to having its own AI tool for the masses.
At the time, it seemed like AI would take off, and in many ways it has, but not in the way many expected back then. Bing's market share was stagnant last year, despite Microsoft investing billions into the search engine. Microsoft has since shifted its AI tools away from the Bing brand, favoring "Copilot" instead.
AI is still a major part of Microsoft's plans, but it seems that AI integration into Office, Edge, and other Microsoft services will be more important than standalone search. Google may run into similar roadblocks.
AI-enhanced search engines will still be around, but it seems that AI creation will see more attention from tech giants like Google and Microsoft. We'll have to see how Google's image generation stacks up against Microsoft's in real-world usage.
A dangerous time for AI
Unfortunately, it's difficult to talk about AI-generated images these days without mentioning the fake images of Taylor Swift made with artificial intelligence tools. While those tools aren't directly related to Google, Google released image generation for Bard in the immediate aftermath of one of the world's most famous celebrities becoming the victim of AI image abuse.
To catch you up, several sexually explicit images of Taylor Swift flooded X (formerly Twitter) recently. All of the photos were fake, having been made with AI image generation technology. A 404 Media report explained that members of a "Telegram group dedicated to abusive images of women" created the fake images of Taylor Swift using Microsoft's AI image generator (likely Image Creator from Designer).
The saga drew criticism of AI tools being used to create pornographic images without the consent of the people they depict. Microsoft CEO Satya Nadella said the images "set alarm bells off" about what can be done to restrict AI and called the situation "alarming and terrible."
Now, Google will have to see if it can limit the creation of similar images with its tools. The company highlighted its guardrails in its post announcing the new capability for Bard:
"Our technical guardrails and investments in the safety of training data seek to limit violent, offensive or sexually explicit content. Additionally, we apply filters designed to avoid the generation of images of named people. We’ll continue investing in new techniques to improve the safety and privacy protections of our models."
Of course, Microsoft and other AI companies have guardrails in place as well. The reality is that people will always find ways around limits like these. Even in a hypothetical situation where Microsoft and Google's AI models could not be used to make sexually explicit images, people could use other AI models to do so.
That fact makes it important for companies like OpenAI to continue working on tools like its detector, which OpenAI says can identify AI-generated images with 99% reliability. While fake images will still exist, they could be flagged on social media for removal, limiting the spread of false imagery.