Google Cloud Next 2024 has concluded with a significant announcement about using large language models to strengthen user security, particularly for Gmail and Google Drive. The rapid advancement of generative AI has lowered the barrier to entry for attackers, leading to a surge in high-quality phishing attempts. In response, Google has developed custom large language models (LLMs) to combat these threats.
The custom LLMs are trained on the latest spam and phishing content to identify semantically similar malicious activities. Deployed in late 2023, these LLMs have proven to be effective in safeguarding users, with Google reporting a significant reduction in spam and malware incidents.
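To make the idea of "semantically similar" concrete, the sketch below shows how an embedding model can flag a message that resembles known phishing text. This is an illustrative example under broad assumptions, not Google's implementation: the open-source model, the sample messages, and the similarity threshold are all placeholders.

```python
# Illustrative sketch only: a generic embedding-similarity check against known
# phishing examples. Not Google's system; model choice, example messages, and
# the 0.75 threshold are assumptions for demonstration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source embedding model

# Tiny corpus of known-bad messages (hypothetical examples).
known_phishing = [
    "Your account has been suspended. Verify your password immediately.",
    "You have won a prize! Click this link to claim your reward now.",
]
phishing_embeddings = model.encode(known_phishing, convert_to_tensor=True)

def looks_like_phishing(message: str, threshold: float = 0.75) -> bool:
    """Flag a message whose embedding is close to any known phishing example."""
    message_embedding = model.encode(message, convert_to_tensor=True)
    similarities = util.cos_sim(message_embedding, phishing_embeddings)
    return bool(similarities.max() >= threshold)

print(looks_like_phishing("Urgent: your account is suspended, confirm your password"))
```

In practice, a production classifier would combine signals like this with sender reputation and user feedback rather than relying on similarity to a small reference set.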
Google reports that its AI-powered defenses detect twice as much malware as traditional third-party security products and block spam with a 99.9% success rate. The company says it will keep innovating to close the gap on the 0.1% of spam that still slips through.
In addition to the built-in security enhancements for over 3 billion Google Workspace users and 10 million paying customers, Google has introduced an optional AI-security add-on. This tool aims to automatically classify and protect confidential information in files, addressing a common request from Workspace customers.
The new AI tooling can identify hidden sensitive data and recommend additional protections, which administrators can then apply with little effort. Priced at $10 per user per month, it offers customizable options to meet each customer's specific needs and can be added to most Workspace plans.
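As a rough illustration of what "automatically classify and protect confidential information" can mean in practice, the sketch below scans a document for a few common sensitive-data patterns and suggests a protection. It is a minimal, assumption-laden example: the patterns, labels, and recommended action are invented for demonstration and do not reflect the add-on's actual behavior or any Google API.

```python
# Illustrative sketch only: a simple pattern-based scan for sensitive data in a
# document, in the spirit of automatic classification. Patterns, labels, and the
# recommendation text are assumptions, not Google's add-on or its API.
import re

SENSITIVE_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "Email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_document(text: str) -> list[str]:
    """Return the labels of sensitive-data types found in the text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

doc = "Invoice for Jane Doe, SSN 123-45-6789, contact jane@example.com"
findings = classify_document(doc)
if findings:
    print(f"Confidential content detected ({', '.join(findings)}); "
          "recommend restricting sharing and applying a 'Confidential' label.")
```

An LLM-based classifier, as described in the announcement, would go beyond fixed patterns to catch sensitive content that regular expressions cannot express, but the classify-then-recommend flow is the same.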
Google's proactive approach to leveraging AI for enhancing user security underscores its commitment to providing a safe and secure digital environment for its vast user base. The continuous development and deployment of advanced AI technologies demonstrate Google's dedication to staying ahead of evolving cybersecurity threats.