To help you understand how AI and other new technologies are affecting energy consumption, trends in this space and what we expect to happen in the future, our highly experienced Kiplinger Letter team will keep you abreast of the latest developments and forecasts. (Get a free issue of The Kiplinger Letter or subscribe). You'll get all the latest news first by subscribing, but we will publish many (but not all) of the forecasts a few days afterward online. Here’s the latest…
A new and rising cybersecurity threat: Vulnerabilities from artificial intelligence as companies increasingly adopt generative AI, the tech behind ChatGPT and many other chatbots.
AI threats join a slew of other cyber risks. Though new AI security risks are known there are no surefire ways to address them. One big issue is the massive amounts of data needed to train complex AI models so users can create text, code, images, video, data analyses, charts, etc., by writing questions or prompts in plain English.
The AI chatbots can leak company info in a data breach. Sensitive company info is used for internal AI chatbots, including financial data, customer info, product research and legal files. Publicly available AI chatbots may pose more risk because now an outside party controls company data.
Hackers can even trick the AI model to get it to leak sensitive data with clever phrasing or repeated questions…a “prompt injection attack”, even if there are guardrails in place. Sources of data, programming code, company secrets, etc., are at risk.
Cyber pros are not confident in the current security of generative AI tools — both internal company apps and external ones from Google, ChatGPT and others. Some of the top recommendations for securing AI tools:
- Vet vendors closely.
- Have a clear policy on generative AI, including what apps are approved and in use, who is using them and what data are involved.
- Consider blocking certain outside apps.
- Vendors such as Microsoft, Forcepoint and Palo Alto Networks can scan AI models for security risks and track sensitive data and employee use.
It’s a fast-growing area...
Meanwhile, hackers are weaponizing emerging AI tools for cyberattacks, such as creating malicious software or sophisticated email phishing operations. Criminals can do this without technical know-how — just ask an AI chatbot for help.
Other trends to watch:
- An increase in supply chain attacks, where hackers target third-party software vendors and regular suppliers to steal the info they hold.
- New legal liability for security leaders related to SEC regulations and lawsuits. More companies are extending directors' and officers' insurance to security execs.
- Companies spending more to battle deepfakes — AI-manipulated media that put executives or customers at risk. Defensive tools include Reality Defender, a deepfake detection system.
- Ransomware remains a big problem, with hackers trying to lock down data and extort a payment. Attackers are becoming more evasive and persistent, too.
Businesses should continue to emphasize tried-and-true best practices — patching software regularly, two-factor authentication, employee security training, incident response plans, etc. Plus, they should always prioritize security when adopting new AI tools.
This forecast first appeared in The Kiplinger Letter, which has been running since 1923 and is a collection of concise weekly forecasts on business and economic trends, as well as what to expect from Washington, to help you understand what’s coming up to make the most of your investments and your money. Subscribe to The Kiplinger Letter.