So far, 2023 has brought with it a powerful wave of news about artificial intelligence, driven by OpenAI's natural language processing tool ChatGPT, which is a fascinating bit of technology that can do everything from writing your resume to solving complex math problems (if you ask it nicely, that is).
Many major companies have been quick to jump on board. Microsoft (MSFT) moved early, integrating the technology into its lesser-used search engine Bing and giving it a brand-new relevance. Snapchat, Spotify, and Instacart are just a few of the others also using the tool.
Now Microsoft co-founder Bill Gates has written a long, insightful post on his blog GatesNotes on the topic, opening with his belief that AI is quickly becoming "the most important advance in technology since the graphical user interface."
"The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it," Gates writes.
Gates sees great potential in AI's ability to make our lives easier by acting as a digital personal assistant. But he also notes how the technology could improve the world more broadly, specifically in the health and education sectors.
For all that good, however, Gates also warns of the problems that could come of a tool so powerful that it "will be able to do everything a human brain can, but without any practical limits on the size of its memory or the speed at which it operates."
The biggest threat, Gates observes, is posed by humans armed with AI, saying that governments need to work with the private sector on ways to limit the risks.
And what about AI running out of control, bringing to life all those fears from decades of sci-fi films? Gates says it's possible, but we're not quite there yet.
"Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us? Possibly, but this problem is no more urgent today than it was before the AI developments of the past few months," he wrote.