The most significant breakthrough of 2022 wasn't nuclear fusion, which is still decades away from being a reality, but the advent of artificially intelligent chatbots.
Former U.S. Treasury Secretary Lawrence Summers even declared that one of these chatbots, ChatGPT, is a development on par with the printing press, electricity, and even the wheel and fire. While there is a lot to be excited about, the new technologies also have no guardrails, and my family has already seen their dark side.
ChatGPT is a chatbot developed by OpenAI that can generate text that is fluent, coherent, and relevant to a given context. It can provide personalized responses to common customer inquiries, generate reports and summaries based on large datasets, and help scientists and researchers by providing summaries of complex research papers and articles, as well as generating ideas for further investigation.
However, ChatGPT can also be used to generate fake news articles or social media posts that spread misinformation or influence public opinion, and related generative A.I. tools can create deepfake videos or audio recordings by synthesizing realistic human voices or faces.
The problem is that the answers that ChatGPT provides are so realistic and seem so authoritative that they fool even the best technology experts and economists, such as Lawrence Summers.
As cognitive psychologist and A.I. researcher Gary Marcus noted in a recent blog post, these systems can be fun to play with, but they are inherently unreliable, frequently making errors of both reasoning and fact, and prone to hallucination. As Marcus wrote, if you ask them to explain why crushed porcelain is good in breast milk, they may tell you that “porcelain can help to balance the nutritional content of the milk, providing the infant with the nutrients they need to help grow and develop.”
The reliability and trustworthiness of ChatGPT and similar technologies have been a source of concern for many A.I. researchers. The issue was significant enough that Meta, whose A.I. division released the large language model Galactica, withdrew the product just three days after its mid-November release over concerns about its potential to generate political and scientific misinformation.
I didn't take the warnings seriously until my son, Vineet, started using a version of OpenAI's GPT technologies and asked it to tell him "interesting details about Vivek Wadhwa and his family." The response seemed very credible but contained significant inaccuracies. The most glaring was its claim that I am married to Ritu, an executive at Microsoft and a graduate of the University of California, Berkeley, and that together we have three children: Anjali, Anupamam, and Arjun. It also detailed where the children worked and their educational backgrounds.
I lost my dear wife Tavinder to cancer three years ago, and both of my sons, Vineet and Tarun, are still as devastated as I am. I have no idea how this A.I. gathered this hurtful misinformation or how to correct it. I’ve never met anyone called Ritu Wadhwa and can’t even find a Microsoft employee with that name on LinkedIn.
This is a deep flaw in all machine learning technologies: they are designed to mimic the way the human brain's neural networks function, but they do so in a limited and imperfect way. Deep learning systems have millions or even billions of parameters, identifiable to their developers only by their position within a complex neural network. Such systems are often referred to as a "black box," meaning that the processes and reasoning behind their outputs are not transparent or easily understood.
Once a neural network is trained, not even its designer knows exactly how it is doing what it does. This makes it difficult to reverse engineer or understand how the A.I. system learned what it did.
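To make that point concrete, here is a minimal sketch, written in PyTorch purely for illustration (it is not OpenAI's code and is vastly smaller than any GPT model), of what a trained network actually exposes: a pile of numeric parameters addressable only by position, with no human-readable meaning attached.

```python
# A toy network illustrating the "black box" problem: a model is just a
# large collection of numbers, each identifiable only by where it sits.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64),  # 16 inputs -> 64 hidden units
    nn.ReLU(),
    nn.Linear(64, 2),   # 64 hidden units -> 2 outputs
)

total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {total}")  # 1,218 anonymous numbers

# Each parameter group is named only by its position in the network,
# e.g. "0.weight (64, 16)" -- nothing says what any weight "means."
for name, p in model.named_parameters():
    print(name, tuple(p.shape))
```

Scale that toy up by six or seven orders of magnitude and the opacity of a production system becomes clear: no one can point to the parameters responsible for a particular fabricated fact.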
When I re-ran Vineet's query, I got several different responses, including one that said I am married to someone called Quatrina Hosain, an entrepreneur and technology executive, and that we have two children, a son and a daughter. She too is a mystery, and there is no way to determine where the A.I. got this misinformation. The variation itself is telling: these models compose their answers by sampling each word from a probability distribution, so the same prompt can produce a different fabrication every time.
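A minimal, self-contained sketch of that sampling behavior, using invented numbers rather than any real model's output, shows why reruns diverge:

```python
# Toy illustration of temperature sampling: language models pick each next
# token from a probability distribution, so repeated runs can differ.
# The vocabulary and scores below are made up for this sketch.
import numpy as np

rng = np.random.default_rng()

words = ["blue", "grey", "orange", "green"]  # candidate next words
logits = np.array([2.0, 1.2, 0.8, 0.3])     # hypothetical model scores

def sample_word(temperature: float) -> str:
    """Convert scores to probabilities (softmax) and draw one word."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return rng.choice(words, p=probs)

# Rerunning the "same query" yields different picks from the distribution.
print([sample_word(temperature=1.0) for _ in range(5)])
```

Lowering the temperature makes outputs more repeatable, but it does not make a fabricated fact true.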
ChatGPT is still in development, and the founders of OpenAI have acknowledged its weaknesses, which will surely be addressed over the next few years as the technologies continue to advance exponentially. But these technologies will create even greater societal problems than misinformation by decimating jobs in data entry, customer service, data analysis, manufacturing, and transportation.
Note that more than 70% of this article was written by ChatGPT based on some notes and queries I gave it, so not even journalism jobs are safe.
This is the amazing and scary future we are rapidly headed into.
To ensure that A.I. is developed and used in a responsible and beneficial manner, aligned with human values and ethical principles, we need strong guardrails and tight regulations. And to address A.I.'s potential negative impact on jobs, governments must work with businesses and other stakeholders to ensure that the benefits of A.I. are shared broadly and that policies are put in place to support workers who may be displaced by these technologies.
Vivek Wadhwa is an academic, entrepreneur, and author. His book, From Incremental to Exponential, explains how large companies can see the future and rethink innovation.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.