My first attempt to use the self-checkout lane at a grocery store went poorly. It was so long ago that I don't remember all the specifics, but I'm certain there was no weighing of fruit, scanning of coupons, or anything else extraordinary about my purchase. And yet, somehow the simple four-step process (scan it, bag it, pay for it, leave) was marred by technology that failed to work properly. By the time the transaction was complete, I was annoyed enough that it would be years before I attempted another pass through the gauntlet of the self-checkout area.
These days, I will happily trot to the oasis of cashier-free kiosks. In some stores, I actually prefer it. But that initial experience with self-checkout technology left a lasting negative impression.
Which brings us to artificial intelligence.
While fact-checking an article for a past issue of SCN, I double-checked a quote. It turned out the writer had attributed the quote to the wrong person; he had used ChatGPT to assist with his research, and ChatGPT had misattributed it.
Since then, I've been more of a skeptic than Dana Scully. Though I admit I'm making progress: I asked ChatGPT to identify characters in modern pop culture who are difficult to convince, which is how I settled on the good doctor from The X-Files in the previous sentence. However, another option ChatGPT gave me was Sherlock Holmes, who was created in the late 1800s. Sure, the character is regularly reintroduced to audiences through various entertainment projects, but does that make him modern? My skepticism remains.
ChatGPT is a type of generative AI (GenAI) that can create text and images, even audio and video, based on the prompts you give it. We don't use ChatGPT to write stories at SCN. I think it's unethical, and it has already been shown to be inaccurate. In fact, Future (the company that owns SCN) does not allow the use of AI tools to generate or update articles, not even first drafts. I'm really glad the company and I agree on this.
Last August, Amanda Barrett, VP for standards and inclusion at the Associated Press (AP), declared that AP staff doesn't use ChatGPT to "create publishable content." But on May 10, the organization updated its GenAI guidelines to allow for experimentation. Barrett said the AP now permits AI to "suggest" headlines, as well as "supply" automated summaries of articles written by AP journalists (read: create publishable content).
Obviously, the use of GenAI in the content creation business is evolving. To be fair, GenAI can be a useful tool for providing background information or generating key terms on a particular topic. It can also be helpful with transcriptions, translations, and even grammar. There are workflow efficiencies inherent in the use of GenAI tools that can't be denied, and those efficiencies don't stop at the newsroom door.
Last fall, Mary Mesaglio, distinguished VP analyst at Gartner, an international research and consulting firm, argued that GenAI is "evolving from being our tools to becoming our teammates," and predicted it will "be a workforce partner for 90% of companies worldwide" by next year. I'm not sure I'm ready to hang around the break room with ChatGPT on a Monday morning and talk about how the Miami Dolphins did on Sunday, particularly since its knowledge cutoff is January 2022. Perhaps we can collectively agree to hold off on the anthropomorphizing for a bit longer.
Still, there's plenty of work for GenAI to do behind the scenes for content creators, including those who work in Pro AV. At best, ChatGPT offers a starting point, not a finished product. Brainstorming buddy? Sure. Credible source you'd find in a bibliography? Not quite yet.