Artificial intelligence has become a disruptive force in society. Terms such as machine learning, deep learning and neural networks have become commonplace in mainstream media, eliciting visions of innovation with the potential to change our lives.
At its core, AI attempts to mimic the capabilities of the human brain. Whether it’s computer vision, which focuses on how computers understand the visual world, or natural language processing, which focuses on how computers recognize and interpret written text, the list of possibilities for AI use continues to grow.
Take, for example, aviation security. Many people will pass through security checkpoints at airports while traveling during the holiday season. The Transportation Security Administration will process as many as 2.5 million people at airport checkpoints on some of the peak holiday travel days.
The TSA’s responsibility is to protect the nation’s air system from malicious activity. Airport security involves many layers. Screening, for instance, uses various technologies to meet several objectives, such as validating a person’s identity and detecting any threat items that a traveler may attempt to bring onto a flight.
The output of screening devices must be read and interpreted by TSA officers, and humans make mistakes. To reduce the impact of human error, the TSA is working to use AI to improve the detection process.
However, the hope for AI in airport security is more far-reaching.
Employing AI to determine intent from behavior, appearance and speech could have enormous practical impact and benefits.
AI systems that could measure human intent would simplify airport security operations, effectively reducing the need for threat item detection.
The TSA already does something similar by offering expedited screening lanes to travelers enrolled in TSA PreCheck. An AI system that could assess intent among all travelers would be a quantum step forward in transforming airport security operations and procedures. With such a system, screening would be limited to a small subset of travelers, with most people passing through security checkpoints with little or no physical screening.
There are several challenges with designing and implementing such an AI system for aviation security. The first is creating the models and algorithms that process data and produce the required insights. Another is how AI systems make decisions and the inevitable false alarms and false clearances that come with them. Even the most skilled and knowledgeable humans make such errors. No AI system will be completely immune to them either, though their source will be the design and implementation of the models and algorithms.
A third issue is privacy. If an AI system can capture traveler intent, would that cross a line? Would it be classified as an invasion of personal space, even toward a positive end? That is why the TSA PreCheck program is voluntary, not mandatory: Participants must subject themselves to background vetting to qualify.
Perhaps most critically, the ethics surrounding the design of AI systems must be addressed. How an AI system incorporates ethics in its creation and implementation affects how it is received, perceived and adopted.
This challenge perhaps presents the greatest headwind for AI advancement in our nation. It could also be how other countries, with differing ethical standards, move past the United States in this area.
Investment in AI continues across the globe. The potential competitive advantage offered by AI is enormous. Yet transitions from the research lab to practice will remain choppy and uncertain, which will help ensure that progress is measured, methodical and slow. The U.S. must persist in its pursuit, however, given the worldwide competition and the need to retain a foothold in the AI arms race.
We are not likely to find an AI system in place at airports anytime soon that will measure human intent. However, the thought that it may be possible is what makes AI the disrupter and game-changer that demands everyone’s attention.
Indeed, the AI genie is out of the bottle, and where it takes us is a story that continues to be written.
____
ABOUT THE WRITER
Sheldon H. Jacobson is a professor of computer science at the University of Illinois at Urbana-Champaign. He employs his expertise in data-driven, risk-based decision-making to evaluate and inform public policy. He has studied aviation security for more than 25 years.