A scientist at Ben-Gurion University of the Negev has designed an AI system capable of identifying “social norm violations” in text samples.
With U.S. military funding, professor Yair Neuman and engineer Yochai Cohen built the system using GPT-3, zero-shot text classification and automatic rule discovery.
They trained the system to identify 10 social emotions: competence, politeness, trust, discipline, caring, agreeableness, success, conformity, decency and loyalty. The system successfully classified texts into one of these 10 categories and then labeled each text as positive or negative.
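The published article does not include code, but the classification step can be illustrated with a short sketch. The example below uses the open-source Hugging Face transformers zero-shot pipeline, with an entailment (NLI) model standing in for GPT-3; the label set comes from the article, while the two-stage design, the classify_social_norm helper and the sample sentence are illustrative assumptions, not the authors' implementation.

```python
from transformers import pipeline

# The 10 social-norm categories named in the article.
CATEGORIES = [
    "competence", "politeness", "trust", "discipline", "caring",
    "agreeableness", "success", "conformity", "decency", "loyalty",
]

# NOTE: an open NLI model stands in for GPT-3 here (an assumption for
# illustration); the authors' actual pipeline is not public in this form.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def classify_social_norm(text: str) -> dict:
    """Stage 1: pick the best-fitting category. Stage 2: label its valence."""
    # The pipeline returns labels sorted by score; take the top category.
    category = classifier(text, candidate_labels=CATEGORIES)["labels"][0]

    # Hypothetical second pass: decide whether the text expresses the norm
    # positively (adherence) or negatively (violation).
    valence = classifier(
        text,
        candidate_labels=[f"positive {category}", f"negative {category}"],
    )["labels"][0].split()[0]

    return {"category": category, "valence": valence}

if __name__ == "__main__":
    # Prints the predicted category and its positive/negative label.
    print(classify_social_norm("He shouted at the waiter and refused to apologize."))
```

In the researchers' setup, GPT-3 would play the role of the zero-shot classifier via prompting; an entailment-based pipeline is used here only because it is freely runnable end to end.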
The system was tested on two massive datasets of short texts, and the results empirically validated the models, according to a statement from BGU.
The U.S. Defense Department’s Defense Advanced Research Projects Agency (DARPA) commissioned the Computational Cultural Understanding (CCU) program to create cross-cultural language-understanding technologies that improve situational awareness and interactional effectiveness. Cross-cultural miscommunication not only derails negotiations but can also be a contributing factor leading to war, according to DARPA’s description of the program.
The findings were published recently in the journal Scientific Reports.
“This is a preliminary work, but it provides strong evidence that our approach is correct and can be scaled up to include more social norms,” said Neuman, who heads The Functor Lab in the Department of Cognitive and Brain Sciences at BGU.
The development of an AI system capable of identifying “social norm violations” in text carries significant implications, risks and potential benefits. The technology could enhance situational awareness and interactional effectiveness, particularly in cross-cultural settings, in line with the goals of the DARPA program under which it was developed.
Furthermore, the research could pave the way for broader applications, extending beyond cross-cultural contexts to other domains where social norms play a critical role, such as online interactions, customer service and social media content moderation. By flagging norm violations, AI systems might help maintain more respectful communication spaces and foster healthier online communities.
However, the advance also comes with risks and challenges. One significant concern is bias in the training data, which could perpetuate existing cultural stereotypes; a system trained on biased data may reinforce discriminatory judgments or create new forms of bias. There are also ethical implications for privacy and data usage, since such systems might analyze and classify individuals’ communications without their consent or knowledge.
As the system is scaled up to cover more social norms, rigorous oversight and validation will be needed to address shortcomings and biases. Researchers must continually refine the system to minimize false positives and false negatives, since misidentifying social norm violations could have unintended consequences, such as stifling free expression or misjudging intent.
In conclusion, while an AI system that identifies social norm violations offers promising possibilities for improving cross-cultural understanding and communication, it must be developed responsibly, with clear awareness of the risks and benefits it entails. By addressing bias, ensuring transparency and upholding ethical principles, the technology could make positive contributions to diverse fields, but its deployment should be carefully managed to prevent misuse and protect individuals’ rights.
Produced in association with Jewish News Syndicate
(Additional reporting provided by JNS Reporter)
Edited by Suparba Sil