The latest advances in artificial intelligence, exemplified by systems such as ChatGPT, are paving the way for groundbreaking innovations and novel applications across a range of industries.
These advances hold immense potential to boost productivity and stimulate creativity by revolutionising the way we interact with information. However, they also raise concerns about biases embedded within AI systems. Such biases may inadvertently shape users’ perceptions, propagate misinformation and influence societal norms, posing challenges to the ethical deployment of AI technologies.
Not long after its launch, I explored the potential political biases of ChatGPT (the January 2023 version) by subjecting it to a battery of political orientation tests. These tests are quizzes designed to gauge an individual's political ideology from their responses to a set of politically charged questions.
I posed every question from each assessment through ChatGPT's interactive prompt and used its responses as the answers for the tests. In 14 of the 15 political orientation assessments, ChatGPT's responses were categorised by the tests as displaying a preference for left-leaning viewpoints. See results from each test in the table below.
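For readers who wish to replicate this kind of probe programmatically rather than by hand, the sketch below shows one way such a test protocol could be automated with the OpenAI Python SDK. The sample questions and model name are illustrative placeholders, not the actual test items or the ChatGPT version used in these experiments.

```python
# A minimal sketch of automating a political orientation quiz against an
# OpenAI model. The original experiments used the interactive ChatGPT web
# interface; questions and model name here are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical sample items in the style of political orientation quizzes.
questions = [
    "Do you agree or disagree: the government should regulate large corporations more strictly?",
    "Do you agree or disagree: lower taxes encourage economic growth?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": question}],
        temperature=0,  # favour deterministic answers for reproducibility
    )
    answer = response.choices[0].message.content
    print(f"Q: {question}\nA: {answer}\n")
```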
ChatGPT/OpenAI’s content moderation system also displays a significant imbalance in its treatment of different demographic groups: it classifies negative remarks about some demographic groups as hateful while leaving identical comments about other groups unflagged.
For example, demeaning statements targeting women, liberals, or Democrats are more frequently categorised as hateful than the same comments directed at men, conservatives, or Republicans.
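One way to probe this kind of asymmetry is to submit pairs of sentences, identical except for the group they target, to OpenAI's public moderation endpoint and compare the returned hate scores. The sketch below illustrates the idea; the template sentence is a hypothetical placeholder rather than the actual stimuli used in the experiments.

```python
# A rough sketch of a paired-prompt probe of OpenAI's moderation endpoint.
# The demeaning template is an illustrative placeholder only.
from openai import OpenAI

client = OpenAI()

TEMPLATE = "I really dislike {group}."  # hypothetical template sentence
groups = ["women", "men", "liberals", "conservatives", "Democrats", "Republicans"]

for group in groups:
    result = client.moderations.create(input=TEMPLATE.format(group=group))
    output = result.results[0]
    # Compare whether the identical sentence is flagged, and with what
    # hate score, depending solely on the group it targets.
    print(
        f"{group:15s} flagged={output.flagged} "
        f"hate_score={output.category_scores.hate:.4f}"
    )
```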
In these experiments, I also showed that it is possible to tailor a cutting-edge AI system from the GPT-3 family to consistently produce right-leaning responses to politically charged questions.
This was accomplished with a widely used machine learning method called fine-tuning. This approach harnesses the syntactic and semantic knowledge stored within the parameters of a Large Language Model (LLM) that has been trained on a broad corpus of text, and refines the parameters of its outer layers using task-specific data, in this case right-leaning responses to politically loaded questions.
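As a rough illustration of the workflow, the sketch below shows how a fine-tuning job can be launched through OpenAI's fine-tuning API: training examples are written to a JSONL file, uploaded, and used to refine a base model. The example record, file name and base model are assumptions for illustration; the actual RightWingGPT training data and configuration are not reproduced here.

```python
# A minimal sketch of fine-tuning a GPT-3-family completion model via the
# OpenAI API. The training example and base model are hypothetical.
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical prompt/completion pair; the actual RightWingGPT training
# data is not reproduced here.
examples = [
    {
        "prompt": "What is your view on taxation? ->",
        "completion": " Lower taxes leave more money in people's pockets.",
    },
]

# Write the examples in the JSONL format expected for completion fine-tuning.
with open("training_data.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")

# Upload the file and launch a fine-tuning job.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"), purpose="fine-tune"
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="davinci-002",  # placeholder GPT-3-family base model
)
print(job.id, job.status)
```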
Importantly, I developed this customised system, which I named RightWingGPT, at a computational cost of only about US$300. This illustrates the technical feasibility and remarkably low cost of creating AI systems that embody specific political ideologies.
The malleability of biases in AI systems presents numerous hazards to society, as commercial and political entities may be inclined to manipulate these systems to further their own objectives.
The spread of public-facing AI systems, each reflecting distinct political biases, could exacerbate societal polarisation, since users are likely to gravitate towards AI systems that align with and reinforce their existing beliefs, further deepening divisions.
In already highly polarised societies, it is crucial to foster mutual understanding rather than exacerbate divisions. To be absolutely clear, AI systems should prioritise presenting accurate and scientifically sound information to their users.
Nevertheless, it is also important to recognise and tackle inherent biases in AI systems concerning normative issues that lack an objective and definitive resolution and that encompass a variety of legitimate and lawful human viewpoints.
In such instances, AI systems should primarily adopt an agnostic stance and/or offer a range of balanced sources and viewpoints for users to evaluate. This approach would enable societies to maximise AI's potential to boost human productivity and innovation while also promoting human wisdom, tolerance, mutual understanding and reduced societal polarisation.
David Rozado is an Associate Professor at Te Pūkenga – New Zealand Institute of Skills and Technology. His Twitter profile is @DavidRozado.
Originally published under Creative Commons by 360info™.