Fortune
Frederik Hvilshøj

Why we can’t open-source a solution to A.I.’s ethical issues

OpenAI CEO Sam Altman (Credit: Win McNamee—Getty Images)

While open-source code has revolutionized the world of technology, recent developments like the rise of foundation models, accelerated investment in artificial intelligence, and an escalating geopolitical A.I. arms race have forced the open-source community to confront the ethical issues surrounding its code.

Potential intellectual property violations, the perpetuation of bias and discrimination, privacy and security risks, power dynamics, and governance issues within the community, as well as the environmental impact, are all ethical issues that need addressing.

These issues have kickstarted a debate on whether a move from an open-source movement to an ethical-source one could be the solution. Many developers have advocated for licenses (such as the Hippocratic License) that put ethical restrictions on the use of open-source code. Others point instead to government regulators.

When it comes to machine learning models, there are a lot of unknown unknowns. The developers of these models must now decide whether or not to open-source them. But for developers to predict all the possible use cases of a machine learning model is as impossible as it would have been for the mathematicians at Bletchley Park to predict all the potential use cases of the computer. Most developers recognize that by making their code open, they lose control of how it’s used–and who uses it.

Licenses are increasingly heralded as a solution to this problem. However, restricting the use of open-source code with additional licenses not only contradicts the open-source community’s core principle that code should be accessible to everyone, but could also damage the collaborative environment that has been fundamental to the community’s ability to speed up technological development.

There’s also a lot of doubt as to whether ethical licenses will actually reduce the risk of code being used for nefarious purposes. Many countries already have human rights laws, and individuals or organizations violating those laws should be prosecuted accordingly, regardless of the method or technology used to perpetrate the abuse. If such laws do not deter these violators, then it’s unlikely that a licensing agreement would have any impact on the course of their actions.

A.I. systems are complex, which makes enforcing their ethical use burdensome. The rapid advancement of A.I. technology also makes it difficult to keep up with new developments and their potential ethical implications. Additionally, a lack of transparency and accountability in A.I. systems can make it hard to hold organizations responsible for ethical violations. Addressing these challenges requires ongoing collaboration, dialogue, and investment in ethical research and development to ensure that A.I. is used in a way that aligns with societal values and promotes the greater good.

The burden of ethical use should rest with those who use open-source code to build A.I. products, not with those who write the code. That’s why government regulation is key to ensuring the ethical use of A.I. Regulation would require ethical use to be defined rigorously and would create bureaucratic structures for evaluating A.I. systems.

Similar to how governments regulate and scrutinize medical products before approving them for public consumption, governments could also ensure that A.I. passes certain tests before it’s released to the public. The onus should be on governments, armed with ample resources to investigate these tools, to take responsibility for these tests, rather than on developers, who can instead focus on building more advanced A.I.

Any company, organization, or individual should have to provide clear details about the properties and broader impacts of a model, including the data used to train it and the code used to develop it, and should disclose any potentially harmful applications before making the model available for use. This approach is not so different from the submission process at many large academic conferences, which require authors to transparently address the ethical issues and broader impact of their work.

In general, evaluating models to the point where users can be certain that they operate ethically and correctly all the time remains an open problem in A.I. Even OpenAI has struggled to figure out how to do it with ChatGPT. The good news is that extensive research into the fairness, accountability, and transparency of A.I. models means governments already have many tools from which to begin building regulatory frameworks for A.I.

The European Union introduced a proposal for a new legal framework on A.I. in 2021, which aims to ensure that A.I. is developed and used in a way that aligns with EU values. The United States has established the National Artificial Intelligence Initiative Office (NAIIO) to coordinate federal investments in A.I. research and development. Canada has tasked the Canadian Institute for Advanced Research (CIFAR) with leading its national A.I. strategy, funding interdisciplinary research and developing ethical and technical standards for A.I. Singapore has introduced the Model AI Governance Framework to provide guidance on the responsible development and use of A.I.

By taking a multi-stakeholder approach and investing in ethical research and development, governments can ensure that A.I. is developed and used in a way that aligns with societal values and promotes the greater good.

We are moving into an era in which decisions are being made for and about individuals using algorithmic processes that do not have human involvement. Individuals have a right to an explanation about how these A.I. systems reached those decisions, and that explanation depends on having a transparent process. 

While the movement for ethical-source licenses comes from a place of strong principles and positive intentions, its effects are limited by a lack of enforcement mechanisms, limited scope, low awareness, competing licensing options, and a lack of standardization. The approach also demands a reactive legal effort: licensors would have to monitor every use of their open-source code, which is impractical. Governments, on the other hand, have the opportunity to protect the public from the potential negative impact of A.I. systems–and they can do so through proactive regulation and enforcement.

Frederik Hvilshøj is the machine learning lead at Encord.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

