
The ethics of mind-reading

Students at a school in China were reportedly asked to wear American devices that monitored their concentration levels. Photo: Thomas Galvez

Tapping into someone’s thoughts may soon be technologically possible. Our institutions are ill-equipped to deal with the resulting human rights violations, writes Jared Genser, in part one of a series on brain-machine interfaces

It was once science fiction, but brain-machine interfaces — devices that connect a person’s brain to a computer, machine, or another device such as a smartphone — are making rapid technological advances.

In science and medicine, brain-machine interfaces have revolutionised communication and mobility, helping people overcome immense mental and physical challenges. They have helped a man who is paralysed and non-verbal to communicate at a rate of 18 words a minute with up to 94 percent accuracy; a person who is quadriplegic to drive a Formula One race car; and a person who is paraplegic to make the first kick of the World Cup using a mind-controlled robotic exoskeleton. In the realm of consumer products, CTRL-Labs developed a wristband that lets users control a computer cursor with their minds, and Kernel’s Flow, a wearable helmet, maps brain activity with unparalleled accuracy.

While these developments are promising, brain-machine interfaces also raise new human rights challenges. Other technologies use algorithms to collect data on users’ personal preferences and location and extrapolate from it, but brain-machine interfaces offer something completely different: they can directly connect the brain to machine intelligence.

Because the brain is the site of human memory, perception, and personality, brain-machine interfaces pose challenges not only for the privacy of our minds, but also for our sense of self and for free will.

In 2017, the Morningside Group, composed of 25 global experts, identified five “neurorights” to characterise how current and future neurotechnology (methods to read and record brain activity, including brain-machine interfaces) might violate human rights. These include the right to mental identity, or a “sense of self”; the right to mental agency, or “free will”; the right to mental privacy; the right to fair access to mental augmentation; and protection from algorithmic bias, such as when neurotechnology is combined with artificial intelligence. By protecting neurorights, societies can maximise the benefits of brain-machine interfaces and prevent misuse and abuse that violates human rights.

Brain-machine interfaces are already being misused and abused. For example, a US neurotechnology startup sent wearable headbands that track brain activity to a school in China, where they were used in 2019 to monitor students’ attention levels without consent. Further, at a Chinese factory, workers wore hats and helmets that purportedly used brain signals to decode their emotions; an algorithm then analysed emotional changes affecting workers’ productivity levels.

Although the accuracy of this technology is contested, it sets a disturbing precedent. The misuse and abuse of brain-machine interfaces could also take place in democratic societies. Some experts fear that non-invasive brain-machine interfaces (wearable devices that require no surgery) may one day be used by law enforcement on criminal suspects in the US, and they have advocated for expanding constitutional doctrines to protect civil liberties.

The rise of consumer neurotechnology underscores the need for laws and regulations that keep pace with the technology. In the US, brain-machine interfaces that do not require implantation in the brain, such as wearable helmets and headbands, are already marketed as consumer products, with claims that they support meditation and wellness, improve learning efficiency, or enhance brain health. Unlike implantable devices, which are regulated as medical devices, these “wellness” devices are consumer products subject to minimal or no regulation.

Consumers may be unaware of the ways in which using these devices can infringe their human rights and privacy rights. The data that consumer neurotechnology collects may be insecurely stored or even sold to third parties. User agreements are long and technical, and they contain concerning provisions that allow companies to keep users’ brain scans indefinitely and to sell them to third parties without the kind of informed consent that protects individuals’ human rights. Today, only some of the data in a brain scan can be interpreted, but that share will grow as brain-machine interfaces evolve.

The human rights challenges posed by brain-machine interfaces must be addressed to ensure the technology’s safe and efficacious use. At the global level, the UN Human Rights Council, a body of 47 member states, is poised to vote on and approve the UN’s first major study on neurorights, neurotechnology, and human rights. UN leadership on neurorights would generate international consensus on a definition of neurorights and galvanise new legal frameworks and resolutions to address them.

Expanding the interpretation of existing international human rights treaties to protect neurorights is another important path forward.

The Neurorights Foundation, a US nonprofit organisation dedicated to human rights protection and the ethical development of neurotechnology, published a first-ever report demonstrating that existing international human rights treaties are ill-equipped to protect neurorights. For example, the Convention Against Torture and the International Covenant on Civil and Political Rights were drafted before the advent of brain-machine interfaces; terms and legal standards such as “pain,” “liberty and security of the person,” and “freedom of thought and conscience” must be reinterpreted with new language to address neurorights. Updating international human rights treaties would also legally obligate the states that ratify them to create domestic laws protecting neurorights.

Another important step is the development of a global code of conduct for companies, which would help create standards for the collection, storage, and sale of brain data. For instance, making privacy of brain data an ‘opt-out’ default setting for consumer neurotechnology, so that brain activity is monitored only when users choose, would help protect users’ informed consent. This type of standard could easily be replicated in regulations at the national and industry levels.

Effective multilateral co-operation, national attention, and industry engagement are all needed simultaneously to address neurorights and to close “protection gaps” in international human rights law. Ultimately, these approaches will help guide neurotechnology’s ethical development and, in the process, reveal the strongest paths to preventing the technology’s misuse and abuse.

Jared Genser is an adjunct professor of law at Georgetown University Law Center, managing director of Perseus Strategies, and general counsel of the Neurorights Foundation. This article was prepared with the assistance of Stephanie Herrmann, an international human rights lawyer at Perseus Strategies and the Neurorights Foundation. Professor Genser declares no conflicts of interest.

Originally published under Creative Commons by 360info™.
