The Feats, Potential, and Ethical Issues of Brain-Computer Interfaces

(This paper is a copy of a research paper submitted to my high school English class. To facilitate reading, I have removed some in-text citations, except where text is directly quoted from the source. If you would like to see the paper with the in-text citations, please feel free to contact me.)

According to the World Health Organization, 250,000 to 500,000 people suffer a spinal cord injury globally every year. Among them is Nathan Copeland, a man in his thirties who is tetraplegic, paralyzed in all four limbs. With a brain-computer interface, he can control a prosthetic device, fist-bump President Barack Obama, move objects around, and, most impressively, experience sensations from his own hand. Researchers are currently developing brain-computer interfaces (BCIs) such as these to restore to the disabled the ability to interact with the world around them. However, BCIs raise several evident and speculative concerns that should be addressed before they enter common usage. Such ethical issues include the potential for BCIs to be developed and disseminated irresponsibly; to cloud the user’s sense of identity and responsibility for errors; and to be susceptible to threats that could cause harm and deepen existing inequality. Overall, this paper aims to describe the advancements made in the development of brain-computer interfaces and their path forward, particularly those implanted in the brain. In contrast to the many promising and impressive aspects of this rapidly emerging technology, it will also examine several potential ethical issues surrounding all forms of BCI.

The Feats and Trajectory of Brain-Computer Interfaces

Definition and examples of brain-computer interfaces

With the help of brain-computer interfaces, a person’s own neuronal signals can directly control the behavior of an output device, such as a prosthetic arm. The interfaces work to sense, interpret, and react to neuronal data. For instance, when an individual imagines moving his hand, the neural activity is detected, interpreted as a particular movement, and then executed as a movement command. Bi-directional BCIs both “read” imagined brain signals and “write” information, such as sensory input, back into the brain. BCIs can be invasive or noninvasive and are used in both the medical field and the consumer market. In the medical field, they act as rehabilitative or assistive devices; for general consumers, they can be used for gaming, entertainment, and enhancement. Noninvasive BCIs, such as electroencephalogram (EEG)-based BCIs, are safe, easy to use, and inexpensive. However, they lack accuracy, as electrical signals from the brain are “significantly attenuated in the process of passing through the dura, skull, and scalp” (Shih et al), the layers beneath which invasive, implanted, intracortical BCIs normally sit. Rehabilitative BCIs are used when a person needs to relearn a movement by stimulating neuroplasticity, whereas an assistive BCI acts as an “alternative for a user’s lost function” (Vlek et al).

A notable example of a BCI is the cochlear implant, which restores hearing in people whose hearing loss stems from auditory nerve damage. Using a microphone and a processing system to transmit the signal to the implant, it “sends pulses of current into nearby nerves that the brain interprets as sound” (Juskalian). Another potential candidate for BCI is synthetic speech generated from brain recordings. Edward Chang, a researcher at the University of California, San Francisco, and his team conducted an experiment on participants with intact speech who were undergoing epilepsy surgery. Using reverse engineering and machine learning algorithms that mimic human speech, Chang was able to “demonstrate that we can generate entire spoken sentences based on an individual’s brain activity” (Weiler). For now, according to Sharlene Flesher, a brain-computer interface researcher at Stanford, this technology is not considered a BCI: the data is fed into the model offline rather than decoded in real time.

This section of the paper, in particular, will mainly focus on invasive, intracortical brain-computer interfaces for assistive uses, given the many successful strides taken in this specific realm of BCI technology.

How do intracortical BCIs work in practice and in training?

In BCIs designed to control movement, for instance, the neurons of the motor cortex are primarily used to translate thoughts into movement. With precision, researchers implant Utah arrays, microarrays of one hundred electrodes that sense the individual output activity―the action potentials―of a small population of neurons in that area. This neuronal activity corresponds to the user’s imagined movement.

Turning mere thought into action requires a device that decodes the neuronal activity and feeds it into a machine learning algorithm. The patterns are matched with corresponding actions―the movement of a cursor or a prosthetic. The result is movement performed as imagined. Improving the algorithm requires practice and concentration so that the model can recognize the unique signature of the user’s brain. Each training session requires fresh calibration, since “as many as thirty percent of the cells could differ from the previous session” (Khatchadourian). Training is often done through observational learning: the person watches a robot performing several functions, and his or her brain unconsciously responds as if it were doing those movements itself.
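The decoding step described above can be illustrated with a minimal sketch: a linear model that maps the firing rates recorded on one hundred electrodes to a two-dimensional cursor velocity. This is only a toy illustration with simulated data, not the actual algorithm used in the studies cited here; all variable names, dimensions, and the choice of ridge regression are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated calibration data: firing rates from 100 electrodes (a stand-in
# for a Utah array) paired with the intended 2-D cursor velocity.
n_samples, n_channels = 500, 100
true_weights = rng.normal(size=(n_channels, 2))
rates = rng.poisson(lam=5.0, size=(n_samples, n_channels)).astype(float)
velocity = rates @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

# Fit a linear decoder (ridge regression) mapping firing rates -> velocity.
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_channels),
                    rates.T @ velocity)

# "Online" decoding: a new burst of neural activity becomes a movement command.
new_rates = rng.poisson(lam=5.0, size=(1, n_channels)).astype(float)
command = new_rates @ W  # shape (1, 2): x- and y-velocity for the cursor
```

The need for per-session recalibration mentioned above corresponds to refitting `W` at the start of each session, since the recorded cell population shifts between sessions.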

Improving these models requires knowledge of how we actually move. For instance, Andrew Schwartz, a BCI researcher and neurologist at the University of Pittsburgh, details that “when we want to do something, [...] our arms rapidly begin the gesture before our brains can make visual sense of what we are doing” (Khatchadourian). The brain then refines the movement, often with the help of our senses. Thus, studying how we actually move, how the brain acts in each stage of motion, and what role the senses play in movement is critical to developing a functional assistive BCI. Copeland was also equipped with an additional Utah array embedded in his somatosensory cortex, forming “a closed circuit with the robot,” his output device (Khatchadourian). With this setup, the BCI “[channels] sensory information directly into the brain,” providing feedback to refine his movement (Khatchadourian). When grasping an object, “he can pick something up that’s soft and not squash it or drop it,” demonstrating that users could regain the sense of touch and move more gracefully and with greater dexterity (Nutt).

Trajectory and moonshot goals of brain-computer interfaces

Many achievements in BCI, however, remain confined to the lab. Along with the risk of infection, invasive intracortical brain-computer interfaces can cause a buildup of scar tissue that gradually degrades the quality of the recorded signal. Leigh Hochberg, a co-director of BrainGate’s Brown University division, claims that “a system that patients can use around the clock that reliably provides complete rapid, intuitive brain control over a computer does not yet exist” (Corbyn). Sharlene Flesher indicates that because of the fairly small population BCIs would serve, insurance companies are reluctant to grant coverage, and many biomedical and technology companies are resistant to developing BCI technologies. It takes startups like Neuralink to jumpstart the field, not only because of the unique approaches the company is taking, but also because its success would mean wonders for the entire neuroscientific community, particularly those working toward BCI. Neuralink, in particular, aims to implant far more electrodes, recording from far more neurons, and plans to use a surgical robot to sew in its electrodes so as to avoid piercing critical blood vessels.

The moonshot goal of assistive BCI is to enable a person to control his or her real arm wirelessly and for prolonged periods, as a potential treatment for spinal cord injuries. Progress has already been made with an implanted device that requires the patient only to imagine the movement; the system then stimulates the necessary muscle groups, allowing them to move. By making “a complete model of an arm, so all the muscles behave like the muscles in your arm” and hooking it up to stroke patient Cathy Hutchinson, researchers were able to have her move the animated arm (Venkataramanan), an impressive and profound result.

The Potential Ethical Issues Surrounding All Brain-Computer Interfaces

Having described the most promising form of brain-computer interface under present development, this paper now delves into the ethical concerns surrounding BCIs. These differ depending on the BCI’s intended use and application, but range from issues relating to ethical research and users’ informed consent, to the potential change of a user’s identity and the ambiguity in assessing accountability when issues arise. Other concerns include susceptibility to breaches of security and privacy, as well as inequality of access to BCIs that may deepen social inequities.

Ethical research and dissemination of brain-computer interfaces

A problem surrounding brain-computer interfaces is the issue of ethically and responsibly researching and disseminating them. It is important for researchers and subjects alike to distinguish between medical treatment and research, and to ensure that adequate consent is given. The process of consent is complicated by those who are “locked in,” or unable to connect with the world around them, who form a considerable part of the target population. The question, then, is how they would accept or deny the opportunity to receive a brain-computer interface.

Some patients tend to confuse medical treatment with research, viewing the opportunity to participate as a chance to better their lives, regardless of the risks involved and the chance that the brain-computer interface may not work in their case. Currently, BCIs have a 15–30% failure rate (Burwell et al). Yet media coverage often describes the device as a “mind reading” technology and a “cure” (Burwell et al). Such portrayals make a device that is still undergoing research and development seem more advanced than it really is. When subjects “fail to distinguish between clinical care and research, and to understand the purpose and aim of research,” they can “[misconceive] their participation as therapeutic in nature” (Vlek et al). Oftentimes, those with “severe disabilities” may look toward brain-computer interfaces and “[accept] risks in hopes of bettering their lives ‘out of desperation’ or ‘at last resort’” (Burwell et al). Coupled with the possibility that the technology may not work all the time, these patients may hold high expectations and could be susceptible to depression when those expectations are not met. Therefore, drawing a clear line between research and treatment, and acknowledging that the technology may not live up to one’s expectations, is imperative at this stage of development.

Furthermore, it is often disputed whether patients have given adequate consent to participate in brain-computer interface research and treatment, especially those who are locked in and cannot speak. These “non-communicative patients” have “significantly impaired capacity to consent” (Burwell et al). In such cases, a “surrogate decision maker or legal representative is needed to represent the participant” (Vlek et al). Who that individual is differs by country, making it hard to reach international agreements compatible with each nation’s laws. For example, in the United States and the Netherlands, a family member serves as the surrogate, whereas in Germany, the surrogate is a “neutral” person. Both approaches present potential issues. A “neutral” decision maker may not understand the individual and therefore may not accurately reflect the patient’s views. On the other hand, though a family member may know the patient better and understand what the individual wants, a conflict of interest may cloud the representative’s judgment. The family member may be overly concerned about the frustrations the individual would endure in accepting the BCI. Conversely, having witnessed how painful the patient’s condition is, the family member might be overly desperate for the patient to accept the BCI.

Fortunately, we can draw from existing frameworks to resolve these potential issues, like the Belmont Report, which was created to define the boundary between research and medical practice. It outlines ethical principles including respect for the person’s views and intentions; beneficence, the duty to minimize harm and maximize benefit; and justice, the duty to fairly distribute the benefits of a treatment. These principles should be applied in the BCI context, especially to the issues of informed consent, risk-benefit analysis, and subject selection. BCI development is a hybrid of medical practice and research because it assesses “the safety and efficacy of a therapy” (“The Belmont Report”). In this case, the research methodology should be reviewed to ensure that it is ethical and that the subject is protected. During BCI trials, researchers have the responsibility to provide the information a subject needs to continue with the study―its risks, benefits, and side effects, for instance (“The Belmont Report”). When information must be withheld to avoid invalidating results, a proper debriefing procedure must be established (“The Belmont Report”). Researchers must also do everything possible to help the user comprehend any given information, with a surrogate acting in the user’s best interest present (“The Belmont Report”). The individual must be given the opportunity to observe the procedure and the choice to stop it at any moment (“The Belmont Report”). Ultimately, the Belmont Report established the profound precedent of institutional review boards (IRBs), which could be used to approve research proposals for brain-computer interface development in the future.

Changing the individual and complicating accountability

A common misunderstanding about brain-computer interfaces is that they could turn individuals into cyborgs, whose “capabilities are extended beyond normal human limitations by a machine” (Burwell et al). This idea may sound extreme, yet some authors indicate that “humans are already intricately linked to their technologies” (Burwell et al). Additionally, given the tremendous benefits that BCIs may bring to someone’s life, “to engage in more technology use, or even a ‘cyborgization’ is [...] seen as an opportunity and not a threat” (Kögel et al). Nevertheless, this section brings into focus how brain-computer interfaces may affect the individual, as well as how accountability would be attributed when issues arise.

The brain gives us our sense of self and personhood, but BCIs raise questions about autonomy because they “directly modulat[e] the brain,” says Hannah Maslen, a neuroethicist at the University of Oxford. Some authors stress that BCIs “can distort patients’ perceptions of themselves” as well as their personalities (Drew). A patient undergoing deep-brain stimulation, a similar form of therapy, “started to gamble compulsively, blowing his family’s savings and seeming not to care,” only to “understand how problematic his [behavior] was when the stimulation was turned off” (Drew). This potential for brain-computer interfaces to change the way a person acts, reflecting thoughts that do not correspond with an authentic self, deserves careful examination. Others note, however, that “identity fluctuates naturally” and can be changed by taking medication, “having a glass of wine or going on vacation,” and that “the chronic illness [the patient] face[s] has already created many radical identity changes” (Burwell et al).

What should be remembered, however, is that brain-computer interfaces could restore the communication abilities of locked-in patients. This creates hope for the restoration of their identity, especially considering that “the disorders themselves undermine the autonomy of the individual by inhibiting the ability to act on one’s own desire” (Burwell et al). Interviews with several brain-computer interface users revealed that they had begun to feel a sense of normality. Additionally, engaging in research may be a “meaningful occupation that also include[s] being part of a team, receiving social recognition for what they [are] doing and taking part in public life” (Kögel). Taking part often makes them pioneers in this field, allowing them to actively contribute to society. It “[reconnects] users with their former life experiences, gives them self-esteem or empowerment” (Kögel), granting them increased independence and leading to an improved quality of life. The ability to contribute to society, and to be acknowledged for one’s efforts, is a characteristic of human life that defines personhood and establishes, or restores, one’s identity.

Another concern is the question of blame when issues arise. Attributing responsibility to a single agent, be it the human or the interface, is problematic because the source of wrongdoing is ambiguous. Some authors argue that BCI users should be held responsible much as we hold drivers responsible, comparable to “the responsibility a parent has for the actions of their child” or that “a dog-owner has for the actions of their dog” (Burwell et al). On this view, the user should be held responsible for errors caused by use of the BCI, because the user ultimately chose the BCI as his or her treatment. Others argue that responsibility could instead be attributed to flaws in the technology. However, if people trust and rely on their devices too much, they may come to believe that any wrongdoing is intentional, even when it is not. According to one user, the device “became [her],” and when the company that supplied her BCI went bankrupt, she “lost [herself]” (Samuel). Another patient became depressed because “[the device] made [him] feel [he] had no control” (Samuel). Individuals oftentimes “opt to see themselves as agents” (Kögel), but there is a sort of “hybrid agency” (Samuel) in which “part of the decision comes from the user, and [another] comes from the algorithm of the machine” (Drew). One user shares this worry: “you just wonder how much is you [...] how much of it is my thought pattern? How would I deal with this if I didn’t have the stimulation system? You kind of feel artificial” (Drew). Sometimes, when BCIs capture the intention of the central nervous system, they may be acting on what were merely subconscious thoughts that our “natural peripheral checks” (Burwell et al) would have “censored” (Drew). This makes “the authorship of a message or movement [...] ambiguous” (Samuel).
Additionally, the software of the BCI may be too complicated for authorities and individuals to understand, which can introduce an unknown and “unaccountable process” in the inner workings of a BCI (Drew). As a result, several factors widen the accountability gap and make attributing responsibility profoundly difficult.

The potential of brain-computer interfaces to cause harm

BCI use may be troubling for some, especially when users are unaware of the extent to which information is obtained from the brain, who owns that information, and what others can do with it. There is the potential for some organization to “mind read,” “decode thoughts,” and unearth our innermost secrets (Corbyn). Breaches in privacy may also lead to hijacking, as well as unfair use in the criminal justice system, in courts or during surveillance. These possibilities are examined below.

One case study features a flight controller named Thomas who was equipped with a BCI. His supervisor was able to see that Thomas “most likely had been drinking alcohol” (Vlek et al) one night, and Thomas received a stern lecture from his boss. This illustrates that in the near future, some employers may coerce their employees into using a BCI device, with the user unable to discern what data will be revealed to whoever has access to it. Using the device, an employer could engage in “workplace discrimination” without the employee’s knowledge. But what happens if the entity with access to the data is a corporation? Facebook is currently developing its own BCI, which infers brain activity by measuring blood flow within the brain. The information gathered could reveal profound truths about an individual. Given the recent Cambridge Analytica scandal in which Facebook was involved, many are wary that a company that has historically abused user data is developing such a device. Roland Nadler, a neuroethicist at the University of British Columbia, worries that unless the manufacturer adequately reveals how the data is used, there is no telling what will happen to the neural data. If applied to the criminal justice system, law enforcement could be empowered to peer into the brains of suspects and use such evidence in court. This could disproportionately harm people of color, as machine learning algorithms could introduce implicit bias in yet another form. The same biases that caused “algorithms used by US law-enforcement agencies [to] wrongly predict that black defendants are more likely to reoffend than white defendants with a similar criminal record” (Yuste et al) could be embedded in neural devices, deepening existing prejudices. Furthermore, malicious actors may engage in “brainjacking,” as has happened in the past with pacemakers.
These malicious actors may “cause the BCI device to malfunction or allow it to be manipulated such that it harms the user” (Burwell et al). Relating to the previous discussion about accountability, “the potential for a BCI device to be hacked, and thus perform actions created by a third party, could impede the ascription of responsibility” (Burwell et al).

Lastly is the issue of justice: in research and development, training a small population of individuals to use a BCI may divert resources from the public, since “improving a system for a single user does not necessarily improve BCI for other users” (Vlek et al). Patients may have a right to experimental treatments, but honoring that right can skew the balance between individual and public welfare. Additionally, one author asks, “are [BCIs] intended for mass consumption, or should they be restricted to human users who have identifiable impairments” (Yuste and Goering). Once brain-computer interfaces enter mainstream use, the device could “enhance a healthy user’s capabilities beyond ‘normal,’” which “may create social stratification or unfairness between coworkers” (Burwell et al). Furthermore, as brain-computer interfaces become outdated, some people will have newer devices that outperform the older BCIs of others, or outperform those with none at all, especially when the costs are too high for some to afford, creating greater social inequality.

Nevertheless, the public should have a clear voice in shaping how this technology is used. BCIs should be equipped with neurosecurity to prevent others from taking advantage of users’ brain data. Similar to how organ donors must opt in, users should “explicitly opt in to share neural data from any device” (Yuste et al). Additionally, several “computational techniques...deployed to protect people’s data,” such as federated learning, could help. Google is currently developing this type of learning, which aims to improve machine learning algorithms while keeping user data private: the “teaching process” occurs “locally on each user’s device,” and the resulting “lessons[...]are sent back to Google’s servers” (Yuste et al). Governments should also “set limits on the augmenting neurotechnologies that can be implemented, and to define the contexts in which they can be used” (Yuste et al). All these methods and more could help prevent BCIs from exacerbating inequality, exploiting data, and causing harm.
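The federated learning idea described above can be sketched in a few lines: each “device” trains on its own private data, and only the updated model parameters, never the raw data, are sent back to be averaged by a central server. This is a toy simulation of federated averaging on a linear model, not Google’s actual system; all names, data, and parameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, steps=10):
    """One client's training round: gradient descent on its own data.
    Only the updated weights w -- never X or y -- leave the device."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three simulated "devices", each holding private data from the same
# underlying linear model y = X @ true_w + noise.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

# Federated averaging: the server broadcasts the global weights,
# each client trains locally, and the server averages the results.
w_global = np.zeros(2)
for _ in range(20):
    updates = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(updates, axis=0)

print(w_global)  # converges close to true_w without pooling any raw data
```

The design choice that matters for the privacy argument is the boundary in `local_update`: the server only ever sees weight vectors, so the “lessons” travel while the neural data stays on the user’s device.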