Steven Tang

The Feats, Potential, and Ethical Issues of Brain-Computer Interfaces


(This paper is a copy of a research paper submitted to my high school English class. To facilitate reading, I have removed some in-text citations, except where text is quoted directly from a source. If you would like to see the paper with the full in-text citations, please feel free to contact us.)


According to the World Health Organization, 250,000 to 500,000 people suffer a spinal cord injury globally every year. Among them is Nathan Copeland, a man in his thirties who is tetraplegic, paralyzed in all four limbs. He can control a prosthetic device, fist-bump President Barack Obama, move objects around, and, most impressively, experience sensations as if from his own hand. Researchers are currently developing brain-computer interfaces (BCIs) such as these to restore to people with disabilities the ability to interact with the world around them. However, BCIs raise several concerns, both evident and speculative, that should be considered before they can enter common usage. Such ethical issues include the potential for BCIs to be developed and disseminated irresponsibly; to cloud the user’s sense of identity and responsibility for errors; and to be susceptible to threats that could cause harm and deepen existing inequality. Overall, this paper aims to describe the advancements made in the development of brain-computer interfaces and their path forward, particularly those implanted in the brain. In contrast to the many promising and impressive aspects of this rapidly emerging technology, it will also examine several potential ethical issues surrounding all forms of BCI.


The Feats and Trajectory of Brain-Computer Interfaces


Definition and examples of brain-computer interfaces

With the help of brain-computer interfaces, a person’s own neuronal signals can directly control the behavior of an output device, like a prosthetic arm. The interfaces work to sense, interpret, and react to neuronal data. For instance, when an individual imagines moving his hand, the neural activity is detected, interpreted as a certain movement, and then executed as a movement command. Bi-directional BCIs go a step further: they both “read” out brain signals, such as imagined movement, and “write” in information, such as sensory input. BCIs can be invasive or noninvasive and used in the medical field or the consumer market. In the medical field, they act as rehabilitative or assistive devices; for general consumers, they can be used for gaming, entertainment, and enhancement. Noninvasive BCIs, such as electroencephalogram (EEG)-based BCIs, are safe, easy, and inexpensive to use. However, they lack accuracy, as electrical signals from the brain are “significantly attenuated in the process of passing through the dura, skull, and scalp” (Shih et al.), the layers beneath which invasive, intracortical BCIs are implanted. Rehabilitative BCIs are used when a person needs to relearn a movement by stimulating neuroplasticity, whereas assistive BCIs act as an “alternative for a user’s lost function” (Vlek et al.).


A notable example of a BCI is the cochlear implant, which restores hearing in people with certain forms of hearing loss. Using a microphone and a processing system to transmit signals to the implant, it “sends pulses of current into nearby nerves that the brain interprets as sound” (Juskalian). Another potential candidate for BCI is synthetic speech generated from brain recordings. Edward Chang, a researcher at the University of California, San Francisco, and his team conducted an experiment on participants with intact speech who were undergoing epilepsy surgery. Using reverse engineering and machine learning algorithms that mimic human speech, Chang was able to “demonstrate that we can generate entire spoken sentences based on an individual’s brain activity” (Weiler). For now, according to Sharlene Flesher, a brain-computer interface researcher at Stanford University, this technology relies on offline reconstruction and thus is not considered a true BCI: the data are collected and fed into the model after the fact, not decoded in real time.


This section of the paper will focus mainly on invasive, intracortical brain-computer interfaces for assistive uses, given the many successful strides taken in this specific realm of BCI technology.


How do intracortical BCIs work in practice and in training?

In BCIs designed to control movement, the neurons of the motor cortex are primarily used to translate thoughts into movement. With precision, researchers implant Utah arrays, microelectrode arrays of one hundred electrodes that sense the individual output activity―the action potentials―of a small population of neurons in that area. This neuronal activity corresponds to the user’s imagined movement.
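To make the recording step concrete, here is a minimal, illustrative Python sketch of how raw electrode voltages might be reduced to per-channel firing rates, the typical input to a decoder. The sampling rate, bin size, and simple threshold-crossing rule are assumptions chosen for illustration, not the procedure of any particular lab.

```python
import numpy as np

# Illustrative parameters; real systems differ.
SAMPLE_RATE_HZ = 30_000  # a common intracortical sampling rate
BIN_SIZE_S = 0.02        # 20 ms bins are typical in BCI decoding
N_ELECTRODES = 100       # a Utah array carries ~100 electrodes

def detect_spikes(voltage: np.ndarray, threshold_sd: float = -4.5) -> np.ndarray:
    """Flag samples where the signal crosses a negative threshold
    (a crude stand-in for real spike detection/sorting)."""
    thresh = threshold_sd * np.std(voltage)
    below = voltage < thresh
    # Count only the first sample of each crossing as one spike.
    return below & ~np.roll(below, 1)

def bin_firing_rates(voltage_per_channel: np.ndarray) -> np.ndarray:
    """Turn raw voltage (channels x samples) into firing rates (channels x bins)."""
    samples_per_bin = int(SAMPLE_RATE_HZ * BIN_SIZE_S)
    n_bins = voltage_per_channel.shape[1] // samples_per_bin
    rates = np.empty((voltage_per_channel.shape[0], n_bins))
    for ch, trace in enumerate(voltage_per_channel):
        spikes = detect_spikes(trace)[: n_bins * samples_per_bin]
        rates[ch] = spikes.reshape(n_bins, samples_per_bin).sum(axis=1) / BIN_SIZE_S
    return rates

# Example with synthetic noise standing in for one second of raw data:
rng = np.random.default_rng(0)
fake = rng.normal(size=(N_ELECTRODES, SAMPLE_RATE_HZ))
print(bin_firing_rates(fake).shape)  # (100, 50): channels x 20 ms bins
```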


To turn mere thought into action requires a device that decodes the neuronal activity and feeds it into a machine learning algorithm. The patterns are matched with corresponding actions―the movement of a cursor or a prosthetic. The result is movement performed as imagined. Improving the algorithm requires practice and concentration so that the model learns to recognize the unique signature of the user’s brain. Each training session requires fresh calibration, since “as many as thirty percent of the cells could differ from the previous session” (Khatchadourian). Training is often done through observational learning: the person watches a robot perform several functions, and his or her brain unconsciously responds as if it were doing those movements itself.
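As a rough illustration of what such a decoder might look like, the sketch below fits a simple linear (ridge-regression) map from binned firing rates to intended cursor velocity, the kind of labels observational learning can supply. Real systems often use Kalman filters or neural networks, so treat this as an assumption-laden toy, not the actual pipeline any group uses.

```python
import numpy as np

def calibrate_decoder(firing_rates: np.ndarray, intended_velocity: np.ndarray,
                      ridge: float = 1.0) -> np.ndarray:
    """Fit a linear map from binned firing rates (bins x channels) to
    intended 2-D cursor velocity (bins x 2) via ridge regression.
    Rerun at the start of each session, since a large share of the
    recorded cells can differ from the previous session."""
    X = firing_rates
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]),
                        X.T @ intended_velocity)
    return W  # shape: channels x 2

def decode_velocity(W: np.ndarray, rates_now: np.ndarray) -> np.ndarray:
    """Map one bin of firing rates (length = channels) to a cursor velocity."""
    return rates_now @ W

# Toy calibration run: labels could come from observational learning,
# where the user watches a cursor move and imagines doing it themselves.
rng = np.random.default_rng(0)
true_W = rng.normal(size=(100, 2))
rates = rng.poisson(5, size=(500, 100)).astype(float)
velocity = rates @ true_W + rng.normal(scale=0.1, size=(500, 2))
W = calibrate_decoder(rates, velocity)
print(decode_velocity(W, rates[0]))
```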


Improving these models requires knowledge about how we actually move. For instance, Andrew Schwartz, a BCI researcher and neuroscientist at the University of Pittsburgh, details that “when we want to do something, [...] our arms rapidly begin the gesture before our brains can make visual sense of what we are doing” (Khatchadourian). The brain then refines the movement, often with the help of our senses. Thus, studying how we actually move, how the brain acts in each stage of motion, and what role the senses play in our movement is critical to developing a functional assistive BCI. Copeland was also equipped with an additional Utah array embedded in his somatosensory cortex, forming “a closed circuit with the robot,” his output device (Khatchadourian). With this setup, the BCI “[channels] sensory information directly into the brain,” providing feedback to refine his movement (Khatchadourian). When grasping an object, “he can pick something up that’s soft and not squash it or drop it,” demonstrating that users could regain the sense of touch and move with greater grace and dexterity (Nutt).
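The closed circuit described above can be summarized in one loop: read motor intent, drive the device, and write the resulting touch or force back into the somatosensory cortex. The sketch below simulates that loop end to end; every device function in it is a hypothetical stand-in, since a real system would talk to amplifier and stimulator hardware.

```python
import numpy as np

def read_firing_rates(n_channels: int = 100) -> np.ndarray:
    """Hypothetical stand-in for the 'read' side: fake motor-cortex rates."""
    return np.random.poisson(5, n_channels).astype(float)

class RobotArm:
    """Toy output device: pretend contact force grows with commanded speed."""
    def move(self, velocity: np.ndarray) -> float:
        return float(np.linalg.norm(velocity))

def stimulate_somatosensory_cortex(amplitude: float) -> None:
    """Hypothetical stand-in for the 'write' side: sensory stimulation."""
    print(f"stimulating at amplitude {amplitude:.2f} (simulated)")

def closed_loop_step(W: np.ndarray, arm: RobotArm) -> None:
    rates = read_firing_rates(W.shape[0])  # "read": motor cortex activity
    velocity = rates @ W                   # decode the intended movement
    force = arm.move(velocity)             # drive the output device
    # "write": feed touch/force back into the brain so the user can feel
    # contact and grade their grip, closing the loop.
    stimulate_somatosensory_cortex(0.1 * force)

closed_loop_step(np.random.default_rng(0).normal(size=(100, 2)), RobotArm())
```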


Trajectory and moonshot goals of brain-computer interfaces

Many achievements in BCI, however, remain confined to the lab. Along with the risk of infection, invasive intracortical brain-computer interfaces can cause a buildup of scar tissue that gradually degrades the quality of the recorded signals. Leigh Hochberg, a co-director of BrainGate’s Brown University division, notes that “a system that patients can use around the clock that reliably provides complete rapid, intuitive brain control over a computer does not yet exist” (Corbyn). Sharlene Flesher indicates that because of the fairly small population BCIs would serve, insurance companies are reluctant to grant coverage, and many biomedical and technology companies are reluctant to develop BCI technologies. It takes startups like Neuralink to jumpstart the field, not only because of the unique approaches they take to developing BCIs, but also because their success would mean wonders for the entire neuroscientific community, particularly those working toward BCI. Neuralink, in particular, aims to implant far more electrodes that can record from far more neurons, and it uses a surgical robot to sew in its flexible electrode threads so as to avoid piercing critical blood vessels.


The moonshot goal of assistive BCI is to enable a person to control their real arm, wirelessly and for prolonged periods, as a potential treatment for spinal cord injuries. Progress has already been made with an implanted device that requires the patient only to imagine the movement; the system then stimulates the necessary muscle groups, allowing the limb to move. By building “a complete model of an arm, so all the muscles behave like the muscles in your arm” and hooking it up to stroke patient Cathy Hutchinson, researchers enabled her to move the animated arm (Venkataramanan), an impressive and profound result.


The Potential Ethical Issues Surrounding All Brain-Computer Interfaces

Having described the most promising form of brain-computer interface under present development, this paper now delves into the ethical concerns surrounding BCIs. These differ depending on the BCI’s intended use and application, but range from issues relating to ethical research and users’ informed consent, to the potential change of a user’s identity and the ambiguity in assessing accountability when issues arise. Other concerns include susceptibility to breaches of security and privacy, as well as inequality of access to BCIs that may deepen social inequities.


Ethical research and dissemination of brain-computer interfaces

A central problem surrounding brain-computer interfaces is how to research and disseminate them ethically and responsibly. It is important for researchers and subjects alike to distinguish between medical treatment and research, and to ensure that adequate consent is given. The consent process is complicated by those who are “locked in,” unable to communicate with the world around them, who form a considerable part of the target population. The question, then, is how they can accept or decline the opportunity to receive a brain-computer interface.


Some patients tend to confuse medical treatment with research, viewing the opportunity to participate as a chance to better their lives, regardless of the risks involved and the chance that the brain-computer interface may not work in their case. Currently, BCIs have a 15–30% failure rate (Burwell et al.). Yet media coverage often describes the device as “mind reading” technology and a “cure” (Burwell et al.). Such portrayals make a device that is still undergoing research and development seem more advanced than it really is. When subjects “fail to distinguish between clinical care and research, and to understand the purpose and aim of research,” they can “[misconceive] their participation as therapeutic in nature” (Vlek et al.). Oftentimes, those with “severe disabilities” may look toward brain-computer interfaces, “[accepting] risks in hopes of bettering their lives ‘out of desperation’ or ‘at last resort’” (Burwell et al.). Coupled with the possibility that the technology may not always work, these patients may hold high expectations and be susceptible to depression when those expectations are not met. Therefore, drawing a clear line between research and treatment, and acknowledging that the technology may not live up to one’s expectations, is imperative at this stage of development.


Furthermore, it is often disputed whether patients have given adequate consent to participate in brain-computer interface research and treatment, especially patients who are locked in and cannot speak. These “non-communicative patients” have “significantly impaired capacity to consent” (Burwell et al.). In such cases, a “surrogate decision maker or legal representative is needed to represent the participant” (Vlek et al.). Who that individual is differs by country, making it hard to craft international agreements compatible with each nation’s laws. For example, in the United States and the Netherlands a family member serves as surrogate, whereas in Germany the surrogate is a “neutral” person. Both approaches present potential issues. A “neutral” decision maker may not understand the individual and therefore may not accurately reflect the patient’s views. A family member, on the other hand, may know the patient better and understand what the individual wants, but a conflict of interest may cloud the representative’s judgment: the family member may worry excessively about the frustrations the individual would endure in accepting the BCI, or, having witnessed how painful the patient’s condition is, might be overly desperate for the patient to accept it.


Fortunately, we can draw on existing frameworks to resolve these potential issues, such as the Belmont Report, which was created to define the boundary between research and medical practice. It outlines ethical principles including respect for persons and their views and intentions; beneficence, the duty to minimize harm and maximize benefit; and justice, the duty to distribute the benefits of a treatment fairly. These principles should be applied in the BCI context, especially to the issues of informed consent, risk-benefit analysis, and subject selection. BCI development is a hybrid of medical practice and research because it assesses “the safety and efficacy of a therapy” (“The Belmont Report”). In this case, the research methodology should be reviewed to ensure that it is ethical and that the subject is protected. During BCI trials, researchers have the responsibility to provide the information a subject needs in order to continue with the study―its risks, benefits, and side effects, for instance (“The Belmont Report”). When information must be withheld to avoid invalidating results, a proper debriefing procedure must be established (“The Belmont Report”). Researchers must also do everything possible to help the user comprehend the information given, with a surrogate acting in the user’s best interest present (“The Belmont Report”). The individual must be given the opportunity to observe the procedure and the choice to stop it at any moment (“The Belmont Report”). Ultimately, the Belmont Report established the profound precedent of institutional review boards (IRBs), which could be used to approve research proposals for brain-computer interface development in the future.


Changing the individual and complicating accountability

A common misunderstanding about brain-computer interfaces is that they could turn individuals into cyborgs, whose “capabilities are extended beyond normal human limitations by a machine” (Burwell et al.). This idea may sound extreme, yet some authors note that “humans are already intricately linked to their technologies” (Burwell et al.). Additionally, given the tremendous benefits BCIs may bring to someone’s life, “to engage in more technology use, or even a ‘cyborgization’ is [...] seen as an opportunity and not a threat” (Kögel et al.). Nevertheless, this section focuses on how brain-computer interfaces may affect the individual, as well as how accountability should be attributed when issues do arise.


The brain gives us our sense of self and personhood, but BCIs raise questions about autonomy because they “directly modulat[e] the brain,” says Hannah Maslen, a neuroethicist at the University of Oxford. Some authors stress that BCIs “can distort patients’ perceptions of themselves” as well as their personalities (Drew). A patient undergoing deep-brain stimulation, a related form of therapy, “started to gamble compulsively, blowing his family’s savings and seeming not to care,” and only came to “understand how problematic his [behavior] was when the stimulation was turned off” (Drew). This reveals the potential for brain-computer interfaces to change the way a person acts, producing behavior that does not correspond with an authentic self, and this capacity to alter identity deserves close examination. Others note, however, that “identity fluctuates naturally” and can be changed by taking medication or “having a glass of wine or going on vacation,” and that “the chronic illness [the patient] face[s] has already created many radical identity changes” (Burwell et al.).


What should be remembered, however, is that brain-computer interfaces could restore the communication abilities of locked-in patients. This creates hope for the restoration of their identity, especially considering that “the disorders themselves undermine the autonomy of the individual by inhibiting the ability to act on one’s own desire” (Burwell et al.). Interviews with several brain-computer interface users revealed that they had begun to feel a sense of normality. Engaging in research may also be a “meaningful occupation that also include[s] being part of a team, receiving social recognition for what they [are] doing and taking part in public life” (Kögel et al.). Taking part often makes users pioneers in the field, allowing them to contribute actively to society. It “[reconnects] users with their former life experiences, gives them self-esteem or empowerment,” granting increased independence and leading to an improved quality of life (Kögel et al.). The ability to contribute to society and be acknowledged for one’s efforts is a characteristic of human life that defines personhood and establishes, or restores, one’s identity.


Another concern is who is to blame when things go wrong. Attributing responsibility to a single agent, be it the human or the interface, is problematic because the source of wrongdoing is ambiguous. Some authors argue that BCI users should be responsible in much the way we hold drivers responsible, analogous to “the responsibility a parent has for the actions of their child” or that “a dog-owner has for the actions of their dog” (Burwell et al.): the user should be held responsible for errors caused by use of the BCI because the user ultimately chose the BCI as their treatment. Others argue that responsibility could be attributed to flaws in the technology. However, if people trust and rely on their devices too heavily, any wrongdoing may be perceived as intentional even when it is not. According to one user, the device “became [her],” and when the company that supplied her BCI went bankrupt, she “lost [herself]” (Samuel). Another patient became depressed because “[the device] made [him] feel [he] had no control” (Samuel). Individuals oftentimes “opt to see themselves as agents” even in these situations (Kögel et al.), but there is a sort of “hybrid agency” (Samuel) in which “part of the decision comes from the user, and [another] comes from the algorithm of the machine” (Drew). This worry is shared by one user who says, “you just wonder how much is you anymore... how much of it is my thought pattern? How would I deal with this if I didn’t have the stimulation system? You kind of feel artificial” (Drew). Sometimes, when BCIs capture the intentions of the central nervous system, they may act on what were merely subconscious thoughts that our “natural peripheral checks” (Burwell et al.) would have “censored” (Drew). This makes “the authorship of a message or movement... ambiguous” (Samuel). Additionally, the software of a BCI may be too complicated for authorities and individuals to understand, introducing an unknown, “unaccountable process” into its inner workings (Drew). Together, these factors widen the accountability gap and make attributing responsibility profoundly difficult.

The potential of brain-computer interfaces to cause harm

BCI use may be troubling for some, especially when they are unaware of the extent to which information is obtained from the brain, who owns that information, and what others can do with it. There is potential for an organization to “mind read,” to “decode thoughts,” and to unearth our innermost secrets (Corbyn). Breaches of privacy may also lead to hijacking, as well as unfair use in the criminal justice system, whether in court or during surveillance. These possibilities are examined below.


One case study features a flight controller named Thomas who was equipped with a BCI. His supervisor was able to see that he “most likely had been drinking alcohol” (Vlek et al.) one night, resulting in Thomas receiving a stern lecture from his boss. This illustrates that in the near future, some employers may coerce their employees into using a BCI device, with the user unable to discern what data will be revealed to whoever has access to them. Using the device, an employer could engage in “workplace discrimination” without the employee’s knowledge. And what happens when the entity with access to the data is a corporation? Facebook is currently developing its own BCI, which measures blood flow within the brain to peer into brain activity. The information gleaned could reveal profound truths about an individual. Given the recent Cambridge Analytica scandal in which Facebook was involved, many are wary that a company with a history of abusing user data is developing such a device. Roland Nadler, a neuroethicist at the University of British Columbia, worries that unless the manufacturer adequately reveals how the data are used, there is no telling what could happen to users’ neural data. If applied to the criminal justice system, such technology could empower law enforcement to peer into the brains of suspects and use the results as evidence in court. This could disproportionately harm people of color, as machine learning algorithms could introduce implicit bias in yet another form. The same biases that caused “algorithms used by US law-enforcement agencies [to] wrongly predict that black defendants are more likely to reoffend than white defendants with a similar criminal record” (Yuste et al.) could be embedded in neural devices, deepening existing prejudices. Furthermore, malicious actors may engage in “brainjacking,” as has happened in the past with pacemakers. These actors may “cause the BCI device to malfunction or allow it to be manipulated such that it harms the user” (Burwell et al.). Relating to the earlier discussion of accountability, “the potential for a BCI device to be hacked, and thus perform actions created by a third party, could impede the ascription of responsibility” (Burwell et al.).


Last is the issue of justice: in the process of research and development, training a small population of individuals to use BCIs may divert resources from the public, since “improving a system for a single user does not necessarily improve BCI for other users” (Vlek et al.). The public has a right to experimental treatments, but concentrating resources on a few individuals may skew the balance between individual and public welfare. Additionally, one pair of authors asks, “are [BCIs] intended for mass consumption, or should they be restricted to human users who have identifiable impairments” (Goering and Yuste). Once brain-computer interfaces enter mainstream use, the devices could “enhance a healthy user’s capabilities beyond ‘normal,’” which “may create social stratification or unfairness between coworkers” (Burwell et al.). Furthermore, as brain-computer interfaces become outdated, some people will have newer devices that outperform older ones, leaving behind those with obsolete BCIs or none at all, especially when the costs are too high for some to afford, creating greater social inequality.


Nevertheless, the public should have a clear voice in shaping how this technology is used. BCIs should be equipped with neurosecurity to prevent others from exploiting users’ brain data. Similar to how organ donors must opt in, users should “explicitly opt in to share neural data from any device” (Yuste et al.). Additionally, there can be several “computational techniques... deployed to protect people’s data,” such as federated learning. Google is currently developing this type of learning, which aims to improve machine learning algorithms while keeping user data private: the “teaching process” occurs “locally on each user’s device,” and the resulting “lessons [...] are sent back to Google’s servers” (Yuste et al.). Finally, governments should set limits on “the augmenting neurotechnologies that can be implemented” and “define the contexts in which they can be used” (Yuste et al.). These methods and more could help prevent BCIs from exacerbating inequality, enabling data exploitation, and causing harm.
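To illustrate the idea behind federated learning, here is a toy sketch of federated averaging: each user’s data stay on their own device, and only model updates travel to the server. The linear model and synthetic data are stand-ins for illustration, not Google’s actual implementation.

```python
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.01, epochs: int = 5) -> np.ndarray:
    """Train locally on one user's private data (linear model, squared loss).
    The raw data X, y never leave the user's device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(weights: np.ndarray, user_datasets) -> np.ndarray:
    """Server step: average the locally trained models, not the data."""
    updates = [local_update(weights, X, y) for X, y in user_datasets]
    return np.mean(updates, axis=0)

# Example: three simulated users, each holding private data.
rng = np.random.default_rng(0)
users = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
w = np.zeros(4)
for _ in range(10):
    w = federated_round(w, users)
print(w)  # shared model improved without pooling anyone's raw data
```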


...


As this paper illustrates, BCIs and their many applications have the potential to improve everyone's lives, especially those of people with disabilities. With the help of corporations that could jumpstart development, researchers are working tirelessly to develop BCIs that could offer effective treatment for spinal cord injuries and other disabilities. Intracortical BCIs, for example, could give a person direct control of movement through imagination alone. Yet these are not the only forms of BCI. Once those made for medical uses are deemed a success, many entities may strive to create BCIs for augmentation, which could change society drastically.


With BCI development, researchers must keep in mind issues relating to informed consent and responsible experimentation. Individuals should be warned of potential changes in their personality and identity, and society as a whole should be mindful that the link between mind and computer complicates the attribution of responsibility. As development continues, certain applications may deepen existing inequalities, so guidelines should be introduced to determine how this technology can best serve society. Lastly, attention should be paid to safeguarding devices from potential hackers, and guidelines on data use should be agreed upon by users, employers, manufacturers, and the other entities involved.

Nonetheless, we have an inherent drive to innovate. It is our duty to do so responsibly and to remain aware of potential ethical issues, no matter how ludicrous and unlikely those complications may seem. By anticipating issues, we can employ safeguards that at least mitigate them. Brain-computer interfaces are no exception. Above all, it is our responsibility to build products that not only help end users and minimize complications, but also maintain a just, equitable world.

Works Cited


Burwell, Sasha, et al. “Ethical Aspects of Brain Computer Interfaces: a Scoping Review.” BMC Medical Ethics, BioMed Central, 9 Nov. 2017, www.ncbi.nlm.nih.gov/pubmed/29121942.


Collinger, Jennifer, and Robert Gaunt. “Brain-Machine Interfaces Are Getting Better and Better – and Neuralink’s New Brain Implant Pushes the Pace.” The Conversation, 19 July 2019, theconversation.com/brain-machine-interfaces-are-getting-better-and-better-and-neuralinks-new-brain-implant-pushes-the-pace-120562. Accessed 14 Mar. 2020.


Corbyn, Zoë. “Are Brain Implants the Future of Thinking?” The Guardian, Guardian News and Media, 22 Sept. 2019, www.theguardian.com/science/2019/sep/22/brain-computer-interface-implants-neuralink-braingate-elon-musk.


Drew, Liam. “The Ethics of Brain–Computer Interfaces.” Nature, vol. 571, no. 7766, July 2019, pp. S19–S21, www.nature.com/articles/d41586-019-02214-2, 10.1038/d41586-019-02214-2.


Flesher, Sharlene. BrainGate Research and Ethics (In-Person Interview). 18 Feb. 2020.


Fox, Maggie. “Brain Chip Helps Paralyzed Man Feel His Fingers.” NBCNews.com, NBCUniversal News Group, 14 Oct. 2016, www.nbcnews.com/health/health-news/brain-chip-helps-paralyzed-man-feel-his-fingers-n665881.


Goering, Sara, and Rafael Yuste. “On the Necessity of Ethical Guidelines for Novel Neurotechnologies.” Cell, vol. 167, no. 4, Nov. 2016, pp. 882–885, 10.1016/j.cell.2016.10.029. Accessed 14 Mar. 2020.


Greshko, Michael, and Maya Wei-Haas. “New Device Translates Brain Activity into Speech. Here’s How.” National Geographic, 24 Apr. 2019, www.nationalgeographic.com/science/2019/04/new-computer-brain-interface-translates-activity-into-speech/.


Juskalian, Russ. “A New Implant for Blind People Jacks Directly into the Brain.” MIT Technology Review, 2 Apr. 2020, www.technologyreview.com/2020/02/06/844908/a-new-implant-for-blind-people-jacks-directly-into-the-brain/.


Khatchadourian, Raffi. “How to Control a Machine with Your Brain.” The New Yorker, 19 Nov. 2018, www.newyorker.com/magazine/2018/11/26/how-to-control-a-machine-with-your-brain. Accessed 14 Mar. 2020.


Kögel, Johannes, et al. “What Is It like to Use a BCI? – Insights from an Interview Study with Brain-Computer Interface Users.” BMC Medical Ethics, vol. 21, no. 1, 6 Jan. 2020, 10.1186/s12910-019-0442-2. Accessed 14 Mar. 2020.


Nutt, Amy Ellis. “In a Medical First, Brain Implant Allows Paralyzed Man to Feel Again.” Washington Post, 13 Oct. 2016, www.washingtonpost.com/news/to-your-health/wp/2016/10/13/in-a-medical-first-brain-implant-allows-paralyzed-man-to-feel-again/. Accessed 14 Mar. 2020.


Samuel, Sigal. “Facebook Is Building Brain Tech That Could Read Minds and Ruin Privacy.” Vox, Vox, 5 Aug. 2019, www.vox.com/future-perfect/2019/8/5/20750259/facebook-ai-mind-reading-brain-computer-interface.


Shih, Jerry J., et al. “Brain-Computer Interfaces in Medicine.” Mayo Clinic Proceedings, vol. 87, no. 3, Mar. 2012, pp. 268–279, www.ncbi.nlm.nih.gov/pmc/articles/PMC3497935/, 10.1016/j.mayocp.2011.12.008.


“Spinal Cord Injury.” World Health Organization, World Health Organization, www.who.int/news-room/fact-sheets/detail/spinal-cord-injury.


The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Bethesda, Md.: The Commission, 1978. Print.

Venkataramanan, Madhumita. “A Chip in Your Brain Can Control a Robotic Arm. Welcome to BrainGate.” Wired UK, 1 May 2015, www.wired.co.uk/article/braingate. Accessed 14 Mar. 2020.


Vlek, Rutger J., et al. “Ethical Issues in Brain–Computer Interface Research, Development, and Dissemination.” Journal of Neurologic Physical Therapy, vol. 36, no. 2, June 2012, pp. 94–99, 10.1097/npt.0b013e31825064cc. Accessed 14 Mar. 2020.


Weiler, Nicholas. “Synthetic Speech Generated from Brain Recordings.” UC San Francisco, 24 Apr. 2019, www.ucsf.edu/news/2019/04/414296/synthetic-speech-generated-brain-recordings. Accessed 14 Mar. 2020.


Yuste, Rafael, et al. “Four Ethical Priorities for Neurotechnologies and AI.” Nature, vol. 551, no. 7679, Nov. 2017, pp. 159–163, www.nature.com/news/four-ethical-priorities-for-neurotechnologies-and-ai-1.22960, 10.1038/551159a. Accessed 10 Nov. 2019.

