Neuralink: The Promise and Perils of Brain-Computer Symbiosis

Neuralink is a fully integrated brain-computer interface (BCI) system developed by the company of the same name, founded by Elon Musk in July 2016. Implemented entirely on a microchip device the size of a coin, the system’s AI algorithm seamlessly processes and imitates motor cortex neural signals within the brain, sending them wirelessly to nearby devices. This constant exchange of information allows a Neuralink user to control their device with their mind alone. Initially created to help people suffering from paralysis regain independence, the device would allow physically impaired people to communicate and explore more freely than before. However, with the exponential growth of technology, many predict that the power of the Neuralink system is endless. Not only could it treat a wide range of neurological disorders never thought curable before, the chip could theoretically connect to other cortical regions of the brain, allowing humans to alter the mind far beyond its normal capabilities. Some researchers theorize a future without Parkinson’s disease or epilepsy, both solved by Neuralink’s ability to predict and prevent sporadic neuron signals from occurring. Others picture a future where humans can train the brain’s cognitive function to enhance learning states, or connect interfaces together to communicate telepathically. In any case, the future power of the Neuralink system would create a complete “symbiosis” between the human brain and artificial intelligence: a world with no limits. But a world without limits might not be as infallible as it appears. Thus, although many of the future applications surrounding Neuralink seem inherently attractive, many ethical dilemmas arise alongside its power.

One vital moral issue surrounds Neuralink’s potential to enable and perpetuate inequality among individuals. Musk has heavily promoted Neuralink as an easily accessible technology, available to the general population. This claim, however, assumes a perfectly equitable society and does not reflect reality. The initial cost to implant the Neuralink is predicted to be in the upper thousands, and most likely nonrefundable due to the permanency and danger of the procedure. It is also important to note that, because this brand-new technology is promoted as futuristic and heavily fetishized by the public, the starting price will ultimately be higher than predicted and most likely not covered by insurance. These estimates, moreover, do not take into consideration the possibility of “upgrades,” “subscriptions,” or “extra packages” that could be offered for purchase after the insertion. And although these add-ons may appear voluntary, they will likely become unavoidable due to the maintenance demands of the technology and the greed of the company that controls it. Because of these excessive costs, only the ultra-wealthy will reasonably be able to afford Neuralink technologies in the future. The result would be a decrease in accessibility and an increase in demand, raising debts and widening the wealth gap in low-income areas where people may need this technology medically to stay alive. Moreover, because ownership of cutting-edge technology has been popularized as a way to display social class, the wealthy are unlikely to use Neuralink for its original intention; rather, they will use it to enhance their human abilities. This future would create even more opportunities for power and influence for those already in such positions, forcing those without to compete against their “digitally enhanced” peers in a game they already had no chance of winning. In other words, not only will the ultra-rich get richer, they will have the extra benefit of an ultra-powerful brain to help them along in life, solidifying the struggle of those in dire need.

Another crucial moral issue surrounds the legalities of Neuralink’s privacy capabilities and the protection of its users’ individual choice. As technology stands today, users are already extremely reluctant to allow systems to collect their data, fearful that companies will use it to better understand their intentions and algorithmically predict their inner thoughts. Neuralink’s Big Brother style of mental surveillance would undoubtedly deepen this psychological harm, furthering the paranoia of being watched. Because the chip’s AI algorithms are directly connected to the brain, they collect constant, copious amounts of physical and neural biodata that is frequently uploaded to the cloud. This personal data is extremely challenging to preserve and control once it is wirelessly transmitted. If it is not correctly encrypted and protected, it is exceedingly vulnerable to malware and third parties who could gain complete access to and control over it: data that could grant complete access to and control over another person’s brain. This possibility could allow humans to be unknowingly tracked and surveilled by others with ill-natured intentions, or perhaps even allow the complete rewiring of brain signals, reconstructing another person’s reality. Neuralink’s endless power could be seen, accessed, and altered anywhere, anytime, by anyone. This raises the question: if Neuralink exposes its users to such high levels of danger, is government regulation and protection a necessity? Protecting the inner thoughts of the mind seems absurd and foreign to most people, the concept of “reading minds” having appeared only in futuristic science-fiction settings of a world far beyond ours. That world, however, is much closer to our reality than many realize. Imagining a world in which the government uses neural interfaces such as Neuralink to monitor its citizens is not too far-fetched; in fact, China has already started using AI algorithms similar to Neuralink’s to track diseases and monitor the criminal activity of individuals. Although this form of surveillance already presents multiple concerns, the possibility of allowing government access to the mind poses an even more dangerous threat: the blurring of boundaries between individual freedom and government control. Individual freedoms such as bodily autonomy, speech, and independence would all be susceptible to being abused and taken away.

Although the ethical debate surrounding the impact of Neuralink is important, the real concern lies within humanity’s obsession with a transhumanist world. Future neural enhancements such as Neuralink would, without a doubt, allow humans to reach an evolved bodily and cognitive state of post-human ability. But this ability calls into question how we, as humans, really want to live our lives. If we reach the full transhumanist vision of creating a complete “symbiosis” between humans and technology, are we still really human at our core? Or are we simply vessels for superintelligence to thrive? And, perhaps more importantly, once we reach this full transhumanist reality, can it ever be reversed? Even Musk has voiced concern for his own technology, stressing that Neuralink must be created and used with extreme caution for the sake of humanity’s survival. This concern is shared among many technology experts and philosophers alike, all fearing the consequences of fully fusing the human brain with superintelligent systems, alongside the basic singularity argument. And although the singularity is predicted to lie far in the future, the growth of these technologies is only beginning, and they can and will change everything known about life on a massive, unpredictable scale. With all this in mind, I will close with this: sometimes it is more important to weigh the direct impacts of a technology than to fantasize about its endless possibilities in the world. Especially if it is a world where technology is not just with us, but a part of us.
