Hackers who commandeer your computer are bad enough. Now scientists worry that someday, they’ll try to take over your brain.

In some cases, patients might even want to hack into their own neural device. Unlike devices to control prosthetic limbs, which still use wires, many deep brain stimulators already rely on wireless signals. Hacking into these devices could enable patients to “self-prescribe” elevated moods or pain relief by increasing the activity of the brain’s reward centers.
Despite the risks, Kohno said, most new devices aren’t created with security in mind. Neural engineers carefully consider the safety and reliability of new equipment, and neuroethicists focus on whether a new device fits ethical guidelines. But until now, few groups have considered how neural devices might be hijacked to perform unintended actions. Their paper is the first academic work to address “neurosecurity,” a term the group coined to describe the field.
“The security and privacy issues somehow seem to slip by,” Kohno said. “I would not be surprised if most people working in this space have never thought about security.”
Kevin Otto, a bioengineer who studies brain-machine interfaces at Purdue University, said he was initially skeptical of the research. “When I first picked up the paper, I don’t know if I agreed that it was an issue. But the paper gives a very compelling argument that this is important, and that this is the time to have neural engineers collaborate with security developers.”