Machine-Learning Algorithms Aid In Mapping Neural Signals
By Todd Levy, The Feinstein Institute for Medical Research
Technology has changed how we do business, how we communicate, and how we share personal memories. It has also accelerated changes in how we view the treatment of many medical conditions, including conditions that have been largely unresponsive to traditional pharmacological treatments. One example is the emergence of bioelectronic medicine as a new medical field, driven mainly by the development of novel neural recording and stimulation technologies, coupled with an improved understanding of neural reflex pathways.
Bioelectronic medicine diagnoses and treats disease by intervening in the electrical signals of the nervous system. Pharmacological treatments (i.e., pills) are more systemic, affecting the whole body and causing undesirable side effects. If specific organs can be targeted and stimulated with precise electrical pulses, as is the goal in bioelectronic medicine, side effects can be minimized. Biotechnology leaders recognize bioelectronic medicine as a sector that is already presenting technological alternatives to biochemical therapies, and the growth of interest in the field is self-evident.
Over the last three years, $1 billion has been invested in bioelectronic medicine by industry leaders — including General Electric (GE), Google, GlaxoSmithKline (GSK), and others — as well as funding agencies, like the National Institutes of Health (NIH) and the Defense Advanced Research Projects Agency (DARPA). Further, the Feinstein Institute for Medical Research recently partnered with GE and United Therapeutics to research and develop novel bioelectronic medicine therapies.
The peripheral nervous system and its signaling code, principally communicated through the vagus nerve, are the locus of several treatment strategies. As our understanding of the body's neural signals improves, it will drive the development of external and implanted bioelectronic therapeutic devices. As such, how we approach the problem, the quality of the neural recordings, and how we understand and interpret those data through machine learning are pivotal.
Your Algorithms Are Only As Good As Your Data
High-quality neural recordings are required even before we can start to develop algorithms, because data are how algorithms "learn." Listening to the human nervous system is like trying to pick out a conversation among a few people in a crowded stadium. But, instead of a stadium, we're inside the brain and, instead of people, we're listening in on neurons communicating. We have to deduce who is speaking at any given point, and hear what they're saying.
To do that, we have to get close enough so that their voices are amplified against the background of the entire crowd. With neural brain recordings from implanted electrodes, we are extracting the neural electrical signals generated by "action potentials," also known as "spikes," from individual neurons. The spikes are grouped according to the neurons that generated them, based on their waveform shapes, in a process known as "spike sorting." Proper electrode placement is key to detecting these local electrical signals.
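To make the spike-sorting step concrete, the sketch below shows one minimal way it could be implemented, assuming a single-channel recording sampled at 30 kHz: detect threshold crossings, extract a short waveform snippet around each one, and group the snippets by shape. The threshold rule, snippet length, and number of units are illustrative choices, not settings from our recording systems.

    # Minimal spike-sorting sketch: detect threshold crossings, extract waveform
    # snippets, and cluster them by shape with PCA + k-means. Sampling rate,
    # snippet length, and number of units are illustrative assumptions.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def sort_spikes(signal, fs=30_000, snippet_ms=1.6, n_units=3):
        # Robust noise estimate (median absolute deviation) sets the detection threshold.
        threshold = -4.5 * np.median(np.abs(signal)) / 0.6745
        half = int(snippet_ms * fs / 1000 / 2)

        # Indices where the signal crosses below the negative threshold.
        crossings = np.flatnonzero((signal[1:] < threshold) & (signal[:-1] >= threshold)) + 1
        crossings = crossings[(crossings > half) & (crossings < len(signal) - half)]

        # Extract a fixed-length waveform snippet around each detected spike.
        waveforms = np.stack([signal[i - half:i + half] for i in crossings])

        # Reduce each snippet to a few shape features, then cluster into putative neurons.
        features = PCA(n_components=3).fit_transform(waveforms)
        labels = KMeans(n_clusters=n_units, n_init=10).fit_predict(features)
        return crossings, waveforms, labels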
Implanted brain electrode recordings are necessary to train a decoder to recognize localized brain activity corresponding to various behaviors, intentions, or stimuli. Because these methods are invasive, surgery must be performed to insert the electrodes, which then need to withstand foreign body responses. Common responses that degrade the quality of the signal over time are scar tissue formation and cell death at the insertion site, resulting from implantation and micromovements.
In the peripheral nervous system, cuff electrodes that wrap around the surface of a nerve can record impulses from multiple nerve fibers that fire near synchronously, commonly known as "compound action potentials" (CAPs). Vagus nerve cuff electrodes record the CAPs propagating along the nerve fibers. Recent efforts from our group involve building decoders that can learn to relate these signals to homeostatic changes, such as different inflammatory states.
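As a rough illustration of how such a decoder's inputs might be prepared, the sketch below band-pass filters a hypothetical cuff-electrode recording, detects CAP-like events by thresholding, and summarizes the event rate per time window as a feature. The sampling rate, filter band, threshold, and window length are assumptions for illustration only, not validated settings.

    # Sketch: turn a cuff-electrode recording into per-window CAP-rate features.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def cap_rate_features(signal, fs=20_000, band=(100, 3000), window_s=1.0):
        # Band-pass filter to isolate CAP activity from drift and high-frequency noise.
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, signal)

        # Simple event detection: upward crossings of a noise-scaled threshold.
        threshold = 4.0 * np.median(np.abs(filtered)) / 0.6745
        events = np.flatnonzero((filtered[1:] > threshold) & (filtered[:-1] <= threshold))

        # Count events per window to produce a rate-like feature per time bin.
        n_windows = int(len(signal) / (fs * window_s))
        edges = np.arange(n_windows + 1) * fs * window_s
        counts, _ = np.histogram(events, bins=edges)
        return counts / window_s  # events per second, one value per window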
Alternatively, surface electroencephalography (EEG) electrodes can be used to record field potentials on the surface of the head. While this method is less invasive than an implanted recording, it also lacks detail. Since it captures an attenuated superposition of many spikes through the skull, it is no longer possible, going back to our stadium example, to home in on a specific conversation or neuron; it's more like picking up the hum of multiple conversations. These grouped conversations can still provide valuable information for training a "decoder" to recognize large-scale brain states, such as attentiveness or sleep.
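For this kind of large-scale decoding, one commonly used feature is EEG power in canonical frequency bands. The sketch below computes band power for a single epoch with Welch's method; the band edges, sampling rate, and segment length are conventional choices assumed for illustration, not parameters from a particular study.

    # Sketch: band power features from one EEG epoch using Welch's method.
    import numpy as np
    from scipy.signal import welch

    BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def eeg_bandpower(epoch, fs=256):
        # Estimate the power spectral density of the epoch.
        freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), fs * 2))
        features = {}
        for name, (lo, hi) in BANDS.items():
            mask = (freqs >= lo) & (freqs < hi)
            # Integrate the PSD over the band to get one feature per band.
            features[name] = np.trapz(psd[mask], freqs[mask])
        return features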
Once we understand this "conversation," or neural code, we can talk back to the nervous system using its language, and then direct the conversation in such a way that provides therapeutic benefit to the patient — such as decreasing inflammation via vagus nerve stimulation, or evoking sensation in the brain. The frequency, amplitude, duration, and polarity of the stimulation waveform, as well as the stimulation site, can be modulated to alter the response and optimize treatment outcomes.
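As a simple illustration of how those stimulation parameters can define a waveform, the sketch below generates a charge-balanced, biphasic pulse train from a frequency, amplitude, pulse width, polarity, and train duration. The parameter names and default values are hypothetical, not a device API.

    # Sketch: a charge-balanced biphasic pulse train defined by the stimulation
    # parameters discussed above (hypothetical parameter names and defaults).
    import numpy as np

    def biphasic_train(fs=100_000, freq_hz=20, amp_ma=1.0, pulse_width_us=200,
                       train_s=1.0, cathodic_first=True):
        waveform = np.zeros(int(fs * train_s))
        phase_samples = max(1, int(pulse_width_us * 1e-6 * fs))
        period_samples = int(fs / freq_hz)
        sign = -1.0 if cathodic_first else 1.0

        for start in range(0, len(waveform) - 2 * phase_samples, period_samples):
            # First phase, then an equal and opposite phase so net charge is zero.
            waveform[start:start + phase_samples] = sign * amp_ma
            waveform[start + phase_samples:start + 2 * phase_samples] = -sign * amp_ma
        return waveform  # amplitude in mA, sampled at fs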
Patterns in neural signals can be patient-specific and can change over time, so the decoder must be calibrated, or trained, via machine learning — a subfield of computer science focusing on a set of methods that enables a computer program to learn and adapt from data, rather than being explicitly programmed.
Algorithms: Not A One-Size-Fits-All Solution
Through machine learning, we are able to determine the most appropriate algorithms to predict different neural functions. Two common types of algorithmic models used to capture the relationship between neural signals and biomarkers are classification models and regression models.
For a classification problem, the computer attempts to learn the boundary or boundaries among two or more classes. For example, to utilize the neural signals associated with diabetes, we might want to decode signals that propagate along the cervical vagus, while concurrently measuring a biomarker — specifically, blood glucose concentration. We would use machine learning to estimate the parameters of a model that describes the boundaries between the hypoglycemic, normoglycemic, and hyperglycemic states in a feature space derived from the nerve activity.
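A minimal sketch of that classification setup appears below, using a multinomial logistic regression from scikit-learn. The feature matrix and glucose-state labels are synthetic placeholders standing in for features derived from cervical vagus activity recorded alongside blood glucose measurements.

    # Sketch: classify hypo-/normo-/hyperglycemic states from synthetic
    # placeholder features standing in for decoded vagus nerve activity.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 8))                       # placeholder nerve-activity features
    # Placeholder labels: 0 = hypoglycemic, 1 = normoglycemic, 2 = hyperglycemic.
    y = np.digitize(X[:, 0] + 0.5 * X[:, 1], bins=[-0.5, 0.5])

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))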
For a regression problem, using this same example, we might attempt to estimate blood glucose concentration directly, rather than binning the levels into physiological ranges. In this case, we would use machine learning to estimate the parameters of a regression model that describes the relationship, or transfer function, between the neural signals and the biomarker.
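The same idea framed as regression might look like the sketch below, which fits an ordinary linear model to estimate a continuous glucose value. Again, the data are synthetic placeholders, and linear regression is only one of many possible model choices.

    # Sketch: estimate a continuous blood glucose value (mg/dL) from the same
    # kind of synthetic placeholder features, using ordinary linear regression.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 8))                       # placeholder nerve-activity features
    glucose = 100 + 10 * (X @ rng.normal(size=8)) + rng.normal(scale=5, size=300)

    X_train, X_test, y_train, y_test = train_test_split(X, glucose, test_size=0.25, random_state=0)
    model = LinearRegression().fit(X_train, y_train)
    print("held-out R^2:", model.score(X_test, y_test))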
In either case, a model is only useful if it generalizes to unobserved data, as opposed to "overfitting" or memorizing the specific nuances of the data that were used to train it. To make sure our models generalize, we use regularization methods that prevent overfitting by ensuring that the model parameters remain relatively small.
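One common regularization strategy of this kind is ridge (L2) regression, which penalizes large coefficients; the sketch below chooses the penalty strength by cross-validation. It is a generic illustration rather than the specific method used in our studies.

    # Sketch: ridge (L2) regularization keeps coefficients small; the penalty
    # strength is chosen by cross-validation over a range of candidate values.
    import numpy as np
    from sklearn.linear_model import RidgeCV

    rng = np.random.default_rng(1)
    X = rng.normal(size=(60, 40))                       # few samples, many features: easy to overfit
    y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=60)

    model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X, y)
    print("chosen penalty:", model.alpha_)
    print("largest coefficient magnitude:", np.abs(model.coef_).max())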
Over the past 10 years, the popularity of the machine learning field has exploded, mostly due to the success of one group of machine learning models, called deep neural networks, and their application to various problems, commonly referred to as "deep learning." While this approach was inspired by the architecture of the central nervous system (more specifically, the visual cortex of the brain) and has proven extremely successful in numerous applications, such as image and speech recognition, it has not been equally efficacious in bioelectronic medicine.
The main reason for this is that these methods require large amounts of data that do not change dramatically over time, while the output of the body's neuronal networks is constantly changing on timescales too short to collect enough data to properly train a deep network. Moreover, making these algorithms fit in devices tiny enough for human implantation can prove rather challenging. Most importantly, the complexity of these algorithms occasionally prevents even the engineers who build them from understanding why they malfunction when they do. This is a serious issue when such algorithms are used in devices that can affect the health of an individual.
Conclusion
Bioelectronic medicine holds vast potential to treat many conditions for which there is currently no therapy, or for which the therapy does not work for all patients. Interestingly, an evolving synergy has developed between neuroscience and machine learning. Initially, brain function inspired many computational methods in computer science; now, similar algorithms are used to understand and predict brain function.
The nervous system contains a wealth of information, and we are able to record only a tiny fraction of it. The quality of the data obtained from these recordings, and how we interpret those data, are critical in accelerating bioelectronic medicine device development. As electrode technology improves, we will be able to reliably record from and stimulate a larger number of neurons, which will yield higher signal specificity. Signal processing and machine learning techniques can exploit these increased capabilities, which could allow us to target peripheral organs with higher specificity and estimate biomarkers of interest with higher sensitivity.
About The Author
Todd Levy is an electrical engineer at The Feinstein Institute for Medical Research, where he joined the Center for Bioelectronic Medicine and the Neural Decoding and Data Analytics laboratory in 2016. He holds BS and MS degrees in electrical engineering from Case Western Reserve University, and a post-master's certificate in applied biomedical engineering from Johns Hopkins University. His master's thesis focused on developing an automated sleep state classifier for neonates based on polysomnogram signals, using hidden Markov models.
Mr. Levy has worked as a staff scientist at L-3 Communications, where he performed adaptive signal processing for phased-array direction-finding systems, and at the Johns Hopkins University Applied Physics Laboratory, where he served as an electrical engineer on the DARPA-sponsored Revolutionizing Prosthetics program, focusing on signal processing methods. Mr. Levy also worked as a research engineer at the Massachusetts Institute of Technology Lincoln Laboratory, where he worked on radar detection problems, and was the lead research engineer for AreteX Systems, where he developed techniques to automate and improve the quality of care in intensive care units. His interdisciplinary research interests include signal processing and machine learning techniques applied to radio-frequency and biomedical applications.