Why We Need Brain Scan Data Guidelines

Your brain is a lot like your DNA. It is, arguably, everything that makes you uniquely you. Some types of brain scans are a lot like DNA tests. They may reveal what diseases you have (Parkinson’s, certainly; depression, possibly), what happened in your past (drug abuse, probably; trauma, maybe), or even what your future may hold (Alzheimer’s, likely; response to treatment, hopefully). Many people are aware—and properly protective—of the vast stores of information contained in their DNA. When DNA samples were collected in New York without consent, some people went to great lengths to have their DNA expunged from databases being amassed by the police.



Evan D. Morris, Ph.D., is a professor of radiology and biomedical imaging at Yale. He uses PET and fMRI to study drug abuse and drug action in the brain. In August 2019, he was a visiting scholar at the Hastings Center to study the ethics of brain imaging.

Fewer people are aware of the similarly vast amounts of information in a brain scan, and even fewer are taking steps to protect it. My colleagues and I are scientists who use brain imaging (PET and fMRI) to study neuropsychiatric diseases. Based on our knowledge of these technologies, we probably ought to be concerned. And yet, it is rare that we discuss the ethical implications of brain imaging. Nevertheless, by looking closely, we can observe parallel trends in science and science policy that are refining the quality of information that can be extracted from a brain scan and expanding who will have access to it. There may be good and bad reasons to use a brain scan to make personalized predictions. Good or bad, wise or unwise, the research is already being conducted, and the brain scans are piling up.

PET (Positron Emission Tomography) is commonly used, clinically, to identify sites of altered metabolism (e.g., tumors). In research, it can be used to identify molecular targets for treatment. A recent PET study of brain metabolism in patients with mild cognitive impairment predicted who would develop Alzheimer’s disease. In our work at Yale, we have used PET images of a medication that targets an opioid receptor to predict which problem drinkers would reduce their drinking while on the medication.

fMRI (functional Magnetic Resonance Imaging) detects local fluctuations in blood flow, which occur naturally. A key discovery in the 1990s was that fluctuations in different brain regions occur synchronously. The networks of synchronized regions have been shown repeatedly to encode who we were from birth (our traits) as well as long-term external effects on our brains (from our environment). fMRI analysis techniques are becoming so powerful that the networks can be used like a fingerprint. fMRI networks may be even richer in information than PET—but also more problematic. The networks (sometimes called “functional connectivity” patterns) have been used to predict intelligence. They have been used to predict the emergence of schizophrenia or future illicit drug use by at-risk adolescents. Functional connectivity is being used to predict which adult drug abusers will complete a treatment program and who is likely to engage in antisocial behavior. Some predictions are already 80 to 90 percent accurate or better. Driven by AI and ever-faster computers, the predictive ability of the scans will improve.

Most medical research using brain imaging is funded by the NIH (National Institutes of Health). At least one institute (the National Institute of Mental Health) requires that its grant recipients deposit all of their grant-funded brain scans into an NIH-maintained database. This and similar databases around the world are available for other “qualified researchers” to mine.

Some uses of brain imaging would seem to have only upsides. They might provide certainty for patients and their families who desperately need help planning for their colliding futures. They could avoid unnecessary and costly treatments that are destined to fail. But other uses of brain imaging lie in an ethical gray area. They foretell behaviors and conditions that could be stigmatizing or harmful. They generate information that an individual may wish to keep private, or at least manage. In the right circumstances, the information may even be of great interest to the police or the court system.

As the New York Times recently reported, the police in New York City tricked a child into leaving his DNA on a soda can. I recognize that fMRI networks cannot be captured surreptitiously by enticing a 12-year-old to drink a soda. The police will not use fMRI fingerprints solely as identifiers; it would be too much trouble. But many questions arise. Could a court order someone to undergo fMRI or PET? Could a prosecutor subpoena a brain scan that a suspect consented to in the past as a research volunteer? Forensic genealogists tracked down the Golden State Killer without ever taking a sample of his DNA. They triangulated using DNA markers he shared with unacquainted third cousins who had uploaded their DNA sequences to a public database. Could a forensic brain imager identify you as unlikely to complete drug treatment and thus a bad candidate for diversion? What if we could predict your future behavior from similarities that your fMRI networks share with those of psychopaths who had been analyzed and whose data now reside in a database? Even now, it seems plausible that a qualified scientist working with the police could download the data. If that didn’t work, the police might get a warrant. Will the NIH relent and share their databases of images when the police come calling?
