Grasping the genomic palantir


Gina Kolata writes in the New York Times about the conundrum faced by research scientists who inadvertently discover the health risks of their research participants: “Genes Now Tell Doctors Secrets They Can’t Utter”. The first case described, which is the clearest in many ways, was one in which the participant was discovered to be free of a mutation that had caused breast cancer in her female relatives:

[T]he woman, terrified by her family history, also intended to have her breasts removed prophylactically.
Her consent form said she would not be contacted by the researchers. Consent forms are typically written this way because the purpose of such studies is not to provide medical care but to gain new insights. The researchers are not the patients’ doctors.
But in this case, the researchers happened to know about the woman’s plan, and they also knew that their study indicated that she did not have her family’s breast cancer gene. They were horrified.

That case is ethically straightforward compared to others, because the researchers could make a difference to an immediate medical decision. On the other hand, how many risk-free research participants went ahead with prophylactic mastectomies because researchers didn’t know about their plans?

I think the article will be a good one for prompting student discussions in my courses, and I’ll likely assign it widely. But I think the central ethical problem discussed in the article is temporary.

Basically, the problem is that researchers are coming into knowledge about simple, high-penetrance Mendelian variants, where the information about disease risk is very clear, but they are restricted in various ways by privacy agreements related to their research. There is, in other words, an information asymmetry between researchers and their subjects. The article also mentions the problems faced by researchers studying dead research subjects, who may nonetheless have surviving family members who might benefit from knowledge about the deceased’s genotypes. All of these problems ultimately arise because genetic sequencing is expensive and rare.

There will be a time soon when genetic sequencing is cheap and universal, and research participants will be very unlikely to have unknown Mendelian disease alleles. Non-Mendelian risks are much less actionable – some complex statistical combination of different genotypes may be interesting to a researcher, but is pretty unlikely to give rise to a specific “You must treat this NOW” ethical problem. When the actionable information available to a researcher is already part of a subject’s medical file, the information asymmetry that gives rise to the ethical problem will be gone.

In the medium term, immediacy of results makes a tremendous difference in this ethical situation. The article is pointing at researchers who are making new discoveries about 20-year-old samples. Take a look at fMRI research, another area where research participants could potentially receive information that is directly relevant to health – maybe at worst, a previously undetected tumor. Many research studies provide their subjects with an MRI image of their brain, as a routine “reward” of participation. What makes this model work is that it is done at the point of participation. An fMRI is not a cheap or easy test, but an image print can be made immediately and given to the participant. It would be easy to do the same with genotyping data, including routine reports on ancestry and health risks as provided today by 23andMe and other providers, if the genotyping were immediate.

In fact, a 23andMe-like readout for research subjects would pretty much end the “ethical problem” of this article.

Since the ethical problem itself arises from the (relatively rare) cases where genetics gives rise to actionable predictions, and actionable predictions are one plausible goal of “personalized genomics”, it is interesting to ponder whether the end of ethical problems may also be the end of productive research in this area.