Discussion about this post

Jocelyn

Great post and some valid points raised. However, I think several of the limitations you identify actually bias the RS estimate toward the null, not away from it.

Broad indication labels, for instance, make it harder to match genetic traits to drug indications at the 0.8 similarity threshold, so genuinely supported pairs are more likely to be classified as unsupported. Similarly, because the analysis ignores direction of effect, some target-indication (T-I) pairs classified as genetically supported actually have genetics arguing against the drug's mechanism. Those pairs would be expected to fail, diluting the RS among the supported group. The true RS for directionally concordant genetic support is therefore likely even higher than 2.6x.
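A toy simulation makes the dilution argument concrete. All numbers here are made up for illustration (a "true" RS of 3.0, a 5% baseline success rate, 30% of supported pairs mislabeled), not values from the paper; the point is only that mislabeling supported pairs as unsupported pulls the observed RS toward 1:

```python
import random

random.seed(0)

# Made-up parameters for illustration only
p_unsupported = 0.05    # baseline probability of clinical success
p_supported = 0.15      # 3x baseline, i.e. a "true" RS of 3.0
frac_supported = 0.2    # fraction of T-I pairs with genuine genetic support
mislabel_rate = 0.3     # supported pairs missed by the similarity threshold

sup_hits = sup_n = unsup_hits = unsup_n = 0
for _ in range(200_000):
    truly_supported = random.random() < frac_supported
    success = random.random() < (p_supported if truly_supported else p_unsupported)
    # Broad indication labels push some supported pairs below the matching
    # threshold, so they land in the "unsupported" bucket.
    labeled_supported = truly_supported and random.random() >= mislabel_rate
    if labeled_supported:
        sup_n += 1
        sup_hits += success
    else:
        unsup_hits += success
        unsup_n += 1

rs_observed = (sup_hits / sup_n) / (unsup_hits / unsup_n)
print(f"true RS: 3.0, observed RS: {rs_observed:.2f}")  # observed < 3.0
```

The mislabeled pairs raise the "unsupported" group's success rate, so the measured ratio lands well below the true 3.0. The same logic applies to directionally discordant pairs contaminating the supported group.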

On L2G:

1) Minikel showed that RS increases with L2G share threshold (Figure 1c), which is what you would predict if the signal is real.

2) Comparing L2G to nearest-gene at a single threshold is misleading: L2G is a continuous score and outperforms on precision at higher thresholds, which is what matters when a false positive costs millions.

3) Imperfect gene mapping adds noise that, again, dilutes the RS estimate.

4) 23andMe reproduced the results, and even showed a dose-response relationship between gene-mapping confidence and clinical success, reaching up to 5x RS at the highest tier.
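On point 1, here is a sketch of why RS should rise with the score threshold if the score is informative. This is not Open Targets' actual L2G model; I simply assume the score is calibrated, i.e. a score of s means the mapped gene is causal with probability s, and use made-up success rates:

```python
import random

random.seed(1)

# Made-up success probabilities for illustration
p_base, p_causal = 0.05, 0.15

# Toy model: each association gets a continuous L2G-like score in [0, 1].
# Calibration assumption: a score of s means the mapped gene is the true
# causal gene with probability s.
pairs = []
for _ in range(100_000):
    score = random.random()
    correct_mapping = random.random() < score
    p = p_causal if correct_mapping else p_base
    pairs.append((score, random.random() < p))

# Fixed comparator: targets with no genetic support at all
base_rate = sum(random.random() < p_base for _ in range(100_000)) / 100_000

def rs_at(threshold):
    above = [hit for s, hit in pairs if s >= threshold]
    return (sum(above) / len(above)) / base_rate

for t in (0.2, 0.5, 0.8):
    print(f"score threshold {t}: RS = {rs_at(t):.2f}")
```

Under this assumption RS climbs monotonically with the threshold, which is the pattern in Figure 1c. A score that carried no information would produce a flat line instead.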

I think a lot of people have misunderstood what the paper claims. Minikel et al. never argue that genetic support should be a gate for drug development. They argue it is an enrichment signal and a probabilistic tool for portfolio prioritization. Individual targets succeeding without genetic support does not invalidate that observation.
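To see why enrichment matters at the portfolio level even though any single program can fail, here is some back-of-envelope arithmetic. The 10% baseline success rate is a made-up number for illustration; the 2.6x RS is the figure discussed above:

```python
# Made-up baseline; 2.6x RS is the enrichment figure under discussion
p0 = 0.10              # assumed baseline probability of success
rs = 2.6               # relative success for genetically supported targets
p_supported = min(p0 * rs, 1.0)

# Expected hit rate as the portfolio tilts toward supported targets
for name, frac in [("all unsupported", 0.0),
                   ("half supported", 0.5),
                   ("all supported", 1.0)]:
    expected = frac * p_supported + (1 - frac) * p0
    print(f"{name}: expected hit rate {expected:.0%}")
```

No individual bet is guaranteed in any of the three portfolios; the claim is only that the expected hit rate shifts, which is exactly what a prioritization tool is for.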

ScienceGrump

Whenever a paper's methodology is surprising, I get suspicious. There might be good reasons. But often, it suggests that the obvious and intuitive analysis didn't yield the results they were looking for.
