That’s an interesting video, but the debate in psychology is considerably more complicated than it indicates, even though in this low-ranking statistician’s opinion it need not be. I wish scientists from outside psychology would point out that using statistical methods to measure reproducibility over aggregated studies is reiterating the misguided reliance on statistical assumptions that led to the reproducibility problem in the first place.
The debate in psychology began with this paper, which was followed by this commentary, which was in turn answered by this one, which in effect said both sides could be right or wrong; it's too early to tell.
I stopped following this debate in 2018, but if memory serves, someone or some group tried to replicate 21 studies from Science and Nature and succeeded with only 13, and this from two of the most prestigious journals in the world. That is about when I stopped reading psychology journals.
It would be nice if the four horsemen she cites were the cause of the problem, and if the solutions proposed solved it. But they aren’t and they don’t. The issue goes even deeper, in that psychologists can’t even agree on what replication is, and how to, uh, “measure” it.
I am of the Feynman persuasion regarding psychology.
“If it were possible to state ahead of time how much love is not enough and how much love is overindulgence, exactly, then there would be a perfectly legitimate theory against which you could make tests. It is usually said, when this is pointed out, ‘Oh, you’re dealing with psychological matters, and things can’t be defined so precisely.’ Yes, but then you can’t claim to know anything about ’em.” —Richard Feynman