Every single peer review that couldn't duplicate the findings of the initial experiment. Come on dude, this is pretty trivial, do you really want me to bother even linking that shit?
I don't think you're quite understanding the concept I'm asking for.
A non-replication is not an example of this, because people who publish non-replications do not claim or intend for them to be "evidence of absence"! Non-replications are intended to do one thing and one thing only: cast doubt on previously published positive evidence, and convince readers to return to a state of agnosticism about that data, NOT to prove any actual claim or counterclaim about how the world works.
If you attempt to publish an article containing a non-replication alone, and in that article claim to have thereby proven anything about the way the world works, it will not just go unpublished, it will not even make it past triage, and the chances are high that it will significantly hurt your reputation by word of mouth as well.
Additionally, in my 10 years of getting paychecks exclusively to read and write papers and conduct research, guess how many publications I've come across whose only data was a non-replication? Out of the hundreds of articles I read a year? Precisely ONE. I published it myself, and it wasn't even a full article (there's virtually NO chance of that happening in academia, unless the thing you're failing to replicate is, like, the Higgs boson or cold fusion or something). It was a one-page invited comment on another paper, and its only conclusion was "these guys don't have as much evidence for their effect as they claim they do." That is all. So not only is this not evidence of what you're trying to argue, the thing itself pretty much never happens, either.
99% of the time, when somebody publishes non-replicating data, it is Experiment #1 out of 7 or something, and serves only as justification for having done the additional experiments and found whatever positive evidence they found later, which is what the paper is invariably actually about. Most often, it is a non-replication of a competing theory's experiment, followed by replacement positive evidence for one's own theory. (Do you see a trend here?)
What I'm asking for (and what is necessary to prove your point as something actually practiced by the scientific community) are examples of articles in prestigious peer-reviewed journals where:
1) The evidence is purely null-result experiments, and
2) They use those null results to conclude something about the way the world works, NOT to conclude something about the confidence one should place in earlier researchers' (positive-evidence) work.
Even if there weren't any papers about it, your argument is like some sort of crazy assumption that every time a scientist thinks up or investigates anything they are always 100% correct from the get-go, which is absurd. When most people find evidence of absence, they don't write a paper about it; they come up with an alternate hypothesis.
No! You do NOT abandon theories based on null results. That's terrible practice, and anybody teaching that to new researchers should frankly be fired.
Yes, researchers are wrong all the time, of course. But you abandon theories based only on positive evidence that runs contrary to the predictions of the theory.
For example, "All grass is red!" is my theory/hypothesis.
If I live in a desert and walk outside and can only find sand and end up with a null result, that is not a reason to abandon my hypothesis... I MIGHT decide this experiment isn't worth it and the plane tickets to go find grass would cost too much, or whatever, but that's not the same thing as proving something. That's just being too poor or having better things to do.
If, however, I walk outside and find samples of green grass, then I have collected concrete, positive evidence that runs contrary to my hypothesis, so I abandon it and adjust to a new one.
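One way to make that distinction precise is Bayes' rule. This is my framing, not something from the thread, and it rests on one assumption: the chance of finding no grass at all in a desert doesn't depend on what color grass is.

```latex
% Bayes' rule for hypothesis H ("all grass is red") and datum D ("found only sand"):
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}
% Assumption (mine): finding no grass in a desert is equally likely whether or not
% H is true, so P(D | H) = P(D | not H) = P(D). Substituting:
P(H \mid D) = \frac{P(D)\, P(H)}{P(D)} = P(H)
% Posterior equals prior: the null result moves my confidence in H by exactly nothing.
```

Finding green grass, by contrast, is a datum with P(D | H) = 0, which drives the posterior to zero regardless of the prior, and that's why it, and not the desert walk, kills the hypothesis.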
the vast bulk of science is going "Oh, that didn't work, I guess I should try something else."
This is also true. Null results will routinely encourage scientists to go try some other tack, for practical reasons and time constraints, etc.
That has nothing to do with null results being actual evidence that a theory is wrong, however, and everything to do with efficient time management: wanting to make as many discoveries about the world as possible (all of which involve positive evidence), as quickly and as cheaply as possible.