Dread Lörd Kaolian wrote:
Sure to a degree, but not anywhere near a 1.4 degree +/- margin of error for 10,000 years ago, and not without making some assumptions along the way that are still in scientific dispute.
Story time!
My first lead-author paper got heavily cited by another paper that came out probably 3 years later. They used some of my data to put together a new method for quantitatively measuring protein abundance in complex mixtures, complete with error bars and the like.
The funny thing is they never bothered to contact me before using my data. If they had, I would have warned them that we had considered doing the same thing initially, but after looking over the data we decided it was too inconsistent to be used quantitatively. In short, if you want to average results from many analyses, it helps to do the experiment the same way every time...
It was no skin off our backs at the time. We were just trying to put out a dataset for testing software, and the quantitative analysis was only thought up halfway through. It was too late to really pursue the idea without starting over at that point, so we ditched it.
So here I am reading a paper that uses this data in a way it was never meant to be used. Worse yet, it's in a Nature journal, not exactly some backwater publication nobody reads where it could simply get passed over or ignored. They did nothing to correct for some of the errors that were there, and they flat-out ignored inconsistencies that made the data do funny things (which they, of course, didn't bother to explain). There were other problems too, ones I probably can't explain here, basically misused parameters and whatnot. In short, it was a ************
I think it was somewhere around then that I lost my blind faith in anyone outside a mathematical field doing math right, not to mention my faith in the peer-review process in general. In retrospect I partly wish I had just done the analysis with that data myself: it was obviously good enough to get into a nice journal, and I would at least have known what errors were in the data when I reported it... At the same time, I'm glad my name isn't any more associated with that debacle than it already is.
Averaging data from different analyses isn't something that can be done lightly. You can't just run 3 different tests and take the mean. Error propagation is a homely little *******, some data simply can't be combined at all (not that that stops anyone...), and those pesky systematic errors get glossed over the moment someone starts bragging about a p-value with an exponent large enough to make a mistake impossible within the lifespan of the universe. No one bothers to say "this measurement is perfectly accurate assuming the following conditions..."
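For the curious, the textbook way to combine independent measurements is inverse-variance weighting. Here's a minimal sketch (made-up numbers, and note it bakes in exactly the assumptions I'm complaining about: independent, Gaussian, purely statistical errors):

```python
import math

def inverse_variance_mean(values, sigmas):
    """Combine independent measurements by weighting each one by
    1/sigma^2. Assumes errors are independent and Gaussian, and that
    there are no systematic offsets between experiments -- the
    assumption that usually breaks when the analyses weren't run the
    same way every time."""
    weights = [1.0 / s**2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    # Propagated uncertainty of the weighted mean
    sigma = math.sqrt(1.0 / sum(weights))
    return mean, sigma

# Three hypothetical abundance measurements with their standard errors
m, s = inverse_variance_mean([10.0, 12.0, 11.0], [1.0, 2.0, 1.0])
```

Notice the combined sigma only ever shrinks as you add measurements; a systematic error shared by all three runs never shows up in it, which is how people end up with those absurdly tiny p-values.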
I guess in the end I still trust the experts, just not as blindly as before. If nothing else, the experience made me want to learn how to do some of these calculations correctly, and what that even means. I have a lot less blind faith in the numbers people report, but sometimes I wish I knew more about how to double-check them. Eventually these kinds of things fall apart if they're done wrong; you just have to give the cream time to rise to the top and whatnot. Always takes longer than we want it to.
TL;DR: Scientists are people too, apparently.