Cook and co (C14) have published a glossy document claiming that I made 24 errors in my recent comment on their work (C13).
Here are some responses. More in a few days:
1. See forthcoming rejoinder. Healey (2011) undermines C13.
2. C14 do not dispute the key claim: non-representativeness of the C13 sample.
3. Consensus is irrelevant in science. Cook's alleged consensus, that humans played some part in the observed warming, is irrelevant in policy.
4. I indeed cited Legates.
5. The raters knew each other, and frequently discussed their ratings with one another.
6. There are various ways to interpret C13. One is that it was a survey of Cook and his mates by Cook and his mates. Cook himself uses this interpretation, "a survey of human subjects", in his argument that the raters are entitled to their privacy (see 7).
7. The requested data are for verification and audit rather than replication.
8. C14 do not dispute the key claim: non-representativeness of the C13 sample.
9. C14 do not dispute the key claim: non-representativeness of the C13 sample.
10. C14 do not dispute the key claim: non-representativeness of the C13 sample.
11. C14 do not dispute the key claim: non-representativeness of the C13 sample. They forget that the onus is on them to demonstrate representativeness. I gave a number of examples of over- and undersampling.
12. C14 do not dispute the key claim: non-representativeness of the C13 sample.
13. Andy S complained about rating so many abstracts that he couldn't tell them apart anymore. I think that is a sign of fatigue.
15. In 7, C14 argue that the raters are interviewees entitled to privacy. In 15, C14 argue that the raters are interviewers. Both cannot be true.
16. C14 contradict the data of C13.
17. I indeed used a small, selective sample as an illustration.
18. I indeed cited Montford.
19. C14 do not dispute the key claim: C13 failed their validation test.
20. C14 do not dispute the key claim: impact and policy papers in C13 contain no evidence on the causes of warming.
21. Implicit endorsement is in the eye of the reader.
22. C14 do not dispute the key claim: C13 mistook trend in composition for trend in endorsement.
23. C14 do not dispute the key claim: C13's results are dominated by papers that contain no evidence on the causes of warming.
24. C14 refer to public opinion whereas I referred to the climate debate.
-
The Guardian has published six hatchet jobs impugning me and my work. The first four are under investigation by the Press Complaints Commission.
For hatchet job #5 and #6, the Guardian granted me the right to reply by return email. They were published together, without a clear structure and in the wrong order, with the first piece heavily edited. Here are the originals.
In response to Republican witness admits the expert consensus on human-caused global warming is real, by Dana Nuccitelli, 2 June 2014
On 29 May 2014, the Committee on Science, Space and Technology of the US House of Representatives examined the procedures of the UN Intergovernmental Panel on Climate Change.
I have been active in the Intergovernmental Panel on Climate Change since 1994, serving in various roles in all three of its working groups, most recently as a Convening Lead Author for the Fifth Assessment Report of Working Group II. My testimony briefly reiterated some of the mistakes made in the Fifth Assessment Report, but focussed on the structural faults in the Intergovernmental Panel on Climate Change, notably the selection of authors and staff, the weaknesses in the review process, and the competition for attention between chapters. I highlighted that the Intergovernmental Panel on Climate Change is a natural monopoly that is largely unregulated. I recommended that the seven-yearly assessment reports be replaced by an assessment journal.
In his article of 2 June, Dana Nuccitelli ignores the subject matter of the hearing, focusing instead on a brief interaction about a paper co-authored by … Mr Nuccitelli.
Mr Nuccitelli unfortunately missed the gist of my criticism of his work. Successive literature reviews, including the ones by the Intergovernmental Panel on Climate Change, have time and again established that there has been substantial climate change over the last century and a half, and that humans caused a large share of that climate change. There is disagreement, of course, particularly on the extent to which humans contributed to the observed warming. This is part and parcel of a healthy scientific debate. There is widespread agreement, though, that climate change is real and human-made.
Mistakenly thinking that agreement on the basic facts of climate change would induce agreement on climate policy, Mr Nuccitelli and colleagues tried to quantify the consensus, and failed. Their sample is not representative of the literature. They claim to have validated their data whereas in fact their validation test fails twice. Seven per cent of their data is wrong by their own results, although spot checks suggest a much higher error rate. They mistake a trend in composition for a trend in substance. Their data show inexplicable patterns. In other words, their paper crumbles upon inspection.
In his defence, Mr Nuccitelli argues that I do not dispute their main result. Mr Nuccitelli fundamentally misunderstands research. Science is not a set of results. Science is a method. If the method is wrong, the results are worthless.
Mr Nuccitelli’s disregard for the scientific method also shows in his team’s refusal to share all of their data for replication and audit. In his recent article, he even misrepresents his own work, which is about the number of scientific papers rather than the number of scientists.
Mr Nuccitelli’s piece is the fifth in a series of articles published in the Guardian impugning my character and my work. Mr Nuccitelli falsely accuses me of journal shopping, a despicable practice.
The theologian Michael Rosenberger recently described climate protection as a new religion, based on a fear of the apocalypse, with dogmas, heretics and inquisitors like Mr Nuccitelli. I prefer my politics secular and my science sound.
In response to Climate contrarians accidentally confirm the 97% global warming consensus, by Dana Nuccitelli, 5 June 2014
Dana Nuccitelli writes that I “accidentally confirm the results of last year’s 97% global warming consensus study”. Nothing could be further from the truth.
I show that the 97% consensus claim does not stand up.
At best, Mr Cook and colleagues may have accidentally stumbled on the right number.
Mr Cook and co selected some 12,000 papers from the scientific literature to test whether these papers support the hypothesis that humans played a substantial role in the observed warming of the Earth. 12,000 is a strange number. The climate literature is much larger. The number of papers on the detection and attribution of climate change is much, much smaller.
Cook’s sample is not representative. Any conclusion they draw is not about “the literature” but rather about the papers they happened to find.
Most of the papers they studied are not about climate change and its causes – but many were taken as evidence nonetheless. Papers on carbon taxes naturally assume that carbon dioxide emissions cause global warming – but assumptions are not conclusions. Cook’s claim of an increasing consensus over time is entirely due to an increase of the number of irrelevant papers that Cook and co mistook for evidence.
The abstracts of the 12,000 papers were rated, twice, by 24 volunteers. Twelve rapidly dropped out, leaving an enormous task for the rest. This shows. There are patterns in the data that suggest that raters may have fallen asleep with their nose on the keyboard. In July 2013, Mr Cook claimed to have data that showed this is not the case. In May 2014, he claimed that data never existed.
The data are also ridden with error. By Cook’s own calculations, 7% of the ratings are wrong. Spot checks suggest a much larger number of errors, up to one-third.
Cook tried to validate the results by having authors rate their own papers. In almost two out of three cases, the author disagreed with Cook’s team about the message of the paper in question.
Attempts to obtain Cook’s data for independent verification have been in vain. Cook sometimes claims that the raters are interviewees who are entitled to privacy – but the raters were never asked any personal detail. At other times, Cook claims that the raters are not interviewees but interviewers.
The 97% consensus paper rests on yet another claim: the raters are incidental; it is the rated papers that matter. If you measure temperature, you make sure that your thermometers are all properly and consistently calibrated. Unfortunately, although he does have the data, Cook does not test whether the raters judge the same paper in the same way.
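Such a calibration check is straightforward once rater-level data are available. A minimal sketch, using made-up ratings on the 1-7 endorsement scale (the actual rater data are not public), computes raw agreement and Cohen's kappa for two raters:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of ten shared abstracts; not Cook's data.
rater_a = [4, 4, 3, 4, 2, 4, 4, 5, 4, 3]
rater_b = [4, 3, 3, 4, 2, 4, 5, 5, 4, 4]

# Raw share of identical ratings: 0.7 for these made-up numbers.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Kappa discounts the agreement expected by chance; about 0.52 here.
kappa = cohens_kappa(rater_a, rater_b)
```

A consistency test of this kind would show whether different raters judge the same paper in the same way, exactly as one would cross-check thermometers.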
Consensus is irrelevant in science. There are plenty of examples in history where everyone agreed and everyone was wrong. Cook’s consensus is also irrelevant in policy. Cook and co try to show that climate change is real and human-made. From that, it does not follow whether and by how much greenhouse gas emissions should be reduced.
The debate on climate policy is polarized, often using discussions about climate science as a proxy. People who want to argue that climate researchers are secretive and incompetent only have to point to the 97% consensus paper.
-
Someone has posted a response to my comment on Cook's 97% consensus paper. The response has caused some hilarity, both among those who think that John Cook invented sliced bread and among those who understand what is going on.
The response is anonymous. Let's refer to its author as Frank Ackerman Jr, who does not work at Tufts University. Junior is unrelated to Frank Ackerman. They just have a name in common, and neither is affiliated to Tufts.
Junior's response focuses on the procedure I used to correct Cook's data. If we follow Cook's paper, then 6.7% of their data is wrong. (This may be an underestimate.) I applied a procedure to correct the erroneous data and found that the dissensus rate goes from 2% to 9%.
Junior takes issue with my procedure. The critique cuts no ice.
Junior's Equation (1) was used by Cook, but not by me. Until such day that Cook releases the survey protocol that shows that the post-hoc data correction was planned before the data were collected, there is no reason to add the term 0.005N4 to the denominator.
Equation (3) is wrong too. Junior notes I used

C = TF

where C is the 7x1 vector of corrected ratings, T is a 7x7 matrix, and F is the 7x1 vector of final ratings, with

T = (1-e)I + eS

Junior did not reconstruct the T that I used. This is unfortunate, as my T is online.

Junior posits another matrix: U = (f/e)T, where T is as above, e = 6.7% is the error rate in Cook's final data, and f = 11.8%. Junior claims that this is the error rate in Cook's raw data (which is actually 18.5%).
Junior then takes the matrix inverse to compute R = U^-1 F, where R would be Cook's original ratings. Using this procedure, Junior finds a nonsensical R (certain elements are negative) and concludes that my procedure is nonsense.
That does not follow. Had Junior followed my forward procedure, he would have corrected S rather than T (from which the new T would have followed). The correction is not a scalar multiplication. Alternatively, Junior could have solved F = UR for U (say, using a RAS procedure with T as a starting point) and computed C = UF = UUR.
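The difference between the forward correction and a scalar rescaling can be sketched numerically. The S and F below are made up for illustration (they are not Cook's data); e and f are the rates quoted above:

```python
e = 0.067  # error rate in the final data
f = 0.118  # Junior's claimed error rate in the raw data

n = 7
# Hypothetical row-stochastic S: an erroneous rating lands on a neighbouring category.
S = [[0.0] * n for _ in range(n)]
for i in range(n):
    S[i][max(i - 1, 0)] += 0.5
    S[i][min(i + 1, n - 1)] += 0.5

# Forward correction matrix: T = (1-e)I + eS, which is again row-stochastic.
T = [[(1 - e) * (1.0 if i == j else 0.0) + e * S[i][j] for j in range(n)]
     for i in range(n)]

F = [65.0, 900.0, 2900.0, 8000.0, 50.0, 15.0, 10.0]  # made-up rating counts
C = [sum(T[i][j] * F[j] for j in range(n)) for i in range(n)]  # C = TF

# A stochastic mixture of nonnegative counts stays nonnegative:
assert all(c >= 0 for c in C)

# By contrast, U = (f/e)T scales every row sum from 1 to f/e (about 1.76),
# so U is no longer stochastic; inverting it need not map counts to counts,
# which is how negative "original ratings" can appear.
```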
Junior thus made an error and blamed it on me.
Update (5 June): Added purple text