1. It is ironic that the left-of-centre newspaper the Guardian has withdrawn from press regulation. It appears to think that companies are in the best position to judge their own behaviour. Only people rich enough to afford a lawsuit are protected against any falsehoods that this newspaper may decide to print.

    The Guardian has also long abandoned the journalistic principle that both sides of a story need to be heard.

    In its latest instalment of a series of hatchet jobs, the Guardian published an article by Mr Robert ET Ward BSc, Lord Stern's PR man. The central claim of the article is, simply, false.

    The figure below is as it appears in the final, published version of Chapter 10 of the Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change.
    This figure is slightly different from the one that appears in the final draft, but both show that the initial impacts of climate change may be beneficial.

    In the final version, we replaced the previous, vague "may be beneficial" (which refers to a regression curve that is in the literature but not reproduced in the IPCC report) with the more precise "17 out of 20 are negative" (from which the intelligent reader would deduce that the remaining 3 are non-negative).

    The claim still stands.

    The Guardian also refers to "faulty data". Presumably, Mr Ward refers to his single contribution, noting a minor error: The draft chapter reported the estimate of Roson and van der Mensbrugghe as an economic loss of 4.6% for a global warming of 4.9K, whereas in fact it should be for 4.8K.

    There were other errors in the draft chapter, of course, but none of them materially affects the qualitative results. We still find that the initial impacts of climate change may be beneficial; and that the impacts of climate change are small relative to such things as the Euro-crisis.

    A striking conclusion, that unfortunately did not make it into the IPCC because the paper (open access pre-print) appeared after the deadline, is that the impacts of climate change do not deviate from zero, in a statistically significant way, until about 3K warming.

    Mr Ward also claims to have been a reviewer of the IPCC WG2 AR5. There must have been an error, because a search does not return any of his comments, while the list of reviewers omits Mr Ward.

    UPDATE: Spiegel has a more balanced story, noting that the bottom-line conclusion hasn't changed, but highlighting the change of tone.

    Disturbingly, it cites Chris Field to say "When these numerical errors were corrected, the statistical relationship between warming and economic impacts had a different shape." Field, I hope, speaks in his personal capacity rather than on behalf of the IPCC. If not, he has violated IPCC protocol.

    Field is wrong. The numerical errors did not affect the shape of said relationship. The new estimates do. The new estimates are the red diamonds in the figure above.

    Here is the story. The old data (the blue circles) roughly fit a parabola: first up, then down, and ever faster down.

    The new data do not fit a parabola: The initial impacts are positive, but the progression to negative impacts is linear rather than quadratic.

    If you fit a parabola to these data, you will find that the mildly negative estimate at 5.5K dominates the positive estimate at 1.0K and the sharply negative estimate at 3.2K. The parabola becomes essentially a straight line through the origin and the right-most observation.

    I think the appropriate conclusion from this is to fit a bi-linear relationship to the data, rather than stick with a parabolic one. This was not yet in the peer-reviewed literature when the window for AR5 closed (it is now: paper, open access pre-print), so we decided to just show the data.
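    The fitting exercise described above can be sketched in a few lines of Python. The data points below are invented for illustration (they are not the actual AR5 estimates), and the kink of the bi-linear fit is found by a crude grid search:

    ```python
    def lstsq2(X, y):
        """Two-parameter least squares via the normal equations."""
        s11 = sum(x1 * x1 for x1, _ in X)
        s12 = sum(x1 * x2 for x1, x2 in X)
        s22 = sum(x2 * x2 for _, x2 in X)
        r1 = sum(x1 * v for (x1, _), v in zip(X, y))
        r2 = sum(x2 * v for (_, x2), v in zip(X, y))
        det = s11 * s22 - s12 * s12
        return (s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det

    # Invented (warming in K, impact in %GDP) points: positive at first, then negative
    T = [1.0, 2.5, 3.2, 4.0, 5.5]
    D = [0.5, 0.1, -2.0, -2.5, -3.0]

    # Parabola through the origin: D = a*T + b*T^2
    a, b = lstsq2([(t, t * t) for t in T], D)
    sse_quad = sum((d - (a * t + b * t * t)) ** 2 for t, d in zip(T, D))

    # Continuous bi-linear through the origin: slope s1 up to the kink T0,
    # slope s2 beyond it; grid-search the kink location
    best = None
    for k in range(10, 50):
        T0 = k / 10.0
        X = [(min(t, T0), max(t - T0, 0.0)) for t in T]
        s1, s2 = lstsq2(X, D)
        sse = sum((d - (s1 * x1 + s2 * x2)) ** 2
                  for (x1, x2), d in zip(X, D))
        if best is None or sse < best[0]:
            best = (sse, T0, s1, s2)
    sse_bil, T0, s1, s2 = best
    ```

    The grid search is crude but enough to compare the two shapes on the same data: both fits pass through the origin, so the comparison isolates the choice between a quadratic and a piecewise-linear progression.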

    Chris Field instead cites an analysis that is just inappropriate.

    UPDATE2: In a piece at WattsUpWithThat, Brandon Shollenberger gets his knickers in a twist over procedural errors. Brandon is great at finding stuff on computers that no one else can, but he sometimes falls short when humans are involved.

    Chapter 10 of IPCC WG2 AR5 was changed between the acceptance of the draft and the publication of the report. Contrary to what some people claim, that is perfectly in line with IPCC procedures.

    There are three routes through which changes can be made.

    First, there is trickle-back. Governments write the Summary for Policy Makers (SPM). Trickle-back ensures that the Summary is consistent with the report it supposedly summarizes. This is the wrong way around, of course. In the case of Chapter 10, the SPM cites summary statistics that were not in the chapter. They are now.

    Second, there are errata.

    Third, if errors are found between acceptance and publication (a six month period in this case), the errors are corrected and documented. That is what happened here. Unfortunately, the documentation has yet to be uploaded.

  2. Abstract
    While earlier research had exposed severe problems with the data quality and analysis of the 97% consensus paper (Cook et al., 2013, Environmental Research Letters), this note finds that the authors have contradicted themselves and that the data gathering invalidates all results.


    The 97% consensus paper (Cook et al., 2013) was hailed as the best ERL paper of 2013 (Cook, 2014). Downloaded more than 228,000 times (ERL, 2013) and with an Altmetric score of almost 1500 (Altmetric, 2013), it definitely was the most visible.

    The core result of Cook et al. is unremarkable (Montford, 2013): They find that the academic literature says that human activity is one of the causes of the observed global climate change.

    Some have claimed that Cook et al. found a consensus on the dangers of climate change (Kammen, 2013) or on the need for climate policy (Davey, 2013). They investigated neither. Even some of the authors of the paper misrepresent its findings (Nuccitelli, 2014, Friedman, 2014, Henderson, 2014).

    Cook et al. took a sample of the academic literature and rated its contents. The raters were recruited through a partisan website (Cook et al., 2013) and frequently communicated with each other (Duarte, 2014). Their sample is not representative of the literature (Tol, 2014a). The sample was padded with large numbers of irrelevant papers (Tol, 2014a). For example, a paper on photovoltaics in Kenya (Acker and Kammen, 1996) was taken as evidence that climate change is caused by humans as was a paper on the coverage of climate change on US TV (Boykoff, 2008). Three-quarters of the "endorsing" abstracts offer no evidence either way (Tol, 2014a). Their attempt to validate the data failed (Tol, 2014a). An attempt to replicate part of the data failed too (Legates et al., 2013). The data show inexplicable patterns (Tol, 2014a) while the consensus rate suffers from confirmation bias (Cook et al., 2014a, Tol, 2014b).

    The problems do not stop there. It appears - no survey protocol was released - that the research team (1) gathered data (19 February to 15 April 2012), (2) studied the results, (3) gathered more data (11 May to 1 June 2012), (4) studied the results again, (5) changed the classification system, and (6) gathered more data and reinterpreted the rest. The results from steps (1) and (3) are different (raw sample chi-sq(df=6)=255, p<0.001; matched sample chi-sq(df=6)=393, p<0.001). The results from steps (3) and (6) are different too: The dissensus rate changes by one-half (Tol, 2014a).
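    The chi-sq(df=6) statistics quoted above are standard homogeneity tests on a contingency table of rating categories by gathering phase. A minimal pure-Python sketch, with invented counts rather than the actual rating data; a 2x7 table gives df = (2-1)*(7-1) = 6:

    ```python
    # Invented counts: abstracts per rating category in two gathering phases
    phase1 = [30, 250, 900, 2000, 10, 5, 3]
    phase2 = [60, 400, 700, 1500, 30, 12, 8]
    rows = [phase1, phase2]

    row_tot = [sum(r) for r in rows]
    col_tot = [sum(c) for c in zip(*rows)]
    grand = sum(row_tot)

    # Expected counts under the null of identical rating distributions
    exp = [[row_tot[i] * col_tot[j] / grand for j in range(7)] for i in range(2)]
    chi2 = sum((rows[i][j] - exp[i][j]) ** 2 / exp[i][j]
               for i in range(2) for j in range(7))
    df = (len(rows) - 1) * (len(rows[0]) - 1)  # = 6
    ```

    A large chi2 relative to a chi-square distribution with 6 degrees of freedom would indicate that the two phases rated abstracts differently.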

    Alterations of the sampling strategy or survey design during the course of data gathering, and revealing preliminary results to the data gatherers are typically frowned upon because of the risks of (inadvertently) skewing the results. In this case, the authors had a sharp prior on the results before the first data were collected (Andrew, 2013, Montford, 2014).

    Cook et al. were slow to release part of their raw data, hindering attempts to check their analysis in contravention of journal policy (IOP, 2013). Part of their data were never released.

    Time stamps were held back because they "were not collected" (Cook et al., 2014b) although they were referred to in an earlier exchange (Cook, 2013). Time stamps were part of an unofficial data release (Shollenberger, 2014). This allowed the above reconstruction of the data gathering process.

    Time stamps corroborate my earlier hypothesis (Tol, 2014a) that some of the strange patterns in the data are due to rater fatigue. This was contradicted by Cook et al. (2014b) even though John Cook had earlier noted that "[e]veryone's suffering rater fatigue" (Cook, 2012). Indeed, one rater read and classified 765 abstracts in the course of 72 hours. Rater fatigue implies unreliable data.

    Rater IDs were also held back, with the authors citing a confidentiality clause in an ethics approval (UQ, 2014) that does not cover this part of the data gathering (UQ, 2012). Rater IDs were unofficially released later. The officially released data show that different people rated the same paper differently in one-third of all cases (Cook et al., 2013). The unofficial data reveal more. The survey started with a team of 24 raters, but numbers fell quickly. There is a difference between the raters who stayed and those who left (t=-6.51, p<0.001). The group of raters also changed its composition after the data were inspected (chi-sq(df=23)=7265, p<0.001).
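    The t-statistic quoted for stayers versus leavers is a standard two-sample comparison. A minimal sketch of Welch's t (which does not assume equal variances) on invented per-rater averages, not the actual rater data:

    ```python
    import math

    # Invented per-rater mean ratings, for illustration only
    stayed = [3.9, 4.0, 4.1, 3.8, 4.2, 4.0, 3.9, 4.1]
    left = [4.4, 4.6, 4.5, 4.3, 4.7, 4.5]

    def welch_t(x, y):
        """Welch's t-statistic for two samples with unequal variances."""
        nx, ny = len(x), len(y)
        mx, my = sum(x) / nx, sum(y) / ny
        vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variances
        vy = sum((v - my) ** 2 for v in y) / (ny - 1)
        return (mx - my) / math.sqrt(vx / nx + vy / ny)

    t = welch_t(stayed, left)  # negative here: stayers rate lower than leavers
    ```

    A strongly negative (or positive) t indicates that the two groups of raters behaved differently, which is the point at issue.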

    In sum, one of the most visible climate papers of recent years is not sound. Whereas previous critique could be interpreted as a lack of competence (Tol, 2014a), the later data release suggests that Cook et al., perhaps inadvertently, worked towards a given answer. This reflects badly on the authors, referees, editors and publisher. It also weakens the activists and politicians who cite Cook et al. in support of their position.

    The data and statistical tests underlying this note can be found at http://www.sussex.ac.uk/Users/rt220/CookERL.xlsx. Jose Duarte and Andrew Montford provided excellent comments on an earlier version.


    REFERENCES
    ACKER, R. H. & KAMMEN, D. M. 1996. The quiet (energy) revolution: Analysing the dissemination of photovoltaic power systems in Kenya. Energy Policy, 24, 81-111.
    ALTMETRIC. 2013. Score in context [Online]. Available: http://www.altmetric.com/details.php?citation_id=1478869&src=bookmarklet [Accessed 5/9/2014].
    ANDREW. 2013. Cook's 97% consensus study game plan revealed. Popular Technology [Online]. Available from: http://www.populartechnology.net/2013/06/cooks-97-consensus-study-game-plan.html.
    BOYKOFF, M. 2008. Lost in translation? United States television news coverage of anthropogenic climate change, 1995-2004. Climatic Change, 86, 1-11.
    COOK, J. 2012. Fatigue. Available from: http://rankexploits.com/musings/2014/sks-tcp-front/#comment-130926.
    COOK, J. 2013. Query re request for Cook et al. data [Online]. University of Queensland. Available: http://www.sussex.ac.uk/Users/rt220/Cook31July.png
    COOK, J. 2014. Skeptical Science consensus paper voted ERL's best article of 2013. SkepticalScience [Online]. Available from: http://skepticalscience.com/SkS-consensus-paper-ERL-best-article-2013.html.
    COOK, J., NUCCITELLI, D., GREEN, S. A., RICHARDSON, M., WINKLER, B., PAINTING, R., WAY, R., JACOBS, P. & SKUCE, A. 2013. Quantifying the consensus on anthropogenic global warming in the scientific literature. Environmental Research Letters, 8.
    COOK, J., NUCCITELLI, D., SKUCE, A., JACOBS, P., PAINTING, R., HONEYCUTT, R., GREEN, S. A., LEWANDOWSKY, S., RICHARDSON, M. & WAY, R. G. 2014a. Reply to 'Quantifying the consensus on anthropogenic global warming in the scientific literature: A re-analysis'. Energy Policy, 73, 706-708.
    COOK, J., NUCCITELLI, D., SKUCE, A., WAY, R., JACOBS, P., PAINTING, R., HONEYCUTT, R., GREEN, S. A., LEWANDOWSKY, S. & COULTER, A. 2014b. 24 Critical Errors in Tol (2014) - Reaffirming the 97% consensus on anthropogenic global warming. Brisbane: SkepticalScience, University of Queensland.
    DAVEY, E. 2013. Climate change, acting on the science. Gov.uk [Online]. Available from: https://www.gov.uk/government/speeches/edward-davey-speech-climate-change-acting-on-the-science
    DUARTE, J. 2014. Cooking stove use, housing associations, white males, and the 97%. José Duarte [Online]. Available from: http://www.joseduarte.com/blog/cooking-stove-use-housing-associations-white-males-and-the-97
    ERL. 2013. Metrics [Online]. Institute of Physics. Available: http://iopscience.iop.org/1748-9326/8/2/024024/metrics [Accessed 5/9/2014].
    FRIEDMAN, D. 2014. A climate falsehood you can check for yourself. Ideas [Online]. Available from: http://daviddfriedman.blogspot.co.uk/2014/02/a-climate-falsehood-you-can-check-for.html.
    HENDERSON, D. R. 2014. David Friedman on the 97% Consensus on Global Warming. Library of Economics and Liberty [Online]. Available from: http://econlog.econlib.org/archives/2014/02/david_friedman_14.html.
    IOP. 2013. IOP Ethical Policy for Journals [Online]. Institute of Physics. Available: http://authors.iop.org/atom/help.nsf/0/F18C019D6808524380256F630037B3C2?OpenDocument
    KAMMEN, D. M. 2013. The story of a presidential tweet. The Berkeley Blog [Online]. Available from: http://blogs.berkeley.edu/2013/05/29/the-story-of-a-presidential-tweet/
    LEGATES, D. R., SOON, W., BRIGGS, W. M. & MONCKTON OF BRENCHLEY, C. 2013. Climate Consensus and 'Misinformation': A Rejoinder to Agnotology, Scientific Consensus, and the Teaching and Learning of Climate Change. Science and Education.
    MONTFORD, A. W. 2013. Consensus? What Consensus? London: Global Warming Policy Foundation.
    MONTFORD, A. W. 2014. Fraud, bias and public relations - The 97% consensus and its critics. London: Global Warming Policy Foundation.
    NUCCITELLI, D. 2014. Twitter profile [Online]. Available: https://twitter.com/dana1981 [Accessed 5/9/2014].
    SHOLLENBERGER, B. 2014. TCP Results! Izuru [Online]. Available from: http://www.hi-izuru.org/mirror/
    TOL, R. S. J. 2014a. Quantifying the consensus on anthropogenic global warming in the literature: A re-analysis. Energy Policy, 73, 701-705.
    TOL, R. S. J. 2014b. Quantifying the consensus on anthropogenic global warming in the literature: Rejoinder. Energy Policy, 73, 709.
    UQ. 2012. Notification of approval [Online]. Available: http://www.climateaudit.info/correspondence/foi/queensland/cook%20consensus%20Documents%20released%20under%20RTI.pdf.
    UQ. 2014. UQ and climate change research. UQ News [Online]. Available from: http://www.uq.edu.au/news/article/2014/05/uq-and-climate-change-research.

    UPDATE: The comments by an editorial board member are below. I appealed this three times, but to no avail.

    This is an unsolicited Perspective on the Cook et al. paper from 2013 [1] by an author who wishes to reflect on the original study published in ERL.
    This submission follows a solicited Perspective [2] on the Cook et al. paper, which reflected on the study and the broader implications of the study internationally at the time of publication.

    This Perspective also follows another submission to ERL, by the same author, of comments on the Cook et al. study, which were rejected by ERL after peer review. These comments were subsequently published [3, 4] with a response from Cook et al. [5].

    ERL welcomes debate and, despite the fact that we have already published a Perspective on this study, the original paper continues to be highly popular and debated. In theory then, ERL could publish another Perspective, provided it contributes to healthy scientific debate, is original, timely, and advances the literature on this important theme.

    Overall, this current submission shows a different standpoint from the commissioned Perspective published in 2013, and thus could have had the potential to contribute. Unfortunately, in its current form, this piece is not of satisfactory scope or breadth for a Perspective, and is not sufficiently original. It also has problems of unsubstantiated assertion, lack of disinterested reflection, and polemic expression.

    I have the following detailed comments and suggestions for improvement:

    1. Scope: ERL guidelines suggest that a Perspective is a commentary “highlighting the impact and wider environmental implications of research appearing in ERL”. This submission does, briefly, look at the impact of the Cook et al paper internationally (particularly lines 29 to 38 on page 1 and lines 40 to 45 on page 2). Unfortunately, at present, this discussion of impacts forms a minor part of the submission, and the bulk of the discussion is a repeat assertive critique of the methodology of the original study, which is not the scope or purpose of a Perspective in ERL. Generally, there is also a missed opportunity here to discuss the one-year-on implications of the original research (particularly given its profile internationally). There is not even a reference to the previous ERL Perspective [2], which did look at implications of the work in 2013, and which I think could, very usefully, have been the basis for this follow up Perspective.

    2. Breadth: This might have been an interesting perspective had the author put this study in the context of the fairly prolific academic literature on the theme of scientific consensus around anthropogenic climate change (including the ERL commissioned Perspective[2]). In the context of this submission, it is odd that the author does not reflect on other previously published studies, including the original study, (on which Cook et al. base part of their design[6]), and other studies of climate scientists’ viewpoints on anthropogenic climate change. In fact it is remarkable, and would have been an interesting point for reflection, that the results of other studies published in peer reviewed journals in this field, conducted over different periods, and using a variety of methods (ranging from direct interviews of climate scientists to literature reviews of climate papers) all find the same narrow range of 94-98% scientific consensus on anthropogenic climate change [6-9]. In this context, it is not clear why the author focuses so much time and energy on this one study within the overall literature, as it was, essentially, simply a timely, interesting, original study, published by ERL as such, which corroborates results of other studies within this field. A good Perspective would look at the wider implications of this latest study, including how it reflects overall state of knowledge of the theme, and would discuss how this study fitted into the overall policy debate - this is a missed opportunity to do this.

    3. Originality: The author himself points us to three other published papers (two of which are his) discussing the Cook et al. paper[3-5]. The bulk of this submission goes over the same ground in less depth or justification, but with stronger and less considered assertion (Lines 42 to 61 on page 1 and lines 1 to 38 on page 2). The majority of this submission does not add any new information, and it is not clear why the author wishes to re-publish the same ideas contained in his already published methodological analysis of the Cook et al. study. At present then, the bulk of this submission is unoriginal. I would suggest that all this material is removed, as it adds nothing new, and the author would then have space to consider points more relevant to a Perspective (as discussed above).

    4. Tone of Piece: This is the most troubling aspect of this submission. I will begin by stating the obvious: there is an important space in science for debate – openness to doubt and debates about “truth”, are an essential part of science. This debate can be very lively, but is at its most healthy when it is conducted with respect, and with rigorous presentation of evidence to back a position. In the spirit of this, ERL hosts Perspectives to encourage reflection and discussion of the scientific papers that it publishes.

    Overall, the commentary could be made substantially more balanced and contemplative – for example, as proof of “truth” the author cites himself and a series of mostly social media sources, with little reference to the academic literature and with little evidence of neutrality in his selection of “evidence”.

    There is a more unfortunate and confrontational aspect to the tone of this submission when the author makes his final unsubstantiated reflections on the soundness of the original research and assertions about Cook et al.’s scientific conduct (as he does in line 42 on page 2 “Whereas previous critique could be interpreted as a lack of competence (Tol, 2014a), the later data release suggests that Cook et al., perhaps inadvertently, worked towards a given answer”). At its most basic, there is simply not sufficient evidence of this assertion of unsoundness: the authors of the original paper made their hypothesis clear, (and most studies would have had the same hypothesis, particularly in a context where previous studies have found overwhelmingly significant positive results looking at the same issue). The original paper also had a detailed section on study design and methodology and a discussion of interpretation issues and other study design possibilities. Cook et al have also been open to release raw data, and to discuss their study methodology in subsequent correspondence. There is nothing to suggest this is “unsound” – on the contrary, most scientists would think it suggests exemplary scientific conduct.

    Finally, it is not quite clear what the author of this submission aims to achieve when, much more problematically, he goes on to suggest that somehow the act of publication of the Cook et al paper by ERL “reflects badly” on the reviewers of the original paper, on ERL editors and on the publishers. What does the author imply with this statement and on what basis? The author has no evidence, and there is no evidence, that ERL and the IOP employed anything but normal state-of-the-art peer-review processes and publishing procedures when dealing with Cook et al.

    Overall, I welcomed this unsolicited Perspective in the interests of a constructive debate around an interesting study published in ERL. However, in its current form, I do not find it to fit ERL guidelines for a Perspective, nor to be original, nor to be of sufficient breadth or disinterested reflection to contribute to the literature, or to knowledge.

    If the author were able to do this, it would be interesting to see a substantially revised version of this perspective with these concerns addressed.

    1. Cook, J., et al., Quantifying the consensus on anthropogenic global warming in the scientific literature. Environmental Research Letters, 2013. 8(2): p. 024024.
    2. Reusswig, F., History and future of the scientific consensus on anthropogenic global warming. Environmental Research Letters, 2013. 8(3): p. 031003.
    3. Tol, R.S.J., Quantifying the consensus on anthropogenic global warming in the literature: A re-analysis. Energy Policy, 2014. 73(0): p. 701-705.
    4. Tol, R.S.J., Quantifying the consensus on anthropogenic global warming in the literature: Rejoinder. Energy Policy, 2014. 73(0): p. 709.
    5. Cook, J., et al., Reply to ‘Quantifying the consensus on anthropogenic global warming in the scientific literature: A re-analysis’. Energy Policy, 2014. 73(0): p. 706-708.
    6. Oreskes, N., Beyond the ivory tower. The scientific consensus on climate change. Science, 2004. 306(5702): p. 1686.
    7. Anderegg, W.R.L., et al., Expert credibility in climate change. Proceedings of the National Academy of Sciences, 2010. 107(27): p. 12107-12109.
    8. Doran, P.T. and M.K. Zimmerman, Examining the Scientific Consensus on Climate Change. Eos, Transactions American Geophysical Union, 2009. 90(3): p. 22-23.
    9. Farnsworth, S.J. and S.R. Lichter, The Structure of Scientific Opinion on Climate Change. International Journal of Public Opinion Research, 2011.

  3. Nick Stern was on HARDtalk, talking about the Stern2.0 report. A number of things struck me.

    Stern argues that renewables are competitive with fossil fuels, citing Al Gore as an authority. I don't believe that. Stern himself seems to have some doubt as well, as he also argues for "strong policy". If renewables are competitive, policy is not needed. But recent observations show that investment in renewable energy collapses when government support is withdrawn.

    Stern is rather dismissive of the models (see also) of Bernard & Vielle (paper), Boehringer, Loeschel, Moslener & Rutherford (paper) and Kretschmer, Narita & Peterson (paper). Stern even claims that they "assum[e] the[ir] results" (4:13), that is, that they worked backwards from their conclusions to their assumptions, one of the most serious accusations against a researcher. Fortunately, Stern told a lie. These people are solid, upright academics, honest scholars who work with integrity and let the chips fall where they may.

    Stern prefers bottom-up models for estimating the economic impact of climate policy. These models have a rich representation of energy and other technologies, but a poor representation of trade and investment. Bottom-up models get their welfare accounting wrong by construction. Research in this field has long moved away from the bottom-up/top-down controversy and nowadays relies on hybrid models that make sense from both an engineering and an economic perspective. I am surprised that the "New" Climate Economy relies on outdated methods.

    Stern makes much of the so-called secondary benefits of climate policy. That is, a switch from fossil fuels to renewables would help to slow down climate change (primary benefit) and to reduce air pollution (secondary benefit). This is a poor argument in theory. Tinbergen showed in 1952 that you need two policy instruments to solve two problems (climate change, air pollution).

    Stern's is a poor argument in practice too. We used to have an air pollution problem in Europe, Japan and North America. We solved that by putting scrubbers on smoke stacks and catalytic converters in cars. These take out the pollutants. They also use energy, so that carbon dioxide emissions increased. Filters are still the cheapest solution. If China and India want to get rid of their smog at the lowest possible cost in the shortest possible time using proven technology, they should follow the OECD example and install scrubbers.

    In an encounter earlier this year in Jordan, I discovered that Stern was unaware of the 1964 paper by Koopmans, Diamond and Williamson. That is remarkable. The paper appeared in a prominent journal shortly before Stern started work on his PhD in a related area. Koopmans was already a big name, and Diamond would soon become one. The Koopmans, Diamond and Williamson paper, of course, shows that arguments for a zero pure rate of time preference, as put forward by Stern1.0, violate Strong Pareto. In other words, Stern argues that social welfare is improved by hurting individuals.

    Finally, Stern expressed concern about the impact of sea level rise, apparently unaware that coastal protection is a mature technology.

  4. There is a new Stern Review. Colloquially known as Stern2.0, the Global Commission on the Economy and Climate released its report last Tuesday. Since his 2006 review, Nick Stern has been regularly in the news, claiming it is worse than we thought. The new report fits the mould.

    The summary was released before the main report. Rejecting the Scottish Enlightenment, we are invited to believe its findings without inspecting the evidence. It seems, though, that Lord Stern of Brentford has produced another work of far-fetched fiction. Stern2.0 makes three claims, none of which stands up: Climate policy stimulates economic growth; climate change is a threat to economic growth; and an international treaty is the way forward.


    “Well-designed policies […] can make growth and climate objectives mutually reinforcing”

    The original Stern Review argued that it would cost about one percent of Gross Domestic Product to stabilise the atmospheric concentrations of greenhouse gases around 525 ppm CO2e. The Intergovernmental Panel on Climate Change puts the costs twice as high. Stern2.0 advocates a more stringent target, 450 ppm, and finds that this would accelerate economic growth.

    This is implausible. Renewable energy is more expensive than fossil fuels. The rapid expansion of renewables is because they are heavily subsidised rather than because they are commercially attractive. The renewables industry collapsed in countries where subsidies were withdrawn. Raising the price of energy does not make people better off. Higher taxes, to pay for subsidies, are a drag on the economy.

    Climate policy need not be expensive. Study after study has shown that it is possible to decarbonise at a modest cost. Stern2.0 missed an opportunity to point out that climate policy may be cheap, but this is not guaranteed. Climate policy can also be very, very expensive. Europe has adopted a jumble of regulations that impose real costs on companies and households without doing much to reduce emissions.

    The subsidies and market distortions that typify climate policy do, of course, create opportunities for the well-connected to enrich themselves at the expense of the rest of society. Perhaps Stern2.0 mistook rent seeking for wealth creation.

     “[I]f climate change is not tackled, growth itself will be at risk”

    The new report claims that climate change would be a threat to economic growth. The original Stern Review argued that the damages would be 5-20% of income. In the worst case, we would not be 4 times as rich by the end of the century, but only 3.8 times. The Intergovernmental Panel on Climate Change reckons Stern1.0 exaggerated the impacts by a factor of 10 or more. The new Stern agrees that the old Stern was off by an order of magnitude, but in the opposite direction.
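    The growth arithmetic behind that comparison can be made explicit. A trivial sketch, assuming (as the text does) a baseline in which income quadruples by 2100, with damages expressed as a share of end-of-century income:

    ```python
    baseline = 4.0           # assumed: income multiple by 2100 without climate change
    low, high = 0.05, 0.20   # Stern1.0's damage range, as a share of income

    richer_low = baseline * (1 - low)    # a 5% loss: about 3.8 times as rich
    richer_high = baseline * (1 - high)  # a 20% loss: about 3.2 times as rich
    ```

    Either way, the world remains several times richer than today; the dispute is over the size of the haircut, not the sign of growth.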

    Over the last two decades, economists have re-investigated the relationship between economic development and geography. This has not led to a revival of the climate determinism of Ellsworth Huntington. On the contrary, most research has shown that climate plays at most a minor role in economic growth, and that the impact of climate is moderated by technology and institutions. Just consider Iceland and Singapore. Stern2.0 goes against the grain of a large body of literature.

     “A strong […] international agreement is essential”

    The new Stern Review calls for an international treaty with legally binding targets. Albert Einstein defined insanity as doing the same thing over and over again and expecting a different result. Since 1995, the parties to the United Nations Framework Convention on Climate Change have met year after year to try and agree on legally binding targets – and failed every time. The reasons are simple. It is better if others reduce their emissions but you do not. No country likes to be bound by UN rules for its industrial, agricultural and transport policies. The international climate negotiations have been successful in creating new bureaucracies, but not in cutting emissions.

    Stern also argues that “[d]eveloped countries will need to show leadership”. The EU has led international climate policy for two decades, but without winning any followers. The broken record that is Stern2.0 is unlikely to inspire enthusiasm for more expensive energy.

    A way forward

    The Stone Age did not end because we ran out of stones, but because we found something better: bronze. The fossil fuel age will end when we find an alternative. The current renewables are simply not good enough – except for the happy few who profit from government largesse. The environmental movement’s aversion to nuclear power and shale gas increases emissions and creates an impression of Luddism; climate policy should instead focus on accelerating technological change in energy. The unfounded claims by Stern2.0 do not build the confidence that investors and inventors need to take a punt on a carbon-free future. Exaggeration is great for headlines, but sober analysis is more convincing in the long run.

    An edited version appeared in The Conversation


  5. Frank Ackerman wrote a new piece. It was covered here and here.

    Ackerman makes three points, all relating to my 2013 JEDC paper. The points are red herrings.

    Red herring #1

    Ackerman notes that only 16 studies were used to calibrate the total impact function. Ackerman insinuates that the sample is selective and that the results suffer from selection bias, but he fails to identify a single study I overlooked.

    It is unfortunate that Ackerman missed my 2014 CompEcon paper. It identifies a number of impact studies (mostly recent ones) that were omitted from my earlier work. These new studies substantially affect the estimates of the impacts of more profound climate change. There is a notable shift away from concern about climate change.

    Ackerman is concerned about dependence between estimates, but he did not download the data to demonstrate that this matters. My 2014 paper does correct for dependence between studies, and finds that it does not affect the results.

    Red herring #2

    Ackerman repeats his concerns about independence for the estimates of the marginal impacts. Although the data are freely available (and the link is provided in the paper), Ackerman does not bother to test whether this has any effect. He also fails to refer to my 2011 ARRE paper where I did the test for him (and found that it does not really matter).

    Ackerman makes much of how I treat his study. As explained in my 2005 EnPol paper, republishing an estimate increases its pedigree. At the same time, results are weighted based on whether the authors regard an estimate to be a core result, a sensitivity analysis, or a previous estimate they disagree with.

    Ackerman thus protests against a design choice that was established 10 years ago and has been through peer-review six times (2005 EnPol, 2007 EconEjrn, 2009 JEP, 2010 PWP, 2011 ARRE, and 2013 JEDC). Peer-review is fallible, of course, and repeated peer-review is fallible too. However, Ackerman fails to establish that his concerns have any effect - indeed, he does not even try. One would suspect, though, that omitting 1 out of 588 estimates would have a minimal effect.

    Red herring #3

    Ackerman rails against the procedure used to correct for selection bias in abatement cost estimates, established by our 2010 ClCh paper. He cites Barker and Crawford-Brown, but not our rejoinder. Particularly, Ackerman highlights a frequentist objection, but fails to realize that these methods are fundamentally Bayesian in nature.

    Ackerman also protests against my choice of the EMF22 database. As noted in the JEDC paper, other databases were either not yet available (e.g., EMF27) or of poor quality. The IPCC AR5 database, for instance, requires many hours of cleaning and reorganization, and contains infeasible and incomparable results. As explained in the paper, including such data may not increase information - and neither Ackerman nor Barker and Crawford-Brown show the opposite, or indeed try.

    Postscript

    Ackerman revisits an earlier paper, in which he claimed to have found an error in our work. In fact, Ackerman's claim did not get beyond "different assumptions imply different results", and in their rejoinder the authors indeed admitted that they never tested for errors, but only for the effect of different assumptions. The Associate Editor notes that Ackerman suppressed evidence that the alleged error is not in fact an error.

  6. My rejoinder to Cook's response to my comment on Cook's paper is out at last.

    I had expected this to be my final contribution, but that was before Brandon Shollenberger found part of the hidden data and Simon Turnill's FOI request revealed that Cook has perhaps not been entirely truthful.

    Almost every abstract was rated twice. The rater IDs allow for a comparison of raters i and j rating the same abstract, and thus a test of whether one was more inclined to find endorsement. Using the distance between average ratings as a metric, the pairwise comparison is turned into an index of the tendency to endorse.

    The figure below plots the result against the number of completed ratings. The relationship does not appear to be random.

    Note that the sample was restricted to raters who completed more than 100 ratings.
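
    The construction above can be sketched in a few lines of pandas. The data layout (one row per rating, with these column names) and the sign convention are my assumptions for illustration, not the schema of the released file.

```python
import pandas as pd

def tendency_to_endorse(ratings, min_ratings=100):
    """Pairwise tendency-to-endorse index, as sketched above.

    `ratings` has one row per rating, with columns abstract_id,
    rater_id, and rating (1 = endorsement ... 7 = rejection).
    """
    # restrict to raters with more than `min_ratings` completed ratings
    counts = ratings.groupby("rater_id").size()
    active = counts[counts > min_ratings].index
    ratings = ratings[ratings["rater_id"].isin(active)]

    # pair up the raters of each abstract (self-join on abstract_id)
    pairs = ratings.merge(ratings, on="abstract_id", suffixes=("_i", "_j"))
    pairs = pairs[pairs["rater_id_i"] != pairs["rater_id_j"]]

    # signed distance; negative means rater i rated lower on the 1-7 scale,
    # i.e. closer to endorsement (the sign convention is my guess)
    pairs["diff"] = pairs["rating_i"] - pairs["rating_j"]

    # average over shared abstracts per pair, then over counterparts per rater
    per_pair = pairs.groupby(["rater_id_i", "rater_id_j"])["diff"].mean()
    return per_pair.groupby("rater_id_i").mean().sort_values()
```

    The self-join compares each rater only to the raters who rated the same abstracts, which is what makes the index robust to raters seeing different samples.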

    UPDATE: I omitted the scale of the horizontal axis. On a scale of 1 to 7, the maximum difference between raters is 1.6 (22% 26%). See:
    ID #ratings TtE
    873 3791 -0.73
    677 2671 -0.41
    1375 1945 -0.39
    3364 3968 -0.32
    1439 2940 -0.27
    1 2208 -0.12
    4194 2191 -0.09
    1802 1266 0.09
    2103 1739 0.14
    71 966 0.15
    1683 1707 0.17
    6319 111 0.26
    1227 615 0.27
    2178 314 0.42
    2001 155 0.84
    ID = abstract rater ID
    #ratings = number of abstract ratings completed
    TtE = tendency to endorse: average across raters of average distance in ratings

    Between individual raters, the difference is up to 2.0. That is, one rater may have read "implicit endorsement" where the other found "explicit endorsement with quantification". Or one may have found "implicit rejection" where the other read "implicit endorsement".

  7. In an earlier post on rater bias, I noted that Cook's ratings were done over two periods with a break in between. Ratings are different before and after the break, and raters had the opportunity to inspect the results of the first period during the break. This would invalidate Cook's data.

    However, Steve McIntyre and Brandon Shollenberger protested that the second rating period was dominated by tie-break ratings. Tie-breaks are a particular subsample of the data, so their results should be expected to differ. Indeed, if we plot the chi-squared statistic against time, testing the first/second/third ratings on a particular day against all first/second/third ratings, nothing untoward appears in the later period of active rating.
    (Figures: chi-squared statistics over time for the first, second, and third (tie-break) ratings.)
    That said, the tie-break ratings are not without blemish (apart from the fact that 7 out of 46 days are above the 99%ile). Comparing the first ratings that were not challenged against those that were, I find that their distributions differ (chi2=80, p<0.01). Ditto for the second ratings (chi2=29, p<0.01). This is as it should be. However, comparing the unchallenged ratings to the tie-breaks, a large difference appears (chi2=393, p<0.01). That is, the tie-breaks (in the second period) moved away from the original ratings (in the first period). Indeed, in 74 cases, the third rating lies outside the bracket of the first and second rating. And some abstracts were re-rated even though the first two ratings agreed.
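
    A comparison like the one above can be run as a chi-squared contingency test on the counts per rating category in the two groups. The counts below are invented for illustration; they are not the actual data, and the exact test used may differ in detail.

```python
import numpy as np
from scipy.stats import chi2_contingency

# counts per rating category (1..7) in two groups of ratings; invented numbers
unchallenged = np.array([10, 85, 300, 750, 8, 3, 2])
tie_breaks   = np.array([4, 60, 180, 320, 2, 1, 1])

# rows = groups, columns = rating categories
chi2, p, dof, expected = chi2_contingency(np.vstack([unchallenged, tie_breaks]))
print(f"chi2 = {chi2:.1f}, p = {p:.3f}, dof = {dof}")
```

    A small p-value says the two groups of ratings were not drawn from the same distribution, which is the pattern reported for the tie-breaks versus the unchallenged ratings.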

    Particularly, the tie-break rating counted 44% and 25% fewer rejections of the hypothesis of anthropogenic warming compared to the first and second ratings, respectively; but 8% fewer and 16% more endorsements than in the first and second ratings, respectively.

    Recall that the tie-break ratings took place after the raters had had the opportunity to look at their results.



  8. I wrote earlier about the latest data release from the Consensus project, highlighting the frantic ratings by one of Cook's helpers, the lack of inter-rater reliability, and the systematic differences in ratings between days. I explored the latter a little further.

    Cook's original ratings run from 1 to 7, with 4 neutral. I rescaled these to run from +3 to -3, with 0 neutral. Adding up all rescaled scores, we find 11594. The number is positive because Cook et al. found that more papers support (+3 to +1) than reject (-1 to -3) the hypothesis that human activity contributed to the observed global warming.
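
    The rescaling is simply 4 minus the original rating, so that 1 (strongest endorsement) becomes +3 and 7 (strongest rejection) becomes -3. A toy example, with made-up ratings:

```python
import numpy as np

ratings = np.array([1, 2, 3, 3, 4, 4, 5, 7])  # illustrative, not the actual data
rescaled = 4 - ratings  # +3 = strongest endorsement, -3 = strongest rejection
total = int(rescaled.sum())
print(total)  # 3: endorsements outweigh rejections in this toy sample
```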

    Now that we have date stamps, we can compute the same score per day. This is shown in the figure below. The number goes up and down with the number of abstracts rated on that particular day.

    I bootstrapped the daily data, computed the same score, and its 95% confidence interval. This, too, is shown in the figure below, with negative deviations in brown (a bias towards rejection) and positive deviations in green (a bias towards endorsement).
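
    One way to read the bootstrap above (my interpretation, not necessarily the exact procedure): draw samples of the day's size from the pooled rescaled ratings, sum each sample, and take the 2.5th and 97.5th percentiles as the interval an unbiased day should fall in.

```python
import numpy as np

rng = np.random.default_rng(42)

def daily_score_interval(pooled, n_day, n_boot=10_000):
    """95% bootstrap interval for a day's summed score, given `n_day`
    ratings drawn with replacement from the pooled rescaled ratings."""
    sums = np.array([rng.choice(pooled, size=n_day, replace=True).sum()
                     for _ in range(n_boot)])
    return np.percentile(sums, [2.5, 97.5])

# toy pooled sample: mostly mild endorsements, a few rejections
pooled = np.array([1] * 90 + [-1] * 10)
lo, hi = daily_score_interval(pooled, n_day=50)
# a day whose observed sum falls outside [lo, hi] would be flagged as deviant
```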

    For the first period, the observed scores move about in their confidence intervals, sometimes higher than expected and sometimes lower. Then there is a period in which few papers were rated -- followed by a third period in which paper ratings systematically deviated towards endorsement.

    The second figure confirms this. It shows the histogram of ratings for the first period (January-April, including the quiet month of April) and the third period (May-June). UPDATE: The hypothesis that the two distributions are the same, or identical to the joint distribution, is rejected at the 1% significance level.
    UPDATE 2: The story continues.


  9. The saga of the 97% consensus continues. My re-analysis was based on a partial data release. Notably, rater IDs and time stamps were missing. The former are needed to test inter-rater reliability, the latter to test for fatigue.

    Thanks to Brandon Shollenberger, we now have rater IDs and date stamps.

    The rater IDs may or may not be protected by a confidentiality agreement. If so, U Queensland has a problem with internet security. If not, U Queensland has a problem with telling the truth.

    The table below shows a test for inter-rater reliability. The first column has the number of abstracts rated (which ranges from 2 to almost 4000). The second column has the fraction of ratings ignored in the final assessment (which ranges from 0 to 78%). The next seven columns have the fraction of endorsements by level and rater. The second column from the right has the Chi-squared statistic for the test of equality of proportions between the respective rater and all raters. The rightmost column has the level of significance: The null hypothesis that a particular rater equals the average rater is rejected at the 1% level (***) for 15 raters; it is rejected at the 5% level (**) for 2 raters; and at the 10% level (*) for 1 rater. That is, only 6 raters do not deviate from the norm.

    Endorsement level
    Number Ignored 1 2 3 4 5 6 7 Chi2 Sig.
    2208 3.62% 0.91% 7.38% 31.39% 59.74% 0.50% 0.00% 0.09% 51.251 ***
    966 6.73% 0.83% 18.74% 28.05% 51.24% 0.93% 0.21% 0.00% 155.690 ***
    2 50.00% 0.00% 0.00% 0.00% 100.00% 0.00% 0.00% 0.00% 1.084
    2671 12.32% 1.05% 7.00% 22.69% 68.44% 0.49% 0.26% 0.07% 21.049 ***
    31 6.45% 3.23% 16.13% 22.58% 58.06% 0.00% 0.00% 0.00% 4.947
    3791 7.62% 0.29% 4.46% 15.77% 79.00% 0.40% 0.03% 0.05% 338.960 ***
    615 7.48% 1.95% 11.22% 23.25% 63.09% 0.33% 0.00% 0.16% 18.684 ***
    60 0.00% 0.00% 1.67% 23.33% 73.33% 0.00% 1.67% 0.00% 12.296 *
    1945 6.53% 0.41% 6.22% 25.81% 66.89% 0.21% 0.31% 0.15% 24.090 ***
    2940 3.84% 0.44% 7.35% 27.11% 64.18% 0.61% 0.20% 0.10% 14.519 **
    1707 5.98% 0.88% 11.19% 30.40% 56.65% 0.53% 0.18% 0.18% 54.096 ***
    1266 9.48% 0.95% 10.58% 22.27% 65.64% 0.32% 0.16% 0.08% 12.646 **
    22 13.64% 0.00% 9.09% 13.64% 77.27% 0.00% 0.00% 0.00% 2.042
    9 77.78% 0.00% 77.78% 22.22% 0.00% 0.00% 0.00% 0.00% 57.291 ***
    155 5.81% 1.29% 13.55% 36.13% 48.39% 0.65% 0.00% 0.00% 19.696 ***
    1739 3.05% 0.92% 10.01% 35.02% 53.48% 0.46% 0.06% 0.06% 110.504 ***
    314 11.78% 1.91% 7.96% 42.99% 45.22% 1.59% 0.32% 0.00% 70.223 ***
    93 5.38% 0.00% 20.43% 21.51% 58.06% 0.00% 0.00% 0.00% 18.502 ***
    6 50.00% 0.00% 16.67% 33.33% 50.00% 0.00% 0.00% 0.00% 0.947
    3968 3.33% 0.76% 7.06% 22.28% 68.88% 0.68% 0.25% 0.10% 33.751 ***
    17 29.41% 0.00% 17.65% 35.29% 47.06% 0.00% 0.00% 0.00% 3.525
    2 0.00% 0.00% 0.00% 0.00% 100.00% 0.00% 0.00% 0.00% 1.084
    2191 21.82% 1.60% 12.78% 23.82% 60.43% 0.96% 0.37% 0.05% 84.415 ***
    111 47.75% 0.90% 13.51% 50.45% 32.43% 2.70% 0.00% 0.00% 59.313 ***
    26829 7.67% 0.81% 8.44% 25.07% 64.85% 0.56% 0.18% 0.09%
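
    The per-rater test can be approximated with a chi-squared goodness-of-fit test of one rater's endorsement counts against the pooled proportions; the equality-of-proportions test in the table may differ in detail, and the counts below are invented for illustration.

```python
import numpy as np
from scipy.stats import chisquare

# counts per endorsement level (1..7) for one rater and for all raters pooled;
# invented numbers, not the actual data
rater    = np.array([12, 150, 480, 1320, 10, 4, 2])
everyone = np.array([220, 2260, 6730, 17400, 150, 48, 24])

# expected counts if the rater followed the pooled proportions
expected = everyone / everyone.sum() * rater.sum()
chi2, p = chisquare(rater, f_exp=expected)
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")  # small p: rater deviates from the norm
```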

    Time stamps are still unavailable. John Cook has written both that they do exist and that they don't exist. Date stamps are less informative, but useful nonetheless. The heat map of number of ratings per day and per rater is shown below. One rater read and classified 765 abstracts in the course of 72 hours.

    Combining date stamps and abstract ratings, the figure below shows the chi-squared test statistic for whether the ratings on a particular day deviate from the average. Abstracts were rated on 76 days. For 16 days, the null hypothesis that ratings are average is rejected at the 1% level of significance; we would expect this on 1 day only. For 9 days, the null hypothesis is rejected at the 5% level. And for another 4 days, the null is rejected at the 10% level. Peculiar rating days are more common towards the end of the rating period. Recall that raters, survey designers, and analysts are the same people.


  10. To Peter Sutherland, Chairman of the London School of Economics and Political Science


    Dear Mr Sutherland,

    One of the employees of the London School of Economics, Mr Robert ET Ward BSc, has been waging a smear campaign against me. The campaign consists of insinuations, half-truths and outright lies, and takes the form of tweets, blog posts, and letters to journal editors, civil servants, and elected politicians. This campaign has been going on since October 2013.

    I have repeatedly asked Mr Ward to end his campaign and suggested that he instead focus on his job, which is to promote the research of the Grantham Research Institute on Climate Change and the Environment.

    When that failed to produce the desired result, I contacted Professor Dr Nicholas Stern, Lord Stern of Brentford. Lord Stern denied any responsibility for Mr Ward's behaviour, even though the LSE website lists Lord Stern as the chair of the institute that employs Mr Ward.

    I then contacted the Director of the LSE, Professor Dr Craig Calhoun. I never reached Professor Calhoun, but was stonewalled by his Chief of Staff, Mr Hugh Martin.

    I last tried to contact Professor Calhoun on 2 May 2014.

    Even though I failed to contact the LSE, Mr Ward fell silent, so I assumed that I had reached one of my goals and let the matter rest.

    However, on 8 July 2014, Mr Ward resumed his campaign with a letter to the Rt Hon Lamar Smith, chair of the US House of Representatives Committee on Science, Space and Technology.

    I therefore hereby request that you
    1. inform Professor Calhoun that complaints about the behaviour of LSE staff do require his attention;
    2. stop Mr Ward's campaign of smear and character assassination; and
    3. publicly distance the London School of Economics from Mr Ward's campaign and apologize for the damage and distress caused.

    Looking forward to your timely reply, I remain,

    Yours sincerely



    Richard Tol
