1. A journalist asked me about the latest report by the Task Force on Climate-Related Financial Disclosures. Her questions are in blue, my answers in black.

    In my view, the TFCRFD is primarily a vehicle for Mr Bloomberg to stay in the limelight, and for Mr Carney to save his marriage and promote his future political career.

    My main questions are:

    - How important is it for companies to have data about climate change risks?

    The TFCRFD distinguishes between two risks.

    The risk of climate change is small for the companies involved. The more severe impacts of climate change are decades or more into the future; and these impacts are concentrated among the poor who are barely served by these companies.

    The risks of climate policy are small too, because emission reduction targets will be relaxed if climate policy turns out to be expensive.

    - Would it change the way they do business, in your view?

    No. But there will be photo opportunities.

    - Would suddenly releasing all this data be unproblematic, or might it cause the kind of destabilising effect Mark Carney has explicitly said he hopes to avoid?

    No. Emissions data for companies are already available if you know where to look. Climate change will affect future rather than current assets, and projections of future exposure are contingent and uncertain.

    - If countries do not incorporate these guidelines into law, would reporting data on climate impacts still give these companies an advantage?

    The companies involved hope to curry favour with green consumers, or at least avoid boycotts. 

    This is what she wrote.


  2. Dear Ms Caulfield,

    Yesterday you voted against a motion that would guarantee the right of EU citizens already in the UK to continue to live and work here.

    I am one of a family of four of such EU citizens. My wife builds sewage treatment plants, a vital if often underappreciated service, for Southern Water. I teach economics at the University of Sussex, probably one of the largest exporters in the area. Our alumni quickly find well-paid and secure jobs. Our children attend the local primary school. We pay our taxes. My wife volunteers in the local PTA. I regularly volunteer my expertise in energy and environment to the Houses of Parliament. We spend most of our income in the local economy. Frequent visits by friends and family from abroad support the local tourist industry. We love Sussex and its people. To the dismay of their grandmother, our children speak English in a Southeastern accent.

    I can interpret yesterday’s vote in one of two ways. Either you think it is acceptable to play politics with other people’s lives, or you would like to see us leave this country. Could you kindly explain why you voted as you did?

    Best regards


    Richard

    -----------------

    Dear Richard

    I did not vote against EU citizens staying in the EU. This was an opposition debate that has no bearing on Government business. It would be wrong to quantify, as in the opposition motion yesterday, what movement of people will be allowed under our negotiated settlement which has only just started. That said the PM made it very clear yesterday that all existing EU citizens will be able to stay in the UK and that work is being done to ensure as many EU workers are able to move freely here once we leave.

    The vote yesterday was the SNP playing politics and deliberately undermining our ability to negotiate the best deal for Britain and ensuring we are able to have free movement of people from the EU. My family are also from the EU and so I have a particular interest in ensuring free movement continues.

    I hope that reassures you.

    Maria

    --------------

    Dear Maria,

    No, this does not reassure me at all.

    You argue that the motion reflects your position and the position of your party leader. Yet you voted against it. Please forgive me for finding that rather odd.

    Please also forgive me for taking offense that you intend to treat the rights of my children and my wife, and many others in similar positions, as cards to be negotiated with.

    Best regards

    Richard

    -------------


    It was an opposition day debate that has no legislative bearing but would undermine the position of the Government to negotiate if it had been seen as a fixed position of Parliament.

    If this had been a Gov motion which would have actually changed the laws in this country then I would have voted against it. It was just a debate. Opposition debates never hold more weight than just being a debate and never hold any legislative power.


    Maria 

    -------------

    Dear Maria,

    Thank you for clearing that up.

    To you and your friends in Westminster, this is just politics. To me, this is about the rights and future of my family.

    Best

    Richard

    -------------

    Dear Richard

    I am not a fan of opposition debates as they are just political debates that have no substance in terms of outcome but I appreciate that they send a message to constituents that does not reflect what will be the outcome of our negotiations but does in fact cause unnecessary anxiety and distress.

    I am hoping to go on to the Brexit select committee where we hold the Government to account on this process and I will very much be ensuring that EU residents who are here have the protection and reassurance they need when the repeal bill comes before parliament.

    Best wishes


    Maria

    ----------

    Dear Maria,

    May I point out that opposition is a crucial part of any democracy?

    Best

    Richard

  3. The news that the government is considering turning the Nissan plant into a bonded warehouse -- essentially ceding part of Sunderland to France, much like part of Calais is governed from the United Kingdom -- so that Single Market rules continue to apply, reminded me of a more radical but ultimately easier proposal.

    The Brexit vote was primarily about immigration. The Single Market has four Freedoms of Movement, for goods, capital, services and workers. Brexiteers want to end the FoM for workers. The EU says that the four Freedoms are inseparable.

    The EU is wrong. Liechtenstein, and Georgia, Moldova & Ukraine,* have three of the four Freedoms: Liechtenstein because it does not want its houses to be bought up by rich foreigners; Georgia, Moldova & Ukraine because the EU does not want another influx of workers willing to accept low wages.

    More pertinently, the Crown Dependencies (Guernsey, Isle of Man, Jersey) also have three of the four Freedoms.

    If those parts of the UK that want to leave the EU -- Mercia, Northumbria, East Anglia, Kent, Cornwall, Wessex, Wales -- are turned into Crown Dependencies, they will be free to control immigration. Article 50 does not need to be invoked, and the rest of the UK -- London, Scotland, Northern Ireland, Sussex -- remains in the EU.

    This proposal implies devolution. It is therefore unlikely that any Westminster politician will support this proposal.

    Update (16 Oct 2016):
    There are no border checks between the UK and the current Crown Dependencies, nor should there be for the proposed Crown Dependencies. With three Freedoms and free travel for tourism and business, border checks are not required. However, residency checks will need to be put in place for buying and renting properties, and for labour contracts.

    The Scottish National Party under Nicola Sturgeon have suggested that the powers to negotiate international treaties be devolved to Scotland. Constitutionally, that proposal is at least as far-reaching as the one above.

    *Update (18 Oct 2016)
    Added Georgia and Moldova. Note that the Deep and Comprehensive Free Trade Area applies to selected sectors only. Note that the Association Agreement with Ukraine has yet to enter into force.

    Update (20 Oct 2016)
    London is now giving serious thought to London-only visas.

    Update (23 Nov 2016)
    Sign the petition!

    Update (29 Nov 2016)
    MoneyWeek on exemptions to the Four Freedoms.

    Update (5 June 2018)
    I'm not alone, although some want the whole of the UK to be like Jersey.

  4. The IPCC has published an erratum for our chapter in the Fifth Assessment Report.

    Four data points were changed. Two relate to a paper by Roberto Roson and Dominique van der Mensbrugghe. Roson's key contribution was to introduce the impact of climate change on labour productivity into the analysis of the total cost of climate change. The concluding section of that paper presents two estimates per scenario: the total impact, and the share of labour productivity in that total. Michael Mastrandrea, the co-head of the Technical Support Unit double-checking the numbers in our IPCC chapter, thought that Roson instead presents the impact of labour productivity and its share in the total. Mastrandrea checked his reading with Roson, who confirmed, and the estimates were changed. This is one of the discrepancies between the IPCC chapter and my paper in the Journal of Economic Perspectives (and the forthcoming paper in REEP).

    Later, Robert Kopp, checking the numbers again, asked Roson for the underlying data and found that my original reading was correct. Mastrandrea and Roson were wrong. JEP was right, IPCC was wrong. The erratum sets the record straight: the correct estimates by Roson are lower than the incorrect ones.

    The other two changes relate to a paper by Robert Mendelsohn, Michael Schlesinger and Larry Williams. Mendelsohn presents his results for population-weighted temperature changes. Everybody else in this literature uses area-weighted temperature changes, and Mendelsohn duly reports those numbers as well. Double-checking our results, Mastrandrea insisted that the population-weighted temperatures be used -- these show positive impacts at a lower temperature because the world population is concentrated in the tropics, which are projected to warm more slowly than the globe. Violating IPCC procedure, Mastrandrea ignored our protests. This is another of the discrepancies between JEP and IPCC. The erratum sets the record straight: the numbers shown for global warming for different studies are comparable to one another, and Mendelsohn's estimates show benefits at a greater warming.

    In sum, the Technical Support Unit of IPCC WG2 introduced four errors into the Fifth Assessment Report. All four errors exaggerate the impact of climate change.

    Update (12 Oct 2016): Chris Field, former chair of IPCC WG2, submitted a call for an erratum to the erratum, reverting the changes made to the Mendelsohn estimates. Field's argument is that Mendelsohn's area-weighted temperatures are land-only. There is no dispute there. Field overlooks, however, that Mendelsohn's population-weighted temperatures are land-only too (as rather few people live in the ocean). Mendelsohn's area-weighted temperatures are therefore less incomparable to other studies than his population-weighted temperatures.

    Update (16 Dec 2016): Two months later, we're still going back and forth. Field continues to dispute our reading of Mendelsohn. We offered to show both estimates in an amendment to the erratum, but Field just wants the erratum gone.

    Update (29 Aug 2017): Unable to find agreement between the co-Coordinating Lead Authors and the former Working Group Chair, the current Chairs appointed a committee of three to adjudicate. They found against Field's interpretation: Both estimates will be shown in an amendment to the erratum. However, in an apparent attempt to save Field's face, there will be a vaguely worded footnote that is likely to cause confusion rather than clarity.

  5. Nick Stern produced another review, this time about the Research Excellence Framework (REF).

    In REF2014, and in the preceding Research Assessment Exercises, research output was evaluated by an individual's 4 best papers in the last 6 years. This emphasizes quality over quantity, which is a good thing, but punishes people who do decent applied work. Not everybody can be a top researcher. Not all students need to be educated by top researchers. The exclusive focus on quality has led to many applied economists moving to business or geography, where standards are lower. Although 66 universities offer an economics degree, only 28 submitted their economists to the economics panel in REF2014.

    The Stern Review recognizes this problem. It notes that only 1/3 of faculty were submitted to REF2014. Strangely, the right-wing tabloids have failed to pick this up. There are 100,000 lecturers and professors employed by UK universities who are paid for 2 days a week to do research -- but do not actually produce any research of note.

    The Stern Review also proposes a solution: All shall be included. The threat of exclusion from REF2021 spurs people on. Stern thus suggests taking away an incentive to do well.

    There is a second proposal in the Stern Review. Instead of assessing individuals, collectives will be assessed. The number of papers for individuals will range between 0 and 6, with the average number at 2. As top researchers compensate for the poor performance of their weaker colleagues, the latter have even less of a reason to do anything.

    The third proposal is to go from the best 4 papers to the best 2 papers in 6 years. The bias towards quality over quantity grows.

    The fourth proposal is to end portability. At the moment, publications are assigned to the university that employs the author at the census date. This leads to substantial mobility of top researchers towards the better endowed institutions in the year leading up to the REF -- and a wage premium for the best. In the future, publications will be assigned to the affiliation listed on the paper. This will change rather than end the game. Mobility will be at the start of the assessment period rather than at the end, and it will be based on the promise to deliver rather than the delivery. Ex ante mobility leads to more mismatches than ex post mobility. The labour market becomes less efficient.

    The end of portability brings another game. Talented researchers will delay submitting the final revisions of their papers until they have negotiated a move. Footloose ones will hop from university to university, selling their conditional acceptances to the highest bidder.

    Combined with Stern's second proposal, it is conceivable that a department submits only papers by people no longer on the payroll.


    Stern also missed an opportunity. The REF is a large effort and therefore done only every six years or so. The REF determines reputation and research funds for half a decade, so universities understandably invest considerable effort into their submissions. The Stern Review does not change this. In fact, the number of papers to be read by the panels increases as 4 papers by 1/3 of all faculty is less than 2 papers by all faculty.
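The arithmetic behind that last claim is easy to check; the faculty count below is a hypothetical placeholder, and the ratio does not depend on it:

```python
# Back-of-the-envelope check of the REF panel workload under the old
# and proposed rules. N is a hypothetical faculty count; the ratio of
# the two workloads is the same for any N.
N = 100_000

old_rules = 4 * (N / 3)  # best 4 papers each, roughly 1/3 of faculty submitted
new_rules = 2 * N        # 2 papers on average, all faculty submitted

print(old_rules, new_rules)   # ~133,333 vs 200,000
print(new_rules / old_rules)  # 1.5: the panels' reading load grows by half
```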

    The Stern Review rejects an assessment based on metrics. Peer review may be superior if done well, but as the panels are overwhelmed -- one panel member bragged about "reading" 10 papers before breakfast -- papers are likely to be judged by their cover and people and institutions by their reputation.

    A scientometric assessment is more objective, and can be done more frequently. A less frequent peer-review could then focus on the more qualitative aspects.

    UPDATE 11 August: A colleague points to the dynamics of deciding whether the wonderful paper by Professor A should be included in lieu of the magnificent paper by Dr B.

    UPDATE 12 August: Over at EJMR, someone points out that it no longer pays to hire a foreign big-shot on a part-time contract (as intended by Stern). Instead, you want to hire the big-shot on a visiting contract.

    UPDATE 12 August (2): EJMR is at its charming best. Two further things occurred to me. Under the proposed rules, hiring will be based on uncertain future returns. The uncertainty about the average is, of course, smaller in larger departments. Stern thus favours the bigger departments.

    On the other hand, a smaller, poorer department can now better afford to take a punt on someone. Under the current rules, you would hire someone promising, see her flourish, and disappear just before the REF. Under the proposed rules, she would be snapped up later, and her papers count towards your submission.

  6. Cook 2016 includes 14 previous studies, and omits 2. For 10 of the previous studies, Cook 2016 shows the consensus rate including don't know / no position. For 3 (Cook 2013, Verheggen, Rosenberg), Cook 2016 excludes don't know / no position. For 1 (Oreskes) Cook 2016 shows the consensus rate excluding no position, and the sample size including no position. Following Cook's majority position, I changed all results and sample sizes to include don't know / no position.
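To see how much this choice of denominator matters, here is a toy calculation with made-up counts, showing the same tally reported both ways:

```python
# Illustrative only: how including or excluding "don't know / no
# position" answers changes a reported consensus rate. The counts
# below are made up, not taken from any of the studies discussed.
agree, disagree, no_position = 90, 5, 105

rate_excluding = agree / (agree + disagree)                # 0.947...
rate_including = agree / (agree + disagree + no_position)  # 0.45

print(f"excluding no position: {rate_excluding:.0%}")  # 95%
print(f"including no position: {rate_including:.0%}")  # 45%
```

The same raw answers thus yield very different headline rates, which is why mixing the two conventions across studies distorts any comparison.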

    For the Carlton study, Cook 2016 copies a small error in that paper, and inflates the sample size from 38 to 306.

    For the Stenhouse study, Cook 2016 changes the definition. In the other studies, agreement is with the hypothesis that humans are responsible for more than half of the observed warming. Although Stenhouse reports the rate of agreement with this hypothesis, Cook 2016 replaces it with the weaker hypothesis that humans contributed to warming.

    The graph below shows the impact of this lack of consistency. In black, it shows the rate of consensus as estimated in the literature and as reproduced by Cook. In gray, it shows estimates omitted by Cook. In red, it shows estimates that were replaced by Cook. In green, it shows the replacements.
    The graph below omits the excluded studies, so that replaced and replacements can be more readily compared.



  7. Cook's latest paper claims that there is a consensus on the consensus. I will let the data speak for themselves.
    Fig. 1: Fraction agreement by sample size. Large, dark dots refer to the complete samples of the underlying studies, small light dots to subsamples.
    Fig. 2: Fraction agreement, excluding don't know / no opinion, by sample size. Large, dark dots refer to the complete samples of the underlying studies, small light dots to subsamples.
    Fig. 3: Fraction agreement by year. The size of the dots denotes sample size.
    Fig. 4: Fraction agreement, excluding don't know / no opinion, by year. The size of the dots denotes sample size.

    The graphs are similar to those in my comment. The key difference is that I have now included previously overlooked surveys by Gallup, Rosenberg and Harris (courtesy of Cook) and the latest Bray/Storch.

    My comment on Cook 2013 was published at last, together with a reply. I responded earlier to Cook's responses to my substantive critiques: In a nutshell, Cook evades three out of five critiques, including that the data collection was not blind. For the remaining two, Cook 2016 admits that Cook 2013 misled the reader. This would normally imply a retraction.

    Cook 2016 claims that I "misrepresent" results. Misrepresentation is a big word. Earlier consensus studies claim to have found a very high degree of agreement with the notion that the global warming observed in the instrumental record is at least partly caused by humans.* However, these high rates of consensus are only found if the sample is restricted in a way that is superficially plausible but ultimately arbitrary.

    I show that the full sample shows different results than the subsamples. I also note that, for every subsample above the mean, there is a subsample below the mean.

    If this is misrepresentation, then I hope that everyone will misrepresent their data in the future.

    There is a more subtle thing going on. Cook 2016 underline that Cook 2013 agrees with other consensus studies. However, the other consensus studies find high consensus rates in exclusive subsamples. Cook 2013 finds the same in the whole sample, which is numerically dominated by papers that would have been excluded in the earlier consensus studies. Indeed, if I restrict the Cook 2013 sample to geoscience journals, the consensus rate falls.

    In other words, Cook 2013 not only disagrees with other studies on the level of consensus, it also disagrees on the pattern of consensus.




    * The truly remarkable finding is that there is no universal agreement with a hypothesis that follows trivially from the 19th century science of Fourier, Tyndall and Arrhenius, and has been tested many times since.

  9. Frank Ackerman found another outlet for his tired and wrong claims. Here's my response.

    Ackerman and Munitz (2016) offer a critique of estimates of the economic impact of climate change and the social cost of carbon in general, and the FUND model in particular. I am grateful for the opportunity to reply. In this response, I note that (i) their concerns are not new; (ii) they highlight strengths of FUND rather than its weaknesses; and (iii) they revisit their old mistakes. I conclude with a few improvements to FUND prompted by Messrs Ackerman and Munitz.

    Incremental contribution
    There is little if anything new in Ackerman and Munitz (2016). They note that FUND’s estimates of the social cost of carbon are highly sensitive to assumptions about (i) carbon dioxide fertilization and (ii) vulnerability to climate change. Anthoff, Tol, and Yohe (2009) and Waldhoff et al. (2014) previously report a strong sensitivity to carbon dioxide fertilization. Tol (1996) and Anthoff and Tol (2012b) previously highlight the importance of development and vulnerability. It is unfortunate that these papers were not referred to by Messrs Ackerman and Munitz.

    Highlighting FUND’s strengths
    That said, I am grateful to Messrs Ackerman and Munitz for highlighting two of FUND’s main strengths. Other integrated assessment models attribute all impacts of climate change to global warming. FUND, on the other hand, separates climate change, sea level rise, ocean acidification, and carbon dioxide fertilization. This is key because the dynamics of these processes are quite distinct.

    Although it is generally acknowledged that poorer countries are more vulnerable to climate change, other integrated assessment models assume that growing richer leaves vulnerability unaffected. Instead, FUND assumes that societies will become less vulnerable in the future if they grow richer.

    Repeating past mistakes
    A third concern is that Ackerman and Munitz (2016) revisit an earlier paper (Ackerman and Munitz 2012a) but omit key details. Having downloaded the source code, Messrs Ackerman and Munitz altered the code, and claimed there was an error and that this error was due to Anthoff and Tol. Ackerman and Munitz (2012b) withdraws some of the more egregious claims by Ackerman and Munitz (2012a), particularly that the alleged error was made by Anthoff and Tol. Stern (2012) notes that Ackerman and Munitz had suppressed evidence that contradicts their claim of an error. Anthoff and Tol (2012a) show that the Ackerman and Munitz test for errors is inconclusive. In other words, Ackerman and Munitz (i) claimed an error had been made without evidence, (ii) ignored evidence that there was no error, and (iii) blamed the error-that-wasn’t on the wrong people.

    Improvements to FUND
    Upon reflection, we changed access to the model code. FUND can still be freely downloaded and used by anybody, but changes in code or data are now attributed to specific users. This prevents a repetition of Ackerman and Munitz (2012a): Any alteration is tied to a particular programmer and therefore no one can blame someone else for an error they themselves made.

    We also changed the model specification. Reading the agricultural impact function as a univariate probability distribution, a reader may conclude that, in FUND3.6 and prior, there is a risk of dividing by zero. There is not. The probability distribution is bivariate, not univariate, so that the risk is minimal – and indeed unobserved in the many Monte Carlo experiments run with the model. Furthermore, the code has safeguards at three levels against numerical errors. (These issues were pointed out to Mr Ackerman before Ackerman and Munitz (2012a) was submitted for publication.) Nevertheless, in order to avoid further misinterpretation, we reformulated these equations.
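The bivariate point can be illustrated with a toy sketch. This is not FUND's actual code; the distributions, correlation, and guard value below are assumptions chosen purely for illustration:

```python
# Sketch, not FUND's code: a ratio a/b looks risky if b is read as an
# independent univariate draw that could hit zero, but when (a, b) are
# drawn jointly with correlation and b is centred well away from zero,
# a zero denominator is (in this toy setup) essentially unobservable.
# An explicit floor guards the remaining tail, mirroring the idea of
# safeguards at the code level.
import random

def draw_pair(rho=0.9):
    # correlated bivariate normal via the standard Cholesky construction
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    a = 1.0 + 0.1 * z1
    b = 2.0 + 0.1 * (rho * z1 + (1 - rho**2) ** 0.5 * z2)
    return a, b

EPS = 1e-6  # explicit safeguard against division by zero

ratios = []
for _ in range(10_000):
    a, b = draw_pair()
    ratios.append(a / max(b, EPS))  # b ~ N(2, 0.1): zero essentially never occurs
```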

    At the end of the day, I am grateful to Messrs Ackerman and Munitz for prompting these improvements, although I would wish for more nuanced and rigorous analysis in the future. At code school, we learned that a user interface has to be robust to anything. Our software engineering lecturer used the metaphor of a chimp typing random keys. That metaphor does not apply here. When putting FUND in the public domain, I overlooked that I created a new interface, one prone to interpretation and reinterpretation. Messrs Ackerman and Munitz usefully remind us that interfaces have to be robust to the unexpected.

    References
    Ackerman, Frank, and Charles Munitz. 2012a. "Climate damages in the FUND model: A disaggregated analysis." Ecological Economics 77 (0):219-224.
    Ackerman, Frank, and Charles Munitz. 2012b. "Reply to Anthoff and Tol." Ecological Economics 81:43. doi: 10.1016/j.ecolecon.2012.06.023.
    Ackerman, Frank, and Charles Munitz. 2016. "A Critique of Climate Damage Modeling: Carbon fertilization, adaptation, and the limits of FUND." Energy Research and Social Science.
    Anthoff, David, and Richard S. J. Tol. 2012a. "Climate damages in the FUND model: A comment." Ecological Economics 81:42. doi: 10.1016/j.ecolecon.2012.06.012.
    Anthoff, David, and Richard S. J. Tol. 2012b. "Schelling's Conjecture on Climate and Development: A Test." In Climate Change and Common Sense -- Essays in Honour of Tom Schelling, edited by Robert W. Hahn and Alistair M. Ulph, 260-274. Oxford: Oxford University Press.
    Anthoff, David, Richard S. J. Tol, and Gary W. Yohe. 2009. "Risk Aversion, Time Preference, and the Social Cost of Carbon." Environmental Research Letters 4 (2-2):1-7.
    Stern, David I. 2012. "Letter from the Associate Editor concerning the comments from Anthoff and Tol and Ackerman and Munitz." Ecological Economics 81:41. doi: 10.1016/j.ecolecon.2012.06.007.
    Tol, Richard S. J. 1996. "The Damage Costs of Climate Change Towards a Dynamic Representation." Ecological Economics 19:67-90.
    Waldhoff, Stephanie, David Anthoff, Steven K. Rose, and Richard S. J. Tol. 2014. "The marginal damage costs of different greenhouse gases: An application of FUND." Economics 8. doi: 10.5018/economics-ejournal.ja.2014-31.

    While ERL is taking its time typesetting my paper, Brandon Shollenberger has uncovered Cook's draft (?) response. It is an interesting read. Just like the journal did not want me to talk about Cook's paper, Cook's responses to the questions raised are hidden in an appendix.

    I raised five points.

    1. "Cook et al. (2013) do not show tests for systematic differences between raters. Abstract rater IDs may or may not be confidential (Queensland, 2012, 2014), but the authors could have reported test results without revealing identities."

    I could not find a response. Raters systematically deviate from each other, as shown here.

    2. "The paper argues that the raters were independent. Yet, the raters were drawn from the same group. Cook et al. (2013) are unfortunately silent on the procedures that were put in place to prevent communication between raters."

    Cook replies that "[r]aters had access to a private discussion forum" and notes that they "are able to identify potential cross-discussion of 0.26% of the sample" but admit that "some discussion may have been missed in this manual search".

    Recall that the original paper had that "[e]ach abstract was categorized by two independent [...] raters."

    3. "The paper states that “information such as author names and affiliations, journal and publishing date were hidden” from the abstract raters. Yet, such information can easily be looked up. Unfortunately, Cook et al. (2013) omit the steps taken to prevent raters from gathering additional information, and for disqualifying ratings based on such information."

    Cook replies that "raters conducted further investigation by perusing the full paper on only a few occasions". A few is unquantified.

    Recall that the original paper had that "[a]ll other information such as author names and affiliations, journal and publishing date were hidden."

    4. "Cook et al. (2013) state that 12,465 abstracts were downloaded from the Web of Science, yet their supporting data show that there were 12,876 abstracts. A later query returned 13,458, only 27 of which were added after Cook ran his query (Tol, 2014a). The paper is silent on these discrepancies."

    To the first point, Cook replies that "[d]uring the process of importing entries into the database, some papers were accidentally added twice and subsequently duplicate entries were deleted."

    The original paper has that "[i]n March 2012, [they] searched the ISI Web of Science [...] [t]he search was updated in May 2012". Between March and May, most papers from the first download had been rated, and there are significant differences between the first and second period of rating. See here and here. It is not clear from Cook's response whether the duplicate entries are from the same download or from different downloads.

    To the second point, Cook replies that "these databases and search algorithms are dynamic". That is, of course, true. At the same time, the Web of Science includes a field that has the date of data entry. This enables the reconstruction of historical queries.
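The reconstruction idea can be illustrated with a toy filter. The records and field names below are invented for illustration; they are not the real Web of Science export schema:

```python
# Toy illustration: reconstructing what an earlier query would have
# returned by filtering a later export on a date-of-entry field.
# Records, field names, and the cutoff date are hypothetical.
records = [
    {"title": "Paper A", "entered": "2012-03-10"},
    {"title": "Paper B", "entered": "2012-05-02"},
    {"title": "Paper C", "entered": "2014-01-20"},  # entered after the cutoff
]

cutoff = "2012-05-31"  # ISO dates compare correctly as strings
historical = [r for r in records if r["entered"] <= cutoff]

print([r["title"] for r in historical])  # ['Paper A', 'Paper B']
```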

    5. "The date stamps, which may or may not have been collected (Cook, 2013; Cook et al., 2014b), reveal that the abstracts were originally rated in two disjoint periods (mid-February to mid-April; second half of May). There was a third period of data collection, in which neutral abstracts were reclassified. Unfortunately, Cook et al. (2013) do not make clear what steps were taken to ensure that those who rated abstracts in the second and third periods did not have access to the results of the first and second periods."

    Cook replies that "the only thing that distinguished the first and second rating periods was that one was before and the other after the hacking event." He does not explicitly say that no data were analysed, and he does not provide evidence that the ratings are the same before and after. See here and here for the tests that show the contrary.
