Saturday, February 10, 2018

A Multicointegration Model of Global Climate Change

We have a new working paper out on time series econometric modeling of global climate change. We use a multicointegration model to estimate the relationship between radiative forcing and global surface temperature since 1850. We estimate that the equilibrium climate sensitivity to doubling CO2 is 2.8ºC – which is close to the consensus in the climate science community – with a “likely” range from 2.1ºC to 3.5ºC.* This is remarkably close to the recently published estimate of Cox et al. (2018).

Our paper builds on my previous research on this topic. Together with Robert Kaufmann, I pioneered the application of econometric methods to climate science – Richard Tol was another early researcher in this field. Though we managed to publish a paper in Nature early on (Kaufmann and Stern, 1997), I became discouraged by the resistance we faced from the climate science community. But now our work has been cited in the IPCC 5th Assessment Report, and recently there has also been a lot of interest in the topic among econometricians. This has encouraged me to get involved in this research again.

We wrote the first draft of this paper for a conference in Aarhus, Denmark on the econometrics of climate change in late 2016 and hope it will be included in a special issue of the Journal of Econometrics based on papers from the conference. I posted some of our literature review on this blog back in 2016.

Multicointegration models, first introduced by Granger and Lee (1989), are designed to model long-run equilibrium relationships between non-stationary variables where there is a second equilibrium relationship between accumulated deviations from the first relationship and one or more of the original variables. Such a relationship is typically found for flow and stock variables. For example, Granger and Lee (1989) examine production, sales, and inventory in manufacturing, Engsted and Haldrup (1999) housing starts and unfinished stock, Siliverstovs (2006) consumption and wealth, and Berenguer-Rico and Carrion-i-Silvestre (2011) government deficits and debt. Multicointegration models allow for slower adjustment to long-run equilibrium than do typical econometric time series models because of the buffering effect of the stock variable.

In our model there is a long-run equilibrium between radiative forcing, f, and surface temperature, s:
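In symbols, the first long-run relation presumably takes a form like this (a reconstruction in my own notation, since the original equation image does not reproduce here):

```latex
f_t = \lambda s_t + q_t
```

where lambda is the climate feedback parameter and q_t is the deviation from equilibrium.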
The equilibrium climate sensitivity is given by 5.35*ln(2)/lambda. But because of the buffering effect of the ocean, surface temperature takes a long time to reach equilibrium. The deviations from equilibrium, q, represent a flow of heat from the surface to (mostly) the ocean. The accumulated flows are the stock of heat in the Earth system, Q. Surface temperature also tends towards equilibrium with this stock of heat:
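In symbols, with phi an adjustment coefficient (again a reconstruction in my notation rather than the paper's exact equation):

```latex
s_t = \varphi Q_t + u_t, \qquad Q_t = \sum_{i=1}^{t} q_i
```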
where u is a (serially correlated but stationary) random error. Granger and Lee simply embedded both these long-run relations in a vector autoregressive (VAR) time series model for s and f. A somewhat more recent and much more powerful approach (e.g. Engsted and Haldrup, 1999) notes that:
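Summing the deviations q_i = f_i − λs_i over time gives, presumably (my reconstruction):

```latex
Q_t = \sum_{i=1}^{t} (f_i - \lambda s_i) = F_t - \lambda S_t
```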
where F is accumulated f and S is accumulated s. In other words, S(2) = s(1)+s(2), S(3) = s(1)+s(2)+s(3), etc. This means that we can estimate a model that takes into account the accumulation of heat in the ocean without using any actual data on ocean heat content (OHC)! One reason this is exciting is that OHC data is only available since 1940, and data for the early decades is very uncertain. Only since 2000 has there been a good measurement network in place. This means that we can use temperature and forcing data back to 1850 to estimate the heat content. Another reason this is exciting is that F and S are so-called second order integrated (I(2)) variables, and estimation with I(2) variables, though complicated, is super-super consistent – it is easier to get an accurate estimate of a parameter despite noise and measurement error issues in a relatively small sample. The I(2) approach combines the 2nd and 3rd equations above into a single relationship, which it embeds in a VAR model that we estimate using Johansen's maximum likelihood method. The CATS package, which runs on top of RATS, can estimate such models, as can the Oxmetrics econometrics suite. The data we used in the paper is available here.
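The accumulation step is easy to sketch with made-up numbers and an illustrative value of lambda (the paper estimates lambda from the data; nothing here uses the paper's actual series):

```python
import numpy as np

# Toy annual series for forcing f (W/m^2) and temperature anomaly s (deg C);
# the paper uses observed data back to 1850, these are made-up numbers.
f = np.array([0.1, 0.2, 0.4, 0.5])
s = np.array([0.05, 0.1, 0.2, 0.3])

# Accumulate the I(1) series into I(2) stocks: S(2) = s(1) + s(2), etc.
F = np.cumsum(f)
S = np.cumsum(s)

# For an assumed feedback parameter lambda, the implied heat stock is
# Q = F - lambda * S -- no ocean heat content data required.
lam = 1.3  # illustrative value, W/m^2 per degree C
Q = F - lam * S
```

In the paper the analogue of lam is estimated jointly with the rest of the I(2) VAR, not fixed in advance.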

This graph compares our estimate of OHC (our preferred estimate is the partial efficacy estimate) with an estimate from an energy balance model (Marvel et al., 2016) and observations of ocean heat content (Cheng et al., 2017):

We think that the results are quite good, given that we didn't use any data on OHC to estimate it and that the observed OHC is very uncertain in the early decades. In fact, our estimate cointegrates with these observations and the estimated coefficient is close to what is expected from theory. The next graph shows the energy balance:

The red area is radiative forcing relative to the base year. This is now more than 2.5 watts per square meter – doubling CO2 is equivalent to a 3.7 watt per square meter increase. The grey line is surface temperature. The difference between the top of the red area and the grey line is the disequilibrium between surface temperature and radiative forcing according to the model. This is now between 1 and 1.5 watts per square meter and implies that, if radiative forcing were held constant from now on, temperature would increase by around 1ºC to reach equilibrium.** This gap is exactly balanced by the blue area, which is heat uptake. As you can see, heat uptake kept surface temperature fairly constant during the last decade and a half despite increasing forcing. It's also interesting to see what happens during large volcanic eruptions such as Krakatoa in 1883. Heat leaves the ocean, largely, but not entirely, offsetting the fall in radiative forcing due to the eruption. This means that though the impact of large volcanic eruptions on radiative forcing is short-lived – the stratospheric sulfates emitted are removed after 2 to 3 years – they have much longer-lasting effects on the climate, as shown by the long period of depressed heat content after the Krakatoa eruption in the previous graph.
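As a rough check on that roughly 1ºC figure (my arithmetic, using the headline numbers of 2.8ºC per doubling and 3.7 watts per square meter per doubling):

```latex
\lambda \approx \frac{3.7\ \mathrm{W\,m^{-2}}}{2.8\ \mathrm{K}} \approx 1.3\ \mathrm{W\,m^{-2}\,K^{-1}},
\qquad
\Delta s \approx \frac{1\text{--}1.5\ \mathrm{W\,m^{-2}}}{1.3\ \mathrm{W\,m^{-2}\,K^{-1}}} \approx 0.8\text{--}1.1\ \mathrm{K}.
```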

We also compare the multicointegration model to more conventional (I(1)) VAR models. In the following graph, Models I and II are multicointegration models and Models IV to VI are I(1) VAR models. Model IV actually includes observed ocean heat content as one of its variables, but Models V and VI just include surface temperature and forcing. The graph shows the temperature response to permanent doubling of radiative forcing:

The multicointegration models have both a higher climate sensitivity and respond more slowly due to the buffering effect. This mimics, to some degree, the response of a general circulation model. The performance of Model IV is actually worse than that of the bivariate I(1) VARs. This is because it uses a much shorter sample period than Models V and VI. In simulations not reported in the paper, we found that a simple bivariate I(1) VAR estimates the climate sensitivity correctly if the time series is sufficiently long – much longer than the 165 years of annual observations that we have. This means that ignoring the ocean doesn't strictly result in omitted variables bias as I previously claimed. Estimates are biased in a small sample, but not in a sufficiently large sample. That is probably going to be another paper :)

* "Likely" is the IPCC term for a 66% confidence interval. This confidence interval is computed using the delta method and is a little different to the one reported in the paper.
** This is called committed warming. But, if emissions were actually reduced to zero, it's expected that forcing would decline and that the decline in forcing would about balance the increase in temperature towards equilibrium.


Berenguer-Rico, V., Carrion-i-Silvestre, J. L., 2011. Regime shifts in stock-flow I(2)-I(1) systems: The case of US fiscal sustainability. Journal of Applied Econometrics 26, 298–321.

Cheng L., Trenberth, K. E., Fasullo, J., Boyer, T., Abraham, J., Zhu, J., 2017. Improved estimates of ocean heat content from 1960 to 2015. Science Advances 3(3), e1601545.

Cox, P. M., Huntingford, C., Williamson, M. S., 2018. Emergent constraint on equilibrium climate sensitivity from global temperature variability. Nature 553, 319–322.

Engsted, T., Haldrup, N., 1999. Multicointegration in stock-flow models. Oxford Bulletin of Economics and Statistics 61, 237–254.

Granger, C. W. J., Lee, T. H., 1989. Investigation of production, sales and inventory relationships using multicointegration and non-symmetric error correction models. Journal of Applied Econometrics 4, S145–S159.

Kaufmann, R. K., Stern, D. I., 1997. Evidence for human influence on climate from hemispheric temperature relations. Nature 388, 39–44.

Marvel, K., Schmidt, G. A., Miller, R. L., Nazarenko, L., 2016. Implications for climate sensitivity from the response to individual forcings. Nature Climate Change 6(4), 386–389.

Siliverstovs, B., 2006. Multicointegration in US consumption data. Applied Economics 38(7), 819–833.

Saturday, February 3, 2018

Data and Code for "Modeling the Emissions-Income Relationship Using Long-run Growth Rates"

I've posted on my website the data and code used in our paper "Modeling the Emissions-Income Relationship Using Long-run Growth Rates" that was recently published in Environment and Development Economics. The data is in .xls format and the econometrics code is in RATS. If you don't have RATS, I think it should be fairly easy to translate the commands into another package like Stata. If anything is unclear, please ask me. I managed to replicate all the regression results and standard errors in the paper but some of the diagnostic statistics are different. I think only once does that make a difference, and then it's in a positive way. I hope that providing this data and code will encourage people to use our approach to model the emissions-income relationship.

Tuesday, January 16, 2018

Explaining Malthusian Sluggishness

I'm adding some more intuition to our paper on the Industrial Revolution. I have a sketch of the math but still need to work out the details. Here is the argument:

"In this section, we show that both Malthusian Sluggishness and Modern Economic Growth are characterized by a strong equilibrium bias of technical change (Acemoglu, 2009). This means that in both regimes the long-run demand curve for the resource is upward sloping – higher relative prices are associated with higher demand. Under Malthusian Sluggishness, the price of wood is rising relative to coal and technical change is relatively wood-augmenting. At the same time, wood use is rising relative to coal use. Because the market size effect dominates the price effect when the elasticity of substitution is greater than one, technical change becomes increasingly wood-augmenting. As a result, the economy increasingly diverges from the region where an industrial revolution is possible or inevitable. Modern Economic Growth is the mirror image: the price of coal rises relative to the price of wood, but coal use increases relative to wood use and technical change is increasingly coal-augmenting."

Saturday, January 6, 2018

How to Count Citations If You Must

That is the title of a paper in the American Economic Review by Motty Perry and Philip Reny. They present five axioms that they argue a good index of individual citation performance should conform to. They show that the only index that satisfies all five axioms is the Euclidean length of the list of citations to each of a researcher's publications – in other words, the square root of the sum of squares of the citations to each of their papers.* This index puts much more weight on highly cited papers and much less on little cited papers than simply adding up a researcher's total citations would. This is a result of their "depth relevance" axiom. A citation index that is depth relevant always increases when some of the citations of a researcher's less cited papers are instead transferred to some of the researcher's more cited papers. In the extreme, it rewards "one hit wonders" who have a single highly cited paper, over consistent performers who have a more extensive body of work with the same total number of citations.

The Euclidean index is an example of what economists call constant elasticity of substitution, or CES, functions. Instead of squaring each citation number, we could raise it to a different power, such as 1.5, 0.5, or anything else. Perry and Reny show that the rank correlation between the National Research Council peer-reviewed ranks of the top 50 U.S. economics departments and the CES citation indices of the faculty employed in those departments is at a maximum for a power of 1.85:

This is close to 2 and suggests that the market for economists values citations in a similar way to the Euclidean index.
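To make the index concrete, here is a small sketch of the CES family of citation indices (the function follows the definition above; the citation counts are my own toy numbers):

```python
def ces_index(citations, p):
    """CES citation index: (sum of c_i^p)^(1/p).
    p = 2 gives Perry and Reny's Euclidean index (the square root of the
    sum of squared citations); p = 1 gives total citations."""
    return sum(c ** p for c in citations) ** (1.0 / p)

# A "one hit wonder" versus a consistent performer, both with 100 citations:
one_hit = [100]
steady = [20, 20, 20, 20, 20]

print(ces_index(one_hit, 2))           # 100.0
print(round(ces_index(steady, 2), 1))  # 44.7: the depth-relevant Euclidean
                                       # index strongly favors the one hit wonder
print(ces_index(one_hit, 1) == ces_index(steady, 1))  # True: total citations
                                                      # treats them the same
```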

RePEc acted unusually quickly to add this index to their rankings. Richard Tol and I have a new working paper that discusses this new citation metric. We introduce an alternative axiom: "breadth relevance", which rewards consistent achievers. This axiom states that a citation index always increases when some citations from highly cited papers are shifted to less cited papers. We also reanalyze the dataset of economists at the top 50 U.S. departments that Perry and Reny looked at and a much larger dataset that we scraped from CitEc for economists at the 400 international universities ranked by QS. Unlike Perry and Reny, we take into account the fact that citations accumulate over a researcher's career and so junior researchers with few citations aren't necessarily weaker researchers than senior researchers with more citations. Instead, we need to compare citation performance within each cohort of researchers measured by the years since they got their PhD or published their first paper.

We show that a breadth relevant index that also satisfies Perry and Reny's other axioms is a CES function with an exponent of less than one. Our empirical analysis finds that the distribution of economists across departments is in fact explained best by the simple sum of their citations – equivalent to a CES function with an exponent of one – which favors neither depth nor breadth. However, at lower-ranked departments – those ranked by QS from 51 to 400 – the Euclidean index does explain the distribution of economists better than total citations does.

In this graph, the full sample is the same dataset that Perry and Reny used in their graph. The peak correlation is for a lower exponent – tau or sigma** – simply because we take into account cohort effects by computing the correlation for a researcher's citation index relative to the cohort mean.*** While the distribution across the top 25 departments is similar to the full sample, with a peak at a slightly lower exponent that is very close to one, we don't find any correlation between citations and department rank for the next 25 departments. It seems that there aren't big differences between them.

Here are the correlations for the larger dataset that uses CitEc citations for the 400 universities ranked by QS:

For the top 50 universities, the peak correlation is for an exponent of 1.39 but for the next 350 universities the peak correlation is for 2.22. The paper also includes parametric maximum likelihood estimates that come to similar conclusions.

Breadth per se does not explain the distribution of researchers in our sample, but the highest ranked universities appear to weight breadth and depth equally, while lower-ranked universities do focus on depth, giving more weight to a few highly cited papers.

A speculative explanation of behavior across the spectrum of universities could be as follows. The lowest-ranked universities, outside of the 400 universities ranked by QS, might simply care about publication without worrying about impact. Having more publications would be better than having fewer at these institutions, suggesting a breadth relevant citation index. Our exploratory analysis that includes universities outside of those ranked by QS supports this. We found that breadth was inversely correlated with average citations in the lower percentiles.

Middle-ranked universities, such as those ranked between 400 and 50 in the QS ranking, care about impact; having some high-impact publications is better than having none, and a depth-relevant index describes behavior in this interval. Finally, among the top-ranked universities, such as the QS top 50 or NRC top 25, hiring and tenure committees wish to see high-impact research across all of a researcher's publications, and the best-fit index moves towards weighting depth and breadth equally. Here, adding lower-impact publications to a publication list that contains high-impact ones is seen as a negative.

* As monotonic transformations of the index also satisfy the same axioms, the simplest index that satisfies the axioms is simply the sum of squares.

** In the paper, we refer to an exponent of less than one as tau and an exponent greater than one as sigma.

*** The Ellison dataset that Perry and Reny use is based on Google Scholar data and truncates each researcher's publication list at 100 papers. With all working paper variants, it's not hard to exceed 100 items. This could bias the analysis in favor of depth rather than breadth. We think that the correlation computed only for researchers with 100 or fewer papers is a better way to test whether depth or breadth best explains the distribution of economists across departments. The correlation peaks very close to one for this dataset.

Thursday, December 28, 2017

The Impact of Electricity on Economic Development: A Macroeconomic Perspective

I have a new working paper out, coauthored with Paul Burke and Stephan Bruns. The paper is one of those commissioned for the first year of the Energy for Economic Growth program, which is funded by the UK Department for International Development and directed by Catherine Wolfram. The paper was actually completed in January 2017, but there has been a lot of delay in getting approval for publication. The project and the paper focus on the role of electricity in economic development in Africa and South Asia, the two regions of the world where electricity is least accessible.

Access to and consumption of electricity varies dramatically around the world. Access is lowest in South Sudan at 4.5% of households, while consumption ranges from 39 kWh per capita – this includes all uses of electricity not just household use – in Haiti to 53,000 kWh per capita in Iceland – driven by aluminum smelting. Consumption is 13,000 kWh per capita in the US. Electricity use and access are strongly correlated with economic development, as theory would suggest:

Access and consumption have increased strongly in many poorer countries in recent years – will this have beneficial effects on development? The specific questions that DFID asked us to answer were:

• How serious do electricity supply problems have to be in order to constitute a serious brake on economic growth?

• To what degree has electrification prolonged or accelerated economic growth?

• What can be learned from the development experience of countries that have invested successfully in electrification?

In principle, it should be easier to find evidence for causal effects using more disaggregated micro level data as some variables can more easily be considered exogenous, and randomized trials and other field experiments are possible. On the other hand, growth is an economy-wide, dynamic, and long-term process with effects that cannot usually be captured in micro studies. Therefore, macroeconomic analysis is also needed. Our paper is complemented by a paper covering the microeconomic aspects of these questions.

Despite large empirical literatures – such as that on testing for Granger causality between electricity use and economic growth – and suggestive case evidence, we found few methodologically strong studies that establish causal effects of electricity use, access, infrastructure, or reliability on an economy-wide basis. The best such study that we found is a paper by Calderon et al. in the Journal of Applied Econometrics. But this paper actually tests the effects of an aggregate of different types of infrastructure on growth.

We propose that future research focuses on identifying the causal effects of electricity reliability, infrastructure, and access on economic growth; testing the replicability of the literature; and deepening our theoretical understanding of how lack of availability of electricity can be a constraint to growth.

Annual Review 2017

I've been doing these annual reviews since 2011. They're mainly an exercise for me to see what I accomplished, and what I didn't, in the previous year. As I mentioned in last year's review, I am still struggling with work-life balance. It feels like there is never enough time to do the work I need to do, and I am always making excuses for not getting things done. So, stopping and looking at what I did get done can help provide some perspective.

I was IDEC (Crawford's economics program) director till the end of the year. Ligang Song will take over as IDEC director in 2018. We continued to work on developing and seeking approval for new programs. We made some progress, but the final outcome will only be known in 2018 (hopefully). There was also quite a lot of work on the review of the Crawford School, the future of Asia-Pacific economics at ANU, economics at Crawford and ANU etc. The Arndt-Corden Department of Economics is officially a separate organizational unit from IDEC. Our plan is that going forward Arndt-Corden will represent the research, outreach, and PhD program components of all the economics activity at Crawford and IDEC will continue as the masters teaching program. This too is a work in progress. 

ANU environment and resource economists, Paul Burke, Frank Jotzo, Quentin Grafton, Jack Pezzey, and me

I made two international trips – one to Singapore and one to Europe – and two short trips in Australia, to Brisbane and Melbourne. I went to Singapore for the IAEE international conference. My wife, Shuang, and son, Noah, came along too and we extended our stay to spend time in Singapore. We took the new direct flight from Canberra to Singapore, which is very convenient. From February there will also be Qatar Airways flights from Canberra, but apparently they will stop in Sydney before continuing to Singapore. That will just save time (maybe) on going between terminals in Sydney. To get to Europe I flew to Adelaide and then took Emirates via Dubai.

I was in Brisbane for the AARES conference. I have always found the conference to be much more dominated by agricultural economics than the journal is, but almost everything at the conference this time was agriculture related. Most of the environment papers dealt with agricultural impacts. I decided not to go in 2018, though the program is looking more balanced.

In December I traveled to Spain, Germany, and Israel. I gave a seminar at ICTA at the Autonomous University of Barcelona on the role of energy in modern economic growth. This was part of a series of seminars funded by the Maria de Maeztu program.

Speaking at ICTA, UAB, Barcelona

From there, I went on to Germany to work with Stephan Bruns on our ARC project and climate change paper. Alessio Moneta also visited from Pisa for a couple of days. Totally by coincidence, I arrived in Göttingen on the same day as Paul Burke who was touring Germany as part of his Energy Transition Hub activities:

Stephan Bruns, Krisztina Kis-Katos, Paul Burke, and me in Göttingen

We made quite good progress on both projects while I was there, but there is still much to do. We are just over the halfway point with the ARC DP16 project. One short paper is already published in Climatic Change, which discusses the accuracy of projections of future energy intensity. We also have another working paper on the restructuring of the US electricity generation industry and energy efficiency and have a paper under review on aircraft fuel economy.

We also completed and submitted our paper on the macroeconomic aspects of electricity and economic development for the DFID-funded EEG project. Publication of the working papers and announcement of the next stages of the project have been much delayed, but there should be news on the latter soon. Together with our PhD student Akshay Shanker, I made a lot of progress on our contribution to a Handelsbanken Foundation funded project headed by Astrid Kander. Well, Akshay did most of the work... The paper – about why energy intensity declines over time – will now be part of Akshay's PhD thesis.

I published fewer papers than last year, which isn't a surprise, as last year was a record year. There were five articles with a 2017 date:

Stern D. I., R. Gerlagh, and P. J. Burke (2017) Modeling the emissions-income relationship using long-run growth rates, Environment and Development Economics 22(6), 699-724. Working Paper Version | Blogpost

Stern D. I. (2017) How accurate are energy intensity projections? Climatic Change 143, 537-545. Working Paper Version | Blogpost

Zhang W., D. I. Stern, X. Liu, W. Cai, and C. Wang (2017) An analysis of the costs of energy saving and CO2 mitigation in rural households in China, Journal of Cleaner Production 165, 734-745. Working Paper Version | Blogpost

Stern D. I. and J. van Dijk (2017) Economic growth and global particulate pollution concentrations, Climatic Change 142, 391-406. Working Paper Version | Blogpost

Stern D. I. (2017) The environmental Kuznets curve after 25 years, Journal of Bioeconomics 19, 7-28. Working Paper Version | Blogpost

and there is one in press at the moment:

Bruns S. B. and D. I. Stern (in press) Overfitting bias and p-hacking in Granger-causality testing: Meta-evidence from the energy-growth literature, Empirical Economics. Working Paper Version | Blogpost

I also published a comment on a paper in Scientometrics:

Stern D. I. (2017) Comment on Bornmann (2017): Confidence intervals for journal impact factors, Scientometrics 113(3), 1811-1813. Blogpost

Follow the links to the blogposts to find out more about each paper.

I also published between 1 and 3 book chapters. It's often hard to work out when exactly a book chapter has been published! This one is definitely published, and it's open access for now. I only do book chapters where I can update an existing survey paper for the purpose. I posted 5 working papers, two of which have already been published and two of which are in the review process. In total, 6 papers are currently submitted, resubmitted, or in revision for resubmission.

Citations almost reached 14,000 on Google Scholar (h-index: 45) and will be well in excess of that for the end of 2017 when all this year's citations are finally included in Google's database.

I became an editor at PeerJ as part of their expansion into the environmental sciences. So far, I haven't actually handled a paper but I'm sure there will be some relevant submissions soon.

On the teaching side, I convened Masters Research Essay for the first time in the 1st Semester and taught Energy Economics for the last time for now in the second semester. My first PhD student here at Crawford, Alrick Campbell, received his PhD at the July graduation ceremony. He is currently a lecturer at the University of the West Indies in Jamaica.

I have been blogging even less this year than last. This will be the 19th post for 2017, whereas last year there were 35. Lack of time and increased use of Twitter are to blame. My Twitter followers now number more than 750, up from over 500 last year. The most popular blogpost this year was "Confidence Intervals for Journal Impact Factors".

Looking forward to 2018, it is easy to predict a couple of things that will happen that are already organized:

1. As mentioned above, I am ending my term as director of our economics program, IDEC, at the end of this calendar year. I am hoping to be able to focus a bit more on my research and get more balance in the coming year.

2. I will be the convener for Masters Research Essay and teach Environmental Economics in the first semester. I last taught environmental economics 10 years ago at RPI, so it will be quite a lot of work. I was getting a bit tired of teaching Energy Economics and if I did this course, Paul Burke could teach one of our compulsory first year masters microeconomics courses, so I decided to take it on. Both these courses are in the 1st semester and so I won't be teaching in the 2nd semester.

3. Early in the new year we will put out a working paper for our time series analysis of global climate change. We are currently revising the paper to resubmit to the Journal of Econometrics.

Nothing came of the job I applied for last year beyond the Skype interview, but I applied for another one this year...

Friday, November 24, 2017

Data and Code for "Energy and Economic Growth: The Stylized Facts" and an Erratum

Following a request for our estimation code we have now completed a full replication package for our 2016 Energy Journal paper and uploaded it to Figshare.

While we were putting this together we noticed some minor errors in the tables in the published paper. The reported standard errors of the coefficients of lnY/P in Tables 2 and 3 for the results without outliers are incorrect. We accidentally pasted the standard errors from Table 5 into Tables 2 and 3. The correct versions of Tables 2 and 3 should look like this:

The standard errors for unconditional convergence in Tables 4 and 6 are also incorrect. The reported standard errors are not robust and one was completely wrong. The tables should look like:

None of these errors changes the reported significance levels (1%, 5%, etc.) of the results.

Thursday, November 9, 2017

Distribution of SNIP and SJR

My colleague Matt Dornan asked me about the distribution of the journal impact factors SNIP and SJR. Crawford School has used SNIP to compare journal rankings across disciplines. It is a journal impact factor that is normalized for the differences in citation potential in different fields. This makes it reasonable to compare Nature and the Quarterly Journal of Economics, for example. Nature looks like it has much higher impact using the simple Journal Impact Factor, which just counts how many citations articles in a journal get. But taking citation potential into account, these journals look much more similar. SJR is an iterative indicator that takes into account how highly cited the journals citing the journal of interest are. It is similar to Google's PageRank algorithm. It can also be compared across disciplines. SJR is more an indicator of journal prestige or importance than of simple popularity. I've advocated SJR as a better measure of journal impact because some journals in my area receive a lot of citations, but mostly from journals that are not themselves highly cited.

I assumed the distribution of these indicators was highly skewed, with most journals having low impact. But I also assumed that the log of these indicators might be normally distributed, as citations to individual papers are roughly log-normally distributed. It turns out that the log of SNIP is still a bit skewed, but not that far from normal:

On the other hand the log of SJR remains highly non-normal:

There is a small tail of high prestige journals and then a big bulk of low prestige journals and a huge spike at SJR = 0.1. It makes sense that it is harder to be prestigious rather than just popular, but still I am surprised by how extreme the skew is. The distribution appears closer to an exponential distribution than a normal distribution.
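The kind of check involved can be sketched like this (toy numbers, not the actual SNIP data; a roughly log-normal variable should show much less skew after taking logs):

```python
import math

def skewness(x):
    """Sample skewness: third central moment over the 1.5 power of the variance."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m3 = sum((v - mean) ** 3 for v in x) / n
    return m3 / m2 ** 1.5

# Made-up SNIP-like values: strongly right-skewed in levels
snip = [0.2, 0.5, 0.8, 1.0, 1.1, 1.5, 2.0, 3.5, 6.0]
log_snip = [math.log(v) for v in snip]

print(skewness(snip) > skewness(log_snip))  # True: logging reduces the right skew
```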

Thursday, October 19, 2017

Barcelona Talk

I'll be giving a presentation in the "distinguished speakers series" at ICTA, Autonomous University of Barcelona on 5th December. I just wrote the abstract:

The Role of Energy in the Industrial Revolution and Modern Economic Growth

Abstract: Ecological and mainstream economists have debated the importance of energy in economic growth. Ecological economists usually argue that energy plays a central role in growth, while mainstream economists usually downplay the importance of energy. Using the (mainstream) theory of directed technological change, I show how increasing scarcity of biomass could induce coal-using innovation in Britain, resulting in the acceleration in the rate of economic growth known as the Industrial Revolution. Paradoxically, industrialization would be delayed in countries with more abundant biomass resources. However, as energy has become increasingly abundant, the growth effect of additional energy use has declined. Furthermore, both directed technological change theory and empirical evidence show that innovation has increasingly focused on improving the productivity of labor rather than that of energy. This explains the focus of mainstream economic growth models on labor productivity enhancing innovation as the driver of economic growth.

The paper will draw on my 2012 paper with Astrid Kander – it shares the same title after all – my recent working paper with Jack Pezzey and Yingying Lu, and maybe my ongoing work with Akshay Shanker on understanding trends in energy intensity in the 20th and 21st Centuries. The talk is for an interdisciplinary audience, so that will be challenging, but I think I can do it :)

Political Bias in My Teaching?

I've long been curious about what students in my classes think about my political position. So, I finally decided to ask them. I added a bonus question for 1 point on top of the 100 points available for the final exam in my Energy Economics course. The question read:

Bonus question (1 point): 
This question relates to potential political bias in my presentation of the course material. Based on the content of the course, which political party do you think I voted for in the last Federal senate election?

a. Greens
b. Labor
c. Liberal
d. Liberal Democrats
e. Australian Sex Party
f. Christian Democratic Party

Actually, ten parties ran at the last election for the two available senate seats representing the ACT, but I thought it would be better to keep the list a little more manageable.

The distribution of answers was as follows:

Greens: 2
Labor: 5
Liberal: 5
Liberal Democrats: 5
Australian Sex Party: 0
Christian Democrats: 0

Assuming that everyone who picked Liberal Democrats knows what it is - a libertarian party - there is a perceived rightward bias. But there are a lot of foreign students who might assume it is a more centrist party. Or people might have assumed that if I listed a bunch of parties they hadn't heard of, one of those must be the right answer.

What would no bias look like? Maybe something more like Greens 2, Labor 7, Liberal 7, Liberal Democrats 1, or 3, 7, 6, 1, which is closer to the voting pattern. Or maybe even further to the left: most academics, including economists, in Australia probably vote for Labor, so that would be the default assumption unless students perceived a strong bias in my teaching.

Tuesday, October 10, 2017

What Do Crawford School Economists Do?

I'm doing quite a bit of background work for our School Review, a review of the Future of Asia-Pacific Economics etc. The following table is based on the self-identified "Fields of Research" of core Crawford economics faculty. Most people chose more than one field. If, for example, someone chose three fields, then I attributed 1/3 of an FTE to each for that person. The result looks like this:
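The fractional FTE attribution is simple to script; a sketch with hypothetical faculty and field choices, not the actual Crawford data:

```python
from collections import defaultdict

def field_ftes(faculty_fields):
    """Attribute 1/n of an FTE to each of a person's n self-identified fields."""
    totals = defaultdict(float)
    for fields in faculty_fields.values():
        for field in fields:
            totals[field] += 1.0 / len(fields)
    return dict(totals)

# Hypothetical faculty members mapped to their chosen fields
ftes = field_ftes({
    "A": ["Development", "Growth"],
    "B": ["Environment"],
    "C": ["Environment", "Energy", "Growth"],
})
# Environment gets 1 + 1/3 FTE; the totals sum to the headcount (3)
```

By construction the field totals always sum to the number of people, so the table can be read as a breakdown of full-time-equivalent research effort.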

Our research foci are economic development and growth, environmental and resource economics, and international economics and finance. The (non-geographical) fields that we rank best in globally in RePEc are: Environment 7, Energy 7, Resources 11, Agriculture 23, Growth 29, International Trade 32, Development 39. So, this focus also is where we perform well.

Most Crawford economists have countries that they focus on. Using a similar approach I put together this table:

Naturally, Australia is number one, then follow China, Japan, Indonesia, and Vietnam. In RePEc, we rank 4th in the SE Asia ranking, 18th in Central/Western Asia (which actually includes South Asia), and 39th in the China subject ranking. This reflects more of our historical focus, while the current faculty is more focused on NE Asia. We don't have any current faculty with a professed interest in Thailand, for example! Of course, there is also less competition in research on SE Asia than on China and so that will also affect our ranking.

Friday, October 6, 2017

Impact Factors for Public Policy Schools

As part of our self-evaluation for the upcoming review of the Crawford School, I have been doing some bibliometric analysis. One thing I have come up with is calculating an impact factor for the School and some comparator institutions. This is easy to do in Scopus. It's the same idea as computing one for an individual or a journal, of course. I am using a 2016, 5 year impact factor. Just get total citations in 2016 to all articles and reviews published in 2011-2015. Divide by the number of articles. Here are the results with 95% confidence intervals:

The main difficulty I had was retrieving articles for some institutions such as the School of Public Affairs at Sciences Po. Very few articles came back for various variants of the name that I tried. I suspect that faculty are using departmental affiliations. I had a similar problem with IPA at LSE. So, I report the whole of LSE in the graph. It is easy to understand this metric in comparison to journal impact factors. As an individual metric the confidence interval will usually be large, though my 2016 impact factor was 5.9 with a 4.2 to 7.5 confidence interval. That's more precise than the estimate for SIPA.

Friday, September 22, 2017

More on Millar et al.

Millar and Allen have an article in the Guardian explaining what their paper really says. They say that existing Earth System Models assume too little cumulative emissions by the 2020s, when atmospheric carbon dioxide, and therefore temperature, will be higher than now. But we have already reached that level of cumulative emissions, and so we need to adjust the graph of cumulative emissions vs. temperature, though not the graph of temperature vs. current concentration of CO2. The discrepancy arises because of uncertainty in cumulative emissions; models have backfilled this estimate from other variables. They then say that human-induced warming is 0.93C, and so that is the temperature baseline of their shifted frame of reference:

The argument of critics like Gavin Schmidt, Zeke Hausfather, and me that the estimate of current human-induced warming is too low still stands. And this means that the remaining carbon budget is smaller than argued by Millar et al. Any likely path to 1.5 degrees will require exceeding that temperature and then bringing radiative forcing down again.

Thursday, September 21, 2017

Is the Carbon Budget for 1.5 Degrees Much Larger than We Thought?

An article in Nature Geoscience by Millar et al. on carbon budgets has attracted a lot of attention and debate.* A blogpost by the lead author explains that current human-induced warming in the 2010's is 0.93C above the 1861-1880 period, while the mean CMIP5 climate model run projected that, given cumulative carbon emissions to date, the temperature should be 0.3C warmer than that. They argue that this means that the remaining carbon emissions budget for staying within a 1.5C increase in temperature is larger than previously thought, as we have 0.6C to go at this point rather than 0.3C. I think there are a number of issues with this claim.

First, the value for human-induced warming is based on averaging the orange line in this graph:

The orange line is derived by fitting estimated radiative forcing to observed temperature given by the HADCRUT4 dataset by regression. HADCRUT4 shows less increase in surface temperature than either the GISS or Berkeley Earth datasets because of how it covers the polar regions, in particular.
Using the Berkeley Earth dataset, the temperature increase from the 1861-80 mean to the 2010's mean – shown by black lines in this graph:

is 1.1C. Even that estimate assumes that conditions during the "hiatus" are more usual than those during the post-2014 increase in temperature, so 0.93C is a very conservative estimate of warming to date. Though the recent period was affected by El Nino conditions, it's possible that it represents catching up to the long-term trend rather than an excursion above it. Throughout the hiatus period, ocean heat content was increasing. I think it is likely that the jump in temperature in the last two years is a recoupling of surface temperature to this more fundamental trend. We have a paper under review that supports this view.**

Also, I think that averaging the orange trend line in the previous graph is definitely too conservative given the strongly non-stationary behavior of the trend. The most recent estimate of the trend would be a better guess.

Second, I think there are a few reasons*** why we might update the carbon budget (as measured from the beginning of the Industrial Revolution):

1. Our estimate of the transient climate sensitivity changes – we think that the short-run temperature for a given concentration of carbon dioxide in the atmosphere is higher or lower than we previously thought.

2. Our estimate of the airborne fraction changes – our estimate of the amount by which carbon dioxide in the atmosphere increases in reaction to a given amount of emissions changes. CO2 in the atmosphere has increased by about half of cumulative emissions.

3. Our estimate of non-CO2 forcing changes. There are important other sources of radiative forcing such as methane and sources of negative forcing such as sulfate aerosols.

Observations of warming to date are not one of these. So the paper is implicitly saying that these observations lead them to reduce their estimate of the climate sensitivity.

Third, though the paper says that Earth System Models overestimated warming to date, it seems that the authors use the same models to estimate the remaining carbon budget.

* I have extensively revised this post following a comment from Myles Allen, one of the paper's authors. Also, I realized that the second part of the post didn't really make much sense, so I deleted it.

** The paper has been in review since February, but we haven't posted a working paper, as one of my coauthors didn't want to do so before receiving referee comments.

*** The emissions path also affects the carbon budget as we can see from the mean values for the various RCP paths in the graph below from Millar et al. and the difference between the red plume of RCP paths and the grey plume which are paths where emissions grow at a constant 1% per annum rate. The slower we release carbon, the bigger the budget.

Tuesday, September 5, 2017

Confidence Intervals for Journal Impact Factors

Is the Poisson distribution a short-cut to getting standard errors for journal impact factors? The nice thing about the Poisson distribution is that the variance is equal to the mean. The journal impact factor is the mean number of citations received in a given year by articles published in a journal in the previous few years. So if citations followed a Poisson distribution, it would be easy to compute a standard error for the impact factor. The only additional information you would need besides the impact factor itself is the number of articles published in the relevant previous years.
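Under that assumption the arithmetic is trivial; a minimal sketch with illustrative numbers, not a real journal:

```python
import math

# Under a Poisson model the variance of citation counts equals their mean,
# so the impact factor and the article count are all we need.
def poisson_se(impact_factor, n_articles):
    return math.sqrt(impact_factor / n_articles)

# Illustrative journal: impact factor 4.0 based on 400 articles
se = poisson_se(4.0, 400)                      # 0.1
ci = (4.0 - 1.96 * se, 4.0 + 1.96 * se)        # 95% confidence interval
```

This is the appeal of the proposal: the standard error falls out of two numbers that are already published. Whether the Poisson assumption holds is the question the rest of the post takes up.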

This is the idea behind Darren Greenwood's 2007 paper on credible intervals for journal impact factors. As he takes a Bayesian approach things are a little more complicated in practice. Now, earlier this year Lutz Bornmann published a letter in Scientometrics that also proposes using the Poisson distribution to compute uncertainty bounds - this time, frequentist confidence intervals. Using the data from my 2013 paper in the Journal of Economic Literature, I investigated whether this proposal would work. My comment on Bornmann's letter is now published in Scientometrics.

It is not necessarily a good assumption that citations follow a Poisson process. First, it is well-known that the number of citations received each year by an article first increases and then decreases (Fok and Franses, 2007; Stern, 2014), and so the simple Poisson assumption cannot be true for individual articles. For example, Fok and Franses argue that for articles that receive at least some citations, the profile of citations over time follows the Bass model. Furthermore, articles in a journal vary in quality and do not each have the same expected number of citations. Previous research finds that the distribution of citations across a group of articles is related to the log-normal distribution (Stringer et al., 2010; Wang et al., 2013).

Stern (2013) computed the actual observed standard deviation of citations in 2011 at the journal level for all articles published in the previous five years in all 230 journals in the economics subject category of the Journal Citation Reports using the standard formula for the variance
where Vi is the variance of citations received in 2011 for all articles published in journal i between 2006 and 2010 inclusively, Ni is the number of articles published in the journal in that period, Cj is the number of citations received in 2011 by article j published in the relevant period, and Mi is the 5-year impact factor of the journal. Then the standard error of the impact factor is √(Vi/Ni ).
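By contrast, the observed standard error comes straight from the article-level citation counts, as in the formula described above; a sketch using the sample variance and made-up counts:

```python
def impact_factor_se(citations):
    """Impact factor (mean citations), sample variance, and standard error."""
    n = len(citations)
    mean = sum(citations) / n
    var = sum((c - mean) ** 2 for c in citations) / (n - 1)
    se = (var / n) ** 0.5
    return mean, var, se

# Made-up citation counts for one journal's articles in the 5-year window
mean, var, se = impact_factor_se([0, 1, 1, 2, 3, 5, 8, 12])
# Here the variance (about 17) far exceeds the mean (4), so a Poisson-based
# standard error would be much too small for this journal.
```

The only extra data needed over the Poisson short-cut is the per-article citation counts, which are downloadable from the Web of Science or Scopus.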

Table 1 in Stern (2013) presents the standard deviation of citations, the estimated 5-year impact factor, the standard error of that impact factor, and a 95% confidence interval for all 230 journals. Also included are the number of articles published in the five year window, the official impact factor published in the Journal Citation Reports and the median citations for each journal.

The following graph plots the variance against the mean for the 229 journals with non-zero impact factors:

There is a strong linear relationship between the logs of the mean and the variance, but it is obvious that the variance is not equal to the mean for this dataset. A simple regression of the log of the variance of citations on the log of the mean yields:

where standard errors are given in parentheses. The R-squared of this regression is 0.92. If citations followed the Poisson distribution, the constant would be zero and the slope would be equal to one. These hypotheses are clearly rejected. Using the Poisson assumption for these journals would result in underestimating the width of the confidence interval for almost all journals, especially those with higher impact factors. In fact, only four journals have variances equal to or smaller than their impact factors. As an example, the standard error of the impact factor estimated by Stern (2013) for the Quarterly Journal of Economics is 0.57. The Poisson approach yields 0.2.
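The test of the Poisson hypothesis is just an OLS regression of log variance on log mean; a self-contained sketch with simulated journal data (not the actual dataset), built to mimic the kind of over-dispersion found here:

```python
import random

# Simulate 229 journals' (log mean, log variance) pairs with over-dispersion
# (true intercept 0.4, true slope 1.5, plus noise).
random.seed(0)
log_mean = [random.uniform(-2, 2) for _ in range(229)]
log_var = [0.4 + 1.5 * m + random.gauss(0, 0.3) for m in log_mean]

# OLS slope and intercept; under the Poisson hypothesis the true
# intercept is 0 and the true slope is 1.
n = len(log_mean)
mx = sum(log_mean) / n
my = sum(log_var) / n
sxx = sum((x - mx) ** 2 for x in log_mean)
sxy = sum((x - mx) * (y - my) for x, y in zip(log_mean, log_var))
slope = sxy / sxx
intercept = my - slope * mx
```

With a slope above one, the gap between the true variance and the Poisson-assumed variance widens as the impact factor rises, which is why the Poisson confidence intervals are most misleading for high-impact journals.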

Unfortunately, accurately computing standard errors and confidence intervals for journal impact factors appears to be harder than just referring to the impact factor and number of articles published. But it is not very difficult to download the citations to articles in a target set of journals from the Web of Science or Scopus and compute the confidence intervals from them. I downloaded the data and did the main computations in my 2013 paper in a single day. It would be trivially easy for Clarivate, Elsevier, or other providers to report standard errors.


Bornmann, L. (2017) Confidence intervals for Journal Impact Factors, Scientometrics 111:1869–1871.

Fok, D. and P. H. Franses (2007) Modeling the diffusion of scientific publications, Journal of Econometrics 139: 376-390.

Stern, D. I. (2013) Uncertainty measures for economics journal impact factors, Journal of Economic Literature 51(1), 173-189.

Stern, D. I. (2014) High-ranked social science journal articles can be identified from early citation information, PLoS ONE 9(11), e112520.

Stringer, M. J, Sales-Pardo, M., Nunes Amaral, L. A. (2010) Statistical validation of a global model for the distribution of the ultimate number of citations accrued by papers published in a scientific journal, Journal of the American Society for Information Science and Technology 61(7): 1377–1385.

Wang, D., Song C., Barabási A.-L. (2013) Quantifying long-term scientific impact, Science 342: 127–132.

Sunday, August 13, 2017

Interview with Western Cycles Blog

Alejandro Puerto is a 20-year-old who lives in Cuba. He has written "Western Cycles: United Kingdom", a book that covers the economic and political history of the UK from 1945 onwards. He maintains a website of the same name that showcases his writing. You can also follow him on Twitter. He asked me whether I would do an interview for his blog. Here it is:

When did you become interested in the economics of energy and the environment?

I was interested in the environment from an early age, and so I studied geography, biology (and chemistry) in the last two years of high school in England (1981-83) and then went on to study geography at university (in Israel). I had to pick another field and initially chose business as something practical, but quickly switched to economics. I then realised that economics could explain a lot of geographical and environmental trends. But it was only when I started my PhD in 1990 at Boston University's Center for Energy and Environmental Studies, which was linked to the Geography Department there and was really focused on the role of energy in the economy and in environmental trends, that I became interested in understanding the role of energy. So I officially got a PhD in geography, but had quite a lot of economics training and over time drifted closer to economics; now I am even director of the economics program at the Crawford School of Public Policy at ANU.

I think that my generation is better informed on climate change because of the work of people like you. Do you think the same? Tell us about some of your research.

Well, I think it has just become a much bigger and more obvious issue as the global temperature has increased. The awareness of what is happening has been driven by people in the natural sciences. I have done some research applying time series models used in macroeconomics to modelling the climate system, and though our first paper was published in Nature in 1997 and has been cited in IPCC reports, this work has largely remained on the fringes of climate science. My view of that research is that it takes an entirely different approach to modelling the system than most climate scientists use (mostly they use big simulation models called GCMs) and finds similar results, which strengthens their conclusions. Most of my research has been on the role of energy in economic growth and the effect of economic growth on emissions and concentrations of pollutants. The effect of energy on growth is much more complicated than many people think: energy seems to have been a more important growth driver in the past in the now-developed world, as adding energy when you have little has more effect than when you already have a lot. On pollution, I've argued that the idea of the environmental Kuznets curve (that as countries get richer, growth eventually becomes good for the environment and reduces pollution) is either outright wrong or too simplified. Instead, in fast-growing countries like China, growth overwhelms efforts to reduce pollution, while in slower-growing developed economies clean-up can happen faster than growth.

Did the Paris Summit fulfil your expectations as an environmental economist?

It was probably better than expected given the lack of success in getting agreement before then. Countries' pledges are too little to reach the goal of limiting warming to 2C, and we will probably have to remove carbon from the atmosphere in a big way later this century. The real question is whether countries will actually fulfil their voluntary pledges. On the other hand, low-carbon technology is developing fast, and that is a positive that makes achieving the goals look more possible.

How dangerous will the environmental policy of the United States under the Trump administration be for climate change?

It will delay action, though it is unclear how much effect it will really have. Encouraging the development of new technology is important, and having the largest and leading economy not focused on that is a negative. The US can't actually leave till late 2020, and Trump has left the door open to submitting a weaker INDC in the interim and claiming victory. The US will still be involved in UNFCCC talks etc.

What do you think about the emissions of developing countries as they industrialize?

Developing country emissions are now larger than developed country emissions. But there is a big difference between China, which now has higher per capita emissions than the European Union, and, say, India, which still has very low per capita emissions. China needs to take action and has made a moderately strong pledge. We should expect much less from India. India is, though, strongly encouraging renewables development. Hopefully, technology is advancing fast enough that the poorest countries will end up going down a lower-carbon path anyway as fossil fuel technologies gradually phase out.

Since 2006 China has been the greatest global polluter, and its emissions are still growing. China has no plans to decrease these emissions until 2030. What do you think about this country's attitude?

They say they will peak emissions by 2030. In terms of reducing emissions intensity per dollar of GDP, their goal is quite strong. In the last three years, Chinese CO2 emissions have been constant. Some argue they are already peaking now; I am a bit more skeptical, as we need to see a few more years. There are several reasons why China is pursuing a fairly strong climate policy, including energy security, encouraging innovation, and reducing local air pollution, as well as realising that they can benefit a lot from reducing their own emissions because they are such a large part of the problem.

In the long term, which kind of renewable energy would be the first to think about? Solar? Wind?

Solar – it has a greater potential total resource and looks like eventually prices will be below wind. Wind of course is strong in places without much sunshine like the Atlantic Ocean off NW Europe. I’m concerned though about the environmental impact of lots of wind power. In the long-run I’m still hoping for fusion to work out :)

Tell us about one of your favorite posts published by you on Stochastic Trend.

I’ve done less blogging recently as I now use Twitter for short things. Most of the posts are excerpts from papers or discussions of new papers. The most popular blogpost this year with visitors is:

Where I discuss our working paper on the role of coal in the Industrial Revolution. The research and writing of this paper took a very long time and I was really happy to be able to announce to the world that it was ready.

Do you drive an electric car?

No, I don’t have a driving licence. My wife drives and we have a car but it is a large petrol-engined car that is not very efficient. We don’t drive it much though. We’ve driven less than 30,000 km since buying it in 2007.

Have you ever visited Cuba? Are you interested? There are a lot of 1950s cars, but there are places with tropical nature.

No, I haven’t been to Cuba. The only place I’ve been in Latin America is Tijuana, Mexico. I’m not travelling that much recently as we now have a 1 1/2 year old child. But Cuba probably wouldn’t be high on my agenda. I travel mostly to either visit family or go to academic conferences and work with other researchers. The only time I flew somewhere outside the country I was living in just to go on vacation was when I flew from Ethiopia to Kenya. I was at an IPCC meeting in Ethiopia.

Thursday, July 27, 2017

Error in 2014 Energy Journal Paper

I hate looking at my published papers because there are often typos in them which I didn't catch at the proofs stage...This time my coauthor, Stephan Bruns, found one in our 2014 paper in the Energy Journal: "Is There Really Granger Causality Between Energy Use and Output?" In Table 1, which describes the details of the studies in our meta-analysis, the control variable labeled "energy production" should actually be labeled "energy price". Energy production would be a weird control variable...

After digging into our draft files, it turns out I changed "energy pr." to "E Prod." systematically in this table just before we resubmitted it to the journal. I also added a footnote: "E Prod.=Energy Production". I don't know why I did this. I would have thought I would have asked Christian Gross, who made the original version of the table, before doing this, but I can't find any email to prove that...

Sunday, July 9, 2017

Robots, Artificial Intelligence, and the Future of Work

I think that robots/artificial intelligence and the future of work is a hugely important topic. This is a very active research field but it seems to me that people (some informed by this research but most without referring to the research) are rushing to one of two conclusions. The first of these conclusions is that up till now economic growth has resulted in rising wages and full employment and so it surely will in the future too. The other is that robots must mean structural unemployment and so the solution is to introduce universal basic income or some similar redistributive policy.

I don't think either is necessarily true. In the past, the elasticity of substitution between labor and capital seems to have been less than one - both inputs were essential in production. Also, the two inputs are q-complements - an increase in capital per worker results in an increase in the marginal product and, therefore, the wage of a worker. But it's possible that now, or in the future, the elasticity of substitution between labor and capital is or will be greater than unity, so that labor is not an essential input. Or that there are techniques that are designed to use only machines. Acemoglu and Restrepo (2016) assume that as some low-skilled tasks become automated, other new high-skilled tasks are introduced. But there may be limits to people's cognitive ability. Most people aren't intelligent enough to be engineers and scientists. And the people who are intelligent enough now might be worse than artificial intelligences in the future.
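The essentiality point can be made concrete with a CES production function: when the elasticity of substitution is above one, output stays positive even as labor vanishes; below one, output collapses. A minimal sketch with illustrative parameter values:

```python
# CES production function: Y = (a*K^rho + (1-a)*L^rho)^(1/rho),
# where rho = (sigma - 1) / sigma and sigma is the elasticity of substitution.
def ces(K, L, sigma, a=0.5):
    rho = (sigma - 1.0) / sigma
    return (a * K ** rho + (1.0 - a) * L ** rho) ** (1.0 / rho)

# With sigma > 1, labor is not essential: output stays positive as L -> 0
y_substitutes = ces(10.0, 1e-9, sigma=2.0)
# With sigma < 1, labor is essential: output collapses as L -> 0
y_complements = ces(10.0, 1e-9, sigma=0.5)
```

The same function also illustrates q-complementarity: with sigma below one, raising K raises the marginal product of L, which is the mechanism behind rising wages in the historical case.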

The other premature conclusion is that things are definitely different now, and robots will result in structural unemployment or immiserizing growth, so that government intervention is needed. Usually, universal basic income is mentioned. Sachs et al. (2015) argue immiserizing growth is possible. This is one of the better papers out there, I think, but the framework is still quite limited, as technological change is exogenous. It is possible that there is some self-correcting mechanism similar to that in Acemoglu's (2003) paper on capital- and labor-augmenting technical change. In that model, capital-augmenting technical change is possible for a while, but it introduces forces that return the economy to a pure labor-augmenting technical change path. Another important question is whether people have a preference for having at least some of the goods and services they consume produced by humans. Sachs et al. assume that utility is a Cobb-Douglas function of automatable and non-automatable goods. That means that consumption of human-made goods could, in theory, become infinitesimally small.

I think we need to consider a range of models as well as empirical evidence before we can say what kind of policy, if any, is needed.

I wanted to do research on this topic and made a start, but concluded that it is not realistic given my limited time for research because of my administrative role - I am still director of our International and Development Economics Program - my parenting role, and my existing research commitments. Twitter isn't the only reason that I haven't updated this blog in 2 1/2 months. Comparative advantage suggests to me that I remain focused on energy economics. However, I think that the Crawford School of Public Policy should be looking at these kinds of issues, and I am trying to encourage that. This is going to be one of the key policy questions going forward, I think.

Thursday, April 27, 2017

How Accurate are Projections of Energy Intensity?

I have a new short working paper on how accurate projections of future energy intensity are. It's an extension of comments I made at Energy Update 2016 here at the ANU.

Energy intensity is one of the four factors in the Kaya Identity, which is often used to understand changes in greenhouse gas emissions. It is one of the two most important factors together with the rate of economic growth. The 2014 IPCC Assessment Report shows that less than 5% of models included in the assessment project that energy intensity will decline slower than the historic rate under business as usual:*

Is this likely? In the paper, I evaluate the past performance of the projections implied by the World Energy Outlook (WEO) published annually (except in 1997) by the International Energy Agency (IEA). The following graph shows the average annual difference between the projected and actual rate of change in energy intensity in subsequent years** for each WEO since 1994:

Positive errors mean that energy intensity declined slower than projected in the following years while negative errors mean it declined faster. So, for example, the error of -0.4% for 2000 means that over the years 2001-2015, on average energy intensity declined by 0.4% a year faster than was projected in the 2000 WEO.
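Following the sign convention of that example (so the 2000 WEO's error is negative), the error for a given WEO is the actual minus the projected growth rate of energy intensity, averaged over the following years; a sketch with illustrative numbers, not the actual WEO figures:

```python
def mean_error(actual, projected):
    """Average annual error: actual minus projected growth of energy intensity
    (% per year). Negative means intensity declined faster than projected."""
    return sum(a - p for a, p in zip(actual, projected)) / len(actual)

# Illustrative: a WEO projecting a -1.2%/yr decline while the actual
# decline averaged -1.6%/yr over the projection period
err = mean_error(actual=[-1.5, -1.7, -1.6], projected=[-1.2, -1.2, -1.2])
# err is -0.4: intensity declined 0.4 percentage points a year faster
# than projected, as in the 2000 WEO example above
```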

It turns out that these errors are strongly negatively correlated (r = -0.8) with the error in projecting the rate of economic growth, which the IEA outsources. Csereklyei et al. (2016), similarly, find that reductions in energy intensity tend to only occur in countries with growing economies. If we divide and multiply the growth rate of energy intensity g(E/Y) by the growth rate of GDP g(Y), we get the following identity:
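In symbols, the identity described above is just:

```latex
g(E/Y) \equiv \frac{g(E/Y)}{g(Y)} \times g(Y)
```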

The first term on the right hand side can be seen as the elasticity of energy intensity with respect to GDP.*** The following graph plots the elasticity as projected and as subsequently realized for each WEO:

The two seem to have tracked each other quite well. But there is a complication. The 1994 to 96 WEOs only projected future energy use up to 2010. 2010 is the only recent year when global energy intensity actually increased. This end point reduces (in absolute value) the actual elasticities for these three WEOs. From 1998 on, the difference between the projected and actual rate of change in energy intensity is calculated up to 2015. But through the 2011 WEO, 2010 is one of the years in the projection period. From 2012, 2010 is no longer included in the projection period, and there is a sharp step down in the actual elasticity over the projection period. I think that the elasticities for 2012-16 probably under-estimate the true long-run elasticities and that the relatively stable values from 1998-2011 are more representative of what the future elasticities will be over the full projection horizon to 2030 or 2040.

If that is the case, then the projected elasticity of -0.6 in the 2016 WEO probably over-estimates the elasticity that will be realized in the long run. Why would this be the case?

Early WEOs largely modeled energy intensity trends based on historical trends. This is not the case for recent WEOs. Over time, the IEA has endogenized more variables in their model of the world energy system and included more and more explicit energy policies. It is likely that the model under-estimates the economy-wide rebound effect. It's also possible that energy efficiency policies are not implemented as effectively as expected.

As part of our ARC funded DP16 project, we hope to contribute to improving future projections of energy intensity by empirically estimating the economy-wide rebound effect.

* The light grey area indicates the projections between the 95th and 100th percentile of the range for the default scenario.
** The base year for each WEO is 2-3 years before the publication date. Therefore, we can already assess the 2015 and 2016 WEOs.
*** We can use the identity to decompose the projection errors:

Over time the contribution of errors in the projected growth rate has increased relative to the contribution from errors in the elasticity. But I think that if we revisit this experiment in 2030 we will find a larger contribution from errors in the elasticity for what are currently recent issues of the WEO.

P.S. 23 June 2017

The paper is now published in Climatic Change.