Sunday, December 23, 2018

Annual Review 2018

I've been doing these annual reviews since 2011. They're mainly an exercise for me to see what I accomplished and what I didn't in the previous year. This year was a bit of a struggle at times, so it's a good idea to remind myself of what I did manage to accomplish.

Me and my mother holding my brother in 1967

Going into this year, I had high expectations for getting more research done, as I had finished my term as economics program director at Crawford at the end of 2017. In the first semester I was teaching a new course – or rather a subject I last taught more than a decade ago, environmental economics – but I thought that should be manageable, and I had three weeks of class prepared at the beginning of the semester. I definitely don't have a comparative advantage in teaching; preparation takes me a lot of time and effort. Then my mother died in the week that class began. This was expected – she had not been doing well when I visited in December – but of course the exact timing is never known. I didn't travel to Israel for the funeral: it is the custom to bury someone on the day they die, if possible, so I didn't want to delay that, and I had already agreed with my brother to travel instead for the "stone-setting", which in Israel takes place 30 days after the death. After I got back, I came down with a flu/cold that left me completely without a voice, so I couldn't teach at all. It was a difficult semester. In October/November I got ill again with a flu/lung infection of some sort and lost a month of research time.

Noah and me in Sweden 

But there were also happier travels during the year. In June and July, I traveled with my wife, Shuang, and son, Noah, to the Netherlands, Finland, Sweden, and Japan. I went to three conferences: the IAEE meeting in Groningen and the IEW and World Congress in Gothenburg. Shuang also attended the World Congress. The visits to Finland and Japan were just for fun. Stephan Bruns was also at the IAEE meeting and actually presented our paper on rebound, which got very positive feedback. Stephan and Alessio Moneta did most of the econometric work on the paper, which we are about to submit now.

In September I went to Rome and Singapore for two workshops. The Climate Econometrics Conference was held at the Villa Mondragone, near Frascati, outside Rome. I presented a paper comparing different estimators of the climate sensitivity. It produced some unexpected results, and it looks like it will need a lot more work at some point! I met lots of people, including my coauthor Richard Tol, whom I met in person for the first time.

Villa Mondragone near Frascati

In Singapore, I attended the 5th Asian Energy Modelling Workshop, which mostly focuses on integrated assessment modeling. By then, I was confident enough to present the rebound paper myself.

I also went to the Monash Environmental Economics Workshop in Melbourne in November. This is a small meeting with just one stream of papers, but they are all focused on environmental economics, whereas the larger annual AARES conference mostly focuses on agriculture.

Akshay Shanker and I finally put out a working paper that was our contribution to a Handelsbanken Foundation funded project headed by Astrid Kander. We are also branding it as part of our ARC-funded DP16 project, as we have been using ARC funding on it as well. This year we also completed the major part of the work on rebound that was proposed in DP16. Zsuzsanna Csereklyei, who was working on the DP16 project, moved to a lecturer position at RMIT.

The Energy Change Institute at ANU won the annual ANU Grand Challenge Competition with a proposal on Zero-Carbon Energy for the Asia-Pacific. The project already received several hundred thousand dollars of interim funding from the university in 2018, and I have been working with Akshay on electricity markets as part of it. We'll continue research on the topic during 2019.

ECI Grand Challenge Presentation Team: Paul Burke, Kylie Catchpole, and Emma Aisbett

We only managed to publish two papers with a 2018 date:

Burke P. J., D. I. Stern, and S. B. Bruns (2018) The impact of electricity on economic development: A macroeconomic perspective, International Review of Environmental and Resource Economics 12(1) 85-127. Working Paper Version | Blogpost

Csereklyei Z. and D. I. Stern (2018) Technology choices in the U.S. electricity industry before and after market restructuring, Energy Journal 39(5), 157-182. Working Paper Version | Blogpost

But we have several papers in press:

Bruns S. B., J. König, and D. I. Stern (in press) Replication and robustness analysis of 'Energy and economic growth in the USA: a multivariate approach', Energy Economics. Working Paper Version | Blogpost

Bruns S. B., Z. Csereklyei, and D. I. Stern (in press) A multicointegration model of global climate change, Journal of Econometrics. Working Paper Version | Blogpost

Bruns S. B. and D. I. Stern (in press) Lag length selection and p-hacking in Granger causality testing: Prevalence and performance of meta-regression models, Empirical Economics. Working Paper Version | Blogpost

We posted five new working papers, three of which haven't been published yet:

Flying More Efficiently: Joint Impacts of Fuel Prices, Capital Costs and Fleet Size on Airline Fleet Fuel Economy
November 2018. With Zsuzsanna Csereklyei. Blogpost

Energy Intensity, Growth and Technical Change
September 2018. With Akshay Shanker. Blogpost

How to Count Citations If You Must: Comment
January 2018. With Richard Tol. Blogpost

Google Scholar citations approached 16,000 with an h-index of 51.

The trend towards fewer blogposts continued – this is only the ninth post this year. Twitter followers rose from 750 to 950 over the year.

Akshay Shanker – his primary adviser was Warwick McKibbin and I was on his supervisory panel – received his PhD with very positive feedback from the examiners. He has a part-time position at ANU working on the Grand Challenge project, and I am supervising him on that.

There doesn't seem to have been any major progress on the issues surrounding economics at ANU that I mentioned in last year's post. Arndt-Corden seems to be heading towards being a specialist program dealing with developing Asia, and there is no overall identity for economics at Crawford. I increasingly identify with the Centre for Applied Macroeconomic Analysis.

On a related theme, I applied for three jobs on three different continents. One of these – the one in Australia – went as far as an onsite interview, but the more I learnt about the job, the less enthusiastic I became, and in the end I wasn't offered it. It was a 50/50 admin/leadership and research position.

Looking forward to 2019, a few things can be predicted:
  • We're about to submit our first paper on the rebound effect and should also put out a working paper or two on the topic.
  • I'll continue research with Akshay on the Grand Challenge project.
  • I'm not planning to go to any conferences this year. I have one seminar presentation lined up at Macquarie University in the second half of the year.
  • My PhD student Panittra Ninpanit will submit her thesis at the beginning of the year, and I have a new student, Debasish Kumar Das, starting. The plan is for him to work on electricity reliability.
  • I'll be teaching environmental economics and the master's research essay course again in the first semester.
 Trying to understand the menu in Finland

Monday, November 26, 2018

Flying More Efficiently

I have another new working paper out, coauthored with Zsuzsanna Csereklyei, on airline fleet fuel economy. Zsuzsanna worked as a research fellow here at the Crawford School on my Australian Research Council funded DP16 project on energy efficiency and the rebound effect. This paper reports on some of our research in that project. We also looked at energy efficiency in electric power generation in the US.

The nice thing about this paper is that we have plane-level data on the aircraft in service at 1267 airlines in 174 countries. The data come from the World Airliner Census published by Flight Global. We then estimated the fuel economy of 143 aircraft types using a variety of data sources. We assumed that each plane flies its stated range with the maximum number of passengers and uses all of its fuel capacity. This gives us litres of fuel per passenger kilometre. Of course, many flights are shorter or not full, so actual fuel consumption per passenger kilometre will vary a lot, but this gives us a technical metric that we can use to compare models.
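As a rough illustration of how this metric is computed (the aircraft specifications below are invented placeholders, not our data):

```python
# Illustrative sketch: litres of fuel per passenger-kilometre for each aircraft
# type, assuming the plane flies its stated maximum range with a full passenger
# load and uses its full fuel capacity. The specifications are placeholders.
import pandas as pd

aircraft = pd.DataFrame({
    "model":           ["TypeA", "TypeB", "TypeC"],  # hypothetical types
    "fuel_capacity_l": [24_000, 126_000, 216_000],   # litres
    "max_passengers":  [150, 300, 550],
    "range_km":        [5_000, 11_000, 15_000],
})

# litres per passenger-kilometre = fuel capacity / (passengers x range)
aircraft["l_per_pkm"] = aircraft["fuel_capacity_l"] / (
    aircraft["max_passengers"] * aircraft["range_km"]
)
print(aircraft[["model", "l_per_pkm"]])
```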


The graph shows that the fuel economy of new aircraft has steadily improved over time. One of the reasons for the scatter around the trendline is that large aircraft with longer ranges tend to have better fuel economy than small aircraft:


This is also one of the reasons why fuel economy has improved over time. Still, adjusted for size, aircraft introduced in earlier decades had (statistically) significantly worse fuel economy than more recent models. We used these regressions to compute age and size adjusted measures of fuel economy, which we used in our main econometric analysis.
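The exact specification isn't given in the post, but a minimal sketch of this kind of adjustment regression (hypothetical column names, not necessarily the paper's model) is to regress log fuel intensity on size and decade-of-introduction dummies and work with the residuals:

```python
# Sketch: size- and vintage-adjusted fuel economy via OLS with decade dummies.
# 'df' is assumed to hold one row per aircraft type with (hypothetical) columns
# 'l_per_pkm', 'max_passengers', and 'decade_introduced'.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def adjusted_fuel_economy(df: pd.DataFrame) -> pd.Series:
    df = df.copy()
    df["log_intensity"] = np.log(df["l_per_pkm"])
    df["log_size"] = np.log(df["max_passengers"])
    # Decade dummies capture the improvement of newer models over time
    model = smf.ols("log_intensity ~ log_size + C(decade_introduced)", data=df).fit()
    # Residuals give fuel economy net of size and vintage effects
    return model.resid

# usage: df["adj_intensity"] = adjusted_fuel_economy(df)
```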

The main analysis assumes that airlines choose the level of fuel economy that minimizes costs given input prices and the type of flying that they do. There is a trade-off here between doing an analysis with very wide scope and doing an analysis with only the most certain data. We decided to use as much of the technical aircraft data as we could, even though this meant using less certain and extrapolated data for some of the explanatory variables.

We have data on airline wages and on real interest rates in each country. The wage data are very patchy and noisy, and we extrapolated a lot of values from the observations we had, in the same way that, for example, the Penn World Table extrapolates from surveys. There are no taxes on aircraft fuel for international travel, and the price of fuel reported by Platts does not vary much around the world, but countries can tax fuel for domestic aviation. We could only find data on these specific taxes for a small number of countries in a single year, so we used proxies for this variable, such as the price of road gasoline and oil rents. We proxy the type of flying airlines do using the characteristics of their home countries.

The most robust results from the analysis – those that hold whether we use crude fuel economy or fuel economy adjusted for size and age – are that, all else constant: larger airlines select planes with better fuel economy; higher interest rates are associated with worse fuel economy; higher fuel prices are associated with better fuel economy (though the elasticity is small); and fuel economy is worse in Europe and Central Asia than in other regions.

It seems that, for a given model age and size, more fuel-efficient planes cost more. This would explain why, even holding age and size constant, higher interest rates are correlated with worse fuel economy. Also, if larger airlines have better access to finance or a lower cost of capital, they will be better able to afford the more fuel-efficient planes.

What effect could carbon prices have on fleet fuel economy? The most relevant elasticity is the response of unadjusted fuel economy to the price of fuel, as this allows airlines to adjust the size and model age of their planes in response to an increase in the fuel price. We estimate this elasticity at -0.09 to -0.13, which suggests the effect won't be very big. Because we use proxies for the price of fuel, we expect the true elasticity to be somewhat larger in absolute value. The estimate also assumes a given set of available aircraft models; induced innovation might result in more efficient models being developed, and there might also be changes in the types of airlines and flights. So the effect could be quite a bit larger in the long run.
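To put those numbers in perspective, here is a back-of-the-envelope calculation (my own illustration, not from the paper) of what a fuel price rise of a given size would do through this channel:

```python
# Rough illustration using the estimated elasticity range of -0.09 to -0.13.
# The 20% fuel price increase is an arbitrary example value (e.g. from a carbon price).
fuel_price_increase = 0.20  # hypothetical 20% rise in the price of jet fuel
for elasticity in (-0.09, -0.13):
    change = (1 + fuel_price_increase) ** elasticity - 1
    print(f"elasticity {elasticity}: fuel per passenger-km changes by {change:.1%}")
# Roughly -1.6% to -2.3%: a modest improvement, as argued above.
```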

Wednesday, October 3, 2018

Energy Intensity, Growth, and Technical Change

I have a new working paper out, coauthored with Akshay Shanker. Akshay recently completed his PhD at the Crawford School and is currently working on the Energy Change Institute's Grand Challenge Project among other things. This paper was one of the chapters in Akshay's thesis. Akshay originally came to see me a few years ago about doing some research assistance work. I said: "The best thing you could do is to write a paper with me – I want to explain why energy intensity has declined using endogenous growth theory." This paper is the result. Along the way, we got additional funding from the College of Asia and the Pacific, the Handelsbanken Foundation, and the Australian Research Council.

World and U.S. energy intensities have declined over the past century, falling at an average rate of approximately 1.2–1.5 percent a year. As Csereklyei et al. (2016) showed, the relationship between energy use and income has been very stable over this period. The decline in energy intensity has persisted through periods of stagnating or even falling energy prices, suggesting it is driven in large part by autonomous factors, independent of price changes.
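As a quick illustration of what those rates imply (my own arithmetic, not from the paper), the time for energy intensity to halve at a constant exponential rate of decline is ln(2) divided by the rate:

```python
# Halving time of energy intensity at a constant exponential rate of decline.
import math

for rate in (0.012, 0.015):  # 1.2% and 1.5% per year
    print(f"{rate:.1%} per year: intensity halves in about {math.log(2) / rate:.0f} years")
# Roughly 58 and 46 years respectively.
```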

In this paper, we use directed technical change theory to understand the autonomous decline in energy intensity and investigate whether the decline will continue. The results depend on whether the growing stock of knowledge makes R&D easier over time – known as state-dependent innovation – or whether R&D becomes harder over time.

Along a growth path where real energy prices are constant, energy use increases, energy-augmenting technologies – technologies that improve the productivity of energy ceteris paribus – advance, and the price of energy services falls. The fall in the price of energy services reduces profitability and incentives for energy-augmenting research. However, since the use of energy increases, the "market size" of energy services expands, improving the incentives to perform research that advances energy-augmenting technologies. In the scenario with no state dependence, the growing incentives from the expanding market size are enough to sustain energy-augmenting research. Energy intensity continues to decline, albeit at a slower rate than output growth, due to energy-augmenting innovation. There is asymptotic convergence to a growth path where energy intensity falls at a constant rate due to investment in energy-augmenting technologies. Consistent with the data, energy intensity declines more slowly than output grows, implying that energy use continues to increase.

This graph shows two growth paths – for countries that are initially more or less energy intensive – that converge to the balanced growth path G(Y) as their economies grow:


This is very consistent with the empirical evidence presented by Csereklyei et al. (2016).

However, the rate of labor-augmenting research is more rapid along the balanced growth path and there will be a shift from energy-augmenting research to labor-augmenting research for a country that starts out relatively energy intensive. This explains Stern and Kander's (2012) finding that the rate of labor-augmenting technical change increased over time in Sweden as the rate of energy-augmenting technical change declined.

The following graph shows the ratio of the energy-augmenting technology to the labor-augmenting technology over time in the US, assuming that the elasticity of substitution between energy and labor services is 0.5:

Until about 1960, energy-augmenting technical change was more rapid than labor-augmenting technical change and the ratio rose. After that point labor-augmenting technical change was more rapid, but the rise in energy prices in the 1970s induced another period of more rapid energy-augmenting technical change.
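The post doesn't spell out how this ratio is recovered, but a sketch of the standard logic (my notation, assuming a CES aggregate of labor and energy services and competitive factor pricing, so not necessarily the paper's exact procedure) is:

```latex
% Sketch in my notation: CES aggregate of labor and energy services with
% augmentation indices A_L and A_E and elasticity of substitution sigma = 0.5.
\[
Y = \left[ (A_L L)^{\rho} + (A_E E)^{\rho} \right]^{1/\rho},
\qquad \rho = \frac{\sigma - 1}{\sigma}.
\]
% With competitive factor pricing, relative factor payments pin down the ratio:
\[
\frac{p_E E}{w L} = \left( \frac{A_E E}{A_L L} \right)^{\rho}
\quad\Longrightarrow\quad
\frac{A_E}{A_L} = \frac{L}{E} \left( \frac{p_E E}{w L} \right)^{1/\rho}.
\]
```

With sigma = 0.5, rho = -1, so the ratio of effective inputs moves inversely with the ratio of factor payments.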

In an economy with extreme state-dependence, energy intensity will eventually stop declining because labor-augmenting innovation crowds out energy-augmenting innovation. Our empirical analysis of energy intensity in 100 countries between 1970 and 2010 suggests a scenario without extreme state dependence where energy intensity continues to decline.

Tuesday, April 24, 2018

Replicating Stern (1993)

Last year, Energy Economics announced a call for papers for a special issue on replication in energy economics. Stephan Bruns, Johannes König, and I decided to carry out a replication of my 1993 paper in Energy Economics on Granger causality between energy use and GDP. That paper was the first chapter of my PhD dissertation. It is my fourth most cited paper and, given the number of citations, could be considered "classic" enough to merit an updated robustness analysis. In fact, another replication of my paper has already been published as part of the special issue. The main results of the 1993 paper were that, in order to find Granger causality from energy use to GDP, we need to both use a quality-adjusted measure of energy and control for capital and labor inputs.

It is a bit unusual to include the original author as an author on a replication study, and my role was correspondingly unusual. Before the research commenced, I discussed with Stephan the issues in replicating this paper and gave feedback on the proposed design of the replication and robustness analysis. The research plan was published on a website dedicated to pre-analysis plans. Publishing a research plan is similar to registering a clinical trial and is supposed to help reduce the prevalence of p-hacking. Then, after Stephan and Johannes carried out the analysis, I gave feedback and helped edit the final paper.

Unfortunately, I had lost the original dataset, and the various time series I used have since been updated by the US government agencies that produce them. The only way to reconstruct the original data would have been to find hard copies of all the original data sources. Instead, we used the data from my 2000 paper in Energy Economics, which are quite similar to the original data. Using these close-to-original data, Stephan and Johannes could reproduce all my original results in terms of the direction of Granger causality and the same qualitative significance levels. In this sense, the replication was a success.

But the test I did in 1993 on the log levels of the variables is inappropriate if the variables have stochastic trends (unit roots); the more appropriate procedure is the Toda-Yamamoto test. So, the next step was to redo the 1993 analysis using the Toda-Yamamoto approach. Surprisingly, these results are also very similar to those in Stern (1993). But when Stephan and Johannes used the data for 1949-1990 that are currently available on US government websites, the Granger causality test of the effect of energy on GDP was no longer statistically significant at the 10% level. Revisions to past GDP have been very extensive, as we show in the paper:

Results were similar when they extended the data to 2015. However, when they allowed for structural breaks in the intercept to account for oil price shocks and the 2008-9 financial crisis, the results were again quite similar to Stern (1993) both for 1949-1990 and for 1949-2015.
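For readers unfamiliar with it, the Toda-Yamamoto procedure fits a levels regression with d_max extra lags to allow for the unit roots and then applies a Wald test to only the original p lags of the candidate causing variable. A minimal sketch in Python – our actual code is in RATS, and the function and column names here are placeholders:

```python
# Sketch of a Toda-Yamamoto Granger non-causality test:
# estimate a levels equation with (p + d_max) lags of every variable, then
# Wald-test the first p lags of the candidate cause, ignoring the extra lags.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def toda_yamamoto(df: pd.DataFrame, cause: str, effect: str, p: int, d_max: int):
    """df holds the variables in levels (e.g. log GDP, log quality-adjusted energy)."""
    lags = p + d_max
    y = df[effect].iloc[lags:].to_numpy()
    X_cols, restricted_idx, col = [], [], 1  # column 0 will be the constant
    for lag in range(1, lags + 1):
        for name in df.columns:
            X_cols.append(df[name].shift(lag).iloc[lags:].to_numpy())
            if name == cause and lag <= p:   # only the first p lags are tested
                restricted_idx.append(col)
            col += 1
    X = sm.add_constant(np.column_stack(X_cols))
    fit = sm.OLS(y, X).fit()
    R = np.zeros((len(restricted_idx), X.shape[1]))
    for i, j in enumerate(restricted_idx):
        R[i, j] = 1.0
    return fit.wald_test(R, use_f=False)  # chi-squared Wald statistic

# usage: toda_yamamoto(data, cause="energy", effect="gdp", p=2, d_max=1)
```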

They then carried out an extensive robustness check using different control variables and variable specifications and a meta-analysis of those tests to see which factors had the greatest influence on the results.

They conclude that p-values tend to be substantially smaller (test statistics more significant) if energy use is quality-adjusted rather than measured in total joules and if capital is included, while including labor has mixed results. These findings largely support the two main conclusions of Stern (1993) and emphasize the importance of accounting for changes in the energy mix in time series modeling of the energy-GDP relationship and of controlling for other factors of production.
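The paper's meta-analysis is more elaborate than this, but a stylized sketch of the idea – regress a transform of each specification's p-value on dummies describing that specification (the column names below are placeholders) – looks like this:

```python
# Sketch of a meta-regression over robustness-check specifications.
import numpy as np
import statsmodels.formula.api as smf

# 'runs' is assumed to be a DataFrame with one row per Granger causality test and
# columns 'p_value', 'quality_adjusted', 'capital_included', 'labor_included' (0/1).
def meta_regression(runs):
    runs = runs.copy()
    runs["z"] = np.log(runs["p_value"])  # smaller p-values -> more negative z
    return smf.ols(
        "z ~ quality_adjusted + capital_included + labor_included", data=runs
    ).fit()

# Negative coefficients on quality_adjusted and capital_included would mirror the
# finding that these choices make the energy -> GDP test more significant.
```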

I am pretty happy with the outcome of this analysis! Usually it is hard to publish replication studies that confirm the results of previous research. We have just resubmitted the paper to Energy Economics and I am hoping that this mostly confirmatory replication will be published. In this case, the referees added a lot of value to the paper, as they suggested adding the analysis with structural breaks.

Thursday, April 5, 2018

Buying Emissions Reductions

This semester I am teaching environmental economics, a course I haven't taught since 2006 at RPI. Last week we covered environmental valuation. I gave my class an in-class contingent valuation survey. I tried to construct the survey according to the recommendations of the NOAA panel. Here is the text of the survey:

Emissions Reduction Fund Survey

In order to meet Australia’s international commitments under the Paris Agreement, the government is seeking to significantly expand the Emissions Reduction Fund, which pays bidders such as farmers to reduce carbon emissions. To fully meet Australia’s commitment to reduce emissions by 26-28% below 2005 levels by 2030, the government estimates that the fund needs to be expanded to $2 billion per year. The government proposes to fund this by increasing the Medicare Levy.

1. Considering other things you need to spend money on, and other things the government can do with taxes, do you agree to a 0.125% increase in the Medicare levy, which is equivalent to $100 per year in extra tax for someone on average wages? This is expected to meet only half of Australia’s commitment, reducing emissions to 13-14% below 2005 levels, or by a cumulative 370 million tonnes, by 2030.

Yes No

2. Considering other things you need to spend money on, and other things the government can do with taxes, do you agree to a 0.25% increase in the Medicare levy, which is equivalent to $200 per year in extra tax for someone on average wages? This is expected to meet Australia’s commitment, reducing emissions to 26-28% below 2005 levels, or by a cumulative 740 million tonnes, by 2030.

Yes No

3. If you said yes to either 1 or 2, why? And how did you decide whether to agree to the 0.125% or the 0.25% tax?

4. If you said no to both 1 and 2, why?

***********************************************************************************


85% voted in favour of the 0.125% Medicare levy option and 54% in favour of the 0.25% option, so both would have passed. A few people voted against 0.125% but for 0.25%, so I counted their votes as being in favour of 0.125% as well as 0.25%.
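The post doesn't compute a willingness-to-pay figure, but for illustration, a conservative Turnbull lower-bound estimate of mean WTP from these two acceptance rates would be:

```python
# Illustrative only: Turnbull lower-bound mean WTP from the two bid levels,
# using the acceptance rates reported above (85% at $100, 54% at $200 per year).
bids = [0, 100, 200]               # dollars per year
survival = [1.00, 0.85, 0.54, 0]   # share willing to pay at least each bid; 0 above $200
lower_bound = sum(bids[j] * (survival[j] - survival[j + 1]) for j in range(len(bids)))
print(f"Turnbull lower-bound mean WTP: ${lower_bound:.0f} per year")  # about $139
```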


Reasons for voting for both:

  • $200 not much, willing to do more than just pay that tax
  • We should meet the target
  • Tax is low compared to other taxes - can reduce government spending on health in future
  • Can improve my health
  • Benefit is much greater than cost to me
  • I pay low tax as I'm retired, so can pay more
  • I'm willing to pay so Australia can meet commitment
  • Only $17 a month
  • Tax is small
  • Because reducing emissions is the most important environmental issue

Reasons for voting for 0.125 but against 0.25:

  • Can afford 0.125 but not 0.25
  • Government can cover the rest with other measures like incentives

Reasons for voting against both:

  • There are other ways to reduce emissions - give incentives to firms rather than tax the middle class...
  • Government should tax firms
  • Don't believe in emissions reduction fund because it is inefficient
  • I prefer to spend my money rather than pay tax and reduction in emissions is not very big for tax paid

Mostly, the reasons for voting for both are ones we would want to see if we are really measuring WTP: respondents can afford to pay and see it as a big issue. Those thinking it will improve their personal health or reduce health spending were prompted to think about health by the payment vehicle. I chose the Medicare Levy as the payment vehicle because the Australian government has a track record of increasing it for all kinds of things, like repairing flood damage in Brisbane! I chose the Emissions Reduction Fund because it actually exists and actually buys emissions reductions.

Most people who voted for 0.125% but against 0.25% gave valid reasons – they can't afford the higher tax. However, one person said the government should cover the rest by other means, so that person might really be willing to pay 0.25% if the government would not do so.


When we get to the people who voted against both tax rates, most are against the policy vehicle rather than unwilling to pay for climate change mitigation. So, from the point of view of measuring WTP, these votes would result in an underestimate. These "protest votes" are a big problem for CVM. Only one person said they weren't willing to pay anything given the bang for the buck.

Saturday, February 10, 2018

A Multicointegration Model of Global Climate Change

We have a new working paper out on time series econometric modeling of global climate change. We use a multicointegration model to estimate the relationship between radiative forcing and global surface temperature since 1850. We estimate that the equilibrium climate sensitivity to doubling CO2 is 2.8ºC – which is close to the consensus in the climate science community – with a “likely” range from 2.1ºC to 3.5ºC.* This is remarkably close to the recently published estimate of Cox et al. (2018).

Our paper builds on my previous research on this topic. Together with Robert Kaufmann, I pioneered the application of econometric methods to climate science – Richard Tol was another early researcher in this field. Though we managed to publish a paper in Nature early on (Kaufmann and Stern, 1997), I became discouraged by the resistance we faced from the climate science community. But now our work has been cited in the IPCC 5th Assessment Report, and recently there has also been a lot of interest in the topic among econometricians. This has encouraged me to get involved in this research again.

We wrote the first draft of this paper for a conference in Aarhus, Denmark on the econometrics of climate change in late 2016 and hope it will be included in a special issue of the Journal of Econometrics based on papers from the conference. I posted some of our literature review on this blog back in 2016.

Multicointegration models, first introduced by Granger and Lee (1989), are designed to model long-run equilibrium relationships between non-stationary variables where there is a second equilibrium relationship between accumulated deviations from the first relationship and one or more of the original variables. Such a relationship is typically found for flow and stock variables. For example, Granger and Lee (1989) examine production, sales, and inventory in manufacturing, Engsted and Haldrup (1999) housing starts and unfinished stock, Siliverstovs (2006) consumption and wealth, and Berenguer-Rico and Carrion-i-Silvestre (2011) government deficits and debt. Multicointegration models allow for slower adjustment to long-run equilibrium than do typical econometric time series models because of the buffering effect of the stock variable.

In our model there is a long-run equilibrium between radiative forcing, f, and surface temperature, s. The equilibrium climate sensitivity is given by 5.35*ln(2)/lambda. But because of the buffering effect of the ocean, surface temperature takes a long time to reach equilibrium. The deviations from equilibrium, q, represent a flow of heat from the surface to (mostly) the ocean. The accumulated flows are the stock of heat in the Earth system, Q. Surface temperature also tends towards equilibrium with this stock of heat, subject to a (serially correlated but stationary) random error, u. Granger and Lee simply embedded both of these long-run relations in a vector autoregressive (VAR) time series model for s and f. A somewhat more recent and much more powerful approach (e.g. Engsted and Haldrup, 1999) notes that the stock of heat can be written in terms of F, the accumulated f, and S, the accumulated s – in other words, S(2) = s(1)+s(2), S(3) = s(1)+s(2)+s(3), and so on. This means that we can estimate a model that takes into account the accumulation of heat in the ocean without using any actual data on ocean heat content (OHC)! One reason this is exciting is that OHC data are only available since 1940 and the data for the early decades are very uncertain; only since 2000 has there been a good measurement network in place. Our approach means that we can use temperature and forcing data back to 1850 to estimate the heat content. Another reason it is exciting is that F and S are so-called second order integrated, I(2), variables, and estimation with I(2) variables, though complicated, is super-super consistent – it is easier to get an accurate estimate of a parameter despite noise and measurement error in a relatively small sample. The I(2) approach combines the second and third of these relationships into a single relationship, which it embeds in a VAR model that we estimate using Johansen's maximum likelihood method. The CATS package, which runs on top of RATS, can estimate such models, as can the OxMetrics econometrics suite. The data we used in the paper are available here.
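The equation images from the original post are missing here. Reconstructing them from the definitions in the text (my notation, which may differ from the paper's), the three relationships are roughly:

```latex
% Reconstruction from the surrounding text, not copied from the paper.
% (1) Long-run equilibrium between radiative forcing f_t and surface temperature s_t,
%     with deviations q_t, the flow of heat into the ocean:
\[
q_t = f_t - \lambda s_t , \qquad \mathrm{ECS} = \frac{5.35 \ln 2}{\lambda} .
\]
% (2) Surface temperature also tends towards equilibrium with the accumulated heat
%     stock Q_t, with a serially correlated but stationary error u_t:
\[
s_t = \varphi Q_t + u_t , \qquad Q_t = \sum_{i \le t} q_i .
\]
% (3) The heat stock can be rewritten in terms of the accumulated series F_t and S_t:
\[
Q_t = F_t - \lambda S_t , \qquad F_t = \sum_{i \le t} f_i , \quad S_t = \sum_{i \le t} s_i .
\]
```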

This graph compares our estimate of OHC (our preferred estimate is the partial efficacy estimate) with an estimate from an energy balance model (Marvel et al., 2016) and observations of ocean heat content (Cheng et al., 2017):


We think that the results are quite good, given that we didn't use any data on OHC to estimate it and that the observed OHC is very uncertain in the early decades. In fact, our estimate cointegrates with these observations and the estimated coefficient is close to what is expected from theory. The next graph shows the energy balance:


The red area is radiative forcing relative to the base year. This is now more than 2.5 watts per square meter – doubling CO2 is equivalent to a 3.7 watts per square meter increase. The grey line is surface temperature. The difference between the top of the red area and the grey line is the disequilibrium between surface temperature and radiative forcing according to the model. This is now between 1 and 1.5 watts per square meter and implies that, if radiative forcing were held constant from now on, temperature would increase by around 1ºC to reach equilibrium.** This gap is exactly balanced by the blue area, which is heat uptake. As you can see, heat uptake kept surface temperature fairly constant during the last decade and a half despite increasing forcing. It's also interesting to see what happens during large volcanic eruptions such as Krakatoa in 1883. Heat leaves the ocean, largely but not entirely offsetting the fall in radiative forcing due to the eruption. This means that, although the impact of large volcanic eruptions on radiative forcing is short-lived – the stratospheric sulfates emitted are removed after 2 to 3 years – they have much longer-lasting effects on the climate, as shown by the long period of depressed heat content after the Krakatoa eruption in the previous graph.
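As a rough check on that last number (my own back-of-the-envelope calculation, not from the paper):

```latex
% Warming implied by a 1-1.5 W/m^2 disequilibrium at an equilibrium climate
% sensitivity of 2.8 degrees C per doubling (3.7 W/m^2):
\[
\Delta T \approx \frac{2.8\,^{\circ}\mathrm{C}}{3.7\ \mathrm{W\,m^{-2}}}
\times (1\ \text{to}\ 1.5)\ \mathrm{W\,m^{-2}}
\approx 0.8\ \text{to}\ 1.1\,^{\circ}\mathrm{C},
\]
% consistent with the "around 1 degree C" quoted above.
```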

We also compare the multicointegration model to more conventional (I(1)) VAR models. In the following graph, Models I and II are multicointegration models and Models IV to VI are I(1) VAR models. Model IV actually includes observed ocean heat content as one of its variables, but Models V and VI just include surface temperature and forcing. The graph shows the temperature response to permanent doubling of radiative forcing:


The multicointegration models have both a higher climate sensitivity and a slower response, due to the buffering effect. This mimics, to some degree, the response of a general circulation model. The performance of Model IV is actually worse than that of the bivariate I(1) VARs because it uses a much shorter sample period than Models V and VI. In simulations that are not reported in the paper, we found that a simple bivariate I(1) VAR estimates the climate sensitivity correctly if the time series is sufficiently long – much longer than the 165 years of annual observations that we have. This means that ignoring the ocean doesn't strictly result in omitted variables bias, contrary to what I previously claimed: estimates are biased in a small sample, but not in a sufficiently large one. That is probably going to be another paper :)

* "Likely" is the IPCC term for a 66% confidence interval. This confidence interval is computed using the delta method and is a little different to the one reported in the paper.
** This is called committed warming. But, if emissions were actually reduced to zero, it's expected that forcing would decline and that the decline in forcing would about balance the increase in temperature towards equilibrium.

References

Berenguer-Rico, V., Carrion-i-Silvestre, J. L., 2011. Regime shifts in stock-flow I(2)-I(1) systems: The case of US fiscal sustainability. Journal of Applied Econometrics 26, 298-321.

Cheng, L., Trenberth, K. E., Fasullo, J., Boyer, T., Abraham, J., Zhu, J., 2017. Improved estimates of ocean heat content from 1960 to 2015. Science Advances 3(3), e1601545.

Cox, P. M., Huntingford, C., Williamson, M. S., 2018. Emergent constraint on equilibrium climate sensitivity from global temperature variability. Nature 553, 319-322.

Engsted, T., Haldrup, N., 1999. Multicointegration in stock-flow models. Oxford Bulletin of Economics and Statistics 61, 237-254.

Granger, C. W. J., Lee, T. H., 1989. Investigation of production, sales and inventory relationships using multicointegration and non-symmetric error correction models. Journal of Applied Econometrics 4, S145-S159.

Kaufmann, R. K., Stern, D. I., 1997. Evidence for human influence on climate from hemispheric temperature relations. Nature 388, 39-44.

Marvel, K., Schmidt, G. A., Miller, R. L., Nazarenko, L., 2016. Implications for climate sensitivity from the response to individual forcings. Nature Climate Change 6(4), 386-389.

Siliverstovs, B., 2006. Multicointegration in US consumption data. Applied Economics 38(7), 819-833.

Saturday, February 3, 2018

Data and Code for "Modeling the Emissions-Income Relationship Using Long-run Growth Rates"

I've posted on my website the data and code used in our paper "Modeling the Emissions-Income Relationship Using Long-run Growth Rates" that was recently published in Environment and Development Economics. The data is in .xls format and the econometrics code is in RATS. If you don't have RATS, I think it should be fairly easy to translate the commands into another package like Stata. If anything is unclear, please ask me. I managed to replicate all the regression results and standard errors in the paper but some of the diagnostic statistics are different. I think only once does that make a difference, and then it's in a positive way. I hope that providing this data and code will encourage people to use our approach to model the emissions-income relationship.

Tuesday, January 16, 2018

Explaining Malthusian Sluggishness

I'm adding some more intuition to our paper on the Industrial Revolution. I have a sketch of the math but still need to work out the details. Here is the argument:

"In this section, we show that both Malthusian Sluggishness and Modern Economic Growth are characterized by a strong equilibrium bias of technical change (Acemoglu, 2009). This means that in both regimes the long-run demand curve for the resource is upward sloping – higher relative prices are associated with higher demand. Under Malthusian Sluggishness the price of wood is rising relative to coal and technical change is relatively wood-augmenting. At the same time, wood use is rising relative to coal use. Because when the elasticity of substitution is greater than one the market size effect dominates the price effect, technical change becomes increasingly wood-augmenting. As a result, the economy increasingly diverges from the region where an industrial revolution is possible or inevitable. Modern Economic Growth is the mirror image. The price of coal rises relative to the price of wood, but coal use increases relative to wood use and technical change is increasingly coal-augmenting."

Saturday, January 6, 2018

How to Count Citations If You Must

That is the title of a paper in the American Economic Review by Motty Perry and Philip Reny. They present five axioms that they argue a good index of individual citation performance should satisfy, and they show that the only index satisfying all five is the Euclidean length of the list of citations to each of a researcher's publications – in other words, the square root of the sum of squares of the citations to each of their papers.* This index puts much more weight on highly cited papers, and much less on little-cited papers, than simply adding up a researcher's total citations would. This is a result of their "depth relevance" axiom: a depth-relevant citation index always increases when some of the citations to a researcher's less cited papers are instead transferred to some of the researcher's more cited papers. In the extreme, it rewards "one-hit wonders" who have a single highly cited paper over consistent performers who have a more extensive body of work with the same total number of citations.

The Euclidean index is an example of what economists call constant elasticity of substitution, or CES, functions. Instead of squaring each citation number, we could raise it to a different power, such as 1.5, 0.5, or anything else. Perry and Reny show that the rank correlation between the National Research Council peer-reviewed ranks of the top 50 U.S. economics departments and the CES citation indices of the faculty employed in those departments is at a maximum for a power of 1.85:



This is close to 2 and suggests that the market for economists values citations in a similar way to the Euclidean index.
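For concreteness, here is a small sketch of how the Euclidean index and the wider CES family would be computed from a citation list (the citation counts are made up):

```python
# Sketch: Euclidean citation index and the general CES family of indices.
def ces_index(citations, exponent):
    """CES citation index: (sum of citations**exponent)**(1/exponent).
    exponent = 2 gives the Euclidean index; exponent = 1 gives total citations."""
    return sum(c ** exponent for c in citations) ** (1.0 / exponent)

cites = [120, 40, 15, 5, 2]  # hypothetical citations per paper
print(f"Total citations (exponent 1): {ces_index(cites, 1):.0f}")   # 182
print(f"Euclidean index (exponent 2): {ces_index(cites, 2):.1f}")   # about 127.5
print(f"Perry-Reny best fit (1.85):   {ces_index(cites, 1.85):.1f}")
```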

RePEc acted unusually quickly to add this index to its rankings. Richard Tol and I have a new working paper that discusses this new citation metric. We introduce an alternative axiom, "breadth relevance", which rewards consistent achievers: a breadth-relevant index always increases when some citations from highly cited papers are shifted to less cited papers. We also reanalyze the dataset of economists at the top 50 U.S. departments that Perry and Reny looked at, and a much larger dataset that we scraped from CitEc for economists at the 400 international universities ranked by QS. Unlike Perry and Reny, we take into account the fact that citations accumulate over a researcher's career, so junior researchers with few citations aren't necessarily weaker researchers than senior researchers with more citations. Instead, we need to compare citation performance within each cohort of researchers, measured by the years since they got their PhD or published their first paper.

We show that a breadth-relevant index that also satisfies Perry and Reny's other axioms is a CES function with an exponent of less than one. Our empirical analysis finds that the distribution of economists across departments is in fact explained best by the simple sum of their citations, which is equivalent to a CES function with an exponent of one that favors neither depth nor breadth. However, at lower-ranked departments – those ranked by QS from 51 to 400 – the Euclidean index does explain the distribution of economists better than total citations does.


In this graph, the full sample is the same dataset that Perry and Reny used in their graph. The peak correlation is for a lower exponent – tau or sigma** – simply because we take into account cohort effects by computing the correlation for a researcher's citation index relative to the cohort mean.*** While the distribution across the top 25 departments is similar to the full sample, with a peak at a slightly lower exponent that is very close to one, we don't find any correlation between citations and department rank for the next 25 departments. It seems that there aren't big differences between them.

Here are the correlations for the larger dataset that uses CitEc citations for the 400 universities ranked by QS:


For the top 50 universities, the peak correlation is for an exponent of 1.39 but for the next 350 universities the peak correlation is for 2.22. The paper also includes parametric maximum likelihood estimates that come to similar conclusions.

Breadth per se does not explain the distribution of researchers in our sample, but the highest ranked universities appear to weight breadth and depth equally, while lower-ranked universities do focus on depth, giving more weight to a few highly cited papers.

A speculative explanation of behavior across the spectrum of universities could be as follows. The lowest-ranked universities, outside the 400 ranked by QS, might simply care about publication without worrying about impact. At these institutions, having more publications is better than having fewer, suggesting a breadth-relevant citation index. Our exploratory analysis that includes universities outside those ranked by QS supports this: we found that breadth was inversely correlated with average citations in the lower percentiles.

Middle-ranked universities, such as those ranked between 400 and 50 in the QS ranking, care about impact; having some high-impact publications is better than having none, and a depth-relevant index describes behavior in this interval. Finally, among the top-ranked universities, such as the QS top 50 or NRC top 25, hiring and tenure committees wish to see high-impact research across all of a researcher's publications, and the best-fit index moves towards an exponent of one, weighting depth and breadth equally. Here, adding lower-impact publications to a publication list that contains high-impact ones is seen as a negative.

* As monotonic transformations of the index also satisfy the same axioms, the simplest index that satisfies the axioms is simply the sum of squares.

** In the paper, we refer to an exponent of less than one as tau and an exponent greater than one as sigma.

*** The Ellison dataset that Perry and Reny use is based on Google Scholar data and truncates each researcher's publication list at 100 papers. With all working paper variants, it's not hard to exceed 100 items, and this could bias the analysis in favor of depth rather than breadth. We think that the correlation computed only for researchers with 100 or fewer papers is a better way to test whether depth or breadth best explains the distribution of economists across departments. The correlation peaks very close to one for this dataset.