
1. Scenario categorization


Licensed under the MIT License.

This notebook is part of a repository to generate figures and analysis for the manuscript

Keywan Riahi, Christoph Bertram, Daniel Huppmann, et al. Cost and attainability of meeting stringent climate targets without overshoot. Nature Climate Change, 2021. doi: 10.1038/s41558-021-01215-2

The scenario data used in this analysis should be cited as

ENGAGE Global Scenarios (Version 2.0) doi: 10.5281/zenodo.5553976

The data can be accessed and downloaded via the ENGAGE Scenario Explorer at https://data.ece.iiasa.ac.at/engage. Please refer to the license of the scenario ensemble before redistributing this data or adapted material.

The source code of this notebook is available on GitHub at https://github.com/iiasa/ENGAGE-netzero-analysis. A rendered version can be seen at https://data.ece.iiasa.ac.at/engage-netzero-analysis.

[1]:
from pathlib import Path
from itertools import product
import pandas as pd
import numpy as np

import pyam
from pyam import IamDataFrame

Import the scenario snapshot used for this analysis

[2]:
data_folder = Path("../data")
[3]:
df = IamDataFrame(data_folder / "ENGAGE_snapshot_selected.csv")
pyam - INFO: Running in a notebook, setting up a basic logging at level INFO
pyam.core - INFO: Reading file ../data/ENGAGE_snapshot_selected.csv
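
Optionally (not part of the original notebook), one can get a quick overview of the snapshot dimensions via the IamDataFrame attributes:

[ ]:
# overview of the snapshot: number of models and scenarios, and the list of regions
len(df.model), len(df.scenario), df.region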

Set categories and meta indicators

Set scenario type (budget logic) and family (NPi vs. INDCi)

[4]:
scenario_type = {
    "0": "peak_budget",          # scenario name ends with a carbon-budget value (e.g. EN_NPi2020_600)
    "f": "full_century_budget",  # scenario name ends with "f"
    "p": "sensitivity",          # scenario name ends with "p"
}

def assign_type(i):
    # reference scenarios are labelled explicitly,
    # all others are mapped via the last character of the scenario name
    if i in ["EN_NPi2100", "EN_INDCi2100", "EN_NoPolicy"]:
        return "reference"
    return scenario_type[i[-1]]

df.set_meta([assign_type(s) for m, s in df.index], "budget_type")
[5]:
scenario_family = {
    "NoPolicy": "baseline",
    "NPi": "NPi",
    "INDCi": "INDCi"
}

def assign_family(i):
    # map each scenario to its family via the prefix of the scenario name
    for key, value in scenario_family.items():
        if i.startswith(f"EN_{key}"):
            return value

df.set_meta([assign_family(s) for m, s in df.index], "scenario_family")
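
As an optional sanity check (not part of the original notebook), the two new meta indicators can be cross-tabulated directly from the meta table:

[ ]:
# count scenarios per budget type and scenario family
# (df.meta is a pandas.DataFrame indexed by model-scenario)
pd.crosstab(df.meta["budget_type"], df.meta["scenario_family"])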

Closure of CO2 emissions regional & sectoral hierarchy

Compute “Emissions|CO2|Other” and “Emissions|CO2|Energy|Demand|Other” explicitly to ensure consistent data.

Note: The unit of CO2 emissions, “Mt CO2/yr”, cannot be handled directly by the iam-units package, which provides the unit handling for algebraic operations in pyam. The following cells therefore override the automated unit handling and set the unit explicitly via the keyword argument ignore_units="Mt CO2/yr".

[6]:
co2 = "Emissions|CO2"

df.subtract(
    co2,
    [f"{co2}|{cat}" for cat in ["AFOLU", "Energy|Demand", "Energy|Supply", "Industrial Processes"]],
    f"{co2}|Other",
    append=True,
    ignore_units="Mt CO2/yr",
)
[7]:
co2_demand = "Emissions|CO2|Energy|Demand"

df.subtract(
    co2_demand,
    [f"{co2_demand}|{cat}" for cat in ["Industry", "Transportation", "Residential and Commercial"]],
    f"{co2_demand}|Other",
    append=True,
    ignore_units="Mt CO2/yr",
)
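
As an optional consistency check (a sketch assuming pyam's check_aggregate() with an explicit components list, not part of the original notebook), one can verify that the sectoral components, including the newly computed “Other” category, add up to the total:

[ ]:
# returns None if the components sum to the total (within the default tolerance),
# otherwise a DataFrame of the discrepancies
sectors = ["AFOLU", "Energy|Demand", "Energy|Supply", "Industrial Processes", "Other"]
df.check_aggregate(co2, components=[f"{co2}|{cat}" for cat in sectors])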
[8]:
# list of R5 macro-regions, i.e. all regions in the snapshot except "World"
r5_regions = df.region
r5_regions.remove("World")
[9]:
df_co2 = df.filter(variable=co2)
df_co2_other_region = df_co2.subtract(
    "World", r5_regions, "Other (R5)", axis="region", ignore_units="Mt CO2/yr"
)
df.append(df_co2_other_region, inplace=True)
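
Similarly, the regional closure can be verified with check_aggregate_region() (again an optional sketch, not part of the original notebook):

[ ]:
# returns None if the R5 regions plus "Other (R5)" sum to the "World" value,
# otherwise a DataFrame of the discrepancies
df.check_aggregate_region(co2, region="World", subregions=r5_regions + ["Other (R5)"])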

Compute cumulative CO2 emissions and year of netzero

[10]:
# global CO2 emissions timeseries (wide format), converted to Gt CO2/yr;
# a distinct name avoids re-using the variable-name string `co2` defined above
co2_world = (
    df.filter(region="World", variable="Emissions|CO2")
    .convert_unit("Mt CO2/yr", "Gt CO2/yr")
    .timeseries()
)

def calculate_cumulative(last_year):
    return co2_world.apply(pyam.cumulative, raw=False, axis=1,
                           first_year=2020, last_year=last_year)

df.set_meta(calculate_cumulative(2100), "cumulative_emissions_2100")
df.set_meta(calculate_cumulative(2050), "cumulative_emissions_2050")
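
The new indicators can be summarized directly from the meta table (optional, not part of the original notebook):

[ ]:
# summary statistics of cumulative CO2 emissions (Gt CO2) across the ensemble
df.meta[["cumulative_emissions_2100", "cumulative_emissions_2050"]].describe()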
[11]:
def _cross_threshold(x):
    # use a threshold slightly above 0 to catch timeseries converging to 0
    y = pyam.cross_threshold(x, threshold=0.1)
    return y[0] if len(y) else np.nan

def calculate_netzero(_df):
    return _df.apply(_cross_threshold, raw=False, axis=1)
[12]:
df.set_meta(calculate_netzero(co2_world), "netzero|CO2")
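
For illustration (a minimal sketch, not part of the original notebook): pyam.cross_threshold() returns the crossing year(s) as a list, which is why the helper above takes the first element and falls back to NaN.

[ ]:
# a series falling from 10 to -10 between 2040 and 2050 crosses the 0.1
# threshold exactly once; the crossing year is returned in a list
pyam.cross_threshold(pd.Series([10, -10], index=[2040, 2050]), threshold=0.1)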

Determine peak and end-of-century temperature

[13]:
median_warming = "AR5 climate diagnostics|Temperature|Global Mean|MAGICC6|MED"
[14]:
df_mean_temperature = df.filter(variable=median_warming)
[15]:
df_mean_temperature.set_meta_from_data("median warming at peak", np.max)
df_mean_temperature.set_meta_from_data("median warming in 2100", year=2100)
[16]:
peak_decline = (
    df_mean_temperature.meta["median warming at peak"]
    - df_mean_temperature.meta["median warming in 2100"]
)

df_mean_temperature.set_meta(peak_decline, "median warming peak-and-decline")
[17]:
def year_of_peak_warming(x):
    # first year in which the maximum median warming is reached
    return int(x[x == x.max()].index[0])
[18]:
df_mean_temperature.set_meta(
    df_mean_temperature.timeseries().apply(year_of_peak_warming, raw=False, axis=1),
    "year of peak warming"
)
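
The temperature indicators can be inspected side by side (optional, not part of the original notebook):

[ ]:
# peak warming, end-of-century warming and the year of peak warming per scenario
df_mean_temperature.meta[
    ["median warming at peak", "median warming in 2100", "year of peak warming"]
].head()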

Merge the new meta columns back into the IamDataFrame

[19]:
for i in df_mean_temperature.meta.columns:
    if i not in df.meta.columns:
        df.set_meta(df_mean_temperature.meta[i])

Temperature categorization using probabilistic MAGICC

[20]:
def warming_exceedance_prob(x):
    return "AR5 climate diagnostics|Temperature|Exceedance Probability|{} degC|MAGICC6".format(x)
[21]:
df.set_meta(meta="uncategorized", name="category")

Categorization by several criteria that hinge on the same variable needs to be implemented iteratively.

[22]:
pyam.categorize(df, exclude=False, category="uncategorized",
                value="low overshoot", name="category",
                criteria={warming_exceedance_prob(1.5): {"up": 0.66}})
pyam.core - INFO: 33 scenarios categorized as `category: low overshoot`
[23]:
pyam.categorize(df, exclude=False, category="low overshoot",
                value="1.5C (with low overshoot)", name="category",
                criteria={warming_exceedance_prob(1.5): {"up": 0.50, "year": 2100}})
pyam.core - INFO: 24 scenarios categorized as `category: 1.5C (with low overshoot)`

Reset the remaining “low overshoot” scenarios to “uncategorized”.

[24]:
df.set_meta(meta="uncategorized", name="category", index=df.filter(category="low overshoot"))

Staying below 2°C with (at least) 66% probability means that the exceedance probability must be at most 34%.

[25]:
pyam.categorize(df, exclude=False, category="uncategorized",
                value="2C", name="category",
                criteria={warming_exceedance_prob(2.0): {"up": 0.34}})
pyam.core - INFO: 152 scenarios categorized as `category: 2C`
[26]:
pyam.categorize(df, exclude=False, category="uncategorized",
                value="2.5C", name="category",
                criteria={warming_exceedance_prob(2.5): {"up": 0.34}})
pyam.core - INFO: 157 scenarios categorized as `category: 2.5C`

All remaining uncategorized scenarios also fail the 2.5°C criterion and are therefore assigned to the category >2.5C.

[27]:
df.set_meta(meta=">2.5C", name="category", index=df.filter(category="uncategorized"))

Temperature categorization by corresponding peak-budget scenario

The meta indicator category_peak carries the category assigned to a peak-budget scenario over to the corresponding full-century-budget scenario.

[28]:
df_peak = df.filter(budget_type="peak_budget")
df_fullcentury = df.filter(budget_type="full_century_budget")
[29]:
df.set_meta(meta=df_peak["category"], name="category_peak")
[30]:
# map each peak-budget scenario name to its full-century counterpart (suffix "f")
scenario_mapping = {s: f"{s}f" for s in df_peak.scenario}
full_century_index = pyam.index.replace_index_values(df_peak, "scenario", scenario_mapping)

peak_category = pd.Series(data=df_peak["category"].values, index=full_century_index)
[31]:
df.set_meta(
    meta=peak_category[peak_category.index.intersection(df_fullcentury.index)],
    name="category_peak"
)
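
An optional cross-tabulation (not part of the original notebook) shows how each scenario's own category relates to the category of its corresponding peak-budget scenario:

[ ]:
# rows: category of the scenario itself;
# columns: category of the corresponding peak-budget scenario
pd.crosstab(df.meta["category"], df.meta["category_peak"])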

Export to file

Save full scenario ensemble data as well as subsets for particular figures.

[32]:
df.to_excel(data_folder / "ENGAGE_processed_snapshot.xlsx")

Exclude the sensitivity scenarios from the data used for the analysis and figures.

[33]:
df_analysis = df.filter(scenario="*p", keep=False)
[34]:
df_analysis.filter(
    variable=["Emissions|CO2", "Emissions|Kyoto Gases"],
    region="World"
).to_excel(data_folder / "ENGAGE_fig1.xlsx")
[35]:
df_analysis.filter(
    variable=["GDP|*", "Price|Carbon"],
    region="World"
).to_excel(data_folder / "ENGAGE_fig2.xlsx")
[36]:
df_analysis.filter(
    variable="Emissions|CO2*",
).to_excel(data_folder / "ENGAGE_fig3.xlsx")

Export the sensitivity scenarios to a separate file.

[37]:
df.filter(
    model="MESSAGEix-GLOBIOM 1.1",
    scenario=[f"EN_NPi2020_{b}*" for b in [1000, 600]],
).to_excel(data_folder / "ENGAGE_MESSAGE_sensitivity_runs.xlsx")
[ ]: