"
]
},
{
"cell_type": "markdown",
"id": "7a279bed",
"metadata": {},
"source": [
"## Install ADS API\n",
"\n",
"We will need to install the Python client for the Application Programming Interface (API) of the [Atmosphere Data Store (ADS)](https://ads-beta.atmosphere.copernicus.eu/). This will allow us to programmatically download data."
]
},
{
"cell_type": "markdown",
"id": "a3efd2aa",
"metadata": {},
"source": [
"```{note}\n",
"Note the exclamation mark in the line of code below. This means the code will run as a shell (as opposed to a notebook) command.\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ddaaf27e",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-12T13:08:31.667927Z",
"iopub.status.busy": "2024-09-12T13:08:31.667499Z",
"iopub.status.idle": "2024-09-12T13:08:44.622983Z",
"shell.execute_reply": "2024-09-12T13:08:44.621511Z",
"shell.execute_reply.started": "2024-09-12T13:08:31.667885Z"
}
},
"outputs": [],
"source": [
"!pip install cdsapi"
]
},
{
"cell_type": "markdown",
"id": "074bf254",
"metadata": {
"tags": []
},
"source": [
"## Import libraries\n",
"\n",
"Here we import a number of publicly available Python packages, needed for this tutorial."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "ccacb4a6",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-12T13:08:46.026360Z",
"iopub.status.busy": "2024-09-12T13:08:46.025923Z",
"iopub.status.idle": "2024-09-12T13:08:46.033269Z",
"shell.execute_reply": "2024-09-12T13:08:46.031764Z",
"shell.execute_reply.started": "2024-09-12T13:08:46.026318Z"
},
"tags": []
},
"outputs": [],
"source": [
"# CDS API\n",
"import cdsapi\n",
"\n",
"# Library to extract data\n",
"from zipfile import ZipFile\n",
"\n",
"# Libraries to read and process arrays\n",
"import numpy as np\n",
"import xarray as xr\n",
"import pandas as pd\n",
"\n",
"# Disable warnings for data download via API\n",
"import urllib3 \n",
"urllib3.disable_warnings()"
]
},
{
"cell_type": "markdown",
"id": "b66df851",
"metadata": {
"tags": []
},
"source": [
"## Access data\n",
"\n",
"To access data from the ADS, you will need first to register (if you have not already done so), by visiting https://ads-beta.atmosphere.copernicus.eu/ and selecting **\"Login/Register\"**\n",
"\n",
"To obtain data programmatically from the ADS, you will need an API key. This can be found on the page https://ads-beta.atmosphere.copernicus.eu/how-to-api. Your key will appear automatically in the black window, assuming you have already registered and logged into the ADS. Your API key is the entire string of characters that appears after `key:`.\n",
"\n",
"Now copy your API key into the code cell below, replacing `#######` with your key."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "a59f6cd6",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-12T13:08:48.998728Z",
"iopub.status.busy": "2024-09-12T13:08:48.998296Z",
"iopub.status.idle": "2024-09-12T13:08:49.004463Z",
"shell.execute_reply": "2024-09-12T13:08:49.003200Z",
"shell.execute_reply.started": "2024-09-12T13:08:48.998685Z"
}
},
"outputs": [],
"source": [
"URL = 'https://ads-beta.atmosphere.copernicus.eu/api'\n",
"\n",
"# Replace the hashtags with your key:\n",
"KEY = '#############################'"
]
},
{
"cell_type": "markdown",
"id": "d93e2335",
"metadata": {
"tags": []
},
"source": [
"Here we specify a data directory into which we will download our data and all output files that we will generate:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "aadc02ea",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-12T13:08:50.598715Z",
"iopub.status.busy": "2024-09-12T13:08:50.598260Z",
"iopub.status.idle": "2024-09-12T13:08:50.603819Z",
"shell.execute_reply": "2024-09-12T13:08:50.602729Z",
"shell.execute_reply.started": "2024-09-12T13:08:50.598674Z"
}
},
"outputs": [],
"source": [
"DATADIR = '.'"
]
},
{
"cell_type": "markdown",
"id": "39d52a42",
"metadata": {},
"source": [
"The data we will download and inspect in this tutorial comes from the CAMS Global Atmospheric Composition Forecast dataset. This can be found in the [Atmosphere Data Store (ADS)](https://ads-beta.atmosphere.copernicus.eu/) by scrolling through the datasets, or by applying search filters."
]
},
{
"cell_type": "markdown",
"id": "88d87462",
"metadata": {},
"source": [
"Having selected the correct dataset, we now need to specify the product type, variables, and temporal and geographic coverage we are interested in. These can all be selected in the **\"Download data\"** tab. In this tab a form appears, in which we will select the following parameters to download:\n",
"\n",
"- Variables (Single level): *Dust aerosol optical depth at 550nm*, *Organic matter aerosol optical depth at 550nm*, *Total aerosol optical depth at 550nm*\n",
"- Date: Start: *2021-08-01*, End: *2021-08-08*\n",
"- Time: *00:00*, *12:00* (default)\n",
"- Leadtime hour: *0* (only analysis)\n",
"- Type: *Forecast* (default)\n",
"- Area: Restricted area: *North: 90*, *East: 180*, *South: 0*, *West: -180* \n",
"- Format: *Zipped netCDF (experimental)*\n",
"\n",
"At the end of the download form, select **\"Show API request\"**. This will reveal a block of code, which you can simply copy and paste into a cell of your Jupyter Notebook (see cell below)..."
]
},
{
"cell_type": "markdown",
"id": "5ebfe0b2",
"metadata": {},
"source": [
"```{note}\n",
"Before running this code, ensure that you have **accepted the terms and conditions**. This is something you only need to do once for each CAMS dataset. You will find the option to do this by selecting the dataset in the ADS, then scrolling to the end of the *Download data* tab.\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "a81c5194",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-12T13:08:59.852157Z",
"iopub.status.busy": "2024-09-12T13:08:59.851739Z",
"iopub.status.idle": "2024-09-12T13:11:51.521600Z",
"shell.execute_reply": "2024-09-12T13:11:51.519929Z",
"shell.execute_reply.started": "2024-09-12T13:08:59.852116Z"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-09-12 13:09:00,125 INFO Request ID is cc9448c4-2415-4c46-a73f-316d98350daf\n",
"2024-09-12 13:09:00,184 INFO status has been updated to accepted\n",
"2024-09-12 13:09:01,721 INFO status has been updated to running\n",
"2024-09-12 13:11:50,751 INFO Creating download object as zip with files:\n",
"['data_sfc.nc']\n",
"2024-09-12 13:11:50,752 INFO status has been updated to successful\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "e1d946fd28fd4cd2bb6891ca8edcadc9",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"6b5b7143834942c728fdaee2011a2c60.zip: 0%| | 0.00/24.1M [00:00, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"'./2021-08_AOD.zip'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dataset = \"cams-global-atmospheric-composition-forecasts\"\n",
"request = {\n",
" 'variable': ['dust_aerosol_optical_depth_550nm', 'organic_matter_aerosol_optical_depth_550nm', 'total_aerosol_optical_depth_550nm'],\n",
" 'date': ['2021-08-01/2021-08-08'],\n",
" 'time': ['00:00', '12:00'],\n",
" 'leadtime_hour': ['0'],\n",
" 'type': ['forecast'],\n",
" 'data_format': 'netcdf_zip',\n",
" 'area': [90, -180, 0, 180]\n",
"}\n",
"\n",
"client = cdsapi.Client(url=URL, key=KEY)\n",
"client.retrieve(dataset, request).download(f'{DATADIR}/2021-08_AOD.zip')"
]
},
{
"cell_type": "markdown",
"id": "8f414b9d",
"metadata": {},
"source": [
"## Read data\n",
"\n",
"Now that we have downloaded the data, we can read, plot and analyse it...\n",
"\n",
"We have requested the data in NetCDF format. This is a commonly used format for gridded (array-based) scientific data. \n",
"\n",
"To read and process this data we will make use of the Xarray library. Xarray is an open source project and Python package that makes working with labelled multi-dimensional arrays simple and efficient. We will read the data from our NetCDF file into an Xarray **\"dataset\"**."
]
},
{
"cell_type": "markdown",
"id": "f510a133",
"metadata": {},
"source": [
"First we extract the downloaded zip file:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "afe8efdd",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-12T13:11:58.789209Z",
"iopub.status.busy": "2024-09-12T13:11:58.788763Z",
"iopub.status.idle": "2024-09-12T13:11:58.872988Z",
"shell.execute_reply": "2024-09-12T13:11:58.871875Z",
"shell.execute_reply.started": "2024-09-12T13:11:58.789170Z"
}
},
"outputs": [],
"source": [
"# Create a ZipFile Object and load zip file in it\n",
"with ZipFile(f'{DATADIR}/2021-08_AOD.zip', 'r') as zipObj:\n",
" # Extract all the contents of zip file into a directory\n",
" zipObj.extractall(path=f'{DATADIR}/2021-08_AOD/')"
]
},
{
"cell_type": "markdown",
"id": "dc64adae",
"metadata": {},
"source": [
"For convenience, we create a variable with the name of our downloaded file:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "9d2744a9",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-12T13:12:37.279647Z",
"iopub.status.busy": "2024-09-12T13:12:37.279236Z",
"iopub.status.idle": "2024-09-12T13:12:37.284837Z",
"shell.execute_reply": "2024-09-12T13:12:37.283521Z",
"shell.execute_reply.started": "2024-09-12T13:12:37.279611Z"
}
},
"outputs": [],
"source": [
"fn = f'{DATADIR}/2021-08_AOD/data_sfc.nc'"
]
},
{
"cell_type": "markdown",
"id": "f60d4003",
"metadata": {},
"source": [
"Now we can read the data into an Xarray dataset:"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "9d5e7907",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-12T13:12:39.919625Z",
"iopub.status.busy": "2024-09-12T13:12:39.918519Z",
"iopub.status.idle": "2024-09-12T13:12:40.017666Z",
"shell.execute_reply": "2024-09-12T13:12:40.016297Z",
"shell.execute_reply.started": "2024-09-12T13:12:39.919568Z"
}
},
"outputs": [],
"source": [
"# Create Xarray Dataset\n",
"ds = xr.open_dataset(fn)"
]
},
{
"cell_type": "markdown",
"id": "5ce06a70",
"metadata": {},
"source": [
"Let's see how this looks by querying our newly created Xarray dataset ..."
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "cbda3c2a",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-12T13:12:42.913661Z",
"iopub.status.busy": "2024-09-12T13:12:42.913244Z",
"iopub.status.idle": "2024-09-12T13:12:43.670964Z",
"shell.execute_reply": "2024-09-12T13:12:43.669879Z",
"shell.execute_reply.started": "2024-09-12T13:12:42.913621Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
" Size: 64B\n",
"[16 values with dtype=float32]\n",
"Coordinates:\n",
" * forecast_period (forecast_period) timedelta64[ns] 8B 00:00:00\n",
" * forecast_reference_time (forecast_reference_time) datetime64[ns] 128B 20...\n",
" latitude float64 8B 48.8\n",
" longitude float64 8B 2.4\n",
" valid_time (forecast_reference_time, forecast_period) datetime64[ns] 128B ...\n",
"Attributes: (12/33)\n",
" GRIB_paramId: 210207\n",
" GRIB_dataType: fc\n",
" GRIB_numberOfPoints: 203400\n",
" GRIB_typeOfLevel: surface\n",
" GRIB_stepUnits: 1\n",
" GRIB_stepType: instant\n",
" ... ...\n",
" GRIB_units: ~\n",
" long_name: Total Aerosol Optical Depth at ...\n",
" units: ~\n",
" standard_name: unknown\n",
" GRIB_number: 0\n",
" GRIB_surface: 0.0"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"paris"
]
},
{
"cell_type": "markdown",
"id": "97b13d46",
"metadata": {},
"source": [
"#### Regional subset\n",
"\n",
"Often we may wish to select a regional subset. Note that you can specify a region of interest in the [ADS](https://ads-beta.atmosphere.copernicus.eu/) prior to downloading data. This is more efficient as it reduces the data volume. However, there may be cases when you wish to select a regional subset after download. One way to do this is with the `.where()` function. \n",
"\n",
"In the previous examples, we have used methods that return a subset of the original data. By default `.where()` maintains the original size of the data, masking the elements that do *not* satisfy the condition (these become \"not a number\", or `nan`). The option `drop=True` clips away coordinate elements that are fully masked.\n",
"\n",
"The example below uses `.where()` to select a geographic subset from 30 to 60 degrees latitude. We could also specify longitudinal boundaries, by simply adding further conditions."
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "af74e3bc",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-12T13:24:47.629405Z",
"iopub.status.busy": "2024-09-12T13:24:47.628988Z",
"iopub.status.idle": "2024-09-12T13:24:47.704007Z",
"shell.execute_reply": "2024-09-12T13:24:47.702738Z",
"shell.execute_reply.started": "2024-09-12T13:24:47.629366Z"
}
},
"outputs": [],
"source": [
"mid_lat = da.where((da.latitude > 30.) & (da.latitude < 60.), drop=True)"
]
},
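{
"cell_type": "markdown",
"id": "b7e1c0aa",
"metadata": {},
"source": [
"The masking behaviour above can be sketched with a toy 1-D DataArray (made-up values, not the tutorial data):\n",
"\n",
"```python\n",
"import numpy as np\n",
"import xarray as xr\n",
"\n",
"# Toy 1-D DataArray on a latitude axis\n",
"da_demo = xr.DataArray(np.arange(4.0), coords={'latitude': [15.0, 35.0, 45.0, 75.0]}, dims='latitude')\n",
"\n",
"masked = da_demo.where(da_demo.latitude > 30.0)              # same shape, nan where condition fails\n",
"clipped = da_demo.where(da_demo.latitude > 30.0, drop=True)  # fully masked elements removed\n",
"\n",
"print(masked.size, clipped.size)  # 4 3\n",
"```"
]
},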
{
"cell_type": "markdown",
"id": "952e7370",
"metadata": {},
"source": [
"## Aggregate data\n",
"\n",
"Another common task is to aggregate data. This may include reducing hourly data to daily means, minima, maxima, or other statistical properties. We may also wish to aggregate over one or more dimensions, such as averaging over all latitudes and longitudes to obtain one global value."
]
},
{
"cell_type": "markdown",
"id": "d20037e0",
"metadata": {},
"source": [
"### Temporal aggregation\n",
"\n",
"To aggregate over one or more dimensions, we can apply one of a number of methods to the original dataset, such as `.mean()`, `.min()`, `.max()`, `.median()` and others (see https://docs.xarray.dev/en/stable/api.html#id6 for the full list). \n",
"\n",
"The example below takes the mean of all time steps. The `keep_attrs` parameter is optional. If set to `True` it will keep the original attributes of the DataArray (i.e. description of the variable, units, etc.). If set to `False`, the attributes will be stripped."
]
},
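{
"cell_type": "markdown",
"id": "c4d9e2bb",
"metadata": {},
"source": [
"The effect of `keep_attrs` can be sketched with a toy DataArray (made-up values, not the tutorial data):\n",
"\n",
"```python\n",
"import numpy as np\n",
"import xarray as xr\n",
"\n",
"# Toy DataArray carrying some attributes\n",
"da_demo = xr.DataArray(np.ones((2, 3)), dims=('time', 'x'), attrs={'units': '~', 'long_name': 'demo'})\n",
"\n",
"with_attrs = da_demo.mean(dim='time', keep_attrs=True)      # attributes preserved\n",
"without_attrs = da_demo.mean(dim='time', keep_attrs=False)  # attributes stripped\n",
"\n",
"print(with_attrs.attrs['units'], len(without_attrs.attrs))  # ~ 0\n",
"```"
]
},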
{
"cell_type": "code",
"execution_count": 30,
"id": "26d5b430",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-12T13:25:29.934199Z",
"iopub.status.busy": "2024-09-12T13:25:29.933799Z",
"iopub.status.idle": "2024-09-12T13:25:30.075958Z",
"shell.execute_reply": "2024-09-12T13:25:30.074820Z",
"shell.execute_reply.started": "2024-09-12T13:25:29.934163Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
" Size: 8B\n",
"array([0.2940043])\n",
"Coordinates:\n",
" * forecast_period (forecast_period) timedelta64[ns] 8B 00:00:00"
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Total_AOD = time_mean_weighted.mean([\"longitude\", \"latitude\"])\n",
"Total_AOD"
]
},
{
"cell_type": "markdown",
"id": "1189c393",
"metadata": {},
"source": [
"## Export data\n",
"\n",
"This section includes a few examples of how to export data."
]
},
{
"cell_type": "markdown",
"id": "91bcd358",
"metadata": {},
"source": [
"### Export data as NetCDF\n",
"\n",
"The code below provides a simple example of how to export data to NetCDF."
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "f44b5cc0",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-12T13:26:45.478744Z",
"iopub.status.busy": "2024-09-12T13:26:45.478325Z",
"iopub.status.idle": "2024-09-12T13:26:45.574055Z",
"shell.execute_reply": "2024-09-12T13:26:45.572843Z",
"shell.execute_reply.started": "2024-09-12T13:26:45.478707Z"
}
},
"outputs": [],
"source": [
"paris.to_netcdf(f'{DATADIR}/2021-08_AOD_Paris.nc')"
]
},
{
"cell_type": "markdown",
"id": "0cf6d871",
"metadata": {},
"source": [
"### Export data as CSV"
]
},
{
"cell_type": "markdown",
"id": "bef10e05",
"metadata": {},
"source": [
"You may wish to export this data into a format which enables processing with other tools. A commonly used file format is CSV, or \"Comma-Separated Values\", which can be used in software such as Microsoft Excel. This section explains how to export data from an Xarray object to CSV. Xarray does not have a function to export directly to CSV, so instead we use the Pandas library: we read the data into a Pandas DataFrame, then write it to a CSV file using a dedicated Pandas function."
]
},
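{
"cell_type": "markdown",
"id": "d8f3a1cc",
"metadata": {},
"source": [
"This round trip can be sketched with a toy DataArray (made-up data and file name, not the tutorial variables):\n",
"\n",
"```python\n",
"import numpy as np\n",
"import pandas as pd\n",
"import xarray as xr\n",
"\n",
"# Toy named DataArray; to_dataframe() needs a name for the value column\n",
"da_demo = xr.DataArray(np.arange(4.0).reshape(2, 2), dims=('time', 'station'), name='aod550')\n",
"\n",
"df_demo = da_demo.to_dataframe()  # MultiIndex (time, station) -> one value column\n",
"df_demo.to_csv('demo_aod.csv')    # dedicated Pandas export function\n",
"\n",
"print(pd.read_csv('demo_aod.csv').shape)  # (4, 3)\n",
"```"
]
},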
{
"cell_type": "code",
"execution_count": 36,
"id": "5e1c9b09",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-12T13:26:49.382056Z",
"iopub.status.busy": "2024-09-12T13:26:49.381657Z",
"iopub.status.idle": "2024-09-12T13:26:49.403747Z",
"shell.execute_reply": "2024-09-12T13:26:49.402171Z",
"shell.execute_reply.started": "2024-09-12T13:26:49.382021Z"
}
},
"outputs": [],
"source": [
"df = paris.to_dataframe()"
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "762aab7f",
"metadata": {
"execution": {
"iopub.execute_input": "2024-09-12T13:26:50.593559Z",
"iopub.status.busy": "2024-09-12T13:26:50.593153Z",
"iopub.status.idle": "2024-09-12T13:26:50.616117Z",
"shell.execute_reply": "2024-09-12T13:26:50.614870Z",
"shell.execute_reply.started": "2024-09-12T13:26:50.593522Z"
}
},
"outputs": [
{
"data": {
"text/html": [
"