# DL4DS: Deep Learning for empirical DownScaling

DL4DS (Deep Learning for empirical DownScaling) is a Python package that implements state-of-the-art and novel deep learning algorithms for empirical downscaling of gridded Earth science data.
The general architecture of DL4DS is shown in the image below. A low-resolution gridded dataset can be downscaled with the help of an arbitrary number of auxiliary predictor and static variables, together with a high-resolution reference dataset. The mapping between the low- and high-resolution data is learned with either a supervised or a conditional generative adversarial DL model.
The training can be done from explicit pairs of high- and low-resolution samples (MOS-style, e.g., high-res observations and low-res numerical weather prediction model output) or only with a HR dataset (PerfectProg-style, e.g., high-res observations or high-res model output).
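For instance, in the PerfectProg-style setting the low-resolution inputs can be derived from the high-resolution dataset itself by coarsening it, forming implicit training pairs. Below is a minimal NumPy sketch of this idea; the `coarsen` helper is hypothetical and not part of the DL4DS API.

```python
import numpy as np

def coarsen(hr, factor):
    """Create a low-resolution version of an HR field by block-averaging.

    Hypothetical helper illustrating how implicit LR-HR pairs can be
    formed from an HR dataset alone (PerfectProg-style training).
    """
    h, w = hr.shape
    assert h % factor == 0 and w % factor == 0
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

hr_field = np.arange(16.0).reshape(4, 4)   # toy 4x4 "high-resolution" grid
lr_field = coarsen(hr_field, 2)            # paired 2x2 low-resolution input
print(lr_field.shape)  # (2, 2)
```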
A wide variety of network architectures have been implemented in DL4DS. The main modelling approaches can be combined into many different architectures:
| Downscaling type | Training (loss type) | Sample type | Backbone section | Upsampling method |
|---|---|---|---|---|
| MOS (explicit pairs of HR and LR data) | Supervised (non-adversarial) | Spatial | Plain convolutional | Pre-upsampling via interpolation |
| PerfectProg (implicit pairs, only HR data) | Conditional adversarial | Spatio-temporal | Residual | Post-upsampling via sub-pixel convolution |
| | | | Dense | Post-upsampling via resize convolution |
| | | | Unet (PIN with spatial samples only) | Post-upsampling via deconvolution |
| | | | Convnext (spatial samples only) | |
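To make the upsampling options concrete, the sketch below implements the channel-to-space rearrangement behind sub-pixel convolution (the `spc` post-upsampling method): a feature map with `r*r` times as many channels is rearranged into an output that is `r` times larger in each spatial dimension. This is a NumPy illustration of the mechanism, not the DL4DS implementation, which relies on TensorFlow/Keras layers.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (H, W, C*r*r) feature map into (H*r, W*r, C)."""
    h, w, c = x.shape
    assert c % (r * r) == 0
    c_out = c // (r * r)
    x = x.reshape(h, w, r, r, c_out)
    x = x.transpose(0, 2, 1, 3, 4)      # interleave rows and columns
    return x.reshape(h * r, w * r, c_out)

feat = np.random.rand(8, 8, 4)   # features with r*r = 4 channels, r = 2
up = pixel_shuffle(feat, 2)
print(up.shape)  # (16, 16, 1)
```

In a network, a convolution first produces the `C*r*r` channels, so the upscaling itself is a cheap, learnable rearrangement rather than an interpolation.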
In DL4DS, we implement a channel attention mechanism that exploits the inter-channel relationships of features: each channel is assigned a weight so that those contributing most to the optimization and learning process are enhanced. Additionally, a Localized Convolutional Block (LCB) is located in the output module of the DL4DS networks. With the LCB, the model learns location-specific information via a locally connected layer with biases.
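The channel attention idea can be sketched in a few lines of NumPy in the style of squeeze-and-excitation: pool each channel to a single descriptor, pass the descriptors through a small bottleneck MLP, and rescale the channels by the resulting gates. The weights `w1` and `w2` stand in for learned dense layers; this illustrates the mechanism, not the exact DL4DS layer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation style channel attention (NumPy sketch)."""
    s = x.mean(axis=(0, 1))                  # squeeze: one descriptor per channel
    g = sigmoid(w2 @ np.maximum(w1 @ s, 0))  # excitation: per-channel gate in (0, 1)
    return x * g                             # rescale channels by their gates

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 4))   # (H, W, C) feature map
w1 = rng.normal(size=(2, 4))     # bottleneck: reduce C to C/2
w2 = rng.normal(size=(4, 2))     # expand back to C
y = channel_attention(x, w1, w2)
print(y.shape)  # (8, 8, 4)
```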
DL4DS is built on top of TensorFlow/Keras and supports distributed GPU training (data parallelism) thanks to Horovod.
## API documentation
Check out the API documentation here.
## Installation
```shell
pip install dl4ds
```
## Example notebooks
Colab notebooks are under construction. Stay tuned!
## View Source

```python
"""
.. include:: ../README.md
"""

__version__ = "1.7.0"

BACKBONE_BLOCKS = [
    'convnet',   # plain convolutional block w/o skip connections
    'resnet',    # residual convolutional blocks
    'densenet',  # dense convolutional blocks
    'convnext',  # convnext style residual blocks
    'unet',      # unet (encoder-decoder) backbone
    'invnet'
]

UPSAMPLING_METHODS = [
    'spc',  # pixel shuffle or subpixel convolution in post-upscaling
    'rc',   # resize convolution in post-upscaling
    'dc',   # deconvolution or transposed convolution in post-upscaling
    'pin'   # pre-upsampling via (bicubic) interpolation
]
POSTUPSAMPLING_METHODS = ['spc', 'rc', 'dc']

INTERPOLATION_METHODS = [
    'inter_area',  # resampling using pixel area relation (from opencv)
    'nearest',     # nearest neighbors interpolation (from opencv)
    'bicubic',     # bicubic interpolation (from opencv)
    'bilinear',    # bilinear interpolation (from opencv)
    'lanczos'      # lanczos interpolation over 8x8 neighborhood (from opencv)
]

LOSS_FUNCTIONS = [
    'mae',             # mean absolute error
    'mse',             # mean squared error
    'dssim',           # structural dissimilarity
    'dssim_mae',       # 0.8 * DSSIM + 0.2 * MAE
    'dssim_mse',       # 0.8 * DSSIM + 0.2 * MSE
    'dssim_mae_mse',   # 0.6 * DSSIM + 0.2 * MAE + 0.2 * MSE
    'msdssim',         # multiscale structural dissimilarity
    'msdssim_mae',     # 0.8 * MSDSSIM + 0.2 * MAE
    'msdssim_mae_mse'  # 0.6 * MSDSSIM + 0.2 * MAE + 0.2 * MSE
]

DROPOUT_VARIANTS = [
    'vanilla',         # vanilla dropout
    'gaussian',        # gaussian dropout
    'spatial',         # spatial dropout
    'mcdrop',          # monte carlo (vanilla) dropout
    'mcgaussiandrop',  # monte carlo gaussian dropout
    'mcspatialdrop'    # monte carlo spatial dropout
]

from .metrics import *
from .inference import *
from .utils import *
from .dataloader import *
from .models import *
from .training import *
from .preprocessing import *
```
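The `mcdrop` family of dropout variants listed above keeps dropout active at inference time so that repeated forward passes yield an uncertainty estimate alongside the prediction. The sketch below shows the principle in NumPy; `predict` is a hypothetical stand-in for a trained network, not part of the DL4DS API.

```python
import numpy as np

def mc_dropout_predict(predict, x, rate=0.2, n_samples=50, seed=0):
    """Monte Carlo dropout at inference time (NumPy sketch).

    Runs several stochastic forward passes with inputs randomly dropped
    at `rate` (and rescaled to keep the expectation unchanged), then
    aggregates them into a mean prediction and a spread.
    """
    rng = np.random.default_rng(seed)
    outs = []
    for _ in range(n_samples):
        mask = rng.random(x.shape) >= rate            # keep with prob 1 - rate
        outs.append(predict(x * mask / (1.0 - rate)))
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.std(axis=0)        # prediction, uncertainty

# toy "model": a fixed linear map standing in for a trained network
w = np.linspace(0.1, 1.0, 10)
predict = lambda v: v @ w
mean, std = mc_dropout_predict(predict, np.ones(10))
```

The spread of the sampled outputs (`std`) is what makes the Monte Carlo variants useful for uncertainty-aware downscaling.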