Neural processes
A Neural Process (NP) is a map from a set of observed input-output pairs to a predictive distribution over functions, designed to mimic the inference mechanism of a stochastic process. NPs define distributions over functions, adapt rapidly to new observations, and estimate the uncertainty in their predictions.
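As a sketch of this idea, the conditional variant of an NP can be written as three steps: encode each context pair independently, aggregate the encodings with a permutation-invariant mean, and decode target inputs conditioned on the aggregate into a predictive mean and scale. The minimal NumPy example below is illustrative only (untrained random weights, made-up function names such as `cnp_predict`), not a real implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes, rng):
    """Random weights for a small MLP (illustrative, untrained)."""
    return [(rng.normal(0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

enc = mlp_params([2, 32, 32], rng)   # encoder: (x, y) pair -> representation r_i
dec = mlp_params([33, 32, 2], rng)   # decoder: (r, x_target) -> (mu, log_sigma)

def cnp_predict(x_ctx, y_ctx, x_tgt):
    # 1) encode every context pair independently
    r_i = mlp(enc, np.concatenate([x_ctx, y_ctx], axis=-1))   # (n_ctx, 32)
    # 2) aggregate with a permutation-invariant mean
    r = r_i.mean(axis=0)                                      # (32,)
    # 3) decode each target input conditioned on r
    h = np.concatenate([np.tile(r, (len(x_tgt), 1)), x_tgt], axis=-1)
    out = mlp(dec, h)
    mu, log_sigma = out[:, :1], out[:, 1:]
    return mu, np.exp(log_sigma)  # predictive mean and std per target

x_ctx = rng.uniform(-2, 2, (5, 1))
y_ctx = np.sin(x_ctx)
x_tgt = np.linspace(-2, 2, (50))[:, None]
mu, sigma = cnp_predict(x_ctx, y_ctx, x_tgt)
```

In a trained model the same forward pass is optimized by maximizing the log-likelihood of target outputs under the predicted Gaussians; the mean aggregation is what makes the prediction invariant to the ordering of the context set.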
# Resources
# Code
- #CODE Neuralprocesses - A framework for composing Neural Processes in Python
# References
- #PAPER Neural Processes (Garnelo 2018)
- #PAPER Conditional Neural Processes (Garnelo 2018)
- #PAPER Residual Neural Processes (Lee 2020)
- #PAPER Convolutional Conditional Neural Processes (Gordon 2020)
- #PAPER The Gaussian Neural Process (Bruinsma 2021)
- #PAPER GP-ConvCNP: Better Generalization for Convolutional Conditional Neural Processes on Time Series Data (Petersen 2021)
- #PAPER Conditional Temporal Neural Processes with Covariance Loss (Yoo 2021)
- #PAPER Contrastive Conditional Neural Processes (Ye 2022)
	- Conditional Neural Processes (CNPs) bridge neural networks with probabilistic inference to approximate functions drawn from stochastic processes in meta-learning settings
	- Proposes to augment CNPs by 1) aligning predictions with the encoded ground-truth observations, and 2) decoupling meta-representation adaptation from generative reconstruction
- #PAPER #REVIEW The Neural Process Family: Survey, Applications and Perspectives (Jha 2022)