Bayesian modelling
See AI/Math and Statistics/Monte Carlo methods and AI/Deep learning/Probabilistic deep learning
# Resources
- https://en.wikipedia.org/wiki/Bayesian_statistics
- https://en.wikipedia.org/wiki/Bayesian_inference
- http://brohrer.github.io/how_bayesian_inference_works.html
- http://willwolf.io/en/2017/02/07/bayesian-inference-via-simulated-annealing/
- #TALK Bayesian Inference, Shakir Mohamed, MLSS 2020
# Bayesian vs frequentist discussion
- http://jakevdp.github.io/blog/2014/03/11/frequentism-and-bayesianism-a-practical-intro/
- http://www.fharrell.com/2017/02/my-journey-from-frequentist-to-bayesian.html
- https://mchankins.wordpress.com/2013/04/21/still-not-significant-2/
- https://aeon.co/essays/it-s-time-for-science-to-abandon-the-term-statistically-significant
- http://www.fharrell.com/2017/02/a-litany-of-problems-with-p-values.html?m=1
# Bayes' theorem
- http://blogs.scientificamerican.com/cross-check/bayes-s-theorem-what-s-the-big-deal/
- http://www.analyticsvidhya.com/blog/2016/06/bayesian-statistics-beginners-simple-english/
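- For quick reference, the theorem itself, with $\theta$ the parameters and $D$ the data: $$P(\theta \mid D) = \frac{P(D \mid \theta)\,P(\theta)}{P(D)}$$ that is, posterior = likelihood × prior / evidence, where $P(D) = \int P(D \mid \theta)\,P(\theta)\,d\theta$ is the marginal likelihood (evidence)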
# MAP
- https://en.wikipedia.org/wiki/Maximum_a_posteriori_estimation
- In Bayesian statistics, a maximum a posteriori probability (MAP) estimate is an estimate of an unknown quantity that equals the mode of the posterior distribution. The MAP can be used to obtain a point estimate of an unobserved quantity on the basis of empirical data. It is closely related to Fisher's method of maximum likelihood (ML) estimation, but employs an augmented optimization objective which incorporates a prior distribution (quantifying the additional information available through prior knowledge of a related event) over the quantity one wants to estimate. MAP estimation can therefore be seen as a regularization of ML estimation.
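- A minimal sketch (a hypothetical Beta-Bernoulli coin-flip setup, not from any of the links above) showing that the MAP estimate is the mode of the posterior, and that the prior acts as regularization of the ML estimate:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical data: 7 heads in 10 tosses, Beta(2, 2) prior on theta
heads, tosses = 7, 10
a, b = 2.0, 2.0

def neg_log_posterior(theta):
    # -(log likelihood + log prior), dropping terms constant in theta
    log_lik = heads * np.log(theta) + (tosses - heads) * np.log(1 - theta)
    log_prior = (a - 1) * np.log(theta) + (b - 1) * np.log(1 - theta)
    return -(log_lik + log_prior)

map_numeric = minimize_scalar(neg_log_posterior, bounds=(1e-6, 1 - 1e-6),
                              method="bounded").x
map_closed = (a + heads - 1) / (a + b + tosses - 2)  # mode of Beta posterior
print(map_numeric, map_closed)  # both ~0.667; the unregularized MLE would be 0.7
```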
# MLE
- https://en.wikipedia.org/wiki/Maximum_likelihood_estimation
- Maximum likelihood estimation (MLE) is a method of estimating the parameters of a statistical model given observations, by finding the parameter values that maximize the likelihood of the observations under the model. MLE can be seen as a special case of maximum a posteriori (MAP) estimation that assumes a uniform prior distribution over the parameters, or as a variant of MAP that ignores the prior and is therefore unregularized.
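- A minimal numeric sketch (synthetic data, setup mine): maximizing a Gaussian log-likelihood numerically recovers the closed-form MLEs, the sample mean and the (biased) sample standard deviation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.5, size=500)  # synthetic observations

def nll(params):
    # Negative Gaussian log-likelihood; optimize log(sigma) to keep sigma > 0
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(((data - mu) / sigma) ** 2) + data.size * log_sigma

mu_hat, log_sigma_hat = minimize(nll, x0=[0.0, 0.0]).x
print(mu_hat, np.exp(log_sigma_hat))  # numerical MLEs
print(data.mean(), data.std())        # closed-form MLEs (mean, biased std)
```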
# Bayesian network
# Naive Bayes algorithm
- Supervised machine learning method that applies Bayes' theorem with a strong (naive) assumption of conditional independence between features
- Naive Bayes (scikit-learn)
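- A minimal scikit-learn sketch (Iris data chosen arbitrarily): GaussianNB fits per-class Gaussian feature likelihoods and classifies via Bayes' rule:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GaussianNB().fit(X_train, y_train)  # per-class Gaussian feature likelihoods
print(clf.score(X_test, y_test))          # held-out accuracy
print(clf.predict_proba(X_test[:3]))      # posterior class probabilities
```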
# Variational Bayesian methods
- See AI/Deep learning/Normalizing flows
- Variational Bayesian inference with normalizing flows: a simple example
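- A minimal sketch of variational inference with PyMC3's ADVI (plain mean-field approximation rather than a normalizing flow; the model and data are made up for illustration):

```python
import numpy as np
import pymc3 as pm

data = np.random.normal(1.0, 2.0, size=200)  # made-up observations

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)
    sigma = pm.HalfNormal("sigma", sigma=10.0)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=data)
    approx = pm.fit(n=20000, method="advi")  # mean-field variational fit
    trace = approx.sample(1000)              # draws from the approximation

print(trace["mu"].mean(), trace["sigma"].mean())  # should be near 1.0 and 2.0
```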
# MCMC
- See MCMC section in AI/Math and Statistics/Monte Carlo methods
# Code
- #CODE Stan
- #CODE PyMC3 - Probabilistic Programming in Python
- #CODE ArviZ - Exploratory analysis of Bayesian models with Python
- #CODE BayesicFitting - A package for model fitting and Bayesian evidence calculation
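- A minimal end-to-end sketch of the PyMC3 + ArviZ workflow (toy model, assumptions mine): sample a posterior with NUTS, then explore it with ArviZ:

```python
import numpy as np
import pymc3 as pm
import arviz as az

data = np.random.normal(0.0, 1.0, size=100)  # toy observations

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=5.0)            # prior on the mean
    pm.Normal("obs", mu=mu, sigma=1.0, observed=data)  # likelihood
    idata = pm.sample(1000, tune=1000, return_inferencedata=True)

print(az.summary(idata))  # posterior mean, credible interval, R-hat, ESS
az.plot_posterior(idata)  # exploratory posterior plot
```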
# Courses
# Books
- #BOOK Think Bayes - Bayesian Statistics Made Simple (Downey 2012)
	- An introduction to Bayesian statistics using computational methods
- #BOOK Probabilistic Programming and Bayesian Methods for Hackers
- #BOOK Bayesian Modeling and Computation in Python (Martin 2021, CRC)