Recurrent Neural Networks (RNNs)
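Quick refresher before the links (my own minimal sketch, not taken from any resource below): a vanilla RNN carries a hidden state `h` across time steps via `h_t = tanh(W_x x_t + W_h h_{t-1} + b)`. All variable names and sizes here are illustrative assumptions.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    """One step of a vanilla RNN: h_t = tanh(W_x x_t + W_h h_prev + b)."""
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

# toy dimensions and random weights (illustrative, untrained)
rng = np.random.default_rng(0)
input_dim, hidden_dim = 3, 4
W_x = 0.1 * rng.standard_normal((hidden_dim, input_dim))
W_h = 0.1 * rng.standard_normal((hidden_dim, hidden_dim))
b = np.zeros(hidden_dim)

# unroll over a short random sequence; h carries context forward
h = np.zeros(hidden_dim)
sequence = [rng.standard_normal(input_dim) for _ in range(5)]
for x_t in sequence:
    h = rnn_step(x_t, h, W_x, W_h, b)
```

The same weights are reused at every step, which is what makes the network "recurrent" (and what makes long-range credit assignment hard, motivating LSTM/GRU gating covered in the links below).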
# Resources
- https://en.wikipedia.org/wiki/Recurrent_neural_network
- https://github.com/kjw0612/awesome-rnn
- Recurrent Neural Networks cheatsheet
- Tensorflow, DL and RNNs without a PhD
- http://www.abigailsee.com/2017/04/16/taming-rnns-for-better-summarization.html
- http://karpathy.github.io/2015/05/21/rnn-effectiveness/
- https://towardsdatascience.com/the-fall-of-rnn-lstm-2d1594c74ce0
- https://www.kaggle.com/thebrownviking20/intro-to-recurrent-neural-networks-lstm-gru
- https://towardsdatascience.com/illustrated-guide-to-lstms-and-gru-s-a-step-by-step-explanation-44e9eb85bf21
- 4 Sequence Encoding Blocks You Must Know Besides RNN/LSTM in Tensorflow
- When Recurrent Models Don’t Need to be Recurrent (recurrent vs feed-forward models)
- Deep Learning: No, LSTMs Are Not Dead!
# References
- #PAPER Neural Turing Machines (Graves 2014)
- #PAPER Attention and Augmented Recurrent Neural Networks (Olah 2016)
- #PAPER Engineering Extreme Event Forecasting at Uber with Recurrent Neural Networks (Laptev 2017)
- #PAPER Deep and Confident Prediction for Time Series at Uber (Zhu 2017)
	- https://eng.uber.com/neural-networks-uncertainty-estimation/
	- Introduced a new end-to-end Bayesian neural network (BNN) architecture that forecasts time series and estimates uncertainty more accurately at scale
- #PAPER Neural Machine Translation by Jointly Learning to Align and Translate (Bahdanau 2016)
- #PAPER DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks (Salinas 2019)
- #PAPER A Modern Self-Referential Weight Matrix That Learns to Modify Itself (Irie 2022)
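The uncertainty-estimation idea behind the Zhu 2017 bullet above can be sketched with Monte Carlo dropout: keep dropout active at inference, run many stochastic forward passes, and read off the mean as the forecast and the spread as the uncertainty. This toy stand-in is my own assumption-laden illustration (a fixed linear "network", made-up weights), not the paper's encoder-decoder architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_forecast(x, drop_p=0.5):
    """Stand-in for a network forward pass with dropout left ON at inference.

    Weights and drop_p are illustrative, not from the paper.
    """
    w = np.array([0.8, -0.3, 0.5])
    mask = rng.random(w.shape) >= drop_p       # random dropout mask per pass
    return float((w * mask) @ x / (1 - drop_p))  # inverted-dropout rescaling

x = np.array([1.0, 2.0, 0.5])
samples = np.array([noisy_forecast(x) for _ in range(200)])
mean, std = samples.mean(), samples.std()  # point forecast + uncertainty estimate
```

Each pass drops a different random subset of weights, so the spread of `samples` approximates model uncertainty without any architectural change.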