Multi-task learning
Multi-task learning is a subfield of machine learning in which multiple learning tasks are solved simultaneously, exploiting commonalities and differences across tasks.
# Resources
- https://en.wikipedia.org/wiki/Multi-task_learning
- Multitask learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel with a shared representation; what is learned for each task can help the other tasks be learned better.
- Multi-task learning works because the regularization induced by requiring an algorithm to perform well on a related task can be superior to regularization that prevents overfitting by penalizing all complexity uniformly.
- An Overview of Multi-Task Learning in Deep Neural Networks
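The shared-representation idea above can be sketched in a few lines of NumPy: two related regression tasks share one hidden layer ("hard parameter sharing"), each with its own linear head, and the shared weights receive gradients from both tasks. This is a minimal illustrative sketch on synthetic data, not taken from any of the references; all names and hyperparameters are assumptions.

```python
# Minimal sketch of hard parameter sharing: two tasks, one shared hidden
# layer, separate output heads, trained jointly by gradient descent.
# Synthetic data and all hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Two related tasks generated from the same latent linear signal
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=(5,))
y_task_a = X @ w_true + 0.1 * rng.normal(size=200)          # task A
y_task_b = 2.0 * (X @ w_true) + 0.1 * rng.normal(size=200)  # related task B

# Shared hidden layer plus one linear head per task
W_shared = rng.normal(size=(5, 8)) * 0.1
head_a = np.zeros(8)
head_b = np.zeros(8)

lr = 0.01
for _ in range(500):
    H = np.tanh(X @ W_shared)          # shared representation
    err_a = H @ head_a - y_task_a
    err_b = H @ head_b - y_task_b
    # Per-task head gradients (mean squared error)
    head_a -= lr * H.T @ err_a / len(X)
    head_b -= lr * H.T @ err_b / len(X)
    # The shared layer accumulates gradients from BOTH tasks --
    # this joint pressure is the inductive bias of multi-task learning
    dH = (np.outer(err_a, head_a) + np.outer(err_b, head_b)) * (1 - H**2)
    W_shared -= lr * X.T @ dH / len(X)

H = np.tanh(X @ W_shared)
mse_a = np.mean((H @ head_a - y_task_a) ** 2)
mse_b = np.mean((H @ head_b - y_task_b) ** 2)
print(f"task A MSE: {mse_a:.3f}, task B MSE: {mse_b:.3f}")
```

Because the heads start at zero, the initial error on each task equals that task's target variance; joint training drives both well below it, with each task's gradient shaping features the other can reuse.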
# Courses
- #COURSE CS 330: Deep Multi-Task and Meta Learning (Stanford)
- #COURSE Multi-task learning lecture (CS 152 Neural Networks, Harvey Mudd College)
- #COURSE Multi-task learning lecture (DeepLearningAI, Andrew Ng)
# References
- #PAPER Multitask Learning (Caruana 1997)
- #PAPER Multi-Task Learning with Deep Neural Networks: A Survey (Crawshaw 2020)
- #PAPER Which Tasks Should Be Learned Together in Multi-task Learning? (Standley 2020)
- #PAPER Multi-task UNet: Jointly Boosting Saliency Prediction and Disease Classification on Chest X-ray Images (Zhu 2022)