Meta-learning the Learning Trends Shared Across Tasks
Document Type
Conference Proceeding
Publication Title
32nd British Machine Vision Conference, BMVC 2021
Abstract
Meta-learning stands for 'learning to learn', i.e., learning in a way that generalizes to new tasks. Among meta-learning methods, gradient-based algorithms are a sub-class that excels at quick adaptation to new tasks from limited data, demonstrating an ability to acquire transferable knowledge, a capability central to human learning. However, existing meta-learning approaches rely only on the current task's information during adaptation and do not exploit the meta-knowledge of how similar tasks have been adapted before. To address this gap, we propose a 'path-aware' model-agnostic meta-learning approach. Specifically, our approach not only learns a good initialization (meta-parameters) for adaptation; it also learns an optimal way to adapt these parameters to a set of task-specific parameters, with learnable update directions, learning rates and, most importantly, the way updates evolve over different time-steps. Compared to existing meta-learning methods, our approach offers the following benefits: (a) the ability to learn gradient preconditioning at different time-steps of the inner loop, thereby modeling the dynamic learning behavior shared across tasks, and (b) the capability to aggregate the learning context through direct gradient-skip connections from earlier time-steps, thus avoiding overfitting and improving generalization. We report significant performance improvements on a number of datasets for few-shot classification and regression tasks.
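The abstract's key ingredients (a learned initialization, per-time-step learnable learning rates and gradient preconditioning, and gradient-skip connections from earlier inner-loop steps) can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the function name `inner_loop_adapt` and all parameters are hypothetical, and the example assumes a generic MAML-style inner loop on a simple quadratic loss.

```python
import numpy as np

def inner_loop_adapt(theta, grad_fn, precond, lrs, skip_weights, steps=3):
    """Hypothetical sketch of a 'path-aware' inner loop.

    theta        : meta-learned initialization (the shared starting point)
    grad_fn      : returns the task-loss gradient at the current parameters
    precond[t]   : learnable preconditioning matrix for time-step t
    lrs[t]       : learnable learning rate for time-step t
    skip_weights : skip_weights[t][s] weights the gradient from an earlier
                   step s when forming the update at step t (the
                   'gradient-skip connections' described in the abstract)
    """
    phi = theta.copy()          # task-specific parameters start at theta
    grad_history = []
    for t in range(steps):
        g = grad_fn(phi)
        grad_history.append(g)
        # Aggregate the learning context: the current gradient plus
        # weighted skip connections from all previous time-steps.
        g_agg = sum(skip_weights[t][s] * grad_history[s] for s in range(t + 1))
        # Per-step gradient preconditioning and learning rate.
        phi = phi - lrs[t] * (precond[t] @ g_agg)
    return phi

# Toy usage: gradient of the quadratic loss 0.5 * ||phi - target||^2,
# identity preconditioning, and skip weights that keep only the current step
# (i.e., the update degenerates to plain gradient descent).
target = np.ones(2)
phi_adapted = inner_loop_adapt(
    theta=np.zeros(2),
    grad_fn=lambda p: p - target,
    precond=[np.eye(2)] * 3,
    lrs=[0.5] * 3,
    skip_weights=np.eye(3),
)
```

In a meta-learning setting, `precond`, `lrs`, and `skip_weights` would be trained in the outer loop across tasks, so that the adaptation path itself, not just the initialization, is shared meta-knowledge.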
Publication Date
1-1-2021
Recommended Citation
J. Rajasegaran et al., "Meta-learning the Learning Trends Shared Across Tasks," in 32nd British Machine Vision Conference, BMVC 2021, Jan. 2021.