Thomas Bouvier

April 23, 2024 · 2 min

I will be giving a talk at JLESC 16 @ Kobe

I will present our work in progress, titled “Efficient Distributed Continual Learning for Steering Experiments in Real-Time”, at JLESC 16. This talk is a progress update on the JLESC project “Towards Continual Learning at Scale”, which we launched in 2022. The full program is available here.

Efficient Distributed Continual Learning for Steering Experiments in Real-Time - Project Update

Deep learning has emerged as a powerful method for extracting valuable information from large volumes of data. However, when new training data arrives continuously (i.e., is not fully available from the beginning), incremental training suffers from catastrophic forgetting (i.e., new patterns are reinforced at the expense of previously acquired knowledge). Training from scratch each time new training data becomes available would result in extremely long training times and massive data accumulation.

Rehearsal-based continual learning has shown promise for addressing the catastrophic forgetting challenge, but research to date has not addressed performance and scalability. To fill this gap, we propose an approach based on a distributed rehearsal buffer that efficiently complements data-parallel training on multiple GPUs to achieve high accuracy, short runtime, and scalability. It leverages a set of buffers (local to each GPU) and uses several asynchronous techniques for updating these local buffers in an embarrassingly parallel fashion, all while handling the communication overheads necessary to augment input mini-batches (groups of training samples fed to the model) using unbiased, global sampling.

After evaluating our approach on classification problems, we further propose a generalization of rehearsal buffers to support generative learning tasks, as well as more advanced rehearsal strategies (notably dark experience replay, leveraging knowledge distillation). We illustrate these extensions with a real-life HPC streaming application from the domain of ptychographic image reconstruction, in which experiments need to be steered in real-time.
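To give a flavor of the rehearsal idea, here is a minimal single-process sketch of a per-GPU buffer that retains an unbiased subset of the stream (via reservoir sampling) and mixes retained samples into incoming mini-batches. All names, sizes, and the sampling strategy are illustrative assumptions for exposition; they are not the project's actual API, which additionally distributes buffers across GPUs and performs updates asynchronously.

```python
import random

class RehearsalBuffer:
    """Illustrative rehearsal buffer (hypothetical API, not the project's).

    Reservoir sampling keeps a uniformly random subset of all samples
    seen so far, which approximates the unbiased sampling mentioned
    in the abstract on a single worker."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.samples = []   # retained (input, label) pairs
        self.seen = 0       # total samples observed so far

    def update(self, batch):
        # Standard reservoir sampling: each observed sample ends up
        # in the buffer with probability capacity / seen.
        for item in batch:
            self.seen += 1
            if len(self.samples) < self.capacity:
                self.samples.append(item)
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.samples[j] = item

    def augment(self, batch, k):
        # Mix up to k rehearsed samples into the incoming mini-batch,
        # so the model keeps revisiting older data.
        k = min(k, len(self.samples))
        return batch + random.sample(self.samples, k)

# Simulate a stream of 10 mini-batches of 32 samples each.
buf = RehearsalBuffer(capacity=100)
for step in range(10):
    batch = [(f"x{step}_{i}", step) for i in range(32)]
    train_batch = buf.augment(batch, k=16)  # rehearse past samples
    buf.update(batch)                       # then record the new ones
```

In the distributed setting described above, each GPU would hold its own such buffer and update it in an embarrassingly parallel fashion, with communication only for the global, unbiased sampling step that augments mini-batches.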