Survey
Learning to Prompt for Continual Learning
DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning
Generating Instance-level Prompts for Rehearsal-free Continual Learning
Steering Prototype with Prompt-tuning for Rehearsal-free Continual Learning
CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning
S-Prompts Learning with Pre-trained Transformers: An Occam’s Razor for Domain Incremental Learning
Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning
Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need
Last updated: 2024/07/05, 15:24:13