Reza Bayat, Mohammad Pezeshki, Elvis Dohmatob, David Lopez-Paz, Pascal Vincent [h/t Charles / @reivers]
Abstract
Neural networks often learn simple explanations that fit the majority of the data while memorizing
exceptions that deviate from these explanations. This behavior leads to poor generalization when
the learned explanations rely on spurious correlations. In this work, we formalize the interplay
between memorization and generalization, showing that spurious correlations lead to particularly
poor generalization when they are combined with memorization. Memorization can reduce training
loss to zero, leaving no incentive to learn robust, generalizable patterns. To address this, we propose
memorization-aware training (MAT), which uses held-out predictions as a signal of memorization
to shift a model’s logits. MAT encourages learning robust patterns invariant across distributions,
improving generalization under distribution shifts.
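The abstract only describes MAT at a high level: held-out predictions serve as a memorization signal that shifts the model's logits during training. Below is a minimal, hypothetical sketch of what such a logit-shifting loss could look like in PyTorch. The function name `mat_loss`, the `heldout_probs` input, and the `tau` scale are illustrative assumptions, not the authors' implementation; see the linked paper for the actual method.

```python
# Hedged sketch of a memorization-aware, logit-shifted loss (illustrative only).
import torch
import torch.nn.functional as F

def mat_loss(logits, targets, heldout_probs, tau=1.0):
    """Cross-entropy with logits shifted by held-out predictions.

    logits:        (batch, num_classes) outputs of the model being trained
    targets:       (batch,) ground-truth class indices
    heldout_probs: (batch, num_classes) class probabilities for these same
                   examples from a model that never trained on them (e.g.
                   cross-validated predictions); low confidence on the true
                   class suggests the example can only be fit by memorization
    tau:           assumed hyperparameter scaling the logit shift
    """
    # Adding the log of held-out probabilities to the logits means examples the
    # held-out model already predicts confidently contribute little gradient,
    # so the training loss cannot be driven to zero by memorization alone and
    # pressure remains to learn patterns that hold across the held-out split.
    shifted = logits + tau * torch.log(heldout_probs.clamp_min(1e-8))
    return F.cross_entropy(shifted, targets)
```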
Source:
https://arxiv.org/pdf/2412.07684v1