Wednesday, January 4, 2023

The overfitted brain: Dreams evolved to assist generalization

Erik Hoel (May 2021)


Dreaming remains a mystery to neuroscience. While various hypotheses of why brains evolved nightly dreaming have been put forward, many of these are contradicted by the sparse, hallucinatory, and narrative nature of dreams, a nature that seems to lack any particular function. Recently, research on artificial neural networks has shown that during learning, such networks face a ubiquitous problem: that of overfitting to a particular dataset, which leads to failures in generalization and therefore poor performance on novel datasets. Notably, the techniques that researchers employ to rescue overfitted artificial neural networks generally involve sampling from an out-of-distribution or randomized dataset. The overfitted brain hypothesis is that the brains of organisms similarly face the challenge of fitting too well to their daily distribution of stimuli, causing overfitting and poor generalization. By hallucinating out-of-distribution sensory stimulation every night, the brain is able to rescue the generalizability of its perceptual and cognitive abilities and increase task performance.
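To make the overfitting problem the abstract refers to concrete, here is a minimal, self-contained sketch in plain NumPy. The target function, sample sizes, and polynomial degrees are illustrative assumptions of mine, not taken from the paper; the point is only to show training error and test error diverging as a model fits its training sample too closely.

```python
# A self-contained sketch of overfitting: a flexible model memorizes a small
# noisy training sample (training error near zero) while its error on fresh
# samples from the same distribution grows. Function, sample sizes, and
# polynomial degrees are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def signal(x):
    return np.sin(2 * np.pi * x)  # the "true" structure behind the data

# Small noisy training set (the "daily distribution of stimuli") and a
# held-out test set drawn from the same distribution.
x_train = rng.uniform(0, 1, 20)
y_train = signal(x_train) + rng.normal(0, 0.1, x_train.size)
x_test = rng.uniform(0, 1, 200)
y_test = signal(x_test) + rng.normal(0, 0.1, x_test.size)

for degree in (3, 12):
    coeffs = np.polyfit(x_train, y_train, degree)  # least-squares polynomial fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")

# The low-degree fit generalizes; the high-degree fit drives training error
# toward zero while test error grows, the train/test divergence described above.
```

The widening gap between training and test error in the high-degree fit is the same divergence that the summary below uses as the operational signature of overfitting.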


Summary

Understanding of the evolved biological function of sleep has advanced considerably in the past decade. However, no equivalent understanding of dreams has emerged. Contemporary neuroscientific theories often view dreams as epiphenomena, and many of the proposals for their biological function are contradicted by the phenomenology of dreams themselves. Now, the recent advent of deep neural networks (DNNs) has finally provided a novel conceptual framework within which to understand the evolved function of dreams. Notably, all DNNs face the issue of overfitting as they learn, which occurs when performance on the training dataset improves but the network fails to generalize (often measured by the divergence of performance on training versus testing datasets). This ubiquitous problem in DNNs is often solved by modelers via “noise injections” in the form of noisy or corrupted inputs. The goal of this paper is to argue that the brain faces a similar challenge of overfitting and that nightly dreams evolved to combat the brain's overfitting during its daily learning. That is, dreams are a biological mechanism for increasing generalizability via the creation of corrupted sensory inputs from stochastic activity across the hierarchy of neural structures. Sleep loss, specifically dream loss, leads to an overfitted brain that can still memorize and learn but fails to generalize appropriately. Herein, this “overfitted brain hypothesis” is explicitly developed and then compared and contrasted with contemporary neuroscientific theories of dreams. Existing evidence for the hypothesis is surveyed within both neuroscience and deep learning, and a set of testable predictions is put forward that can be pursued both in vivo and in silico.
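As a hedged illustration of the “noise injection” fix the summary mentions, the sketch below (assuming PyTorch; the network size, noise level, and corruption rate are arbitrary choices of mine, not the paper's) trains a small network on inputs that are randomly corrupted with noise, the standard regularization move that the overfitted brain hypothesis compares to dream-like, out-of-distribution stimulation.

```python
# A hedged sketch (assuming PyTorch) of "noise injection": training on inputs
# that are randomly perturbed with Gaussian noise. Network size, noise level,
# and corruption rate are arbitrary illustrative choices.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def corrupt(x, noise_std=0.5, p=0.3):
    """Perturb a random subset of samples with Gaussian noise
    (a crude stand-in for out-of-distribution, dream-like input)."""
    mask = (torch.rand(x.size(0), 1) < p).float()
    return x + mask * noise_std * torch.randn_like(x)

# Toy data standing in for the daily distribution of stimuli.
x = torch.randn(256, 20)
y = (x[:, 0] > 0).long()

for step in range(200):
    optimizer.zero_grad()
    logits = model(corrupt(x))       # learn from corrupted ("dreamed") inputs
    loss = loss_fn(logits, y)
    loss.backward()
    optimizer.step()
```

In practice the same role is often played by dropout, data augmentation, or domain randomization; the point is only that feeding a learner corrupted, out-of-distribution inputs is the routine remedy for overfitting, which is the analogy the hypothesis draws to nightly dreaming.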
