Wednesday, October 15, 2025

The Impact of Artificial Intelligence on Human Thought

Rénald Gesnot (2025)

https://arxiv.org/pdf/2508.16628

h/t @reiver / Charles

"Beyond describing the risks of atrophy, it is crucial to identify and promote individual and collective strategies to maintain and strengthen critical thinking, creativity, and cognitive diversity in the face of AI’s omnipresence. Exploring effective “cognitive hygiene” practices is essential."

"Support research on “pro-cognitive” AI: It is imperative to actively encourage the design and experimentation of AI systems that, by their very design, stimulate active cognitive engagement, intellectual curiosity, and critical thinking, rather than fostering passivity."

"promote AI as an amplifier—not a substitute"

-----

"One of the key phenomena between cognitive standardization and manipulation is the perception of AI as “conscious” or as a “human expert.” Anthropomorphism—the human tendency to attribute intentionality and emotions to machines—exacerbates this illusion. As Placani notes, anthropomorphism in AI artificially amplifies its capabilities and biases our moral judgments toward it. In other words, we overestimate what a chatbot “understands” and what it is capable of. This belief reinforces the trust we place in its answers. Guingrich and Graziano remind us that the problem is not so much whether AI is conscious, but that users perceive it as such. This attribution of consciousness activates “human mental schemas” during interaction, with two notable consequences: on the one hand, it inclines the user to treat AI as a humanlike interlocutor (demanding coherence, intention); on the other hand, the behaviors and judgments we reserve for it tend to spill over into our interhuman interactions. Put differently, considering AI as “alive” subtly alters our general attitudes (e.g., reducing our empathy or vigilance toward others) without our full awareness."

"The user, little inclined to challenge a “nice speech” delivered by an AI perceived as wise, and victim of confirmation bias as well as anthropomorphic credulity, becomes an easy receptacle for content standardized by algorithms. Conversely, algorithmic manipulation (filtering, personalization) can reinforce the belief that a system “understands us,” thus closing the loop. "
