Cal Newport:
"I mention these two examples because when we talk about AI, they present two differing styles.
In Somers’s thoughtful article, we experience a fundamentally modern approach. He looks inside the proverbial black box to understand the actual mechanisms within LLMs that create the behavior he observed. He then uses this understanding to draw interesting conclusions about the technology.
Weinstein’s approach, by contrast, is fundamentally pre-modern in the sense that he never attempts to open the box and ask how the model actually works. He instead observed its behavior (it’s fluent with language), crafted a story to explain this behavior (maybe language models operate like a child’s mind), and then extrapolated conclusions from his story (children eventually become autonomous and conscious beings, therefore language models will too).
This is not unlike how pre-modern people would tell stories to explain natural phenomena, and then react to the implications of their tales; e.g., lightning comes from the Gods, so we need to make regular sacrifices to keep the Gods from striking us with a bolt from the heavens.
Language model-based AI is an impressive technology that is accompanied by implications and risks that will require cool-headed responses. All of this is too important for pre-modern thinking. When it comes to AI, it’s time to start our most serious conversations by thinking inside the box."