Cal Newport:
"
- ChatGPT is almost certainly not going to take your job. Once you understand how it works, it becomes clear that ChatGPT’s functionality is crudely reducible to the following: it can write grammatically correct text about an arbitrary combination of known subjects in an arbitrary combination of known styles, where “known” means it encountered it sufficiently many times in its training data. This ability can produce impressive chat transcripts that spread virally on Twitter, but it’s not useful enough to disrupt most existing jobs. The bulk of the writing that knowledge workers actually perform tends to involve bespoke information about their specific organization and field. ChatGPT can write a funny poem about a peanut butter sandwich, but it doesn’t know how to write an effective email to the Dean’s office at my university with a subtle question about our hiring policies.
- ChatGPT is absolutely not self-aware, conscious, or alive by any reasonable definition of these terms. The large language model that drives ChatGPT is static. Once it’s trained, it does not change; it’s a collection of simply-structured (though massive in size) feed-forward neural networks that do nothing but take in text as input and spit out new words as output. It has no malleable state, no updating sense of self, no incentives, no memory. It’s possible that we might one day create a self-aware AI (keep an eye on this guy), but if such an intelligence does arise, it will not be in the form of a large language model.
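The "static model" point above can be made concrete with a toy sketch (this is a hypothetical stand-in, not a real language model): a trained LLM behaves like a fixed, pure function from a token sequence to a next token, and any apparent "memory" in a chat lives entirely in the growing transcript that gets fed back in, never inside the model itself.

```python
# Toy stand-in for a frozen language model. FROZEN_WEIGHTS plays the role
# of the billions of fixed parameters: set once at training time, never
# modified afterward.
FROZEN_WEIGHTS = {
    ("hello",): "world",
    ("hello", "world"): "!",
}

def next_token(tokens):
    """A pure function of its input: the same tokens in always produce
    the same token out. Nothing inside the 'model' changes between calls."""
    return FROZEN_WEIGHTS.get(tuple(tokens), "<eos>")

def generate(prompt_tokens, max_new=5):
    """Autoregressive generation: repeatedly call the stateless model on
    the transcript so far. The transcript is the only thing that grows."""
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        tok = next_token(tokens)
        if tok == "<eos>":
            break
        tokens.append(tok)  # the only 'state' is the text itself
    return tokens
```

Calling `generate(["hello"])` yields `["hello", "world", "!"]`, and calling it again yields the identical result: there is no hidden internal state that the first call could have updated.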
... check out my article, if you’re able.
[Conclusion from the article:] “It’s hard to predict exactly how these large language models will end up integrated into our lives going forward, but we can be assured that they’re incapable of hatching diabolical plans, and are unlikely to undermine our economy,” I wrote. “ChatGPT is amazing, but in the final accounting it’s clear that what’s been unleashed is more automaton than golem.”"