Discussion about this post

I speak French, but for complex things I prefer English. I find the poor academic's theory rather silly. He is saying that because we invented a prediction engine (using vast amounts of existing HUMAN text as a training base) that is capable of generating (when prompted) complete "sentences" that make linguistic sense, therefore human language works the way LLMs work, and therefore so do our brains?! This type of thinking is recursive; it is the same circular reasoning that caused the financial crisis of 2008. Human thought and language have patterns, obviously, because pattern matching is our superpower (evolutionarily speaking). Training an LLM to mimic it was inevitable (computers are pattern matchers too, as are neural networks). So we train an LLM (a large MODEL of the LANGUAGE) to mimic human conversation and are surprised when it works?! Okay, sure. But to claim that the MODEL defines how human language works (the SOURCE DATA) is laughable! If this were even remotely true, all writers and non-scientific academics would be out of work, because LLMs would do language better than humans.

Nick Usborne

Thierry, you make my brain hurt. :) At the simplest level, when people discount LLMs as simply guessing/predicting/anticipating the next word, my first response has always been to ask, "Well, isn't that what we do?" We don't hold an entire paragraph or page in our heads and then carefully write it down, word for word. We proceed one word at a time, with a sense of where we are going, but without the exact words until it is "their turn".
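
For readers who want to see what "predicting the next word" means mechanically, here is a minimal sketch, not from either commenter and purely illustrative: a toy bigram model in Python that generates text one word at a time by always emitting the most frequent follower it has observed. Real LLMs condition on the full preceding context with learned neural weights rather than raw counts, but the generation loop has the same one-word-at-a-time shape Nick describes.

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word follows which in a tiny corpus,
# then generate by repeatedly emitting the most frequent observed follower.
# Real LLMs use learned weights over the whole context, not raw counts,
# but the loop below is the same "one word at a time" process.

corpus = (
    "we proceed one word at a time with a sense of where we are going "
    "but not the exact words until it is their turn"
).split()

# For each word, count how often each other word follows it.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def generate(start, length=8):
    """Greedily pick the most frequent follower of the last word emitted."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break  # no observed follower: stop generating
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("we"))  # likely output: "we proceed one word at a time with a"
```

The sketch never plans a whole sentence; each word is chosen only from where the text currently stands, which is the point of contention in the thread.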

5 more comments...
