AI's New Knowledge: The Rise of Stochastic Thinking
The Epistemology of AI: How LLMs Resemble Human System One Cognition
AI's Knowledge is Not Human Knowledge
AI pioneer Yann LeCun argues that Large Language Models (LLMs) like GPT-4 are, in the phrase coined by computational linguist Emily Bender and colleagues, "stochastic parrots." They do not think or possess knowledge in the way humans do. Instead, they are complex statistical models that predict the next word in a sequence based on an exhaustive analysis of text. GPT-4's training data is undisclosed, but its predecessor GPT-3 was trained on roughly 45 terabytes of raw text and has 175 billion parameters, a scale so vast that, by one estimate, a person reading eight hours a day would need over 22,000 years to consume the same amount of information.
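To make "predict the next word" concrete, here is a deliberately tiny sketch in Python. It builds a bigram table (which word follows which) from a toy corpus and samples continuations from those counts. Real LLMs replace the counting with a deep neural network over subword tokens, but the training objective, guessing the next token, is the same; the corpus and function names here are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a corpus, then sample continuations from those counts.
corpus = (
    "the brand builds trust . the brand builds loyalty . "
    "trust builds the brand ."
).split()

# Map each word to a frequency count of the words that follow it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()), k=1)[0]

# Generate a short continuation, one statistical "guess" at a time.
word = "the"
print(word, end="")
for _ in range(6):
    word = predict_next(word)
    print(" " + word, end="")
print()
```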
LeCun suggests that systems like GPT-4 at best approximate the functions of the brain's Wernicke's and Broca's areas, the regions responsible for language processing. That is a natural target, because language is the primary medium for transmitting human knowledge: we use speech and writing to communicate information, feelings, and abstract concepts.
While other forms of knowledge exist, such as motor skills, language is the foundation for how we share and understand information. Similarly, AI models trained on text excel at processing and generating language. They can analyze an image and return a text description, but the output is text rather than another image, underscoring their primary function as language-based systems.
The New Turing Test and System One Cognition
The traditional Turing Test, which asked whether a computer could pass as human in conversation, has become obsolete with the advent of LLMs. GPT-4 has passed a U.S. bar exam and answered complex medical questions accurately, not because it "knows" the answers through conscious reasoning, but because it is a highly sophisticated statistical "guesser."
This "guessing" ability is both the promise and the problem for brand research.
LeCun compares this form of AI cognition to Daniel Kahneman's System One thinking: a fast, intuitive, heuristic mode of thought. Because GPT "guesses" at answers from patterns in its vast training data, it is a powerful model for simulating this kind of quick, intuitive human thinking. It can be used to simulate consumer behavior and build hypotheses, producing distinct, strikingly plausible opinions for a variety of personas.
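As a hedged sketch of what persona simulation can look like in practice, the snippet below asks an LLM the same question under different persona instructions. It assumes the OpenAI Python client (v1 interface) with an API key in the environment; the personas, the question, and the model name are all illustrative, not a prescribed methodology.

```python
from openai import OpenAI  # assumes the openai v1 Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative personas; in practice these would come from segmentation work.
personas = [
    "a price-sensitive student who shops mostly online",
    "a brand-loyal retiree who values in-store service",
]

question = "How would you react to this brand dropping its loyalty program?"

for persona in personas:
    response = client.chat.completions.create(
        model="gpt-4",  # model name is illustrative
        messages=[
            {"role": "system", "content": f"Answer as {persona}."},
            {"role": "user", "content": question},
        ],
    )
    print(f"--- {persona} ---")
    print(response.choices[0].message.content)
```

The value here is not that any single answer is "true," but that the spread of answers across personas can surface hypotheses worth testing with real respondents.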
This low-cost, high-speed approach is why a major market research company is already testing a GPT-type system to analyze tabular data and summarize research.
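A minimal sketch of that kind of tabular summarization, assuming pandas and the same illustrative client as above; the survey figures and prompt wording are invented for the example.

```python
import pandas as pd
from openai import OpenAI

client = OpenAI()

# Invented example data standing in for real survey cross-tabs.
table = pd.DataFrame({
    "segment": ["18-24", "25-44", "45+"],
    "aware_of_brand_pct": [62, 71, 55],
    "would_recommend_pct": [40, 58, 49],
})

prompt = (
    "Summarize the key patterns in this survey table for a "
    "non-technical brand manager:\n\n" + table.to_string(index=False)
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```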
This suggests a new way of framing AI's knowledge: it is statistical, language-based knowledge that effectively simulates System One thinking. That framing is particularly relevant for brand strategy, since brands often aim to trigger exactly this quick, heuristic mode of human response.
A New Ethical Imperative
Ignoring the arrival of these systems is a futile exercise. They are here, and their impact on society is profound. The challenge lies not in whether to use them, but in how. As the poet Louis MacNeice wrote, "The glass is falling hour by hour, the glass will fall for ever, but if you break the bloody glass you won't hold up the weather." Trying to stop this technological shift is impossible.
As an industry, we must engage with this new form of knowledge intelligently, creatively, and ethically. The original Luddites were not anti-technology; they were advocates for the ethical use of machinery to ensure quality goods and well-qualified workers. We must adopt a similar stance.
The market is already being flooded with "slapdash AI-only solutions" that, if left unchecked, could discredit the entire marketing sector. The path forward is to embrace AI as a tool, but with the vital human overlay of ethical consideration and creative inspiration.
Bottom Line
AI’s statistical knowledge and ability to simulate human System One thinking make it an invaluable tool for brand strategy.
The responsibility falls on us to use this technology with an ethical and creative vision, ensuring it enhances, rather than diminishes, human insight.