Consciousness is complicated.
Currently, there is no proof that any single theory is correct.
Several theories exist and are often hotly debated.
AI research is advancing this field, and consciousness research is, in turn, informing AI research.
THE THEORIES
Predictive Processing Theory (PPT) suggests that the brain creates a model of the world and constantly updates it by minimizing prediction errors between expected and incoming sensory information.
In this view, consciousness emerges when the brain refines these models in a way that allows for flexible, high-level predictions.
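As a loose illustration (not tied to any specific neuroscience model), the core loop of prediction-error minimization can be sketched as a simple delta-rule update, where an internal estimate is repeatedly nudged toward noisy sensory observations. The signal value, noise level, and learning rate below are made-up illustrative numbers:

```python
import random

def update_estimate(estimate, observation, learning_rate=0.1):
    """Move the internal estimate toward the observation by a
    fraction of the prediction error (a delta-rule update)."""
    prediction_error = observation - estimate
    return estimate + learning_rate * prediction_error

random.seed(0)
true_signal = 5.0   # the real state of the world
estimate = 0.0      # the model's initial expectation

for _ in range(200):
    # Each sensory sample is the true signal corrupted by noise.
    observation = true_signal + random.gauss(0, 0.5)
    estimate = update_estimate(estimate, observation)

print(round(estimate, 1))  # converges near the true signal
```

The estimate approaches the true signal because every update shrinks the remaining prediction error by a fixed fraction, which is the basic intuition behind "minimizing prediction errors."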
Predictive Processing Theory (PPT) shares several conceptual similarities with how large language models (LLMs) and AI systems work, particularly in the way both systems generate responses based on patterns and probabilities. While biological brains and artificial intelligence are fundamentally different, the parallels between the two can offer interesting insights.
Key Similarities Between Predictive Processing Theory and AI (LLMs):
1. Prediction-Based Mechanism:
- Predictive Processing: The human brain constantly generates expectations about sensory input and updates them based on real-world feedback to minimize prediction errors.
- LLMs: AI models such as GPT function by predicting the next word (token) from prior context, using probability distributions learned from massive datasets. At inference time the learned weights stay fixed, but the predicted distribution over the next token shifts as new context arrives.
2. Hierarchical Structure:
- Predictive Processing: The brain works in a hierarchical manner, with lower levels processing raw data and higher levels generating abstract concepts and expectations.
- LLMs: Similarly, transformer-based models process data in layers, capturing lower-level syntax and higher-level semantic structures as the input moves through the model.
3. Error Minimization (Bayesian Inference):
- Predictive Processing: The brain constantly updates its internal model by minimizing the gap between expectations and actual sensory inputs, essentially performing a form of Bayesian inference.
- LLMs: AI models optimize their internal weights during training to minimize the difference between predicted and actual outputs (loss function), a process conceptually similar to reducing prediction errors in the brain.
4. Contextual Updating:
- Predictive Processing: Human perception updates with new sensory input, adjusting internal models dynamically based on experience.
- LLMs: AI models dynamically adjust their next prediction based on all previous context in the conversation, much like how humans adjust their expectations based on ongoing experiences.
5. Lack of “True” Understanding:
- Predictive Processing: While humans perceive meaning by comparing sensory input with prior experiences, they sometimes make errors in perception or hold illusions based on faulty predictions.
- LLMs: AI lacks intrinsic understanding but generates outputs based on learned statistical patterns, sometimes leading to hallucinations or errors similar to human misperceptions.
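The prediction and error-minimization points above can be made concrete with a toy sketch of next-token prediction. The vocabulary, logits, and learning rate are made-up illustrative values, not details of any real model; the softmax-plus-cross-entropy gradient step, however, is the standard formulation:

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution over tokens."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, target_index):
    """Loss is high when the model assigns low probability
    to the token that actually came next."""
    return -math.log(probs[target_index])

vocab = ["cat", "dog", "sat", "mat"]   # hypothetical tiny vocabulary
logits = [0.5, 0.2, 1.0, 0.1]          # hypothetical scores for the next token
target = 2                             # suppose the actual next token is "sat"

probs = softmax(logits)
loss_before = cross_entropy(probs, target)

# One gradient step: for softmax + cross-entropy, the gradient with
# respect to each logit is (predicted_prob - one_hot_target), i.e. the
# "prediction error" itself. Training nudges the logits to shrink it.
learning_rate = 1.0
for i in range(len(logits)):
    grad = probs[i] - (1.0 if i == target else 0.0)
    logits[i] -= learning_rate * grad

loss_after = cross_entropy(softmax(logits), target)
print(loss_after < loss_before)  # the loss drops after the update
```

This mirrors the parallel drawn above: the gradient that drives learning is literally the gap between the predicted distribution and what actually occurred.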
Key Differences
1. Embodiment and Multi-sensory Input:
- The human brain integrates data from multiple senses and interacts with the physical world, while LLMs are text-based and lack embodied experience.
2. Intrinsic Motivation and Emotions:
- The brain has built-in survival goals, emotions, and self-generated motivations, while AI operates solely based on mathematical optimization.
3. Long-Term Memory and Adaptation:
- Humans have lifelong learning abilities and adaptive memory, whereas LLMs rely on static training data and adapt only within a session's context window.
Conclusion
While LLMs and the brain's predictive processing share foundational principles of pattern recognition, prediction, and feedback loops, AI models lack the deeply interconnected and embodied nature of biological intelligence.
However, the comparison suggests that advancing AI in directions inspired by predictive processing — such as self-updating models with sensory integration — could bring AI closer to human-like learning.
Integrated Information Theory (IIT), proposed by Giulio Tononi, suggests that consciousness arises from a system's ability to integrate information in a highly interconnected way.
According to IIT, the more a system can connect and process information as a whole, the higher its level of consciousness.
It attempts to quantify consciousness through a measure called "phi" (Φ), representing the degree of information integration within the system.
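Computing phi proper requires analysing a system's full cause-effect structure and is intractable for all but tiny systems, so the sketch below is only a loose illustration of the underlying intuition, not IIT's actual measure. It uses a crude proxy (total correlation): how much information two units carry jointly beyond what they carry separately:

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (in bits) of an empirical distribution."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def total_correlation(pairs):
    """Sum of the parts' entropies minus the whole-system entropy.
    Zero when the parts are independent; positive when the whole
    carries structure beyond its parts -- a rough stand-in for
    'integration', not IIT's phi."""
    xs = [a for a, _ in pairs]
    ys = [b for _, b in pairs]
    return entropy(xs) + entropy(ys) - entropy(pairs)

# Two binary units that always agree: fully integrated.
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
# Two binary units that vary independently: no integration.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(total_correlation(coupled))      # → 1.0 (bit)
print(total_correlation(independent))  # → 0.0
```

The coupled system scores higher because knowing one unit tells you everything about the other, echoing IIT's claim that consciousness tracks how much a system is more than the sum of its parts.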
Global Workspace Theory (GWT), proposed by Bernard Baars and expanded by neuroscientists such as Stanislas Dehaene, views consciousness as a "global workspace" in the brain, where information from different processes is broadcast and made available to various cognitive systems.
Essentially, consciousness acts like a theatre where different unconscious processes compete for attention.
Higher-order theories (HOT) propose that consciousness arises when the brain generates higher-order representations of its own mental states.
In other words, being aware of thoughts is what makes them conscious.