Consciousness: here, there and everywhere?
Plain English Summary
Integrated Information Theory (IIT) is one of the most ambitious attempts to explain consciousness scientifically. It starts with five basic truths about what experience is like and works backward to figure out what kind of physical system could produce it. The key idea: consciousness arises from how deeply a system's parts are causally woven together in ways that cannot be reduced to simpler pieces, measured by a quantity called Phi. Here's where it gets fascinating: IIT explains why your wrinkly cerebral cortex is conscious but your cerebellum, which packs roughly four times as many neurons, is not. It also predicts that no matter how faithfully you simulate a brain on a digital computer, the simulation would experience next to nothing. Consciousness comes in degrees and is likely widespread among living creatures. A practical tool inspired by the theory, the perturbational complexity index, can detect consciousness in sleeping, anaesthetized or brain-injured patients.
Actual Paper Abstract
The science of consciousness has made great strides by focusing on the behavioural and neuronal correlates of experience. However, while such correlates are important for progress to occur, they are not enough if we are to understand even basic facts, for example, why the cerebral cortex gives rise to consciousness but the cerebellum does not, though it has even more neurons and appears to be just as complicated. Moreover, correlates are of little help in many instances where we would like to know if consciousness is present: patients with a few remaining islands of functioning cortex, preterm infants, non-mammalian species and machines that are rapidly outperforming people at driving, recognizing faces and objects, and answering difficult questions. To address these issues, we need not only more data but also a theory of consciousness—one that says what experience is and what type of physical systems can have it. Integrated information theory (IIT) does so by starting from experience itself via five phenomenological axioms: intrinsic existence, composition, information, integration and exclusion. From these it derives five postulates about the properties required of physical mechanisms to support consciousness. The theory provides a principled account of both the quantity and the quality of an individual experience (a quale), and a calculus to evaluate whether or not a particular physical system is conscious and of what. Moreover, IIT can explain a range of clinical and laboratory findings, makes a number of testable predictions and extrapolates to a number of problematic conditions. The theory holds that consciousness is a fundamental property possessed by physical systems having specific causal properties. It predicts that consciousness is graded, is common among biological organisms and can occur in some very simple systems. 
Conversely, it predicts that feed-forward networks, even complex ones, are not conscious, nor are aggregates such as groups of individuals or heaps of sand. Also, in sharp contrast to widespread functionalist beliefs, IIT implies that digital computers, even if their behaviour were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing.
Research Notes
The most influential formal theory treating consciousness as a fundamental, intrinsic property of physical systems. Its panpsychist-adjacent implications — that consciousness is graded and not exclusive to brains — connect directly to the library's core debates about mind-matter interaction, the causal power of consciousness, and whether subjective experience can be reduced to neural computation. Companion to Tononi et al. 2016 (IIT full formalism).
Integrated Information Theory (IIT 3.0) is presented as a principled framework for understanding consciousness. Starting from five phenomenological axioms — intrinsic existence, composition, information, integration, and exclusion — IIT derives postulates about the physical substrates required for experience. The theory identifies consciousness with maximally irreducible integrated information (Phi_max), explaining why the cerebral cortex supports consciousness while the cerebellum, despite having four times as many neurons, does not. IIT predicts that feed-forward networks are unconscious regardless of computational sophistication, that digital simulations of conscious brains would not themselves be conscious, and that consciousness is graded and widespread among biological organisms. The perturbational complexity index (PCI), inspired by IIT, reliably tracks consciousness across sleep, anaesthesia, and brain injury.
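IIT's actual Phi calculus involves searching over all partitions of a system's cause-effect structure and is intractable for all but tiny networks. As a loose intuition only (not the paper's formalism), "integration" can be illustrated with total correlation: how far the joint entropy of a system's parts falls below the sum of their marginal entropies. The toy distributions and function names below are illustrative assumptions, not anything from Tononi & Koch:

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (bits) of a distribution given as value -> count."""
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def total_correlation(joint):
    """Sum of marginal entropies minus joint entropy for two variables.

    Zero iff the two parts are statistically independent; positive when
    the whole carries information beyond its parts. A crude stand-in for
    'integration' -- real Phi compares cause-effect repertoires across
    the minimum-information partition, which this does not do.
    """
    a, b = Counter(), Counter()
    for (x, y), c in joint.items():
        a[x] += c
        b[y] += c
    return entropy(a) + entropy(b) - entropy(joint)

# Toy "integrated" system: two binary nodes whose states are correlated.
integrated = Counter({(0, 0): 45, (1, 1): 45, (0, 1): 5, (1, 0): 5})
# Toy "aggregate": two nodes varying independently, like a heap of sand.
independent = Counter({(0, 0): 25, (0, 1): 25, (1, 0): 25, (1, 1): 25})

print(total_correlation(integrated))   # positive: the whole exceeds its parts
print(total_correlation(independent))  # zero: the system reduces to its parts
```

The contrast mirrors the abstract's claim in miniature: a system whose parts constrain one another scores above zero, while an aggregate of independent elements scores exactly zero, however many elements it contains.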
Links
Related Papers
Companion
- Integrated Information Theory: From Consciousness to Its Physical Substrate — Tononi, Giulio (2016)
- Consciousness in the Universe: Neuroscience, Quantum Space-Time Geometry and Orch OR Theory — Penrose, Roger (2011)
- The CEMI Field Theory: Closing the Loop — McFadden, Johnjoe (2013)
- Can Panpsychism Become an Observational Science? — Matloff, Gregory L (2016)
More in Methodology
Paranormal belief, conspiracy endorsement, and positive wellbeing: a network analysis
Planning Falsifiable Confirmatory Research
Addressing Researcher Fraud: Retrospective, Real-Time, and Preventive Strategies — Including Legal Points and Data Management That Prevents Fraud
Quantum Aspects of the Brain-Mind Relationship: A Hypothesis with Supporting Evidence
Paranormal beliefs and cognitive function: A systematic review and assessment of study quality across four decades of research
📋 Cite this paper
Tononi, Giulio & Koch, Christof (2015). Consciousness: here, there and everywhere? Philosophical Transactions of the Royal Society B: Biological Sciences. https://doi.org/10.1098/rstb.2014.0167
@article{tononi_koch_2015_consciousness_here_there,
title = {Consciousness: here, there and everywhere?},
author = {Tononi, Giulio and Koch, Christof},
year = {2015},
journal = {Philosophical Transactions of the Royal Society B: Biological Sciences},
doi = {10.1098/rstb.2014.0167},
}