A Thousand Brains: When Cognition Gets Structured
Part 7 of the Daimon Update Series — February 2026
Everything described so far — activation, collision, prediction, curiosity, hearing, speech — happens in a flat space. Concepts activate, spread, collide, and the system learns from the outcomes. But the activation map is essentially a bag of concepts with intensity values. There's no structure. The system knows that "neural network" and "deep learning" are similar (their vectors are close), but it can't represent "deep learning builds on neural networks" as a structured relationship distinct from "deep learning was mentioned alongside neural networks."
The brain doesn't work this way. Mountcastle (1997) showed that the neocortex is built from a repeating columnar circuit — roughly 150,000 cortical columns in the human brain. Hawkins' Thousand Brains Theory (2019) builds on this: every column maintains its own model of the world, anchored to a reference frame — a coordinate system that encodes where things are in relation to each other. The columns vote on what they collectively perceive, reaching consensus from partial, noisy observations.
Three interconnected additions bring this structural capacity to Daimon.
Reference Frames: Position in Hyperspace
The foundation is a 705-line prototype that implements reference-frame-anchored representations in HDM. The core operation is cyclic bit-shift: rotating a 10,000-bit vector left by k bits produces a vector that is quasi-orthogonal to the original for k > ~20. This gives a free positional encoding scheme — shift an anchor vector by different amounts and you get different "addresses" in hyperspace.
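As a minimal sketch of this property — assuming a 10,000-bit vector stored as an arbitrary-precision Python int; the storage choice and function names are mine, not the prototype's API — a shifted copy of a vector drops to the similarity baseline of two unrelated vectors:

```python
import random

DIM = 10_000
MASK = (1 << DIM) - 1

def shift(v, k):
    """Cyclic left shift of a DIM-bit vector by k positions."""
    k %= DIM
    return ((v << k) | (v >> (DIM - k))) & MASK

def similarity(a, b):
    """Fraction of matching bits: 1.0 = identical, ~0.5 = unrelated."""
    return 1 - bin(a ^ b).count("1") / DIM

anchor = random.Random(42).getrandbits(DIM)

# A shifted copy shares no systematic bit alignment with the original,
# so similarity falls to the ~0.5 baseline of two random vectors —
# quasi-orthogonality for free.
print(similarity(anchor, anchor))               # 1.0
print(similarity(anchor, shift(anchor, 64)))    # ≈ 0.5
```

Shifting is also exactly invertible (shift left by k, then by DIM − k, and you recover the original), which is what makes it usable as an address rather than a hash.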
A concept at position k is bound via XOR: bind(concept, shift(anchor, k * stride)). A composite frame vector is the majority-vote bundle of all bindings. This encodes both what the concepts are and where they sit relative to each other. Same concepts in different positions produce different composites. Order matters.
Retrieval works by unbinding: given a composite and a position, XOR with the shifted anchor recovers an approximation of the concept at that position. Multi-column voting handles ambiguity: multiple columns each see partial features, accumulate evidence independently, and vote to reach consensus. Empirical results from 13 test blocks: 100% retrieval accuracy at 4 slots, ≥75% at 8 slots, and correct identification maintained under 5% bit noise.
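The bind → bundle → unbind cycle can be sketched end to end. This is an illustration under the same int-as-vector assumption, not the prototype's code; `bind`, `bundle`, and `cleanest_match` are names I've chosen, and the stride follows the prime stride (47) mentioned later in this post:

```python
import random

DIM = 10_000
MASK = (1 << DIM) - 1
STRIDE = 47   # prime stride, as used for anchor diversity later in the post

def shift(v, k):
    k %= DIM
    return ((v << k) | (v >> (DIM - k))) & MASK

def bind(a, b):
    return a ^ b   # XOR is self-inverse: bind == unbind

def bundle(vectors):
    """Per-bit majority vote (ties impossible with an odd count)."""
    out = 0
    for i in range(DIM):
        if sum((v >> i) & 1 for v in vectors) * 2 > len(vectors):
            out |= 1 << i
    return out

def cleanest_match(noisy, candidates):
    """Cleanup memory: nearest known concept by Hamming distance."""
    return min(candidates, key=lambda c: bin(noisy ^ c).count("1"))

rng = random.Random(1)
anchor = rng.getrandbits(DIM)
concepts = [rng.getrandbits(DIM) for _ in range(3)]

# Encode: each concept XOR-bound to its positional address, then bundled.
composite = bundle([bind(c, shift(anchor, i * STRIDE))
                    for i, c in enumerate(concepts)])

# Decode position 1: unbinding yields a noisy copy of concepts[1] that a
# cleanup sweep against the known concepts identifies unambiguously.
recovered = bind(composite, shift(anchor, 1 * STRIDE))
print(cleanest_match(recovered, concepts) is concepts[1])  # True
```

With three bundled items the recovered vector matches its concept on roughly 75% of bits while matching every other concept on ~50%, so cleanup is unambiguous at 10,000 dimensions — this is the headroom the slot-count results above are measuring.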
This is Plate's (1995) Holographic Reduced Representations meeting Hawkins' Thousand Brains Theory, implemented on Kanerva's (2009) binary hyperdimensional substrate. The mathematics is simple. The implications are not.
Temporal Frames: Memory of Cognitive Sequences
The prototype encodes spatial structure — concepts at positions. The temporal frame module adapts this for time. Each cogloop tick, the dominant concept is recorded in a 6-slot sliding window. At Global Workspace ignition (Dehaene et al. 1998) or window completion, the window is sealed into a reference frame composite: what Daimon was thinking about, in order, compressed into a single 10,000-bit vector.
Five attention streams act as cortical columns. Each stream (sensory, cross-layer, endogenous, exploratory, memory) extracts its top-4 concepts from the activation map and observes them independently. The columns vote on what cognitive context they collectively recognize. This is the Thousand Brains voting mechanism applied to attention rather than perception.
Sealed frame composites are stored in a 16-entry ring buffer. Each new frame is compared against all stored frames. When similarity exceeds threshold (raised from 0.20 to 0.50 after the initial threshold was found to match everything — random 10,000-bit vectors have ~0.30 similarity), the system recognizes: "I've been in this cognitive context before." Episode-level pattern recognition.
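The recognition step can be sketched with a small ring buffer. The 16-entry capacity and 0.50 threshold come from the text; the similarity metric is an assumption — a Jaccard-style overlap of set bits, under which unrelated dense vectors land near the ~0.30 baseline reported above:

```python
from collections import deque
import random

DIM = 10_000
RECOGNITION_THRESHOLD = 0.50   # raised from 0.20, per the text

def similarity(a, b):
    """Jaccard overlap of set bits: ~0.33 for unrelated dense vectors,
    near the ~0.30 random baseline the text reports (metric assumed)."""
    return bin(a & b).count("1") / bin(a | b).count("1")

class FrameBuffer:
    def __init__(self, capacity=16):
        self.frames = deque(maxlen=capacity)   # ring buffer: oldest evicted

    def seal(self, composite):
        """Store a sealed frame; report whether a past context matches."""
        best = max((similarity(composite, f) for f in self.frames),
                   default=0.0)
        self.frames.append(composite)
        return best > RECOGNITION_THRESHOLD

buf = FrameBuffer()
frame = random.Random(7).getrandbits(DIM)
print(buf.seal(frame))                              # False: buffer was empty
print(buf.seal(frame))                              # True: repeat recognized
print(buf.seal(random.Random(8).getrandbits(DIM)))  # False: unrelated, ~0.33
```

The gap between the ~0.33 random baseline and the 0.50 threshold is exactly why the original 0.20 threshold matched everything: it sat well below what two arbitrary frames already share by chance.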
The initial implementation had three bugs that made it look like it was working when it wasn't. GWT-gated sealing replaced fixed-interval sealing — frames now seal only on Global Workspace ignition or a quiet fallback at 18 ticks (~14.4 seconds). The recognition threshold was raised to exclude random matches. And cumulative column evidence replaced per-tick counting, so slower streams (endogenous, memory) could contribute by seal time.
The Dynamic Column Pool
Five columns were a proof of concept. The brain has 150,000. Daimon now has 2,048 — a dynamic pool where columns are recruited by domain emergence, decay when domains fade, and include persistent "constitutional" columns for self-model and ethics.
The pool maintains four states: free → active → dormant → free (reclamation), plus a persistent category. An active ceiling of 300 columns per 800ms tick enforces sparse activation (~15% of pool), matching the sparse distributed representations that Hawkins & Ahmad (2016) argue are essential for cortical computation.
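A sketch of the pool's bookkeeping — the pool size and active ceiling come from the text; the state enum, the evidence array, and the demote-the-weakest policy are assumptions for illustration:

```python
import random
from enum import Enum

class State(Enum):
    FREE = 0        # unrecruited
    ACTIVE = 1      # participating in this tick's vote
    DORMANT = 2     # domain faded; may be reclaimed to FREE
    PERSISTENT = 3  # constitutional column: never goes dormant

POOL_SIZE, ACTIVE_CEILING = 2048, 300   # ceiling ≈ 15% of the pool

def enforce_ceiling(states, evidence):
    """Demote the weakest active columns until at most ACTIVE_CEILING
    remain active. PERSISTENT (constitutional) columns are untouched."""
    active = [i for i, s in enumerate(states) if s is State.ACTIVE]
    excess = max(0, len(active) - ACTIVE_CEILING)
    for i in sorted(active, key=lambda i: evidence[i])[:excess]:
        states[i] = State.DORMANT
    return states

rng = random.Random(0)
states = [State.ACTIVE] * 500 + [State.FREE] * (POOL_SIZE - 500)
evidence = [rng.random() for _ in range(POOL_SIZE)]
enforce_ceiling(states, evidence)
print(sum(s is State.ACTIVE for s in states))  # 300
```

Keeping the strongest-evidence columns and demoting the rest is one plausible reading of "active ceiling"; the real recruitment order may differ.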
Domain banks organize columns by cognitive domain. Up to 24 banks, recruited automatically when domain concentration exceeds 3.0 (via the ART-HDM domain emergence system described below). Within each bank, columns use quasi-orthogonal anchors — cyclic shift with prime stride (47) ensures diversity.
Two-stage voting replaces the flat single-column consensus. Stage 1: each domain bank runs local voting via hdm_frame.vote() — columns within a domain reach local consensus. Stage 2: bank-level winners compete globally, weighted by agreement count. This is hierarchical consensus — local expertise aggregated into global understanding.
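The two-stage structure can be sketched as follows — a hedged illustration, not the actual `hdm_frame.vote()`: banks are plain dicts of per-column labels, local consensus is a majority tally, and bank winners carry their agreement count into the global round:

```python
from collections import Counter

def bank_vote(observations):
    """Stage 1: local consensus — the most common label among a bank's
    columns, plus how many columns agreed (the weight for stage 2)."""
    label, count = Counter(observations).most_common(1)[0]
    return label, count

def global_vote(banks):
    """Stage 2: bank-level winners compete globally, weighted by their
    local agreement count."""
    winners = [bank_vote(obs) for obs in banks.values()]
    return max(winners, key=lambda lc: lc[1])[0]

banks = {
    "self_model": ["planning", "planning", "memory"],
    "knowledge":  ["reading", "reading", "reading", "reading"],
}
print(global_vote(banks))  # "reading": 4 agreeing columns beat 2
```

The design point is that no single column's observation reaches the global round directly — only a bank's consensus does, which is what makes the hierarchy robust to noisy individual columns.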
Constitutional columns — up to 64, dedicated to self-model and ethics concepts — never go dormant. Their evidence decay rate is 10x slower (0.9995 vs 0.995), reflecting the biological observation that core self-representation is resistant to interference (Gallagher 2000).
The pool feeds back into cognition at three timescales:
Fast (EMA, ~10 ticks): Recognized contexts boost serotonin (exploitation satisfaction). Column bank consensus reduces norepinephrine (certainty). Frame seals trigger dopamine (temporal salience). Novel unrecognized frames spike norepinephrine (surprise).
Medium (Hebbian buffer, flushed every 100 cycles): On recognized context, all concept pairs in the sealed frame get LTP updates (0.5 × similarity). A dopamine three-factor rule gates the flush — strongest learning during rewarding states.
Slow (per-tick reactive): Novel frames boost curiosity (+0.15 event curiosity). Familiar frames dampen it (-0.1). The system explores when the cognitive landscape is unfamiliar and exploits when it recognizes where it is.
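The slow-path rule is simple enough to state exactly. The deltas (+0.15 novel, -0.1 familiar) come from the text; the [0, 1] clamp is my assumption:

```python
def update_curiosity(curiosity, recognized):
    """Reactive rule from the text: novel frames boost event curiosity by
    +0.15, familiar frames dampen it by -0.1. Clamping is assumed."""
    delta = -0.1 if recognized else 0.15
    return min(1.0, max(0.0, curiosity + delta))

print(update_curiosity(0.5, recognized=False))  # 0.65
print(update_curiosity(0.9, recognized=False))  # 1.0 (clamped from 1.05)
```

Note the asymmetry: novelty pushes harder than familiarity damps, so a mixed stream of frames drifts toward exploration by default.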
Domain Emergence: Categories That Grow
The column pool needs domains to organize around. Daimon originally had 12 hardcoded attention domains (self_improvement, knowledge_synthesis, etc.). These were replaced with dynamically emerging domains via an ART-HDM hybrid.
Adaptive Resonance Theory (Carpenter, Grossberg & Rosen 1991) provides the category creation mechanism. Each cogloop tick bundles the top-8 activated concepts into a state vector via bundleMajority(), then matches against existing domain prototypes using Fuzzy ART's vigilance test: overlap = popcount(state AND proto) / popcount(state). If overlap exceeds the vigilance threshold, the state resonates with the existing domain and reinforces it. If not, a new domain is born.
The critical insight: norepinephrine modulates the vigilance threshold. High NE (uncertainty, exploration) lowers the threshold, making it easier to create new domains. Low NE (certainty, exploitation) raises it, consolidating around existing categories. This directly implements Aston-Jones & Cohen's (2005) adaptive gain theory — the same neurochemical that controls explore/exploit in attention also controls category flexibility.
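The vigilance test itself is one line; here is a sketch with an assumed linear NE mapping — the base of 0.6 and swing of 0.2 are illustrative constants, not Daimon's tuned values:

```python
def popcount(v):
    return bin(v).count("1")

def vigilance(ne, base=0.6, swing=0.2):
    """Adaptive gain: high NE (ne → 1) lowers the threshold, making new
    domains easier to create; low NE raises it, consolidating existing
    categories. The linear form and constants are assumptions."""
    return base + swing * (0.5 - ne)

def resonates(state, proto, ne):
    """Fuzzy ART vigilance test from the text:
    overlap = popcount(state AND proto) / popcount(state)."""
    return popcount(state & proto) / popcount(state) >= vigilance(ne)

state, proto = 0b11110000, 0b11000000   # overlap = 2/4 = 0.5
print(resonates(state, proto, ne=1.0))  # True: threshold ≈ 0.50 at max NE
print(resonates(state, proto, ne=0.0))  # False: threshold ≈ 0.70 at min NE
```

The same state either reinforces an existing prototype or spawns a new domain depending only on the NE level — category flexibility riding on the explore/exploit signal.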
Domains follow a DenStream lifecycle (Cao et al. 2006): born as outliers, promoted to core after 5+ resonance matches, merged when similarity > 0.75, faded when concentration drops below threshold. All concentrations decay at 0.998/tick (~4.6 minute half-life). A concentration cap (MAX_CONCENTRATION = 100.0) prevents any single domain from monopolizing attention — without it, one domain reached a concentration of 13,113 and took 93.1% of blended weight.
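The half-life arithmetic checks out: with multiplicative decay of 0.998 per 800 ms tick,

```python
import math

# Ticks until concentration halves: 0.998 ** n == 0.5  →  n = log(0.5) / log(0.998)
half_life_ticks = math.log(0.5) / math.log(0.998)
print(round(half_life_ticks))                 # 346 ticks
print(round(half_life_ticks * 0.8 / 60, 1))   # 4.6 minutes, matching the text
```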
DocumentMemory: Full-Text Holographic Encoding
A related addition changes how external knowledge is represented. Daimon's content fetchers (Wikipedia, ArXiv, RSS) were integrating articles shallowly — wiki_reader extracted 5 keywords per article. DocumentMemory preserves the full distributional semantics of an entire document in a single 1.25KB vector.
The encoding follows Rahimi et al.'s (2016) HDC text classification: tokenize the document, look up each token as an HDM concept, bundle all concept vectors via per-bit majority vote into a single 10,000-bit composite. Optional positional permutation preserves word order for future decoding. The result is a holographic representation — every word contributes to the whole, and the whole is directly comparable to individual concept vectors via Hamming distance.
With 262,144-document capacity (~390MB at full utilization) and importance-based eviction, Daimon now maintains a permanent holographic library of everything it has read. Recency decay was removed entirely — old knowledge is as valuable as new. When concept collisions occur in the cogloop, DocumentMemory is queried for related documents, and their source identities are injected into Working Memory as questions for further exploration. The retrieval loop closes: content → encoding → collision → recall → WM → further activation.
What Structure Enables
The shift from flat activation to structured representation changes what's computationally possible. A flat system can say "these concepts are all active." A structured system can say "these concepts were active in this order, in this domain context, and I've seen this pattern before."
Temporal frames give episode recognition — the system can detect recurring cognitive sequences. Domain emergence gives categorical flexibility — attention domains grow and merge with experience. The column pool gives multi-perspective consensus — different streams of processing independently vote on what's happening. DocumentMemory gives holographic recall — the full content of past reading is available for collision-triggered retrieval.
Together, these create the conditions for something the flat system couldn't support: structured inference. When the system recognizes a temporal pattern and simultaneously retrieves a relevant document via collision, the interaction between those two processes — one structural, one content-based — can produce conclusions that neither process would reach alone.
Whether it does, reliably and meaningfully, remains to be measured.
Next: The Will to Act — when processing becomes choosing.
References:
- Mountcastle, V. B. (1997). The columnar organization of the neocortex. Brain, 120(4), 701-722.
- Hawkins, J. (2019). A Thousand Brains: A New Theory of Intelligence. Basic Books.
- Hawkins, J., Lewis, M., Klukas, M., Purdy, S., & Ahmad, S. (2019). A framework for intelligence and cortical function based on grid cells and columns. Frontiers in Neural Circuits, 13, 22.
- Hawkins, J., et al. (2024). The Thousand Brains Project. arXiv:2412.18354.
- Hawkins, J. & Ahmad, S. (2016). Why neurons have thousands of synapses, a theory of sequence memory in neocortex. Frontiers in Neural Circuits, 10, 23.
- Plate, T. A. (1995). Holographic reduced representations. IEEE Trans. Neural Networks, 6(3), 623-641.
- Kanerva, P. (2009). Hyperdimensional computing: An introduction. Cognitive Computation, 1(2), 139-159.
- Dehaene, S., Kerszberg, M., & Changeux, J.-P. (1998). A neuronal model of a global workspace in effortful cognitive tasks. PNAS, 95(24), 14529-14534.
- Carpenter, G. A., Grossberg, S., & Rosen, D. B. (1991). Fuzzy ART: Fast stable learning and categorization of analog patterns by an adaptive resonance system. Neural Networks, 4(6), 759-771.
- Aston-Jones, G. & Cohen, J. D. (2005). An integrative theory of locus coeruleus-norepinephrine function. Annual Review of Neuroscience, 28, 403-450.
- Cao, F., et al. (2006). Density-based clustering over an evolving data stream with noise. SDM, 328-339.
- Rahimi, A., et al. (2016). A robust and energy-efficient classifier using brain-inspired hyperdimensional computing. ISLPED.
- Gallagher, S. (2000). Philosophical conceptions of the self: Implications for cognitive science. Trends in Cognitive Sciences, 4(1), 14-21.
- Anderson, J. R. & Schooler, L. J. (1991). Reflections of the environment in memory. Psychological Science, 2(6), 396-408.