Atoms, Symbols, and Intelligence:

A Motivic Framework for AI


HAMID JAVANBAKHT∗
September 28, 2025

Abstract
Artificial intelligence has achieved remarkable progress through optimization
and statistical learning, but such methods often lack invariant structures that guar-
antee stability, interpretability, and integration with deep principles of mathemat-
ics. Recent advances in arithmetic geometry, motivic information theory, and the
geometry of neural networks suggest an alternative foundation for intelligence based
on invariants.
The arithmetic Hodge index theorem and Kontsevich’s theory of arithmetic
supports provide inequality and support structures that persist under deformation,
while motivic zeta functions capture hidden costs of singularities. Marcolli’s mo-
tivic information and Manin’s homotopy-theoretic neural codes extend information
theory into categorical and motivic domains, where entropy and coding acquire
topological meaning. Neural architectures such as cyclic, hyperbolic, and spik-
ing networks instantiate these invariants dynamically, implementing stability and
memory akin to biological systems.
We further clarify the role of “symbols” in recent work by Cavenaghi, Katzarkov,
and Kontsevich, where the term designates algebraic invariants of group actions
rather than cognitive tokens. In this paper, we reinterpret such structures metaphor-
ically as analogues of symbolic reasoning in AI.
By integrating arithmetic invariants, motivic entropy, homotopy codes, and
invariant-based architectures, we propose a motivic theory of intelligence. This
framework views reasoning as the minimization of motivic relative information
subject to arithmetic and categorical constraints, unifying cognition, learning, and
invariance across mathematics and artificial intelligence.


∗DBA Sebastian Ruliad, Isotelesis Inc., Mountain View, CA. isotelesis@[Link]

Contents

1 Introduction
2 Arithmetic and Motivic Background
   2.1 The Arithmetic Hodge Index Theorem
   2.2 Kontsevich’s Arithmetic Supports
   2.3 Motivic Zeta Functions and Log Geometry
   2.4 Summary
3 Motivic Information Theory
   3.1 Entropy and Information Loss in Motives
   3.2 Geometry of Information
   3.3 Relative Information as a Motivic Invariant
   3.4 Bias, Stability, and Learning
   3.5 Summary
4 Neural Codes and Homotopy Structures
   4.1 Neural Codes as Homotopy Types
   4.2 Categorical Models of Neural Information Networks
   4.3 Gamma Spaces and Higher Information Structures
   4.4 δ-Algebras and Homotopy-Theoretic Foundations
   4.5 Summary
5 Geometric and Circular Neural Architectures
   5.1 Circularity in Neural Computation
   5.2 Circular Kernels and Backpropagation
   5.3 Riemannian Geometry in Neural Networks
   5.4 Circularity and Biological Intelligence
   5.5 Summary
6 Hyperbolic and Dynamical Perspectives
   6.1 Hyperbolic Embeddings and Network Geometry
   6.2 Loxodromic Dynamics and Counting Invariants
   6.3 Geometry and Dynamics in Non-Proper Settings
   6.4 Group Actions and Structural Stability
   6.5 Low-Dimensional Topology and Periodic Structures
   6.6 Summary
7 Kontsevich, Arithmetic Structures, and Quantum Insights
   7.1 Fourier Transform and One-Loop Exactness
   7.2 Holomorphic Floer Theory and Exponential Integrals
   7.3 Integer Eigenvalues as Invariants
   7.4 Operads, Motives, and Deformation Quantization
   7.5 Cocycle Constructions and Quantization Theorems
   7.6 Atoms and Symbols
   7.7 Summary
8 Toward a Motivic Theory of Intelligence
   8.1 Invariants as the Basis of Reasoning
   8.2 Motivic Intelligence
   8.3 Cognitive Implications
   8.4 Future Directions
   8.5 Summary
9 Conclusion
A On Atoms, Symbols, and Information Geometry
   A.1 Atoms Meet Symbols
   A.2 Birational Invariants and Quantum Multiplication
   A.3 Ergodic Theory and Symbolic Dynamics
   A.4 F-Manifolds and the Geometry of Information
   A.5 Summary
B Glossary
   B.1 I. Arithmetic and Geometric Invariants
   B.2 II. Motivic and Information-Theoretic Frameworks
   B.3 III. Neural Architectures and Learning
   B.4 IV. Dynamical and Topological Structures
   B.5 V. Cross-Cutting Concepts
1 Introduction
Artificial intelligence has been largely guided by statistical learning and optimization
frameworks. These approaches, though effective in practice, often lack invariant struc-
tures that guarantee stability, interpretability, and integration with deep principles of
mathematics and cognition. Recent developments in arithmetic geometry and motivic
information theory suggest the possibility of grounding reasoning and learning in more
fundamental invariants. The title of this paper, Atoms, Symbols, and Intelligence, alludes
to recent work by Cavenaghi, Katzarkov, and Kontsevich,1 where “symbols” designate
algebraic invariants of group actions; in our framework, we reinterpret this terminology
metaphorically to suggest analogies with symbolic reasoning in AI.
The arithmetic Hodge index theorem, extended to quasi-projective varieties by Ab-
boud,2 introduces local inequalities that capture geometric and arithmetic stability. Kont-
sevich’s theory of arithmetic supports provides another perspective: operators in algebraic
and analytic settings carry irreducible “costs” that persist under deformation, functioning
as invariants of computational processes.3 At the same time, Marcolli and Manin have
argued that information theory can be recast in motivic and categorical terms, linking
entropy to motivic measures4 and neural coding to homotopy types.5
Parallel advances in machine learning have highlighted the importance of network
geometry. Circular and cyclic architectures more closely resemble biological neural net-
works than traditional feed-forward models,6 while hyperbolic embeddings provide effi-
cient representations of hierarchical and scale-free structures.7 Spiking neural networks,
developed within Shaowei Lin’s motivic information framework, demonstrate how relative
information minimization can yield biologically plausible learning rules with convergence
guarantees.8
This paper integrates these mathematical, informational, and architectural insights
into a single framework. We argue that reasoning can be understood as the minimization
of motivic relative information subject to arithmetic constraints, and that neural codes
serve as the categorical carriers of these invariants. By bringing together the arithmetic
Hodge index, motivic entropy, and the geometry of neural networks, we aim to outline a
motivic theory of intelligence that unifies cognition, learning, and invariance.
1. Leonardo F. Cavenaghi, Ludmil Katzarkov, and Maxim Kontsevich, “Atoms Meet Symbols,” arXiv preprint arXiv:2509.15831 (2025).
2. Marc Abboud, “A Local Version of the Arithmetic Hodge Index Theorem over Quasi-Projective Varieties,” arXiv preprint arXiv:2503.14099 (2025).
3. Maxim Kontsevich and Alexander Odesskii, “Explicit Formulas for Arithmetic Support of Differential and Difference Operators,” arXiv preprint arXiv:2505.12480 (2025).
4. Matilde Marcolli, “Motivic Information,” arXiv preprint arXiv:1712.08703 (2017).
5. Yuri I. Manin, “Neural Codes and Homotopy Types: Mathematical Models of Place Field Recognition,” arXiv preprint arXiv:1501.00897 (2015).
6. Liangwei Yang, Hengrui Zhang, Zihe Song, Jiawei Zhang, Weizhi Zhang, Jing Ma, and Philip S. Yu, “Cyclic Neural Networks,” arXiv preprint arXiv:2402.03332 (2024).
7. M. Á. Serrano, Dmitri Krioukov, and Marián Boguñá, “Self-Similarity of Complex Networks and Hidden Metric Spaces,” Nature Communications 8, no. 1 (2017): 1856.
8. Shaowei Lin, “Spiking Neural Networks,” Motivic Information, Path Integrals and Spiking Networks (2021), [Link]

2 Arithmetic and Motivic Background
2.1 The Arithmetic Hodge Index Theorem
The classical Hodge index theorem relates intersection forms on algebraic surfaces to
constraints on quadratic forms. In the arithmetic setting, intersection theory gains new
complexity, linking curvature, stability, and number-theoretic invariants. Abboud’s local
version of the arithmetic Hodge index theorem formulates inequalities for quasi-projective
varieties, establishing local arithmetic invariants that constrain stability across geometric
and arithmetic dimensions.9 These inequalities can be interpreted as analogues of “sta-
bility costs” in reasoning processes, bounding inference structures in ways comparable to
physical or informational energy.
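For orientation, we recall the classical statement that these arithmetic and local refinements build on (a standard fact; the precise quasi-projective formulation is in Abboud’s paper). On a smooth projective surface X with an ample divisor H, the Hodge index theorem asserts that

\[
D \cdot H = 0 \quad\text{and}\quad D \not\equiv 0 \;\;\Longrightarrow\;\; D^{2} < 0,
\]

equivalently, the intersection form on the Néron–Severi group has signature (1, \rho - 1). The arithmetic versions replace this pairing with arithmetic intersection numbers, and it is the resulting inequalities that we read above as “stability costs.”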

2.2 Kontsevich’s Arithmetic Supports


A complementary perspective is provided by Kontsevich’s theory of arithmetic supports.
In joint work with Odesskii, he shows that differential and difference operators admit
arithmetic supports that encode irreducible structural features persisting under deforma-
tion.10 These supports generalize spectra by attaching arithmetic invariants that behave
like conserved quantities in computation. In the context of learning dynamics, arithmetic
supports resemble structural “bias terms” that persist across iterations, ensuring stability
under perturbation and preventing divergence.

2.3 Motivic Zeta Functions and Log Geometry


Motivic zeta functions, introduced by Denef and Loeser, encode subtle information about
singularities and their arithmetic structure. Bultot and Nicaise computed motivic zeta
functions on log smooth models, producing explicit formulas and identifying poles relevant
to the monodromy conjecture.11 Pham extended this direction by connecting motivic
integration with Donaldson–Thomas theory through the integral identity conjecture in
motivic homotopy theory.12 In this perspective, the poles of motivic zeta functions act
as thresholds or bifurcation points, capturing the transition behavior of both geometric
and informational systems.
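To make the object concrete, we recall the Denef–Loeser construction in its simplest (naive) form; the refinements with monodromy action used in the cited work are suppressed. For a nonconstant function f : X \to \mathbb{A}^{1} on a smooth variety X of dimension d, one sets

\[
Z_{f}(T) \;=\; \sum_{n \ge 1} [\mathcal{X}_{n}]\, \mathbb{L}^{-nd}\, T^{n},
\qquad
\mathcal{X}_{n} \;=\; \{\, \gamma \in \mathcal{L}_{n}(X) \;:\; \operatorname{ord}_{t} f(\gamma) = n \,\},
\]

where \mathcal{L}_{n}(X) is the space of n-jets on X, \mathbb{L} = [\mathbb{A}^{1}] is the Lefschetz class, and the coefficients live in a localized Grothendieck ring of varieties. The poles referred to above are the poles of this series viewed as a rational function, which is where the monodromy conjecture and the log smooth computations of Bultot and Nicaise enter.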

2.4 Summary
The arithmetic Hodge index theorem, arithmetic supports, and motivic zeta functions
together constitute a triad of invariants: inequalities, supports, and poles. These math-
ematical structures, though developed in pure arithmetic geometry, resonate with the
principles of stability, irreducibility, and transition thresholds found in cognition and
learning. They provide the invariant foundations on which motivic information theory
builds, as we explore in the next section.
9. Marc Abboud, “A Local Version of the Arithmetic Hodge Index Theorem over Quasi-Projective Varieties,” arXiv preprint arXiv:2503.14099 (2025).
10. Maxim Kontsevich and Alexander Odesskii, “Explicit Formulas for Arithmetic Support of Differential and Difference Operators,” arXiv preprint arXiv:2505.12480 (2025).
11. Emmanuel Bultot and Johannes Nicaise, “Computing Motivic Zeta Functions on Log Smooth Models,” Mathematische Zeitschrift 295, no. 3–4 (2020): 1279–1311.
12. Khoa Bang Pham, “The Integral Identity Conjecture in Motivic Homotopy Theory,” arXiv preprint arXiv:2411.19699 (2024).

3 Motivic Information Theory
3.1 Entropy and Information Loss in Motives
The triad of invariants introduced in arithmetic geometry—inequalities, supports, and
poles—finds a natural continuation in information theory. Classical approaches, grounded
in Shannon’s entropy, treat information as a numerical measure of uncertainty. While
effective in communication theory, this view lacks the structural invariance revealed by
arithmetic geometry. Marcolli proposed a motivic generalization in which entropy is
valued not in the reals but in Grothendieck rings of varieties, allowing information to
be expressed as a motivic invariant.13 In this framework, information loss is not merely
numerical subtraction but a categorical morphism, aligning informational processes with
the invariance principles of arithmetic geometry.
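To fix the contrast with the classical theory: for a finite probability distribution P = (p_{1}, \dots, p_{n}), Shannon entropy is the single real number

\[
H(P) \;=\; -\sum_{i=1}^{n} p_{i} \log p_{i}.
\]

Marcolli’s proposal, roughly stated, replaces such real-valued assignments by classes in a Grothendieck ring of varieties, so that the “information content” of a configuration is recorded as a motivic class rather than a scalar; the precise construction, and the sense in which entropy-like identities survive, is given in the cited paper.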

3.2 Geometry of Information


Combe, Manin, and Marcolli extended this perspective by developing a geometry of
information, which formulates both classical and quantum informational processes in
categorical and functorial terms.14 Information becomes relational rather than scalar,
embodied in transitions across motives, probability spaces, and operator algebras. This
geometric approach resonates with cognition and AI, where reasoning processes often
preserve relational structures, such as context or hierarchy, rather than collapsing into
single entropy measures.

3.3 Relative Information as a Motivic Invariant


Building on these foundations, Lin developed a program of motivic information grounded
in relative, rather than absolute, measures. Relative information formalizes the difference
between processes, generalizing conditional entropy and linking it to zeta functions, Mellin
transforms, and the Gelfand–Leray form.15 In this view, relative information is not
simply an analytic quantity but a motivic invariant that persists across transformations,
analogous to arithmetic supports in operator theory.
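The classical special case of relative information is the Kullback–Leibler divergence between distributions P and Q on a common finite set,

\[
D(P \,\|\, Q) \;=\; \sum_{i} p_{i} \log \frac{p_{i}}{q_{i}},
\]

which is nonnegative and vanishes exactly when P = Q. Lin’s program, as we read it, takes this relative quantity (rather than absolute entropy) as the primitive notion and seeks its motivic and zeta-theoretic avatars; the representations through Mellin transforms and the Gelfand–Leray form are developed in the cited notes.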

3.4 Bias, Stability, and Learning


Lin further demonstrated how biased stochastic approximation can be stabilized by cor-
recting for bias through Poisson equations, yielding convergence guarantees in iterative
processes.16 This mirrors the way arithmetic supports act as correction terms that per-
sist under deformation. In neural contexts, spiking networks trained via minimization
of relative information embody this principle: synaptic updates are guided not only by
error gradients but also by structural constraints, ensuring convergence and biological plausibility.17

13. Matilde Marcolli, “Motivic Information,” arXiv preprint arXiv:1712.08703 (2017).
14. Noémie Combe, Yuri I. Manin, and Matilde Marcolli, “Geometry of Information: Classical and Quantum Aspects,” arXiv preprint arXiv:2107.08006 (2021).
15. Shaowei Lin, “Motivic Information, Path Integrals and Spiking Networks,” blog series (2020–2021), available at [Link]
16. Shaowei Lin, “Biased Stochastic Approximation,” blog post (2020), available at https://[Link]/posts/2020-12-01-biased-stochastic-approximation/.
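As a toy illustration of the learning principle just described, and only of the principle (this is our sketch, not Lin’s derivation; it uses an ordinary softmax family rather than spike-train statistics): minimizing the relative information between a fixed target distribution p and a softmax-parametrized model by gradient descent gives the update theta <- theta - eta * (softmax(theta) - p).

    import numpy as np

    def softmax(theta):
        z = np.exp(theta - theta.max())
        return z / z.sum()

    def relative_information(p, q):
        # classical Kullback-Leibler divergence, the quantity being minimized
        return float(np.sum(p * np.log(p / q)))

    rng = np.random.default_rng(0)
    p = rng.dirichlet(np.ones(5))      # fixed "target" distribution
    theta = np.zeros(5)                # model parameters
    eta = 0.5                          # step size

    for step in range(200):
        q = softmax(theta)
        theta -= eta * (q - p)         # exact gradient of KL(p || softmax(theta))

    print(relative_information(p, softmax(theta)))   # close to 0 after training

The structural point is that the objective is a divergence between processes; the bias-corrected stochastic approximation results cited above are what justify replacing the exact gradient by noisy estimates while retaining convergence.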

3.5 Summary
Marcolli’s motivic entropy, the geometry of information, and Lin’s program of relative
information together extend the concept of invariance from geometry to information
and learning. They suggest that reasoning should be understood as the minimization
of motivic relative information subject to arithmetic constraints. In this reformulation,
information becomes a structural invariant, positioning it as a natural foundation for a
motivic theory of intelligence.

4 Neural Codes and Homotopy Structures


4.1 Neural Codes as Homotopy Types
Manin introduced a homotopy-theoretic approach to neural codes, showing that place
field recognition in biological systems can be modeled by associating simplicial complexes
to neuronal activity.18 These complexes capture not only local firing patterns but also
the global homotopy type of the cognitive map, making neural codes into topological
invariants. This perspective suggests that cognition itself is mediated by categorical
structures, where neural activity corresponds to objects and morphisms in a homotopy
category.
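The underlying construction is easy to make concrete. A binary neural code records which neurons fire together; to each codeword one associates the simplex spanned by its support, and the union of these simplices with all their faces is the complex whose homotopy type is studied. A minimal sketch, with a made-up three-codeword code (our toy example, not Manin’s formalism):

    from itertools import combinations

    def code_complex(codewords):
        """All faces of the simplicial complex of a binary neural code:
        every nonempty subset of a codeword's support is a face."""
        faces = set()
        for word in codewords:
            support = tuple(i for i, bit in enumerate(word) if bit)
            for k in range(1, len(support) + 1):
                faces.update(combinations(support, k))
        return faces

    # four neurons; (1, 1, 0, 0) means neurons 0 and 1 fire together
    code = [(1, 1, 0, 0), (0, 1, 1, 0), (0, 0, 1, 1)]
    for face in sorted(code_complex(code), key=lambda f: (len(f), f)):
        print(face)

Connectivity and higher homology of this complex are then read as invariants of the cognitive map encoded by the population activity.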

4.2 Categorical Models of Neural Information Networks


Expanding this framework, Manin and Marcolli proposed categorical models of neural
networks, in which information is carried not simply as a flow of numbers but as mor-
phisms in enriched categories.19 Their formulation situates neural computation within
homotopy theory, aligning learning dynamics with categorical invariants. This categori-
cal recasting of networks resonates with the shift, already visible in motivic information
theory, from scalar to structural representations of information.

4.3 Gamma Spaces and Higher Information Structures


Marcolli further advanced these ideas by connecting information to gamma spaces, a
concept from stable homotopy theory.20 In this setting, information systems inherit
multiplicative and higher homotopical structures, providing a bridge between abstract
algebraic topology and concrete models of learning. Gamma spaces enable a refined ac-
count of how information aggregates and transforms, capturing invariants across multiple
layers of cognitive or computational processes.
17. Shaowei Lin, “Spiking Neural Networks,” Motivic Information, Path Integrals and Spiking Networks (2021), [Link]
18. Yuri I. Manin, “Neural Codes and Homotopy Types: Mathematical Models of Place Field Recognition,” arXiv preprint arXiv:1501.00897 (2015).
19. Yuri I. Manin and Matilde Marcolli, “Homotopy Theoretic and Categorical Models of Neural Information Networks,” arXiv preprint arXiv:2006.15136 (2020).
20. Matilde Marcolli, “Gamma Spaces and Information,” arXiv preprint arXiv:1807.05314 (2018).

4.4 δ-Algebras and Homotopy-Theoretic Foundations
Jack Morava’s recent work on δ-algebras and prisms in homotopy theory provides a
further foundation for categorical and motivic approaches to information.21 δ-algebras
encode deep structural relations between cohomology and arithmetic, suggesting that
the invariants governing cognition may be rooted in low-dimensional algebraic topology.
In this sense, Morava’s work extends the motivic program by linking homotopy theory,
arithmetic, and information invariants into a single framework.

4.5 Summary
Neural codes, categorical network models, gamma spaces, and δ-algebras together reveal
how cognition can be described in terms of homotopy and categorical invariants. These
approaches extend motivic information theory into the biological and computational do-
main, suggesting that learning is mediated by structures that are as much topological as
they are statistical. This sets the stage for the next section, where geometric and circular
neural architectures embody these invariants in network design.

5 Geometric and Circular Neural Architectures


5.1 Circularity in Neural Computation
Biological neural systems are highly recurrent, with circular and feedback connections
forming the majority of synaptic interactions. Artificial neural networks, in contrast,
have historically favored acyclic, feed-forward architectures. Recent work has begun to
close this gap. Yang and collaborators introduced cyclic neural networks that embed
recurrence directly into their architecture, showing improved performance on tasks re-
quiring temporal and contextual sensitivity.22 Circular connections function as memory
structures, allowing present activations to be modulated by past states, a principle long
recognized in theoretical neuroscience.
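The memory role of circular connections is already visible in the simplest recurrent update, where the hidden state feeds back into itself. The sketch below is a generic recurrent cell, included only to make the feedback principle explicit; it is not the specific architecture of Yang et al.:

    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_hidden = 3, 8
    W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))       # input weights
    W_rec = rng.normal(scale=0.3, size=(n_hidden, n_hidden))  # circular (recurrent) weights

    def step(h, x):
        # the present activation is modulated by the previous state via W_rec
        return np.tanh(W_in @ x + W_rec @ h)

    h = np.zeros(n_hidden)
    for t, x in enumerate(rng.normal(size=(5, n_in))):
        h = step(h, x)
        print(f"t={t}  |h|={np.linalg.norm(h):.3f}")

Setting W_rec to zero removes all dependence on past inputs, which is precisely the contextual sensitivity that cyclic architectures are designed to restore.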

5.2 Circular Kernels and Backpropagation


The geometry of kernels further demonstrates the role of circularity. He, Li, Yang, Huang,
and Hopcroft integrated large circular kernels into convolutional neural networks through
neural architecture search, showing that rotational symmetry can enhance feature extrac-
tion in high-dimensional spaces.23 Earlier, Ridella and collaborators developed circular
backpropagation networks for classification, where the propagation of error signals itself
followed circular dynamics.24 Both approaches highlight how circular symmetry enriches
representation, creating invariants that are stable under cyclic transformations.
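The geometric content of a circular kernel is simply that its support is a disk rather than a square, which makes the receptive field approximately rotation-symmetric. The following sketch illustrates the masking idea only, not the architecture-search procedure used by He et al.:

    import numpy as np

    def circular_mask(size):
        """Boolean mask for the disk inscribed in a size x size kernel."""
        c = (size - 1) / 2.0
        y, x = np.ogrid[:size, :size]
        return (x - c) ** 2 + (y - c) ** 2 <= c ** 2 + 1e-9

    k = 7
    mask = circular_mask(k)
    kernel = np.random.default_rng(0).normal(size=(k, k)) * mask  # corners zeroed out
    print(mask.astype(int))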
21. Jack Morava, “Notes on δ-Algebras and Prisms in Homotopy Theory,” arXiv preprint arXiv:2401.12336 (2024).
22. Liangwei Yang, Hengrui Zhang, Zihe Song, Jiawei Zhang, Weizhi Zhang, Jing Ma, and Philip S. Yu, “Cyclic Neural Networks,” arXiv preprint arXiv:2402.03332 (2024).
23. Kun He, Chao Li, Yixiao Yang, Gao Huang, and John E. Hopcroft, “Integrating Large Circular Kernels into CNNs through Neural Architecture Search,” arXiv preprint arXiv:2107.02451 (2021).
24. S. Ridella, S. Rovetta, and R. Zunino, “Circular Backpropagation Networks for Classification,” IEEE Transactions on Neural Networks 8, no. 1 (1997): 84–97.

5.3 Riemannian Geometry in Neural Networks
Geometric structures extend beyond circularity. A line of research at NeurIPS explored
how Riemannian geometry shapes learning dynamics, showing that curvature-sensitive
optimization can improve generalization and robustness.25 By embedding networks in
curved spaces, one captures invariants analogous to those of differential geometry, where
geodesics and curvature guide learning trajectories.

5.4 Circularity and Biological Intelligence


The role of circularity is not merely architectural but cognitive. A discussion on the Psy-
chology StackExchange emphasized that recurrent and circular structures allow networks
to process temporal sequences differently depending on context: the same input may carry
distinct meaning depending on what precedes it.26 Similarly, an essay in The Quantum
Record suggested that circular data structures could mimic biological intelligence and
improve machine learning by encoding context-dependent feedback.27 These perspectives
highlight circularity as an invariant principle across both biology and computation.

5.5 Summary
Circular, recurrent, and geometric architectures reveal how invariants of symmetry and
feedback are embedded in network design. Circular kernels, cyclic backpropagation, and
Riemannian structures extend the motivic and homotopy-theoretic perspectives into prac-
tical architectures, showing that invariance governs not only abstract theory but also
concrete implementations. In the next section, we turn to hyperbolic and dynamical
perspectives, where invariants of scale and action further shape learning.

6 Hyperbolic and Dynamical Perspectives


6.1 Hyperbolic Embeddings and Network Geometry
Complex networks, such as social, biological, and informational graphs, often display
scale-free and hierarchical structure. Serrano, Krioukov, and Boguñá demonstrated that
such networks can be efficiently represented in hyperbolic space, where geodesic distance
encodes both similarity and hierarchy.28 For AI, hyperbolic embeddings provide com-
pressed, geometry-aware representations, enabling efficient reasoning over hierarchical
data. These embeddings resonate with the motivic perspective by treating geometry as
an invariant substrate of information.
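A common concrete realization is the Poincaré ball model (the cited work uses a closely related hyperbolic-plane model with radial and angular coordinates): nodes are points of norm less than one, and dissimilarity is measured by hyperbolic distance. A minimal sketch:

    import numpy as np

    def poincare_distance(u, v):
        """d(u, v) = arcosh(1 + 2|u - v|^2 / ((1 - |u|^2)(1 - |v|^2)))."""
        u, v = np.asarray(u, float), np.asarray(v, float)
        num = 2.0 * np.sum((u - v) ** 2)
        den = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
        return float(np.arccosh(1.0 + num / den))

    root = np.array([0.0, 0.0])           # near the origin: top of a hierarchy
    leaf = np.array([0.0, 0.95])          # near the boundary: deep in the hierarchy
    print(poincare_distance(root, leaf))  # distances grow rapidly toward the boundary

Because volume grows exponentially with radius, trees and scale-free hierarchies embed in such a space with low distortion, which is the efficiency gain exploited by hyperbolic representations.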
25. S. Bonnabel and R. Sepulchre, “Principles of Riemannian Geometry in Neural Networks,” in Advances in Neural Information Processing Systems, vol. 30 (2017).
26. Dylan Richard Muir, “What Role Do Circular Network Structures Play in Neural Networks?,” answer on Psychology & Neuroscience StackExchange (2017), available at [Link]what-role-do-circular-network-structures-play-in-neural-networks.
27. James F. Stark, “Could Circular Data Mimic Biological Intelligence and Improve Machine Learning?,” The Quantum Record (2023), available at [Link]could-circular-data-mimic-biological-intelligence-and-improve-machine-learning/.
28. M. Á. Serrano, D. Krioukov, and M. Boguñá, “Self-Similarity of Complex Networks and Hidden Metric Spaces,” Nature Communications 8, no. 1 (2017): 1856.

6.2 Loxodromic Dynamics and Counting Invariants
Beyond static embeddings, dynamical aspects of hyperbolic geometry play a role. Gekht-
man, Taylor, and Tiozzo investigated the distribution of loxodromic elements in hyper-
bolic group actions, showing how counting such dynamics provides invariant measures of
complexity.29 In analogy, reasoning processes may be modeled as loxodromic trajectories:
iterative transformations that never collapse into trivial cycles but instead generate new
structural information with each step.

6.3 Geometry and Dynamics in Non-Proper Settings


Das, Simmons, and Urbański extended hyperbolic dynamics to non-proper metric spaces,
highlighting how invariants can persist even in settings lacking classical compactness.30
This generalization suggests that reasoning systems may retain structural invariants un-
der non-ideal or incomplete information, a feature critical for robust AI operating in
real-world environments.

6.4 Group Actions and Structural Stability


Hamann studied group actions on metric spaces, identifying conditions under which fixed
points or free subgroups emerge as invariant features.31 These results point to the im-
portance of symmetries and group actions in stabilizing dynamic systems. In the neural
context, such invariants can guide how feedback loops or recurrent modules stabilize
learning trajectories.

6.5 Low-Dimensional Topology and Periodic Structures


Morava’s work on topological invariants of chemical reaction networks,32 circle actions,33
Swan–Tate cohomology,34 and symmetry-breaking currents35 connects hyperbolic dynam-
ics to periodicity and symmetry-breaking. His later work on boundary framings and
equivariant orientations further explores conserved quantities in four-manifolds and K-
theory.36 By viewing neural processes as equivariant systems, one can identify invariants
that persist under symmetry-breaking transitions.
29. Ilya Gekhtman, Samuel J. Taylor, and Giulio Tiozzo, “Counting Loxodromics for Hyperbolic Actions,” arXiv preprint arXiv:1605.02103 (2016).
30. Tushar Das, David Simmons, and Mariusz Urbański, “Geometry and Dynamics in Gromov Hyperbolic Metric Spaces: With an Emphasis on Non-Proper Settings,” arXiv preprint arXiv:1409.2155 (2014).
31. Matthias Hamann, “Group Actions on Metric Spaces: Fixed Points and Free Subgroups,” arXiv preprint arXiv:1301.6513 (2013).
32. Jack Morava, “Topological Invariants of Some Chemical Reaction Networks,” arXiv preprint arXiv:1910.12609 (2019).
33. Jack Morava, “Periods for Topological Circle Actions,” arXiv preprint arXiv:2301.05772 (2023).
34. Jack Morava, “Swan–Tate Cohomology of Meromorphic Circle Actions,” arXiv preprint arXiv:2403.19714 (2024).
35. Jack Morava, “Circular Symmetry-Breaking and Topological Noether Currents,” arXiv preprint arXiv:2407.00672 (2024).
36. Jack Morava, “Boundary Framings for Locally Conformally Symplectic Four-Manifolds,” arXiv preprint arXiv:2502.05983 (2025); Jack Morava, “On a Complex Topological Orientation for Circle-Equivariant K-Theory,” arXiv preprint arXiv:2505.21719 (2025).

6.6 Summary
Hyperbolic embeddings, loxodromic dynamics, non-proper hyperbolic structures, group
actions, and topological invariants together form a dynamical perspective on invariance.
They extend motivic and categorical approaches into the geometry of action and transfor-
mation, offering tools for modeling reasoning as a balance between stability and generative
dynamism. In the next section, we connect these dynamical invariants to arithmetic and
quantum structures through Kontsevich’s and related work.

7 Kontsevich, Arithmetic Structures, and Quantum Insights
7.1 Fourier Transform and One-Loop Exactness
Kontsevich and Odesskii recently investigated when Fourier transforms of oscillatory in-
tegrals become one-loop exact, meaning higher-order quantum corrections vanish.37 This
phenomenon points to situations where invariants collapse to minimal representations,
revealing hidden simplicity in complex integrals. In the context of AI, one-loop exact-
ness resonates with efficient reasoning processes: certain transformations stabilize at low
computational depth, encoding invariants that prevent runaway complexity.

7.2 Holomorphic Floer Theory and Exponential Integrals


In joint work with Soibelman, Kontsevich developed holomorphic Floer theory, extend-
ing exponential integrals to finite and infinite dimensions.38 This framework provides
tools for analyzing spaces of trajectories and critical points, linking arithmetic geometry
with quantum field theoretic methods. For AI, Floer-theoretic ideas suggest that rea-
soning processes may be modeled as path integrals over informational trajectories, with
holomorphic structures ensuring convergence and stability.

7.3 Integer Eigenvalues as Invariants


In collaboration with Kenyon, Ogievetsky, Pohoata, Sawin, and Shlosman, Kontsevich
explored “the miracle of integer eigenvalues.” Their work shows how discrete arithmetic
structures emerge naturally in spectral problems.39 These integer eigenvalues can be
interpreted as quantized invariants, suggesting that reasoning and learning processes may
also admit discrete spectra of stability conditions, much like energy levels in quantum
systems.
37. Maxim Kontsevich and Alexander Odesskii, “When the Fourier Transform Is One Loop Exact?,” arXiv preprint arXiv:2306.02178 (2023).
38. Maxim Kontsevich and Yan Soibelman, “Holomorphic Floer Theory I: Exponential Integrals in Finite and Infinite Dimensions,” arXiv preprint arXiv:2402.07343 (2024).
39. Richard Kenyon, Maxim Kontsevich, Oleg Ogievetsky, Cosmin Pohoata, Will Sawin, and Senya Shlosman, “The Miracle of Integer Eigenvalues,” arXiv preprint arXiv:2401.05291 (2024).

7.4 Operads, Motives, and Deformation Quantization
Kontsevich’s earlier contributions to deformation quantization linked operads and mo-
tives, providing categorical frameworks for encoding deformation invariants.40 These
ideas foreshadow the motivic approach to information: operadic structures parallel neu-
ral architectures, while motives encode the invariants preserved across transformations.
The operadic viewpoint enriches AI by suggesting that learning rules themselves can be
composed and deformed while preserving higher-order invariants.

7.5 Cocycle Constructions and Quantization Theorems


Recent work by Ulmer has expanded on Kontsevich’s cocycle constructions, applying
them to the quantization of the Loday–Quillen–Tsygan theorem.41 These developments
highlight how cocycles act as correction terms in quantization, ensuring consistency of
algebraic structures. By analogy, AI systems may require cocycle-like invariants that
stabilize learning across levels of abstraction, preventing inconsistencies between local
updates and global reasoning.

7.6 Atoms and Symbols


In Atoms meet symbols, Cavenaghi, Katzarkov, and Kontsevich unify two frameworks
of invariants in G-equivariant birational geometry: Hodge-theoretic atoms and modular
symbols. Their framework extends to Chen–Ruan atoms for orbifolds and new Z/p-
birational invariants.42 In this context, “symbols” refer to algebraic encodings of group
actions, not cognitive tokens. However, we reinterpret this structure metaphorically: just
as atoms and symbols in algebraic geometry provide complementary invariants, so too
might global invariants of reasoning be complemented by local structures akin to symbolic
reasoning in AI.

7.7 Summary
Kontsevich’s body of work, spanning Fourier transforms, Floer theory, integer spectra,
operads, cocycles, and symbolic frameworks, exemplifies the unification of arithmetic,
geometry, and quantum ideas. These contributions reinforce the central claim of this
paper: reasoning and learning can be grounded in invariant structures that persist across
deformations, whether geometric, algebraic, or informational.

8 Toward a Motivic Theory of Intelligence


8.1 Invariants as the Basis of Reasoning
Across arithmetic geometry, motivic information, homotopy structures, and neural archi-
tectures, a common theme emerges: reasoning is stabilized by invariants. Inequalities,
supports, poles, homotopy types, categorical morphisms, circular symmetries, hyperbolic embeddings, and integer spectra all provide conserved structures that persist across transformations. In cognition and AI, these invariants play the role of “anchors,” ensuring that learning does not drift into instability or collapse into triviality.

40. Maxim Kontsevich, “Operads and Motives in Deformation Quantization,” arXiv preprint arXiv:math/9904055 (1999).
41. Jakob Ulmer, “Kontsevich’s Cocycle Construction and Quantization of the Loday–Quillen–Tsygan Theorem,” arXiv preprint arXiv:2506.15210 (2025).
42. Leonardo F. Cavenaghi, Ludmil Katzarkov, and Maxim Kontsevich, “Atoms Meet Symbols,” arXiv preprint arXiv:2509.15831 (2025).

8.2 Motivic Intelligence


Motivic intelligence can be defined as reasoning and learning guided by structural invari-
ants that transcend specific representations. Unlike conventional AI, which is typically
driven by optimization of numerical cost functions, motivic AI seeks stability across cat-
egorical, arithmetic, and geometric layers. In this view, neural codes serve as categorical
carriers of invariants,43 relative information measures function as motivic entropies,44 and
hyperbolic and circular architectures implement invariants in practice.45
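Schematically, and only schematically (the notation below is ours, a summary device rather than a theorem), the proposal can be written as a constrained variational principle:

\[
\theta^{*} \;=\; \arg\min_{\theta \in \Theta}\; D_{\mathrm{mot}}\!\left(P_{\mathrm{world}} \,\middle\|\, Q_{\theta}\right)
\quad \text{subject to} \quad C_{i}(\theta)\ \text{held invariant},\quad i = 1, \dots, k,
\]

where D_{\mathrm{mot}} is a motivic relative information functional in the sense of Section 3, and the constraints C_{i} stand for the arithmetic and categorical invariants (inequalities, supports, poles, homotopy types) assembled in the preceding sections.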

8.3 Cognitive Implications


The motivic perspective suggests that cognition itself may be grounded in invariant struc-
tures. Place field recognition (modeled by homotopy types), recurrent memory (enabled
by circular architectures), and hierarchical reasoning (captured by hyperbolic embed-
dings) all demonstrate that cognition is not reducible to numbers but is instead shaped
by invariants of topology, geometry, and arithmetic. Such a perspective aligns with the
idea that intelligence evolves by maximizing structural stability and minimizing relative
information loss.46

8.4 Future Directions


Future research should aim to formalize motivic AI along several axes:

1. Arithmetic foundations: Integrating arithmetic Hodge theory, supports, and motivic zeta functions into models of computational invariance.47

2. Information-theoretic frameworks: Extending motivic entropy and relative information into practical measures for machine learning.48

3. Homotopy and categorical structures: Developing categorical learning architectures that preserve homotopy types across layers.49

4. Geometric and dynamical models: Implementing hyperbolic, circular, and Floer-theoretic architectures as testbeds for invariance-based learning.50

These directions suggest a unification of AI, mathematics, and cognition around a single principle: intelligence is invariant-driven.

43. Yuri I. Manin, “Neural Codes and Homotopy Types: Mathematical Models of Place Field Recognition,” arXiv preprint arXiv:1501.00897 (2015).
44. Matilde Marcolli, “Motivic Information,” arXiv preprint arXiv:1712.08703 (2017).
45. Liangwei Yang, Hengrui Zhang, Zihe Song, Jiawei Zhang, Weizhi Zhang, Jing Ma, and Philip S. Yu, “Cyclic Neural Networks,” arXiv preprint arXiv:2402.03332 (2024); M. Á. Serrano, D. Krioukov, and M. Boguñá, “Self-Similarity of Complex Networks and Hidden Metric Spaces,” Nature Communications 8, no. 1 (2017): 1856.
46. Shaowei Lin, “Spiking Neural Networks,” Motivic Information, Path Integrals and Spiking Networks (2021), available at [Link]2021-06-05-spiking-neural-networks/.
47. Marc Abboud, “A Local Version of the Arithmetic Hodge Index Theorem over Quasi-Projective Varieties,” arXiv preprint arXiv:2503.14099 (2025); Maxim Kontsevich and Alexander Odesskii, “Explicit Formulas for Arithmetic Support of Differential and Difference Operators,” arXiv preprint arXiv:2505.12480 (2025); Emmanuel Bultot and Johannes Nicaise, “Computing Motivic Zeta Functions on Log Smooth Models,” Mathematische Zeitschrift 295, no. 3–4 (2020): 1279–1311.
48. Noémie Combe, Yuri I. Manin, and Matilde Marcolli, “Geometry of Information: Classical and Quantum Aspects,” arXiv preprint arXiv:2107.08006 (2021).

8.5 Summary
Motivic AI offers a framework in which reasoning is guided by the invariants of arithmetic,
geometry, topology, and information. By grounding learning in structures that persist
across transformation, it provides stability, interpretability, and integration with deep
mathematical principles. This synthesis points toward a new generation of AI, where
invariants replace mere optimization as the foundation of intelligence.

9 Conclusion
This paper has outlined a framework for Motivic AI, grounded in the invariants of arith-
metic geometry, motivic information theory, homotopy structures, and geometric neu-
ral architectures. We have shown how diverse mathematical advances—from Abboud’s
arithmetic Hodge index theorem51 and Kontsevich’s arithmetic supports,52 to Marcolli’s
motivic entropy53 and Manin’s neural codes54—converge on the idea that reasoning and
learning can be understood as processes governed by invariants rather than mere opti-
mizations.
By integrating arithmetic, categorical, geometric, and dynamical perspectives, motivic
AI provides a unified approach to stability, interpretability, and cognition. The triad of
arithmetic invariants (inequalities, supports, poles), the motivic extensions of entropy
and information, the homotopy-theoretic models of neural codes, and the architectural
role of circular and hyperbolic symmetries all reinforce the same principle: intelligence
evolves and operates by conserving invariants across transformations.
Kontsevich’s work exemplifies this unification, bridging Fourier analysis,55 Floer theory,56 integer eigenvalues,57 and operadic motives58 with birational frameworks of atoms and symbols.59 In the original work, “symbols” designate modular invariants of group actions. In our reinterpretation, we extend this terminology metaphorically to suggest an analogue with symbolic reasoning in AI, where global invariants and local symbolic structures interact to stabilize cognitive processes. For further clarification of this distinction, see Appendix A.

Looking forward, motivic AI suggests new pathways for research: implementing invariant-based architectures, formalizing motivic entropy in practice, and testing the cognitive plausibility of homotopy-theoretic neural models. By grounding intelligence in invariants, we move toward a vision of AI that is not only powerful but also principled, interpretable, and deeply connected to the structures of mathematics and cognition.

49. Yuri I. Manin and Matilde Marcolli, “Homotopy Theoretic and Categorical Models of Neural Information Networks,” arXiv preprint arXiv:2006.15136 (2020); Matilde Marcolli, “Gamma Spaces and Information,” arXiv preprint arXiv:1807.05314 (2018).
50. Ilya Gekhtman, Samuel J. Taylor, and Giulio Tiozzo, “Counting Loxodromics for Hyperbolic Actions,” arXiv preprint arXiv:1605.02103 (2016); Maxim Kontsevich and Yan Soibelman, “Holomorphic Floer Theory I: Exponential Integrals in Finite and Infinite Dimensions,” arXiv preprint arXiv:2402.07343 (2024).
51. Marc Abboud, “A Local Version of the Arithmetic Hodge Index Theorem over Quasi-Projective Varieties,” arXiv preprint arXiv:2503.14099 (2025).
52. Maxim Kontsevich and Alexander Odesskii, “Explicit Formulas for Arithmetic Support of Differential and Difference Operators,” arXiv preprint arXiv:2505.12480 (2025).
53. Matilde Marcolli, “Motivic Information,” arXiv preprint arXiv:1712.08703 (2017).
54. Yuri I. Manin, “Neural Codes and Homotopy Types: Mathematical Models of Place Field Recognition,” arXiv preprint arXiv:1501.00897 (2015).
55. Maxim Kontsevich and Alexander Odesskii, “When the Fourier Transform Is One Loop Exact?,” arXiv preprint arXiv:2306.02178 (2023).

A On Atoms, Symbols, and Information Geometry


A.1 Atoms Meet Symbols
The paper Atoms Meet Symbols by Leonardo F. Cavenaghi, Ludmil Katzarkov, and
Maxim Kontsevich develops a framework within G-equivariant birational geometry, where
two strands of invariants converge. The theory of atoms, developed by Katzarkov, Kontse-
vich, Pantev, and Yu, produces birational invariants from Hodge-theoretic and quantum
cohomological data. The theory of modular symbols, advanced by Kontsevich, Pestun,
and Tschinkel, encodes invariants of group actions through symbolic combinatorial struc-
tures. Their unification yields new birational obstructions, including Chen–Ruan atoms
and Z/p-birational invariants.60
Here, “symbols” do not refer to linguistic or cognitive tokens but to algebraic and
combinatorial invariants encoding group actions. The title emphasizes the meeting of
global invariants (“atoms”) and local symbolic invariants (“symbols”), both situated
firmly within algebraic geometry.

A.2 Birational Invariants and Quantum Multiplication


In related work, Katzarkov, Kontsevich, Pantev, and Yu construct Hodge atoms as bira-
tional invariants built from Hodge structures and quantum multiplication. These invari-
ants provide obstructions to rationality and show how cohomological structures persist
under birational transformations. The notion of atoms is thereby grounded in explicit
geometric constructions that link quantum cohomology, motives, and birational classifi-
cation.61
56. Maxim Kontsevich and Yan Soibelman, “Holomorphic Floer Theory I: Exponential Integrals in Finite and Infinite Dimensions,” arXiv preprint arXiv:2402.07343 (2024).
57. Richard Kenyon, Maxim Kontsevich, Oleg Ogievetsky, Cosmin Pohoata, Will Sawin, and Senya Shlosman, “The Miracle of Integer Eigenvalues,” arXiv preprint arXiv:2401.05291 (2024).
58. Maxim Kontsevich, “Operads and Motives in Deformation Quantization,” arXiv preprint arXiv:math/9904055 (1999).
59. Leonardo F. Cavenaghi, Ludmil Katzarkov, and Maxim Kontsevich, “Atoms Meet Symbols,” arXiv preprint arXiv:2509.15831 (2025).
60. Leonardo F. Cavenaghi, Ludmil Katzarkov, and Maxim Kontsevich, “Atoms Meet Symbols,” arXiv preprint arXiv:2509.15831 (2025).
61. Ludmil Katzarkov, Maxim Kontsevich, Tony Pantev, and Tony Yue Yu, “Birational Invariants from Hodge Structures and Quantum Multiplication,” arXiv preprint arXiv:2508.05105 (2025).

In this sense, the “symbols” of the earlier framework can be interpreted as structural
encodings of these invariants. They are not external additions but algebraic witnesses to
stability conditions in geometry.

A.3 Ergodic Theory and Symbolic Dynamics


McMullen’s lecture notes on Ergodic Theory, Geometry, and Dynamics highlight another
perspective on “symbols”: symbolic dynamics. In dynamical systems, complex flows are
often represented through sequences of discrete symbols encoding recurrence, periodicity,
or mixing. This symbolic representation does not simplify the system but translates it
into an invariant coding amenable to ergodic and spectral analysis.62
By analogy, one may view the “symbols” of Kontsevich’s framework as codings of
invariance under group actions. Just as ergodic theory relies on symbolic encodings
to reveal invariant statistical properties, birational geometry uses modular symbols to
encode invariant structures of varieties.

A.4 F-Manifolds and the Geometry of Information


Combe and Manin extend this symbolic perspective into the domain of information
theory. Their work on F-Manifolds and Geometry of Information shows that statisti-
cal manifolds—spaces of probability distributions—carry natural F -manifold structures,
generalizing Frobenius manifolds. These structures organize information flows and con-
ditional dependencies into algebraic objects.63
Here, symbols become categorical carriers of information geometry: algebraic struc-
tures that encode probability and statistical dependence in a way parallel to how modular
symbols encode group actions. The connection with atoms is that both describe invariant
carriers of structure—one in geometry, the other in information.

A.5 Summary
Across these works, “symbols” consistently denote algebraic or categorical invariants
rather than linguistic or cognitive tokens. They mediate between:

• Atoms: global invariants built from Hodge-theoretic and quantum data,

• Dynamics: symbolic encodings of flows and group actions,

• Information: categorical carriers of statistical structure.

This appendix clarifies the mathematical meaning of “symbols” and situates them
within a broader landscape. While this paper reinterprets such invariants for artificial in-
telligence, the reinterpretation is ours; the original mathematical works establish symbols
as structural invariants linking geometry, dynamics, and information.

62. Curtis T. McMullen, Ergodic Theory, Geometry, and Dynamics (Lecture Notes, Harvard University, 2020). [Link] [Link].
63. Noémie Combe and Yuri I. Manin, “F-Manifolds and Geometry of Information,” arXiv preprint arXiv:2004.08808 (2020).

B Glossary
B.1 I. Arithmetic and Geometric Invariants
Arithmetic Hodge Index Theorem. An extension of the classical Hodge
index theorem to arithmetic surfaces and higher-dimensional varieties.
Provides inequalities governing the intersection form in arithmetic in-
tersection theory. Recent local versions (Abboud, 2025) apply to quasi-
projective varieties, yielding stability constraints interpreted as “in-
variant costs.”
Arithmetic Supports. Introduced by Kontsevich and Odesskii (2025). Dif-
ferential and difference operators possess supports in arithmetic geome-
try, encoding irreducible structural features that persist under defor-
mation. Serve as analogues of spectra in non-commutative geometry.
Atoms. Birational invariants developed by Katzarkov, Kontsevich, Pan-
tev, and Yu (2025). Constructed from Hodge structures and quantum
multiplication, atoms provide obstructions to rationality. Invariant “build-
ing blocks” for classifying varieties.
Symbols (Mathematical sense). In Atoms Meet Symbols (Cavenaghi, Katzarkov,
Kontsevich, 2025), symbols refer not to cognitive or linguistic tokens
but to modular symbols and related algebraic encodings of group ac-
tions. Serve as local combinatorial invariants complementing global in-
variants (atoms).
Motivic Zeta Functions. Introduced by Denef and Loeser; encode arith-
metic and geometric data of singularities. Bultot and Nicaise (2020) de-
veloped computational methods on log smooth models. Poles of motivic
zeta functions often relate to monodromy conjectures.
Integral Identity Conjecture. Extended by Pham (2024) in motivic homotopy
theory. Connects motivic integration with Donaldson–Thomas invari-
ants, suggesting deep links between enumerative geometry and motivic
entropy.

B.2 II. Motivic and Information-Theoretic Frameworks


Motivic Information. A framework proposed by Marcolli (2017) for ex-
tending information theory to motivic and categorical settings. Recasts
entropy in terms of motivic measures and relations between categories.
Motivic Relative Information. Shaowei Lin’s extension of relative entropy
to motivic and categorical settings. Forms the foundation for his series
Motivic Information, Path Integrals, and Spiking Networks (2020–2021).
Gamma Spaces and Information. Marcolli (2018) linked Segal’s Γ-spaces
(infinite loop space machines) with information theory. Suggests that
information-processing structures can be modeled homotopically.
Homotopy Theoretic Neural Information Networks. Proposed by Manin and
Marcolli (2020). Neural codes are treated as categorical/homotopy
types, where information flows are modeled by higher categorical struc-
tures.
Neural Codes and Homotopy Types. Manin (2015) modeled hippocampal place fields using homotopy theory. Neural codes correspond to topological invariants of neural activity.
F-Manifolds and Information Geometry. Combe and Manin (2020) general-
ized Frobenius manifolds to F-manifolds in the setting of probability dis-
tributions. Encodes algebraic structures underlying information flow
and statistical manifolds.

B.3 III. Neural Architectures and Learning


Cyclic Neural Networks. Introduced by Yang et al. (2024). Architectures
where recurrent cycles (loops) are central. Better capture temporal
dependencies than feed-forward models, inspired by cortical recurrence.
Circular Backpropagation Networks. Proposed by Ridella, Rovetta, and
Zunino (1997). Neural networks with circular connections trained using
backpropagation. Early exploration of recurrent feedback in classifi-
cation tasks.
Circular Kernels in CNNs. He et al. (2021) introduced large circular
kernels into CNNs using neural architecture search. Captures circular
and rotational symmetries in data.
Hyperbolic Embeddings. Embedding complex networks into hyperbolic
space (Serrano, Krioukov, Boguñá, 2017). Efficiently represent hier-
archical and scale-free structures with low distortion.
Spiking Neural Networks (SNNs). Explored by Shaowei Lin (2021). Net-
works where communication occurs through discrete spikes, closely mim-
icking biological neurons. Learning rules derived from motivic relative
information and biased stochastic approximation.
Biased Stochastic Approximation. Method introduced by Lin (2020) for
convergence in stochastic processes under biased conditions. Supports
stable learning dynamics in motivic-inspired models.

B.4 IV. Dynamical and Topological Structures


Symbolic Dynamics. Classical ergodic theory technique (McMullen, 2020).
Complex flows are encoded as symbolic sequences representing recur-
rence and periodicity. Bridges continuous dynamics with discrete sym-
bolic codes.
Ergodic Theory. The study of invariant measures under dynamical sys-
tems. Provides probabilistic descriptions of long-term system behavior.
Central in bridging symbolic codings with continuous flows.
Group Actions and Loxodromics. Gekhtman, Taylor, Tiozzo (2016) stud-
ied counting of loxodromic elements in hyperbolic group actions. These
describe dynamics with exponential divergence, central in non-amenable
group settings.
Gromov Hyperbolic Spaces. Spaces where geodesic triangles are δ-thin
(Das, Simmons, Urbański, 2014). Underlie much of geometric group the-
ory, relevant for network embeddings and hyperbolic neural models.
Renormalization Groupoids. Morava (2020) introduced renormalization groupoids in algebraic topology, generalizing renormalization flows as groupoid structures.
Circle Actions in Topology. Explored extensively by Morava (2023–2025).
Circle-equivariant K-theory, Swan–Tate cohomology, and locally con-
formally symplectic manifolds reveal deep symmetries in topology.

B.5 V. Cross-Cutting Concepts


Operads and Deformation Quantization. Kontsevich (1999) introduced op-
erads into deformation quantization. Provide a categorical framework
for encoding compositional structures of operations, foundational in
motives and quantization.
Holomorphic Floer Theory. Kontsevich and Soibelman (2024) developed a
Floer theory for exponential integrals in finite and infinite dimensions.
Extends Floer invariants into holomorphic settings.
Integer Eigenvalues Phenomenon. Kenyon, Kontsevich, et al. (2024) stud-
ied cases where random matrices have integer eigenvalues with unexpect-
edly high probability, linking combinatorics, probability, and represen-
tation theory.
Motivic AI (Our Framework). A synthesis proposed in this paper: reasoning
and learning modeled as minimization of motivic relative information sub-
ject to arithmetic constraints. Neural codes serve as categorical carri-
ers of invariants; network architectures (circular, hyperbolic, spiking)
implement them dynamically.

References
• Abboud, Marc. “A Local Version of the Arithmetic Hodge Index Theorem over Quasi-Projective
Varieties.” arXiv preprint arXiv:2503.14099, 2025.
• Bonnabel, Silvère, and Rodolphe Sepulchre. “Principles of Riemannian Geometry in Neural Net-
works.” In Advances in Neural Information Processing Systems, vol. 30 (2017).
• Bultot, Emmanuel, and Johannes Nicaise. “Computing Motivic Zeta Functions on Log Smooth
Models.” Mathematische Zeitschrift 295, no. 3–4 (2020): 1279–1311.
• Cavenaghi, Leonardo F., Ludmil Katzarkov, and Maxim Kontsevich. “Atoms Meet Symbols.”
arXiv preprint arXiv:2509.15831, 2025.
• Combe, Noémie, and Yuri I. Manin. “F-Manifolds and Geometry of Information.” arXiv preprint
arXiv:2004.08808, 2020.
• Combe, Noémie, Yuri I. Manin, and Matilde Marcolli. “Geometry of Information: Classical and
Quantum Aspects.” arXiv preprint arXiv:2107.08006, 2021.
• Das, Tushar, David Simmons, and Mariusz Urbański. “Geometry and Dynamics in Gromov Hy-
perbolic Metric Spaces: With an Emphasis on Non-Proper Settings.” arXiv preprint arXiv:1409.2155,
2014.
• Gekhtman, Ilya, Samuel J. Taylor, and Giulio Tiozzo. “Counting Loxodromics for Hyperbolic
Actions.” arXiv preprint arXiv:1605.02103, 2016.
• Hamann, Matthias. “Group Actions on Metric Spaces: Fixed Points and Free Subgroups.” arXiv
preprint arXiv:1301.6513, 2013.
• He, Kun, Chao Li, Yixiao Yang, Gao Huang, and John E. Hopcroft. “Integrating Large Circular
Kernels into CNNs through Neural Architecture Search.” arXiv preprint arXiv:2107.02451, 2021.
• Katzarkov, Ludmil, Maxim Kontsevich, Tony Pantev, and Tony Yue Yu. “Birational Invariants
from Hodge Structures and Quantum Multiplication.” arXiv preprint arXiv:2508.05105, 2025.
• Kenyon, Richard, Maxim Kontsevich, Oleg Ogievetsky, Cosmin Pohoata, Will Sawin, and Senya
Shlosman. “The Miracle of Integer Eigenvalues.” arXiv preprint arXiv:2401.05291, 2024.
• Kontsevich, Maxim. “Operads and Motives in Deformation Quantization.” arXiv preprint arXiv:math/9904055,
1999.
• Kontsevich, Maxim, and Alexander Odesskii. “When the Fourier Transform Is One Loop Exact?”
arXiv preprint arXiv:2306.02178, 2023.
• Kontsevich, Maxim, and Alexander Odesskii. “Explicit Formulas for Arithmetic Support of Dif-
ferential and Difference Operators.” arXiv preprint arXiv:2505.12480, 2025.
• Kontsevich, Maxim, and Yan Soibelman. “Holomorphic Floer Theory I: Exponential Integrals in
Finite and Infinite Dimensions.” arXiv preprint arXiv:2402.07343, 2024.
• Lin, Shaowei. “Biased Stochastic Approximation.” Blog post, 2020. [Link]
[Link]/posts/2020-12-01-biased-stochastic-approximation/.
• Lin, Shaowei. “Spiking Neural Networks.” In Motivic Information, Path Integrals and Spiking
Networks, 2021. [Link]
• Lin, Shaowei. “Motivic Information, Path Integrals and Spiking Networks.” Blog series, 2020–2021.
[Link]
• Manin, Yuri I. “Neural Codes and Homotopy Types: Mathematical Models of Place Field Recog-
nition.” arXiv preprint arXiv:1501.00897, 2015.
• Manin, Yuri I., and Matilde Marcolli. “Homotopy Theoretic and Categorical Models of Neural
Information Networks.” arXiv preprint arXiv:2006.15136, 2020.
• Marcolli, Matilde. “Motivic Information.” arXiv preprint arXiv:1712.08703, 2017.
• Marcolli, Matilde. “Gamma Spaces and Information.” arXiv preprint arXiv:1807.05314, 2018.

• McMullen, Curtis T. Ergodic Theory, Geometry, and Dynamics. Lecture Notes, Harvard Univer-
sity, 2020. [Link]
[Link].
• Morava, Jack. “Topological Invariants of Some Chemical Reaction Networks.” arXiv preprint
arXiv:1910.12609, 2019.
• Morava, Jack. “Periods for Topological Circle Actions.” arXiv preprint arXiv:2301.05772, 2023.
• Morava, Jack. “Swan–Tate Cohomology of Meromorphic Circle Actions.” arXiv preprint arXiv:2403.19714,
2024.
• Morava, Jack. “Circular Symmetry-Breaking and Topological Noether Currents.” arXiv preprint
arXiv:2407.00672, 2024.
• Morava, Jack. “Notes on δ-Algebras and Prisms in Homotopy Theory.” arXiv preprint arXiv:2401.12336,
2024.
• Morava, Jack. “Boundary Framings for Locally Conformally Symplectic Four-Manifolds.” arXiv
preprint arXiv:2502.05983, 2025.
• Morava, Jack. “On a Complex Topological Orientation for Circle-Equivariant K-Theory.” arXiv
preprint arXiv:2505.21719, 2025.
• Muir, Dylan Richard. “What Role Do Circular Network Structures Play in Neural Networks?”
Answer on Psychology & Neuroscience StackExchange, 2017. [Link]
com/questions/17005/what-role-do-circular-network-structures-play-in-neural-networks.
• Ridella, S., S. Rovetta, and R. Zunino. “Circular Backpropagation Networks for Classification.”
IEEE Transactions on Neural Networks 8, no. 1 (1997): 84–97.
• Serrano, M. Á., Dmitri Krioukov, and Marián Boguñá. “Self-Similarity of Complex Networks and
Hidden Metric Spaces.” Nature Communications 8, no. 1 (2017): 1856.
• Stark, James F. “Could Circular Data Mimic Biological Intelligence and Improve Machine Learn-
ing?” The Quantum Record, 2023. [Link]
could-circular-data-mimic-biological-intelligence-and-improve-machine-learning/.
• Ulmer, Jakob. “Kontsevich’s Cocycle Construction and Quantization of the Loday–Quillen–Tsygan
Theorem.” arXiv preprint arXiv:2506.15210, 2025.
• Yang, Liangwei, Hengrui Zhang, Zihe Song, Jiawei Zhang, Weizhi Zhang, Jing Ma, and Philip S.
Yu. “Cyclic Neural Networks.” arXiv preprint arXiv:2402.03332, 2024.
