Isotelesis AIModel
Abstract
Artificial intelligence has achieved remarkable progress through optimization and statistical learning, but such methods often lack invariant structures that guarantee stability, interpretability, and integration with deep principles of mathematics. Recent advances in arithmetic geometry, motivic information theory, and the geometry of neural networks suggest an alternative foundation for intelligence based on invariants.

The arithmetic Hodge index theorem and Kontsevich's theory of arithmetic supports provide inequalities and support structures that persist under deformation, while motivic zeta functions capture the hidden costs of singularities. Marcolli's motivic information and Manin's homotopy-theoretic neural codes extend information theory into categorical and motivic domains, where entropy and coding acquire topological meaning. Neural architectures such as cyclic, hyperbolic, and spiking networks instantiate these invariants dynamically, implementing stability and memory akin to biological systems.

We further clarify the role of "symbols" in recent work by Cavenaghi, Katzarkov, and Kontsevich, where the term designates algebraic invariants of group actions rather than cognitive tokens. In this paper, we reinterpret such structures metaphorically as analogues of symbolic reasoning in AI.

By integrating arithmetic invariants, motivic entropy, homotopy codes, and invariant-based architectures, we propose a motivic theory of intelligence. This framework views reasoning as the minimization of motivic relative information subject to arithmetic and categorical constraints, unifying cognition, learning, and invariance across mathematics and artificial intelligence.
∗ DBA Sebastian Ruliad, Isotelesis Inc., Mountain View, CA. isotelesis@[Link]
Contents
1 Introduction
8 Toward a Motivic Theory of Intelligence
8.1 Invariants as the Basis of Reasoning
8.2 Motivic Intelligence
8.3 Cognitive Implications
8.4 Future Directions
8.5 Summary
9 Conclusion
B Glossary
B.1 I. Arithmetic and Geometric Invariants
B.2 II. Motivic and Information-Theoretic Frameworks
B.3 III. Neural Architectures and Learning
B.4 IV. Dynamical and Topological Structures
B.5 V. Cross-Cutting Concepts
1 Introduction
Artificial intelligence has been largely guided by statistical learning and optimization frameworks. These approaches, though effective in practice, often lack invariant structures that guarantee stability, interpretability, and integration with deep principles of mathematics and cognition. Recent developments in arithmetic geometry and motivic information theory suggest the possibility of grounding reasoning and learning in more fundamental invariants. The title of this paper, Atoms, Symbols, and Intelligence, alludes to recent work by Cavenaghi, Katzarkov, and Kontsevich,1 where "symbols" designate algebraic invariants of group actions; in our framework, we reinterpret this terminology metaphorically to suggest analogies with symbolic reasoning in AI.

The arithmetic Hodge index theorem, extended to quasi-projective varieties by Abboud,2 introduces local inequalities that capture geometric and arithmetic stability. Kontsevich's theory of arithmetic supports provides another perspective: operators in algebraic and analytic settings carry irreducible "costs" that persist under deformation, functioning as invariants of computational processes.3 At the same time, Marcolli and Manin have argued that information theory can be recast in motivic and categorical terms, linking entropy to motivic measures4 and neural coding to homotopy types.5

Parallel advances in machine learning have highlighted the importance of network geometry. Circular and cyclic architectures more closely resemble biological neural networks than traditional feed-forward models,6 while hyperbolic embeddings provide efficient representations of hierarchical and scale-free structures.7 Spiking neural networks, developed within Shaowei Lin's motivic information framework, demonstrate how relative information minimization can yield biologically plausible learning rules with convergence guarantees.8

This paper integrates these mathematical, informational, and architectural insights into a single framework. We argue that reasoning can be understood as the minimization of motivic relative information subject to arithmetic constraints, and that neural codes serve as the categorical carriers of these invariants. By bringing together the arithmetic Hodge index, motivic entropy, and the geometry of neural networks, we aim to outline a motivic theory of intelligence that unifies cognition, learning, and invariance.
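To fix the intended reading informally, the central claim can be written as a constrained variational principle. The notation below is our own schematic shorthand (a motivic relative-information functional \(I_{\mathrm{mot}}\) and a constraint class \(\mathcal{C}\)), not notation taken from the cited works:
\[
  \text{reasoning} \;\sim\; \operatorname*{arg\,min}_{q \in \mathcal{C}} \; I_{\mathrm{mot}}(q \,\Vert\, p),
\]
where \(p\) is a reference code or prior, \(q\) ranges over admissible codes, and \(\mathcal{C}\) records the arithmetic and categorical constraints (inequalities, supports, poles) developed in the following sections.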
1. Leonardo F. Cavenaghi, Ludmil Katzarkov, and Maxim Kontsevich, "Atoms Meet Symbols," arXiv preprint arXiv:2509.15831 (2025).
2. Marc Abboud, "A Local Version of the Arithmetic Hodge Index Theorem over Quasi-Projective Varieties," arXiv preprint arXiv:2503.14099 (2025).
3. Maxim Kontsevich and Alexander Odesskii, "Explicit Formulas for Arithmetic Support of Differential and Difference Operators," arXiv preprint arXiv:2505.12480 (2025).
4. Matilde Marcolli, "Motivic Information," arXiv preprint arXiv:1712.08703 (2017).
5. Yuri I. Manin, "Neural Codes and Homotopy Types: Mathematical Models of Place Field Recognition," arXiv preprint arXiv:1501.00897 (2015).
6. Liangwei Yang, Hengrui Zhang, Zihe Song, Jiawei Zhang, Weizhi Zhang, Jing Ma, and Philip S. Yu, "Cyclic Neural Networks," arXiv preprint arXiv:2402.03332 (2024).
7. M. Á. Serrano, Dmitri Krioukov, and Marián Boguñá, "Self-Similarity of Complex Networks and Hidden Metric Spaces," Nature Communications 8, no. 1 (2017): 1856.
8. Shaowei Lin, "Spiking Neural Networks," Motivic Information, Path Integrals and Spiking Networks (2021), [Link]
2 Arithmetic and Motivic Background
2.1 The Arithmetic Hodge Index Theorem
The classical Hodge index theorem relates intersection forms on algebraic surfaces to constraints on quadratic forms. In the arithmetic setting, intersection theory gains new complexity, linking curvature, stability, and number-theoretic invariants. Abboud's local version of the arithmetic Hodge index theorem formulates inequalities for quasi-projective varieties, establishing local arithmetic invariants that constrain stability across geometric and arithmetic dimensions.9 These inequalities can be interpreted as analogues of "stability costs" in reasoning processes, bounding inference structures in ways comparable to physical or informational energy.
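For orientation, the classical surface case that arithmetic and local versions such as Abboud's refine can be stated as an inequality (our formulation of the standard result, not Abboud's local statement): for a smooth projective surface \(X\), an ample divisor \(H\), and any divisor \(D\),
\[
  (D \cdot H)^2 \;\ge\; (D \cdot D)\,(H \cdot H),
\]
equivalently, \(D \cdot H = 0\) implies \(D^2 \le 0\), with equality only for numerically trivial \(D\). It is this kind of sign constraint on an intersection form that the text above reads as a "stability cost."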
2.4 Summary
The arithmetic Hodge index theorem, arithmetic supports, and motivic zeta functions together constitute a triad of invariants: inequalities, supports, and poles. These mathematical structures, though developed in pure arithmetic geometry, resonate with the principles of stability, irreducibility, and transition thresholds found in cognition and learning. They provide the invariant foundations on which motivic information theory builds, as we explore in the next section.
9. Marc Abboud, "A Local Version of the Arithmetic Hodge Index Theorem over Quasi-Projective Varieties," arXiv preprint arXiv:2503.14099 (2025).
10. Maxim Kontsevich and Alexander Odesskii, "Explicit Formulas for Arithmetic Support of Differential and Difference Operators," arXiv preprint arXiv:2505.12480 (2025).
11. Emmanuel Bultot and Johannes Nicaise, "Computing Motivic Zeta Functions on Log Smooth Models," Mathematische Zeitschrift 295, no. 3–4 (2020): 1279–1311.
12. Khoa Bang Pham, "The Integral Identity Conjecture in Motivic Homotopy Theory," arXiv preprint arXiv:2411.19699 (2024).
3 Motivic Information Theory
3.1 Entropy and Information Loss in Motives
The triad of invariants introduced in arithmetic geometry—inequalities, supports, and poles—finds a natural continuation in information theory. Classical approaches, grounded in Shannon's entropy, treat information as a numerical measure of uncertainty. While effective in communication theory, this view lacks the structural invariance revealed by arithmetic geometry. Marcolli proposed a motivic generalization in which entropy is valued not in the reals but in Grothendieck rings of varieties, allowing information to be expressed as a motivic invariant.13 In this framework, information loss is not merely numerical subtraction but a categorical morphism, aligning informational processes with the invariance principles of arithmetic geometry.
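For reference, we record the two standard ingredients that this construction combines: real-valued Shannon entropy and the Grothendieck ring of varieties \(K_0(\mathrm{Var}_k)\) in which motivic measures take values. These are textbook definitions only; Marcolli's precise motivic information functional is developed in the cited paper.
\[
  H(p) \;=\; -\sum_i p_i \log p_i,
\]
\[
  [X] \;=\; [Z] + [X \setminus Z] \ \text{ for } Z \subset X \text{ closed}, \qquad [X]\cdot[Y] \;=\; [X \times Y] \quad \text{in } K_0(\mathrm{Var}_k).
\]
Replacing numbers by classes means that a "loss of information" can be witnessed by a decomposition relation such as the scissor relation above, rather than by subtracting two real numbers.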
In Lin's spiking networks, learning is driven not only by error gradients but also by structural constraints, ensuring convergence and biological plausibility.17
3.5 Summary
Marcolli’s motivic entropy, the geometry of information, and Lin’s program of relative
information together extend the concept of invariance from geometry to information
and learning. They suggest that reasoning should be understood as the minimization
of motivic relative information subject to arithmetic constraints. In this reformulation,
information becomes a structural invariant, positioning it as a natural foundation for a
motivic theory of intelligence.
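We do not attempt to compute a motivic relative information here. As a purely classical stand-in for the template "minimize relative information subject to constraints," the following Python sketch performs an I-projection: it minimizes the ordinary Kullback–Leibler divergence KL(q ∥ p) over distributions on a finite set, subject to a linear expectation constraint E_q[f] = c. The function names and the toy data are ours, chosen only to illustrate the shape of the optimization.

    import numpy as np

    def i_projection(p, f, c, lo=-50.0, hi=50.0, tol=1e-10):
        """Minimize KL(q || p) over distributions q on a finite set,
        subject to the linear constraint E_q[f] = c.
        The minimizer is an exponential tilt q_lam(x) ~ p(x) * exp(lam * f(x));
        lam is found by bisection, since E_{q_lam}[f] is increasing in lam."""
        def tilt(lam):
            w = p * np.exp(lam * f)
            return w / w.sum()
        a, b = lo, hi
        while b - a > tol:
            m = 0.5 * (a + b)
            if tilt(m) @ f < c:
                a = m    # constrained mean too small: increase lam
            else:
                b = m
        return tilt(0.5 * (a + b))

    # toy example: reference distribution p, feature f, target mean c
    p = np.array([0.5, 0.3, 0.2])
    f = np.array([0.0, 1.0, 2.0])
    q = i_projection(p, f, c=1.0)
    print(q, q @ f, np.sum(q * np.log(q / p)))

The constrained minimizer always has this exponential-family form; the motivic proposal, as we read it, asks what replaces the real-valued divergence when probabilities are upgraded to motivic measures.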
4.4 δ-Algebras and Homotopy-Theoretic Foundations
Jack Morava’s recent work on δ-algebras and prisms in homotopy theory provides a
further foundation for categorical and motivic approaches to information.21 δ-algebras
encode deep structural relations between cohomology and arithmetic, suggesting that
the invariants governing cognition may be rooted in low-dimensional algebraic topology.
In this sense, Morava’s work extends the motivic program by linking homotopy theory,
arithmetic, and information invariants into a single framework.
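For concreteness, we record the standard δ-ring axioms (in the sense of Joyal and of Bhatt–Scholze's prismatic theory); we do not claim this is the exact formulation used in Morava's notes. For a fixed prime \(p\), a δ-ring is a commutative ring \(A\) with a map \(\delta : A \to A\) such that
\[
  \delta(1) = 0, \qquad \delta(xy) = x^{p}\,\delta(y) + y^{p}\,\delta(x) + p\,\delta(x)\,\delta(y),
\]
\[
  \delta(x+y) = \delta(x) + \delta(y) - \sum_{i=1}^{p-1} \tfrac{1}{p}\binom{p}{i}\, x^{i} y^{p-i},
\]
so that \(\varphi(x) := x^{p} + p\,\delta(x)\) is a ring endomorphism lifting the Frobenius modulo \(p\). It is this tight coupling between an algebraic operation and an arithmetic symmetry that the remarks above point to.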
4.5 Summary
Neural codes, categorical network models, gamma spaces, and δ-algebras together reveal how cognition can be described in terms of homotopy and categorical invariants. These approaches extend motivic information theory into the biological and computational domain, suggesting that learning is mediated by structures that are as much topological as they are statistical. This sets the stage for the next section, where geometric and circular neural architectures embody these invariants in network design.
5.3 Riemannian Geometry in Neural Networks
Geometric structures extend beyond circularity. A line of research at NeurIPS explored
how Riemannian geometry shapes learning dynamics, showing that curvature-sensitive
optimization can improve generalization and robustness.25 By embedding networks in
curved spaces, one captures invariants analogous to those of differential geometry, where
geodesics and curvature guide learning trajectories.
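The cited work develops this in far greater generality; as a minimal, self-contained illustration of curvature-aware optimization (a toy sketch of ours, not the method of that paper), the following performs Riemannian gradient descent on the unit sphere: the Euclidean gradient is projected onto the tangent space at the current point, and the update is retracted back onto the manifold by normalization.

    import numpy as np

    def riemannian_gd_sphere(grad_f, x0, lr=0.1, steps=300):
        """Riemannian gradient descent on the unit sphere S^{n-1}."""
        x = x0 / np.linalg.norm(x0)
        for _ in range(steps):
            g = grad_f(x)
            g_tan = g - (g @ x) * x      # project onto the tangent space at x
            x = x - lr * g_tan           # step in the tangent direction
            x = x / np.linalg.norm(x)    # retract back onto the sphere
        return x

    # toy objective f(x) = x^T A x, whose constrained minimizer on the sphere
    # is the eigenvector of A with the smallest eigenvalue
    A = np.diag([3.0, 1.0, 0.25])
    x_star = riemannian_gd_sphere(lambda x: 2 * A @ x, np.array([1.0, 1.0, 1.0]))
    print(x_star)    # approximately ±(0, 0, 1)

The geodesics and curvature mentioned above enter through the choice of projection and retraction; richer manifolds change those two ingredients but not the template.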
5.5 Summary
Circular, recurrent, and geometric architectures reveal how invariants of symmetry and feedback are embedded in network design. Circular kernels, cyclic backpropagation, and Riemannian structures extend the motivic and homotopy-theoretic perspectives into practical architectures, showing that invariance governs not only abstract theory but also concrete implementations. In the next section, we turn to hyperbolic and dynamical perspectives, where invariants of scale and action further shape learning.
6.2 Loxodromic Dynamics and Counting Invariants
Beyond static embeddings, dynamical aspects of hyperbolic geometry play a role. Gekhtman, Taylor, and Tiozzo investigated the distribution of loxodromic elements in hyperbolic group actions, showing how counting such dynamics provides invariant measures of complexity.29 In analogy, reasoning processes may be modeled as loxodromic trajectories: iterative transformations that never collapse into trivial cycles but instead generate new structural information with each step.
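To make the analogy concrete, here is a small numerical illustration (ours, not the counting results of the cited paper): in the upper half-plane model of the hyperbolic plane, the Möbius map z ↦ λz with λ > 1 is a loxodromic isometry whose axis is the imaginary axis. Each application translates points on the axis by the fixed hyperbolic distance log λ, so the orbit never returns to an earlier state.

    import numpy as np

    def hyperbolic_distance(z, w):
        """Distance in the upper half-plane model of the hyperbolic plane."""
        return np.arccosh(1 + abs(z - w) ** 2 / (2 * z.imag * w.imag))

    lam = 2.5
    z = 1j    # a point on the axis of the isometry z -> lam * z
    for step in range(5):
        z_next = lam * z
        # each step moves the point by exactly log(lam) in hyperbolic distance
        print(step, hyperbolic_distance(z, z_next), np.log(lam))
        z = z_next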
6.6 Summary
Hyperbolic embeddings, loxodromic dynamics, non-proper hyperbolic structures, group actions, and topological invariants together form a dynamical perspective on invariance. They extend motivic and categorical approaches into the geometry of action and transformation, offering tools for modeling reasoning as a balance between stability and generative dynamism. In the next section, we connect these dynamical invariants to arithmetic and quantum structures through Kontsevich's and related work.
7.4 Operads, Motives, and Deformation Quantization
Kontsevich's earlier contributions to deformation quantization linked operads and motives, providing categorical frameworks for encoding deformation invariants.40 These ideas foreshadow the motivic approach to information: operadic structures parallel neural architectures, while motives encode the invariants preserved across transformations. The operadic viewpoint enriches AI by suggesting that learning rules themselves can be composed and deformed while preserving higher-order invariants.
7.7 Summary
Kontsevich’s body of work, spanning Fourier transforms, Floer theory, integer spectra,
operads, cocycles, and symbolic frameworks, exemplifies the unification of arithmetic,
geometry, and quantum ideas. These contributions reinforce the central claim of this
paper: reasoning and learning can be grounded in invariant structures that persist across
deformations, whether geometric, algebraic, or informational.
supports, poles, homotopy types, categorical morphisms, circular symmetries, hyperbolic embeddings, and integer spectra all provide conserved structures that persist across transformations. In cognition and AI, these invariants play the role of "anchors," ensuring that learning does not drift into instability or collapse into triviality.
3. Homotopy and categorical structures: Developing categorical learning architectures that preserve homotopy types across layers.49
8.5 Summary
Motivic AI offers a framework in which reasoning is guided by the invariants of arithmetic,
geometry, topology, and information. By grounding learning in structures that persist
across transformation, it provides stability, interpretability, and integration with deep
mathematical principles. This synthesis points toward a new generation of AI, where
invariants replace mere optimization as the foundation of intelligence.
9 Conclusion
This paper has outlined a framework for Motivic AI, grounded in the invariants of arithmetic geometry, motivic information theory, homotopy structures, and geometric neural architectures. We have shown how diverse mathematical advances, from Abboud's arithmetic Hodge index theorem51 and Kontsevich's arithmetic supports52 to Marcolli's motivic entropy53 and Manin's neural codes,54 converge on the idea that reasoning and learning can be understood as processes governed by invariants rather than mere optimization.

By integrating arithmetic, categorical, geometric, and dynamical perspectives, motivic AI provides a unified approach to stability, interpretability, and cognition. The triad of arithmetic invariants (inequalities, supports, poles), the motivic extensions of entropy and information, the homotopy-theoretic models of neural codes, and the architectural role of circular and hyperbolic symmetries all reinforce the same principle: intelligence evolves and operates by conserving invariants across transformations.
Kontsevich's work exemplifies this unification, bridging Fourier analysis,55 Floer theory,56 integer eigenvalues,57 and operadic motives58 with birational frameworks of atoms and symbols.59 In the original work, "symbols" designate modular invariants of group actions. In our reinterpretation, we extend this terminology metaphorically to suggest an analogy with symbolic reasoning in AI, where global invariants and local symbolic structures interact to stabilize cognitive processes. For further clarification of this distinction, see Appendix A.

Looking forward, motivic AI suggests new pathways for research: implementing invariant-based architectures, formalizing motivic entropy in practice, and testing the cognitive plausibility of homotopy-theoretic neural models. By grounding intelligence in invariants, we move toward a vision of AI that is not only powerful but also principled, interpretable, and deeply connected to the structures of mathematics and cognition.

49. Yuri I. Manin and Matilde Marcolli, "Homotopy Theoretic and Categorical Models of Neural Information Networks," arXiv preprint arXiv:2006.15136 (2020); Matilde Marcolli, "Gamma Spaces and Information," arXiv preprint arXiv:1807.05314 (2018).
50. Ilya Gekhtman, Samuel J. Taylor, and Giulio Tiozzo, "Counting Loxodromics for Hyperbolic Actions," arXiv preprint arXiv:1605.02103 (2016); Maxim Kontsevich and Yan Soibelman, "Holomorphic Floer Theory I: Exponential Integrals in Finite and Infinite Dimensions," arXiv preprint arXiv:2402.07343 (2024).
51. Marc Abboud, "A Local Version of the Arithmetic Hodge Index Theorem over Quasi-Projective Varieties," arXiv preprint arXiv:2503.14099 (2025).
52. Maxim Kontsevich and Alexander Odesskii, "Explicit Formulas for Arithmetic Support of Differential and Difference Operators," arXiv preprint arXiv:2505.12480 (2025).
53. Matilde Marcolli, "Motivic Information," arXiv preprint arXiv:1712.08703 (2017).
54. Yuri I. Manin, "Neural Codes and Homotopy Types: Mathematical Models of Place Field Recognition," arXiv preprint arXiv:1501.00897 (2015).
55. Maxim Kontsevich and Alexander Odesskii, "When the Fourier Transform Is One Loop Exact?," arXiv preprint arXiv:2306.02178 (2023).
In this sense, the “symbols” of the earlier framework can be interpreted as structural
encodings of these invariants. They are not external additions but algebraic witnesses to
stability conditions in geometry.
A.5 Summary
Across these works, “symbols” consistently denote algebraic or categorical invariants
rather than linguistic or cognitive tokens. They mediate between:
This appendix clarifies the mathematical meaning of "symbols" and situates them within a broader landscape. While this paper reinterprets such invariants for artificial intelligence, the reinterpretation is ours; the original mathematical works establish symbols as structural invariants linking geometry, dynamics, and information.
62. Curtis T. McMullen, Ergodic Theory, Geometry, and Dynamics (Lecture Notes, Harvard University, 2020). [Link]
63. Noémie Combe and Yuri I. Manin, "F-Manifolds and Geometry of Information," arXiv preprint arXiv:2004.08808 (2020).
B Glossary
B.1 I. Arithmetic and Geometric Invariants
Arithmetic Hodge Index Theorem. An extension of the classical Hodge index theorem to arithmetic surfaces and higher-dimensional varieties. Provides inequalities governing the intersection form in arithmetic intersection theory. Recent local versions (Abboud, 2025) apply to quasi-projective varieties, yielding stability constraints interpreted as "invariant costs."

Arithmetic Supports. Introduced by Kontsevich and Odesskii (2025). Differential and difference operators possess supports in arithmetic geometry, encoding irreducible structural features that persist under deformation. Serve as analogues of spectra in non-commutative geometry.

Atoms. Birational invariants developed by Katzarkov, Kontsevich, Pantev, and Yu (2025). Constructed from Hodge structures and quantum multiplication, atoms provide obstructions to rationality. Invariant "building blocks" for classifying varieties.

Symbols (Mathematical sense). In Atoms Meet Symbols (Cavenaghi, Katzarkov, Kontsevich, 2025), symbols refer not to cognitive or linguistic tokens but to modular symbols and related algebraic encodings of group actions. Serve as local combinatorial invariants complementing global invariants (atoms).
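For background only: in the classical theory alluded to here, a modular symbol \(\{\alpha, \beta\}\) with \(\alpha, \beta \in \mathbb{P}^1(\mathbb{Q})\) is a relative homology class on a modular curve, pairing with a weight-two cusp form \(f\) by the period integral
\[
  \langle \{\alpha, \beta\},\, f \rangle \;=\; \int_{\alpha}^{\beta} f(z)\, dz
\]
(up to normalization conventions). Whether the symbols of Cavenaghi, Katzarkov, and Kontsevich coincide with this classical picture is a matter for the cited paper; the formula is recorded here only as orientation.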
Motivic Zeta Functions. Introduced by Denef and Loeser; encode arithmetic and geometric data of singularities. Bultot and Nicaise (2020) developed computational methods on log smooth models. Poles of motivic zeta functions often relate to monodromy conjectures.

Integral Identity Conjecture. Extended by Pham (2024) in motivic homotopy theory. Connects motivic integration with Donaldson–Thomas invariants, suggesting deep links between enumerative geometry and motivic entropy.
Neural Codes and Homotopy Types. Manin (2015) modeled the recognition of place fields using homotopy theory. Neural codes correspond to topological invariants of neural activity.

F-Manifolds and Information Geometry. Combe and Manin (2020) generalized Frobenius manifolds to F-manifolds in the setting of probability distributions. Encodes algebraic structures underlying information flow and statistical manifolds.
groupoid structures.

Circle Actions in Topology. Explored extensively by Morava (2023–2025). Circle-equivariant K-theory, Swan–Tate cohomology, and locally conformally symplectic manifolds reveal deep symmetries in topology.
References
• Abboud, Marc. "A Local Version of the Arithmetic Hodge Index Theorem over Quasi-Projective Varieties." arXiv preprint arXiv:2503.14099, 2025.
• Bonnabel, Silvère, and Rodolphe Sepulchre. "Principles of Riemannian Geometry in Neural Networks." In Advances in Neural Information Processing Systems, vol. 30 (2017).
• Bultot, Emmanuel, and Johannes Nicaise. "Computing Motivic Zeta Functions on Log Smooth Models." Mathematische Zeitschrift 295, no. 3–4 (2020): 1279–1311.
• Cavenaghi, Leonardo F., Ludmil Katzarkov, and Maxim Kontsevich. "Atoms Meet Symbols." arXiv preprint arXiv:2509.15831, 2025.
• Combe, Noémie, and Yuri I. Manin. "F-Manifolds and Geometry of Information." arXiv preprint arXiv:2004.08808, 2020.
• Combe, Noémie, Yuri I. Manin, and Matilde Marcolli. "Geometry of Information: Classical and Quantum Aspects." arXiv preprint arXiv:2107.08006, 2021.
• Das, Tushar, David Simmons, and Mariusz Urbański. "Geometry and Dynamics in Gromov Hyperbolic Metric Spaces: With an Emphasis on Non-Proper Settings." arXiv preprint arXiv:1409.2155, 2014.
• Gekhtman, Ilya, Samuel J. Taylor, and Giulio Tiozzo. "Counting Loxodromics for Hyperbolic Actions." arXiv preprint arXiv:1605.02103, 2016.
• Hamann, Matthias. "Group Actions on Metric Spaces: Fixed Points and Free Subgroups." arXiv preprint arXiv:1301.6513, 2013.
• He, Kun, Chao Li, Yixiao Yang, Gao Huang, and John E. Hopcroft. "Integrating Large Circular Kernels into CNNs through Neural Architecture Search." arXiv preprint arXiv:2107.02451, 2021.
• Katzarkov, Ludmil, Maxim Kontsevich, Tony Pantev, and Tony Yue Yu. "Birational Invariants from Hodge Structures and Quantum Multiplication." arXiv preprint arXiv:2508.05105, 2025.
• Kenyon, Richard, Maxim Kontsevich, Oleg Ogievetsky, Cosmin Pohoata, Will Sawin, and Senya Shlosman. "The Miracle of Integer Eigenvalues." arXiv preprint arXiv:2401.05291, 2024.
• Kontsevich, Maxim. "Operads and Motives in Deformation Quantization." arXiv preprint arXiv:math/9904055, 1999.
• Kontsevich, Maxim, and Alexander Odesskii. "When the Fourier Transform Is One Loop Exact?" arXiv preprint arXiv:2306.02178, 2023.
• Kontsevich, Maxim, and Alexander Odesskii. "Explicit Formulas for Arithmetic Support of Differential and Difference Operators." arXiv preprint arXiv:2505.12480, 2025.
• Kontsevich, Maxim, and Yan Soibelman. "Holomorphic Floer Theory I: Exponential Integrals in Finite and Infinite Dimensions." arXiv preprint arXiv:2402.07343, 2024.
• Lin, Shaowei. "Biased Stochastic Approximation." Blog post, 2020. [Link]/posts/2020-12-01-biased-stochastic-approximation/.
• Lin, Shaowei. "Spiking Neural Networks." In Motivic Information, Path Integrals and Spiking Networks, 2021. [Link]
• Lin, Shaowei. "Motivic Information, Path Integrals and Spiking Networks." Blog series, 2020–2021. [Link]
• Manin, Yuri I. "Neural Codes and Homotopy Types: Mathematical Models of Place Field Recognition." arXiv preprint arXiv:1501.00897, 2015.
• Manin, Yuri I., and Matilde Marcolli. "Homotopy Theoretic and Categorical Models of Neural Information Networks." arXiv preprint arXiv:2006.15136, 2020.
• Marcolli, Matilde. "Motivic Information." arXiv preprint arXiv:1712.08703, 2017.
• Marcolli, Matilde. "Gamma Spaces and Information." arXiv preprint arXiv:1807.05314, 2018.
• McMullen, Curtis T. Ergodic Theory, Geometry, and Dynamics. Lecture Notes, Harvard University, 2020. [Link]
• Morava, Jack. "Topological Invariants of Some Chemical Reaction Networks." arXiv preprint arXiv:1910.12609, 2019.
• Morava, Jack. "Periods for Topological Circle Actions." arXiv preprint arXiv:2301.05772, 2023.
• Morava, Jack. "Swan–Tate Cohomology of Meromorphic Circle Actions." arXiv preprint arXiv:2403.19714, 2024.
• Morava, Jack. "Circular Symmetry-Breaking and Topological Noether Currents." arXiv preprint arXiv:2407.00672, 2024.
• Morava, Jack. "Notes on δ-Algebras and Prisms in Homotopy Theory." arXiv preprint arXiv:2401.12336, 2024.
• Morava, Jack. "Boundary Framings for Locally Conformally Symplectic Four-Manifolds." arXiv preprint arXiv:2502.05983, 2025.
• Morava, Jack. "On a Complex Topological Orientation for Circle-Equivariant K-Theory." arXiv preprint arXiv:2505.21719, 2025.
• Muir, Dylan Richard. "What Role Do Circular Network Structures Play in Neural Networks?" Answer on Psychology & Neuroscience StackExchange, 2017. [Link]com/questions/17005/what-role-do-circular-network-structures-play-in-neural-networks.
• Ridella, S., S. Rovetta, and R. Zunino. "Circular Backpropagation Networks for Classification." IEEE Transactions on Neural Networks 8, no. 1 (1997): 84–97.
• Serrano, M. Á., Dmitri Krioukov, and Marián Boguñá. "Self-Similarity of Complex Networks and Hidden Metric Spaces." Nature Communications 8, no. 1 (2017): 1856.
• Stark, James F. "Could Circular Data Mimic Biological Intelligence and Improve Machine Learning?" The Quantum Record, 2023. [Link]could-circular-data-mimic-biological-intelligence-and-improve-machine-learning/.
• Ulmer, Jakob. "Kontsevich's Cocycle Construction and Quantization of the Loday–Quillen–Tsygan Theorem." arXiv preprint arXiv:2506.15210, 2025.
• Yang, Liangwei, Hengrui Zhang, Zihe Song, Jiawei Zhang, Weizhi Zhang, Jing Ma, and Philip S. Yu. "Cyclic Neural Networks." arXiv preprint arXiv:2402.03332, 2024.