Literal Labs is developing next-generation AI algorithms built on Logic-Based Networks (LBNs), a fundamentally different approach to AI that replaces opaque matrix multiplication with explicit logical structure. LBNs are composed of symbolic logic expressions that run faster, utilise far less energy, and are far more explainable than their neural network counterparts.
LBNs represent knowledge as compositions of propositional logic rather than continuous numerical weights. This yields models that are:
- Computationally efficient
- Highly explainable and interpretable
- Robust on small or noisy datasets
- Well suited to edge, embedded, and battery-powered deployment
LBNs are particularly effective for structured data such as time series, sensor streams, tabular data, and symbolic signals, where classical deep learning often overfits or consumes excessive resources. Literal Labs occasionally releases benchmarking data.
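To make the idea of operating on structured data with propositional logic concrete, here is a minimal sketch (not Literal Labs' actual pipeline) of one common preprocessing step: binarising a numeric sensor reading into boolean literals via fixed thresholds, over which logical clauses can then be evaluated directly. The function name and threshold values are illustrative assumptions.

```python
def binarise(reading: float, thresholds=(0.25, 0.5, 0.75)) -> list[bool]:
    """Encode one numeric reading as threshold-comparison literals.

    Hypothetical example: thresholds are chosen for illustration only.
    """
    return [reading >= t for t in thresholds]

# A short sensor stream becomes a sequence of bit-vectors,
# e.g. 0.6 encodes to [True, True, False].
readings = [0.1, 0.6, 0.9]
encoded = [binarise(r) for r in readings]
```

Each reading is now a vector of boolean propositions ("reading ≥ 0.5", etc.), the kind of input a logic-based model consumes instead of continuous weights.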
Literal Labs’ LBN implementations draw on and extend several logic-based learning techniques, including Tsetlin Machines. The Tsetlin Machine is a rule-based learning algorithm that constructs human-readable logical clauses using simple automata. It offers strong performance with minimal compute and memory, while retaining full explainability.
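The core inference idea behind a Tsetlin Machine can be sketched in a few lines: each clause is a conjunction (AND) of literals — input features or their negations — selected by the clause's automata, and clauses vote for or against a class. The sketch below illustrates that inference step only (training the automata is omitted), and is not Literal Labs' implementation; the function names and toy clauses are assumptions for illustration.

```python
def eval_clause(included_literals, x):
    """A clause is an AND over its included literals.

    included_literals: list of (feature_index, negated) pairs,
    i.e. the literals the clause's automata chose to include.
    """
    return all((not x[i]) if negated else x[i]
               for i, negated in included_literals)

def classify(pos_clauses, neg_clauses, x):
    """Class score = votes for minus votes against, thresholded at 0."""
    votes = (sum(eval_clause(c, x) for c in pos_clauses)
             - sum(eval_clause(c, x) for c in neg_clauses))
    return votes >= 0

# Toy example: the clause "x0 AND NOT x1" votes for the class,
# and the clause "x1" votes against it.
pos = [[(0, False), (1, True)]]
neg = [[(1, False)]]
print(classify(pos, neg, [True, False]))   # → True
```

Because each clause is a readable logical expression, a prediction can be explained directly by listing the clauses that fired — this is where the explainability of the approach comes from.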
Literal Labs was founded to remove the bottlenecks holding AI back from real-world deployment — excessive energy use, capital cost, and infrastructure dependence. By returning to first principles, we are making AI that is efficient, explainable, and deployable everywhere it is needed.