Pi Day 2026: Formulas, Series, and Plots for π

Introduction

  • Happy Pi Day! Today (3/14) we celebrate the most famous mathematical constant: π ≈ 3.141592653589793…
  • π is irrational and transcendental, appears in circles, waves, probability, physics, and even random walks.
  • Raku (with its built-in π constant, excellent rational-number support, lazy lists, and Unicode operators) makes experimenting with π easy and enjoyable.
  • In this blog post (notebook) we explore a selection of formulas and algorithms.

0. Setup

use Math::NumberTheory;
use BigRoot;
use Data::Importers;
use Data::Generators;
use Data::Summarizers;
use Image::Markup::Utilities;
use Graphviz::DOT::Chessboard;
use Data::Reshapers;
use JavaScript::D3;
use JavaScript::D3::Utilities;

D3.js

#%javascript
require.config({
paths: {
d3: 'https://d3js.org/d3.v7.min'
}});
require(['d3'], function(d3) {
console.log(d3);
});
my $title-color = 'Ivory';
my $stroke-color = 'SlateGray';
my $background = '#1F1F1F';

1. Continued fraction approximation

The built-in Raku constant pi (or π) is fairly low precision:

say π.fmt('%.25f')
# 3.1415926535897930000000000

One way to remedy that is to use continued fractions. For example, using the terms on the first sequence line of the On-line Encyclopedia of Integer Sequences (OEIS) entry A001203 produces an approximation of π with precision 56:

my @s = 3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, 2, 1, 1, 2, 2, 2, 2, 1, 84, 2, 1, 1, 15, 3, 13, 1, 4, 2, 6, 6, 99, 1, 2, 2, 6, 3, 5, 1, 1, 6, 8, 1, 7, 1, 2, 3, 7, 1, 2, 1, 1, 12, 1, 1, 1, 3, 1, 1, 8, 1, 1, 2, 1, 6, 1, 1, 5, 2, 2, 3, 1, 2, 4, 4, 16, 1, 161, 45, 1, 22, 1, 2, 2, 1, 4, 1, 2, 24, 1, 2, 1, 3, 1, 2, 1;
my $pi56 = from-continued-fraction(@s».FatRat.List);
# 3.14159265358979323846264338327950288419716939937510582097
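For intuition about what from-continued-fraction does, the same kind of convergent can be folded by hand, right to left, using only core Raku (a sketch; the 10-term slice is an arbitrary choice):

```raku
# Fold a0 + 1/(a1 + 1/(a2 + ...)) from the innermost term outward
my @t = 3, 7, 15, 1, 292, 1, 1, 1, 2, 1;
my $cf = @t».FatRat.reverse.reduce(-> $acc, $a { $a + 1 / $acc });
say $cf.nude;   # (1146408 364913) — the classic 10-term convergent of π
```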

Here we verify the precision using Wolfram Language:

"wolframscript -code 'N[Pi, 100] - $pi56'"
andthen .&shell(:out)
andthen .out.slurp(:close)
# 0``56.

More details can be found on the Wolfram MathWorld page “Pi Continued Fraction”, [EW1].


2. Continued fraction terms plots

It is interesting to plot the terms of the continued fraction of π.

First we ingest more π-terms from OEIS A001203 (20k terms):

my @ds = data-import('https://oeis.org/A001203/b001203.txt').split(/\s/)».Int.rotor(2);
my @terms = @ds».tail;
@terms.elems
# 20000

Here is the summary:

sink records-summary(@terms)
# +-------------------+
# | numerical |
# +-------------------+
# | 1st-Qu => 1 |
# | Median => 2 |
# | Min => 1 |
# | Max => 20776 |
# | Mean => 12.6809 |
# | 3rd-Qu => 5 |
# +-------------------+

Here is an array plot of the first 128 terms of the continued fraction approximating π:

#% html
my @mat = |@terms.head(128)».&integer-digits(:2base);
my $max-digits = @mat».elems.max;
@mat .= map({ [|(0 xx ($max-digits - $_.elems)), |$_] });
dot-matrix-plot(transpose(@mat), size => 10):svg

Next, we show the Pareto principle manifestation for the continued fraction terms. First we observe that the terms have a distribution similar to Benford’s law:

#% js
my @tally-pi = tally(@terms).sort(-*.value).head(16) <</>> @terms.elems;
my @terms-b = random-variate(BenfordDistribution.new(:10base), 2_000);
my @tally-b = tally(@terms-b).sort(-*.value).head(16) <</>> @terms-b.elems;
js-d3-bar-chart(
[
|@tally-pi.map({ %( x => $_.key, y => $_.value, group => 'π') }),
|@tally-b.map({ %( x => $_.key, y => $_.value, group => 'Benford') })
],
plot-label => "Pi continued fraction terms vs. Benford's law",
:$title-color,
:$background)

Here is the Pareto principle plot — ≈5% of the unique term values correspond to ≈80% of the terms:

#% js
js-d3-list-line-plot(
pareto-principle-statistic(@terms),
plot-label => "Pareto principle statistic for Pi continued fraction terms",
:$title-color,
:$background,
stroke-width => 5,
:grid-lines
)

3. Classic Infinite Series

There are many ways to express π as an infinite sum — some converge slowly, others surprisingly fast.

Leibniz–Gregory series (1671; known to Madhava centuries earlier): π/4 = 1 − 1/3 + 1/5 − 1/7 + ⋯

Raku implementation:

sub pi-leibniz($n) {
4 * [+] map { ($_ %% 2 ?? 1 !! -1) / (2 * $_.FatRat + 1) }, 0 ..^ $n
}
my $piLeibniz = pi-leibniz(1_000);
# 3.140592653839792925963596502869395970451389330779724489367457783541907931239747608265172332007670207231403885276038710899938066629552214564551237742887150050440512339302537072825852760246628025562008569471700451065826106184744099667808080815231833582150382088582680381403109153574884416966097481526954707518119416184546424446286573712097944309435229550466609113881892172898692240992052089578302460852737674933105951137782047028552762288434104643076549100475536363928011329215789260496788581009721784276311248084584199773204673225752150684898958557383759585526225507807731149851003571219339536433193219280858501643712664329591936448794359666472018649604860641722241707730107406546936464362178479780167090703126423645364670050100083168338273868059379722964105943903324595829044270168232219388683725629678859726914882606728649659763620568632099776069203461323565260334137877

Verify with Wolfram Language (again):

"wolframscript -code 'N[Pi, 1000] - $piLeibniz'"
andthen .&shell(:out)
andthen .out.slurp(:close)
# 0.000999999750000312499...814206`866.9999998914263

Nilakantha series (faster convergence): π = 3 + 4/(2·3·4) − 4/(4·5·6) + 4/(6·7·8) − ⋯

Raku:

sub pi-nilakantha($n) {
3 + [+] map {
($_ %% 2 ?? -1 !! 1 ) * 4 / ((2 * $_.FatRat) * (2 * $_ + 1) * (2 * $_ + 2))
}, 1 .. $n
}
pi-nilakantha(1_000);
# 3.141592653340542051900128736253203567152539255317954874674304859504426172618558702218695071137605738966036069683335561974900086119307836254205910905806190030949758215864755464129701335459521079534522811851010296642538249613529207613335816447914992502190861349451746347920350033634355181084537761886275546599078437173552420948534950023442771396391252038722980428723971632669306434394851189528826699233048019261441283970866004550291393472342649870962106821115715774722114776992400455398838055772839725805047379519366309217982783671029012753365224924699602163737619311405432798527164991008945233085366633073462699045511265528492985424805854418596455931463431855615794431867539190155631617285217459790661344075940516099637034367441911754544671168909454186231972510120715400925996293656987342326715209388299050131213232932065481743222390684073879385764855135985734675127240826
"wolframscript -code 'N[Pi, 1000] - {pi-nilakantha(1_000)}'"
andthen .&shell(:out)
andthen .out.slurp(:close)
# 2.4925118...83814206`860.3966372344514*^-10

4. Beautiful Products

Wallis product (1655) — an elegant infinite product: π/2 = (2·2)/(1·3) · (4·4)/(3·5) · (6·6)/(5·7) ⋯

Raku running product:

my $p = 2.0;
for 1 .. 1_000 -> $n {
$p *= (2 * $n) * (2 * $n) / ( (2 * $n - 1 ) * ( 2 * $n + 1) );
say "$n → {$p / $piLeibniz} relative error" if $n %% 100;
}
# 100 → 0.9978331595460779 relative error
# 200 → 0.9990719099195204 relative error
# 300 → 0.9994865459690567 relative error
# 400 → 0.9996941876848563 relative error
# 500 → 0.9998188764663584 relative error
# 600 → 0.9999020455903246 relative error
# 700 → 0.9999614733132168 relative error
# 800 → 1.0000060557070767 relative error
# 900 → 1.0000407377794782 relative error
# 1000 → 1.000068487771041 relative error

5. Very Fast Modern Series — Chudnovsky Algorithm

One of the fastest-converging series used in record computations:

Each term adds roughly 14 correct decimal digits. It is awkward to implement in pure Raku, since core Raku does not provide big-number sqrt and power operations; the “BigRoot” package (loaded in the Setup) fills that gap.
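Assuming BigRoot’s newton's-sqrt method and precision setter behave as in that package’s README (both names come from BigRoot, not from this post), the series sum itself can be sketched with exact FatRat arithmetic:

```raku
# Hedged sketch of the Chudnovsky series; the number of terms (7) and the
# square-root precision (120 digits) are arbitrary choices for illustration.
use BigRoot;
BigRoot.precision = 120;

my $C    = (-262537412640768000).FatRat;   # -640320 ** 3
my $sum  = 0.FatRat;
my $mult = 1.FatRat;                       # (6k)! / ((3k)! * (k!) ** 3), updated incrementally
for 0 .. 6 -> $k {
    $sum  += $mult * (13591409 + 545140134 * $k) / $C ** $k;
    $mult *= ([*] 6*$k+1 .. 6*$k+6) / (([*] 3*$k+1 .. 3*$k+3) * ($k+1) ** 3);
}
my $pi = 426880 * BigRoot.newton's-sqrt(10005) / $sum;
say $pi.Str.substr(0, 60);                 # roughly 14 correct digits per term
```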


6. Spigot Algorithms — Digits “Drip” One by One

Spigot algorithms compute decimal digits using only integer arithmetic — no floating-point errors accumulate.

The classic Rabinowitz–Wagon spigot (based on a mixed-radix representation of π) produces base-10 digits sequentially.

Simple (but bounded) version outline in Raku:

sub spigot-pi($digits) {
my $len = (10 * $digits / 3).floor + 1;
my @a = 2 xx $len;
my @result;
for 1..$digits {
my $carry = 0;
for $len - 1 ... 0 -> $i {
my $x = 10 * @a[$i] + $carry * ($i + 1);
@a[$i] = $x % (2 * $i + 1);
$carry = $x div (2 * $i + 1);
}
@result.push($carry div 10);
@a[0] = $carry % 10;
# (handle carry-over / nines adjustment in full impl)
}
@result.join
}
spigot-pi(50);
# 314159265358979323846264338327941028841971693993751
"wolframscript -code 'N[Pi, 100] - {spigot-pi(50).FatRat / 10e49.FatRat}'"
andthen .&shell(:out)
andthen .out.slurp(:close)
# 2.3969628881355243801510070603398913366797194459230781640628621`41.37966130996076*^-16

7. BBP Formula — Hex Digits Without Predecessors

Bailey–Borwein–Plouffe (1995) formula lets you compute the nth hexadecimal digit of π directly (without earlier digits):

It is very popular for distributed π-hunting projects, and it is the best-known digit-extraction algorithm.

Raku snippet for the partial sum (in base 16):

sub bbp-digit-sum($n) {
[+] (0..$n).map: -> $k {
my $r = 1 / 16.FatRat ** $k;   # FatRat: a plain Rat degrades to Num past k ≈ 15
$r * (4/(8*$k+1) - 2/(8*$k+4) - 1/(8*$k+5) - 1/(8*$k+6))
}
}
say bbp-digit-sum(100).base(16, 20);
# 3.243F6A8885A308D3…
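The real payoff of the formula is direct digit extraction: only fractional parts matter, so the powers 16 ** (n − k) can be reduced modulo 8·k + j with Raku’s built-in Int.expmod. A sketch (the 8-term tail is an arbitrary cut-off):

```raku
# n-th hexadecimal digit of π after the point, using only O(n) cheap operations
sub bbp-hex-digit(Int $n) {
    my sub S(Int $j) {
        my $s = 0e0;
        for 0 .. $n -> $k {                       # finite part, kept mod 1
            $s += 16.expmod($n - $k, 8 * $k + $j) / (8 * $k + $j);
            $s -= $s.floor;
        }
        for $n + 1 .. $n + 8 -> $k {              # a few terms of the infinite tail
            $s += 16e0 ** ($n - $k) / (8 * $k + $j);
        }
        $s - $s.floor
    }
    my $x = 4 * S(1) - 2 * S(4) - S(5) - S(6);
    (($x - $x.floor) * 16).Int
}
say (^6).map({ bbp-hex-digit($_).base(16) }).join;   # 243F6A
```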

8. (Instead of) Conclusion

  • π is conjectured to be normal; if so, it contains every finite sequence of digits, and your birthday appears infinitely often.
  • The Feynman point: six consecutive 9s starting at digit 762.
  • The (unofficial) memorization world record exceeds 100,000 digits.
  • π appears in the normal distribution, quantum mechanics, random walks, Buffon’s needle problem (probability ≈ 2/π).
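The Buffon’s needle fact in the last bullet is easy to check with a quick Monte-Carlo sketch, taking the needle length equal to the line spacing (the sample size is an arbitrary choice):

```raku
# A needle of length 1 dropped on lines spaced 1 apart crosses a line
# when the center-to-line distance d is below (1/2)·sin(θ).
my $n = 100_000;
my $hits = (^$n).grep({
    my $d = rand / 2;        # distance from needle center to the nearest line
    my $θ = rand * π;        # needle angle
    $d < sin($θ) / 2
}).elems;
say 2 * $n / $hits;          # ≈ π, since the crossing probability is 2/π
```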

Let us plot a random walk using the terms of the continued fraction of π — the 20k terms of OEIS A001203 — to determine directions:

#% js
my @path = angle-path(@terms)».reverse».List;
my &pi-path-map = {
given @terms[$_] // 0 {
when $_ ≤ 100 { 0 }
when $_ ≤ 1_000 { 1 }
default { 2 }
}
}
@path = @path.kv.map( -> $i, $p {[|$p, &pi-path-map($i).Str]});
my %opts = color-scheme => 'Observable10', background => '#1F1F1F', :!axes, :!legends, stroke-width => 2;
js-d3-list-line-plot(@path, :800width, :500height, |%opts)

In the plot above the blue segments correspond to origin terms ≤ 100, yellow segments to terms between 100 and 1000, and red segments to origin terms greater than 1000.


References

[EW1] Eric Weisstein, “Pi Continued Fraction”, Wolfram MathWorld.

Jupyter::Chatbook Cheatsheet

Quick reference for the Raku package “Jupyter::Chatbook”. (raku.land, GitHub.)


0) Preliminary steps

Follow the instructions in the README of “Jupyter::Chatbook”:

For installation and setup problems see the issues (both open and closed) of the package’s GitHub repository.
(For example, this comment.)


1) New LLM persona initialization

A) Create persona with #%chat or %%chat (and immediately send first message)

#%chat assistant1, name=ChatGPT model=gpt-4.1-mini prompt="You are a concise technical assistant."
Say hi and ask what I am working on.
# Hi! What are you working on?

Remark: For all “Jupyter::Chatbook” magic specs both prefixes %% and #% can be used.

Remark: For the prompt argument the following delimiter pairs can be used: '...', "...", «...», {...}, ⎡...⎦.

B) Create persona with #%chat <id> prompt (create only)

#%chat assistant2 prompt, conf=ChatGPT, model=gpt-4.1-mini
You are a code reviewer focused on correctness and edge cases.
# Chat object created with ID : assistant2.

You can use prompt specs from “LLM::Prompts”, for example:

#%chat yoda prompt
@Yoda
# Chat object created with ID : yoda.
Expanded prompt:
⎡You are Yoda.
Respond to ALL inputs in the voice of Yoda from Star Wars.
Be sure to ALWAYS use his distinctive style and syntax. Vary sentence length.⎦

The Raku package “LLM::Prompts” (GitHub link) provides a collection of prompts and an implementation of a prompt-expansion Domain Specific Language (DSL).


2) Notebook-wide chat with an LLM persona

Continue an existing chat object

Render the answer as Markdown:

#%chat assistant1 > markdown
Give me a 5-step implementation plan for adding authentication to a FastAPI app. VERY CONCISE.

Magic cell parameter values can be assigned using the equal sign (“=”):

#%chat assistant1 > markdown
Now rewrite step 2 with test-first details.

Default chat object (NONE)

#%chat
Does vegetarian sushi exist?
# Yes, vegetarian sushi definitely exists! It's a popular option for those who avoid fish or meat. Instead of raw fish, vegetarian sushi typically includes ingredients like:
- Avocado
- Cucumber
- Carrots
- Pickled radish (takuan)
- Asparagus
- Sweet potato
- Mushrooms (like shiitake)
- Tofu or tamago (Japanese omelette)
- Seaweed salad
These ingredients are rolled in sushi rice and nori seaweed, just like traditional sushi. Vegetarian sushi can be found at many sushi restaurants and sushi bars, and it's also easy to make at home.

Using the prompt-expansion DSL to modify the previous chat-cell result:

#%chat
!HaikuStyled>^
# Rice, seaweed embrace,
Avocado, crisp and bright,
Vegetarian.

3) Management of personas (#%chat <id> meta)

Query one persona

#%chat assistant1 meta
prompt
# "You are a concise technical assistant."
#%chat assistant1 meta
say
# Chat: assistant1
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# Prompts: You are a concise technical assistant.
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role : user
# content : Say hi and ask what I am working on.
# timestamp : 2026-03-14T09:23:01.989418-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role : assistant
# content : Hi! What are you working on?
# timestamp : 2026-03-14T09:23:03.222902-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role : user
# content : Give me a 5-step implementation plan for adding authentication to a FastAPI app. VERY CONCISE.
# timestamp : 2026-03-14T09:23:03.400597-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role : assistant
# content : 1. Install `fastapi` and `python-jose` for JWT handling.
# 2. Define user model and fake user database.
# 3. Create OAuth2 password flow with `OAuth2PasswordBearer`.
# 4. Implement token creation and verification functions.
# 5. Protect routes using dependency injection for authentication.
# timestamp : 2026-03-14T09:23:05.106661-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role : user
# content : Now rewrite step 2 with test-first details.
# timestamp : 2026-03-14T09:23:05.158446-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role : assistant
# content : 2. Write tests to verify user data retrieval and password verification; then define user model and fake user database accordingly.
# timestamp : 2026-03-14T09:23:06.901396-04:00
# Bool::True

Query all personas

#%chat all
keys
# NONE
assistant1
assistant2
ce
gc
html
latex
raku
yoda
#%chat all
gist
# {NONE => LLM::Functions::Chat(chat-id = NONE, llm-evaluator.conf.name = chatgpt, messages.elems = 4, last.message = ${:content("Rice, seaweed embrace, \nAvocado, crisp and bright, \nVegetarian."), :role("assistant"), :timestamp(DateTime.new(2026,3,14,9,23,10.770353078842163,:timezone(-14400)))}), assistant1 => LLM::Functions::Chat(chat-id = assistant1, llm-evaluator.conf.name = ChatGPT, messages.elems = 6, last.message = ${:content("2. Write tests to verify user data retrieval and password verification; then define user model and fake user database accordingly."), :role("assistant"), :timestamp(DateTime.new(2026,3,14,9,23,6.901396036148071,:timezone(-14400)))}), assistant2 => LLM::Functions::Chat(chat-id = assistant2, llm-evaluator.conf.name = chatgpt, messages.elems = 0), ce => LLM::Functions::Chat(chat-id = ce, llm-evaluator.conf.name = chatgpt, messages.elems = 0), gc => LLM::Functions::Chat(chat-id = gc, llm-evaluator.conf.name = chatgpt, messages.elems = 0), html => LLM::Functions::Chat(chat-id = html, llm-evaluator.conf.name = chatgpt, messages.elems = 0), latex => LLM::Functions::Chat(chat-id = latex, llm-evaluator.conf.name = chatgpt, messages.elems = 0), raku => LLM::Functions::Chat(chat-id = raku, llm-evaluator.conf.name = chatgpt, messages.elems = 0), yoda => LLM::Functions::Chat(chat-id = yoda, llm-evaluator.conf.name = chatgpt, messages.elems = 0)}

Delete one persona

#%chat assistant1 meta
delete
# Deleted: assistant1
Gist: LLM::Functions::Chat(chat-id = assistant1, llm-evaluator.conf.name = ChatGPT, messages.elems = 6, last.message = ${:content("2. Write tests to verify user data retrieval and password verification; then define user model and fake user database accordingly."), :role("assistant"), :timestamp(DateTime.new(2026,3,14,9,23,6.901396036148071,:timezone(-14400)))})

Clear message history of one persona (keep persona)

#%chat assistant2 meta
clear
# Cleared messages of: assistant2
Gist: LLM::Functions::Chat(chat-id = assistant2, llm-evaluator.conf.name = chatgpt, messages.elems = 0)

Delete all personas

#%chat all
drop
# Deleted 8 chat objects with names NONE assistant2 ce gc html latex raku yoda.

#%chat <id>|all meta command aliases / synonyms:

  • delete or drop
  • keys or names
  • clear or empty

4) Regular chat cells vs direct LLM-provider cells

Regular chat cells (#%chat)

  • Stateful across cells (conversation memory stored in chat objects).
  • Persona-oriented via identifier + optional prompt.
  • Backend chosen with conf (default: ChatGPT).

Direct provider cells (#%openai, %%gemini, %%llama, %%dalle)

  • Direct single-call access to provider APIs.
  • Useful for explicit provider/model control.
  • Do not use chat-object memory managed by #%chat.

Remark: For all “Jupyter::Chatbook” magic specs both prefixes %% and #% can be used.

Examples

OpenAI’s (ChatGPT) models:

#%openai > markdown, model=gpt-4.1-mini
Write a regex for US ZIP+4.

Google’s (Gemini) models:

#%gemini > markdown, model=gemini-2.5-flash
Explain async/await in Python using three points, each with fewer than 10 words.

Access llamafile, locally run models:

#%llama > markdown
Give me three Linux troubleshooting tips. VERY CONCISE.

Remark: In order to run the magic cell above you have to run a llamafile program/model on your computer. (For example, ./google_gemma-3-12b-it-Q4_K_M.llamafile.)

Access Ollama models:

#%chat ollama > markdown, conf=Ollama
Give me three Linux troubleshooting tips. VERY CONCISE.

Remark: In order to run the magic cell above you have to run an Ollama app on your computer.

Create images using DALL-E:

#%dalle, model=dall-e-3, size=landscape
A dark-mode digital painting of a lighthouse in stormy weather.

5) DALL-E interaction management

For a detailed discussion of the DALL-E interaction in Raku and magic cell parameter descriptions see “Day 21 – Using DALL-E models in Raku”.

Image generation:

#%dalle, model=dall-e-3, size=landscape, style=vivid
A dark-mode digital painting of a lighthouse in stormy weather.

Here we use a DALL-E meta cell to see how many images were generated in a notebook session:

#% dalle meta
elems
# 3

Here we export the second image — using the index 1 — into a file named “stormy-weather-lighthouse-2.png”:

#% dalle export, index=1
stormy-weather-lighthouse-2.png
# stormy-weather-lighthouse-2.png

Here we show all generated images:

#% dalle meta
show

Here we export all images (into file names with the prefix “cheatsheet”):

#% dalle export, index=all, prefix=cheatsheet

6) LLM provider access facilitation

API keys can be passed inline (api-key) or through environment variables.

Notebook-session environment setup

%*ENV<OPENAI_API_KEY> = "YOUR_OPENAI_KEY";
%*ENV<GEMINI_API_KEY> = "YOUR_GEMINI_KEY";
%*ENV<OLLAMA_API_KEY> = "YOUR_OLLAMA_KEY";

Ollama-specific defaults:

  • OLLAMA_HOST (default host fallback is http://localhost:11434)
  • OLLAMA_MODEL (default model if model=... not given)

The magic cells take the argument base-url, which makes it possible to use LLMs that have ChatGPT-compatible APIs. The argument base-url is a synonym of host for the magic cell #%ollama.


7) Notebook/chatbook session initialization with custom code + personas JSON

Initialization runs when the extension is loaded.

A) Custom Raku init code

  • Env var override: RAKU_CHATBOOK_INIT_FILE
  • If not set, first existing file is used in this order:
  1. ~/.config/raku-chatbook/init.raku
  2. ~/.config/init.raku

Use this for imports/helpers you always want in chatbook sessions.

B) Pre-load personas from JSON

  • Env var override: RAKU_CHATBOOK_LLM_PERSONAS_CONF
  • If not set, first existing file is used in this order:
  1. ~/.config/raku-chatbook/llm-personas.json
  2. ~/.config/llm-personas.json

The supported JSON shape is an array of dictionaries:

[
{
"chat-id": "raku",
"conf": "ChatGPT",
"prompt": "@CodeWriterX|Raku",
"model": "gpt-4.1-mini",
"max_tokens": 8192,
"temperature": 0.4
}
]

Recognized persona spec fields include:

  • chat-id
  • prompt
  • conf (or configuration)
  • model, max-tokens, temperature, base-url
  • api-key
  • evaluator-args (object)

Verify pre-loaded personas:

#%chat all
keys

LLM::Graph plots interpretation guide

Introduction

This document (notebook) provides visual dictionaries for the interpretation of graph-plots of LLM-graphs, [AAp1, AAp2].

The “orthogonal style” LLM-graph plot is used in “Agentic-AI for text summarization”, [AA1].


Setup

use LLM::Graph;


LLM graph

Node specs:

sink my %rules =
        poet1 => "Write a short poem about summer.",
        poet2 => "Write a haiku about winter.",
        poet3 => sub ($topic, $style) {
            "Write a poem about $topic in the $style style."
        },
        poet4 => {
                llm-function => {llm-synthesize('You are a famous Russian poet. Write a short poem about playing bears.')},
                test-function => -> $with-russian { $with-russian ~~ Bool:D && $with-russian || $with-russian.Str.lc ∈ <true yes> }
        },
        judge => sub ($poet1, $poet2, $poet3, $poet4) {
            [
                "Choose the composition you think is best among these:\n\n",
                "1) Poem1: $poet1",
                "2) Poem2: $poet2",
                "3) Poem3: {$poet4.defined && $poet4 ?? $poet4 !! $poet3}",
                "and copy it:"
            ].join("\n\n")
        },
        report => {
            eval-function => sub ($poet1, $poet2, $poet3, $poet4, $judge) {
                [
                    '# Best poem',
                    'Three poems were submitted. Here are the statistics:',
                    to-html( ['poet1', 'poet2', $poet4.defined && $poet4 ?? 'poet4' !! 'poet3'].map({ [ name => $_, |text-stats(::('$' ~ $_))] })».Hash.Array, field-names => <name chars words lines> ),
                    '## Judgement',
                    $judge
                ].join("\n\n")
            }
        }
    ;

Remark: This is a documentation example — it is meant to show that $poet4 can be undefined. That hints that the corresponding sub is not always evaluated (because of the result of the corresponding test function).

Make the graph:

my $gBestPoem = LLM::Graph.new(%rules)

Now, to make the execution quicker, we assign the poems (instead of generating them with LLMs):

# Poet 1
my $poet1 = q:to/END/;
Golden rays through skies so blue,
Whispers warm in morning dew.
Laughter dances on the breeze,
Summer sings through rustling trees.

Fields of green and oceans wide,
Endless days where dreams abide.
Sunset paints the world anew,
Summer’s heart in every hue.
END

# Poet 2
my $poet2 = q:to/END/;
Silent snowflakes fall,
Blanketing the earth in white,
Winter’s breath is still.
END

# Poet 3
my $poet3 = q:to/END/;
There once was a game on the ice,  
Where players would skate fast and slice,  
With sticks in their hands,  
They’d score on the stands,  
Making hockey fans cheer twice as nice!
END

# Poet 4
sink my $poet4 = q:to/END/;
В лесу играют медведи —  
Смех разносится в тиши,  
Тяжело шагают твердо,  
Но в душе — мальчишки.

Плюшевые лапы сильны,  
Игривы глаза блестят,  
В мире грёз, как в сказке дивной,  
Детство сердце охраняет.
END

sink my $judge = q:to/END/;
The 3rd one.
END


Graph evaluation

Evaluate the LLM graph with input arguments and intermediate nodes results:

$gBestPoem.eval(topic => 'Hockey', style => 'limerick', with-russian => 'yes', :$poet1, :$poet2, :$poet3, :$poet4)
#$gBestPoem.eval(topic => 'Hockey', style => 'limerick', with-russian => 'yes')

Here is the final result (of the node “report”):

#% markdown
$gBestPoem.nodes<report><result>


Default style

Here is the Graphviz DOT visualization of the LLM graph:

#% html
$gBestPoem.dot(engine => 'dot', :9graph-size, node-width => 1.2, node-color => 'grey', edge-width => 0.8):svg

Here are the node spec-types:

$gBestPoem.nodes.nodemap(*<spec-type>)

Here is a dictionary of the shapes and the corresponding node spec-types:


Specified shapes

Here different node shapes are specified and the edges are additionally styled:

#% html
$gBestPoem.dot(
    engine => 'dot', :9graph-size, node-width => 1.2, node-color => 'Grey', 
    edge-color => 'DimGrey', edge-width => 0.8, splines => 'ortho',
    node-shapes => {
        Str => 'note', 
        Routine => 'doubleoctagon', 
        :!RoutineWrapper, 
        'LLM::Function' => 'octagon' 
    }
):svg

A similar visual effect is achieved with the option spec theme => 'ortho':

$gBestPoem.dot(node-width => 1.2, theme => 'ortho'):svg

Remark: The option “theme” takes the values “default”, “ortho”, and Whatever.

Here is the corresponding dictionary:


References

Articles, blog posts

[AA1] Anton Antonov, “Agentic-AI for text summarization”, (2025), RakuForPrediction at WordPress.

Packages

[AAp1] Anton Antonov, LLM::Graph, Raku package, (2025), GitHub/antononcube.

[AAp2] Anton Antonov, Graph, Raku package, (2024-2025), GitHub/antononcube.

Agentic-AI for text summarization

Introduction

One of the “standard” things to do with an Agentic Artificial Intelligence (AI) system is to summarize (large) texts using different Large Language Model (LLM) agents.

This (computational Markdown) document illustrates how to specify an LLM graph for deriving comprehensive summaries of large texts. The LLM graph is based on different LLM- and non-LLM functions. The Raku package “LLM::Graph” is used, [AAp1].

Using the LLM graph is an alternative to the Literate programming based solutions shown in [AA1, AAn1].


Setup

Load the Raku packages needed for the computations below:

use LLM::Graph;
use LLM::Functions;
use LLM::Prompts;
use LLM::Tooling;
use Data::Importers;
use Data::Translators;

Define an LLM-access configuration:

sink my $conf41-mini = llm-configuration('ChatGPT', model => 'gpt-4.1-mini', temperature => 0.55, max-tokens => 4096);


Procedure outline

For a given URL, file path, or text, a comprehensive text summary document is prepared in the following steps (executed in accordance with the graph below):

  • User specifies an input argument ($_ in the graph)
  • LLM classifies the input as “URL”, “FilePath”, “Text”, or “Other”
  • The text is ingested
    • If the obtained label is different from “Text”
  • Using asynchronous LLM computations different summaries are obtained
    • The title of the summary document can be user specified
    • Otherwise, it is LLM-deduced
  • A report is compiled from all summaries
  • The report is exported and opened
    • If that is user specified

In the graph:

  • Parallelogram nodes represent user input
  • Hexagonal nodes represent LLM calls
  • Rectangular nodes represent deterministic computations

LLM graph

Specify the LLM graph nodes:

sink my %rules =
TypeOfInput => sub ($_) {
        "Determine the input type of\n\n$_.\n\nThe result should be one of: 'Text', 'URL', 'FilePath', or 'Other'."  ~ 
        llm-prompt('NothingElse')('single string')
    },

IngestText =>  { eval-function => sub ($TypeOfInput, $_) { $TypeOfInput ~~ / URL | FilePath/ ?? data-import($_) !! $_} },

Title => { 
    eval-function => sub ($IngestText, $with-title = Whatever) { $with-title ~~ Str:D ?? $with-title !! llm-synthesize([llm-prompt("TitleSuggest")($IngestText, 'article'), "Short title with fewer than 6 words"]) },
},

Summary => sub ($IngestText) { llm-prompt("Summarize")() ~ "\n\n$IngestText" },

TopicsTable => sub ($IngestText) { llm-prompt("ThemeTableJSON")($IngestText, 'article', 20) },

ThinkingHats => sub ($IngestText) { llm-prompt("ThinkingHatsFeedback")($IngestText, <yellow grey>, format => 'HTML') },

MindMap => sub ($IngestText) { llm-prompt('MermaidDiagram')($IngestText) },

Report => { eval-function => 
    sub ($Title, $Summary, $TopicsTable, $MindMap, $ThinkingHats) { 
        [
            "# $Title",
            '### *LLM summary report*',
            '## Summary',
            $Summary,
            '## Topics',
            to-html(
                from-json($TopicsTable.subst(/ ^ '```json' | '```' $/):g),
                field-names => <theme content>,
                align => 'left'),
            "## Mind map",
            $MindMap,
            '## Thinking hats',
            $ThinkingHats.subst(/ ^ '```html' | '```' $/):g
        ].join("\n\n")
    } 
},

ExportAndOpen => {
    eval-function => sub ($Report) {
       spurt('./Report.md', $Report);
       shell "open ./Report.md" 
    },
    test-function => -> $export-and-open = True { $export-and-open ~~ Bool:D && $export-and-open || $export-and-open.Str.lc ∈ <true yes open> }
}
;

Remark: The LLM graph is specified with functions and prompts of the Raku packages “LLM::Functions”, [AAp2], and “LLM::Prompts”, [AAp3].

Make the graph:

my $gCombinedSummary = LLM::Graph.new(%rules, llm-evaluator => $conf41-mini, :async)

# LLM::Graph(size => 9, nodes => ExportAndOpen, IngestText, MindMap, Report, Summary, ThinkingHats, Title, TopicsTable, TypeOfInput)


Graph evaluation

URL and text statistics:

my $url = 'https://raw.githubusercontent.com/antononcube/RakuForPrediction-blog/refs/heads/main/Data/Graph-neat-examples-in-Raku-Set-2-YouTube.txt';
my $txtFocus = data-import($url);

text-stats($txtFocus)

# (chars => 5957 words => 1132 lines => 157)

Remark: The function data-import is provided by the Raku package “Data::Importers”, [AAp4].

Computation:

$gCombinedSummary.eval({ '$_' => $url, with-title => '«Graph» neat examples, set 2' })

# LLM::Graph(size => 9, nodes => ExportAndOpen, IngestText, MindMap, Report, Summary, ThinkingHats, Title, TopicsTable, TypeOfInput)

Remark: Instead of deriving the title using an LLM, the title is specified as an argument.

After the LLM-graph evaluation on macOS the following window is shown (of the app One Markdown):

Here the corresponding graph is shown:

#% html
$gCombinedSummary.dot(node-width => 1.2, theme => 'ortho'):svg

Remark: The node visualizations of the graph plot are chosen to communicate node functions.

  • Double octagon: Sub spec for LLM execution
  • Rectangular note: String spec for LLM execution
  • Rectangle: Sub spec for Raku execution
  • Parallelogram: Input argument

The summary document can also be embedded into the woven Markdown with a cell that has the argument results=asis:

```raku, results=asis
$gCombinedSummary.nodes<Report><result>.subst(/'```html' | '```' $/):g
```


References

Blog posts

[AA1] Anton Antonov, “Parameterized Literate Programming”, (2025), RakuForPrediction at WordPress.

Notebooks

[AAn1] Anton Antonov, “LLM comprehensive summary template for large texts”, (2025), Wolfram Community.

Packages

[AAp1] Anton Antonov, LLM::Graph, Raku package, (2025), GitHub/antononcube.

[AAp2] Anton Antonov, LLM::Functions, Raku package, (2023-2025), GitHub/antononcube.

[AAp3] Anton Antonov, LLM::Prompts, Raku package, (2023-2025), GitHub/antononcube.

[AAp4] Anton Antonov, Data::Importers, Raku package, (2024-2025), GitHub/antononcube.

LLM::Graph

This blog post introduces and exemplifies the Raku package “LLM::Graph”, which is used to efficiently schedule and combine multiple LLM generation steps.

The package provides the class LLM::Graph with which computations are orchestrated.

The package follows the design discussed in the video “Live CEOing Ep 886: Design Review of LLMGraph”, [WRIv1], and the corresponding Wolfram Language function LLMGraph, [WRIf1].

The package implementation heavily relies on the package “LLM::Functions”, [AAp1]. Graph functionalities are provided by “Graph”, [AAp3].


Installation

Package installations from both sources use the zef installer (which should be bundled with the “standard” Rakudo installation file).

To install the package from the Zef ecosystem use the shell command:

zef install LLM::Graph

To install the package from the GitHub repository use the shell command:

zef install https://github.com/antononcube/Raku-LLM-Graph.git


Design

Creation of an LLM::Graph object in which each node “name_i” evaluates fun_i with results from parent nodes:

LLM::Graph.new({name_1 => fun_1, ...})

LLM::Graph objects are callables. Getting the result of a graph on input:

LLM::Graph.new(...)(input)

Details and options

  • An LLM::Graph enables efficient scheduling and integration of multiple LLM generation steps, optimizing evaluation by managing the concurrency of LLM requests.
  • Using LLM::Graph requires (LLM) service authentication and internet connectivity.
    • Authentication and internet are not required if all graph nodes are non-LLM computation specs.
  • Possible values of the node function spec fun_i are:
    • llm-function(...): an llm-function for LLM submission
    • sub (...) {...}: a sub for Raku computation submission
    • %(key_i => val_i, ...): a map with detailed node specifications (nodespec)
  • Possible node specification keys in nodespec are:
    • “eval-function”: arbitrary Raku sub
    • “llm-function”: LLM evaluation via an llm-function
    • “listable-llm-function”: threaded LLM evaluation over list input values
    • “input”: explicit list of nodes required as sub arguments
    • “test-function”: whether the node should run
    • “test-function-input”: explicit list of nodes required as test arguments
  • Each node must be defined with only one of “eval-function”, “llm-function”, or “listable-llm-function”.
  • The “test-function” specification makes a node evaluation conditional on the results from other nodes.
  • Possible “llm-function” specifications prompt_i include:
    • “text”: static text
    • ["text1", ...]: a list of strings
    • llm-prompt("name"): a repository prompt
    • sub ($arg1, ...) {"Some $arg1 text"}: templated text
    • llm-function(...): an LLM::Function object
  • Any “node_i” result can be provided in the input as a named argument.
    • The input can have one positional argument and multiple named arguments.
  • LLM::Graph objects have the attribute llm-evaluator that is used as a default (or fallback)
    LLM evaluator object. (See [AAp1].)
  • The Boolean option “async” in LLM::Graph.new can be used to specify whether the LLM submissions should be made asynchronously.
    • The class Promise is used.
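For example, here is a hypothetical detailed node spec (node names, prompt, and functions are invented for illustration) that combines “eval-function” with a “test-function”, making the node conditional on its parent's result:

```raku
use LLM::Graph;

my %rules =
    intro => "Write one sentence introducing the Raku language.",
    # Detailed node spec: a plain Raku sub, run only if "intro" is non-empty
    word-count => %(
        eval-function => sub ($intro) { $intro.words.elems },
        test-function => sub ($intro) { $intro.chars > 0 },
    );

my $gExample = LLM::Graph.new(%rules);
```

Here “word-count” depends on “intro” implicitly, through the name of its sub argument.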

Usage examples

Three poets

Make an LLM graph with three different poets, and a judge that selects the best of the poet-generated poems:

use LLM::Graph;
use Graph;

my %rules =
        poet1 => "Write a short poem about summer.",
        poet2 => "Write a haiku about winter.",
        poet3 => sub ($topic, $style) {
            "Write a poem about $topic in the $style style."
        },
        judge => sub ($poet1, $poet2, $poet3) {
            [
                "Choose the composition you think is best among these:\n\n",
                "1) Poem1: $poet1",
                "2) Poem2: $poet2",
                "3) Poem3: $poet3",
                "and copy it:"
            ].join("\n\n")
        };

my $gBestPoem = LLM::Graph.new(%rules);

# LLM::Graph(size => 4, nodes => judge, poet1, poet2, poet3)

Calculation with special parameters (topic and style) for the 3rd poet:

$gBestPoem(topic => 'hockey', style => 'limerick');

# LLM::Graph(size => 4, nodes => judge, poet1, poet2, poet3)

Remark: Instances of LLM::Graph are callables; instead of $gBestPoem(...), $gBestPoem.eval(...) can be used.

Computations dependency graph:

$gBestPoem.dot(engine => 'dot', node-width => 1.2 ):svg

The result of the terminal node (“judge”):

say $gBestPoem.nodes<judge>;

# {eval-function => sub { }, input => [poet1 poet3 poet2], result => I think Poem1 is the best composition among these. Here's the poem:
# 
# Golden sun above so bright,  
# Warmth that fills the day with light,  
# Laughter dancing on the breeze,  
# Whispers through the swaying trees.  
# 
# Fields alive with blooms in cheer,  
# Endless days that draw us near,  
# Summer’s song, a sweet embrace,  
# Nature’s smile on every face., spec-type => (Routine), test-function-input => [], wrapper => Routine::WrapHandle.new}

Further examples

The following notebooks provide more elaborate examples:

The following notebook gives visual dictionaries for the interpretation of LLM-graph plots:


Implementation notes

LLM functors introduction

  • Since the very beginning, the functions produced by “LLM::Functions” were actually blocks (Block:D). Producing functors (function objects) instead of blocks had been on my TODO list for a long time; for “LLM::Graph” that became necessary in order to make the node-specs processing more adequate.
    • So, llm-function now produces functors (LLM::Function objects) by default.
    • The option “type” can be used to get blocks.

No need for topological sorting

  • I thought that I should use graph algorithms for topological sorting in order to navigate node dependencies during evaluation.
  • It turned out that this is not necessary — simple recursion is sufficient.
    • From the nodes specs, a directed graph (a Graph object) is made.
    • Graph‘s method reverse is used to get the directed computational dependency graph.
    • That latter graph is used in the node-evaluation recursion.
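The evaluation idea can be illustrated with a small, self-contained Raku sketch (an illustration only, not the actual “LLM::Graph” internals): to evaluate a node, first evaluate its parents recursively and cache the results.

```raku
# Toy dependency structure mirroring the "three poets" example
my %parents = poet1 => [], poet2 => [], judge => <poet1 poet2>;
my %fun =
    poet1 => sub () { 'poem one' },
    poet2 => sub () { 'poem two' },
    judge => sub (*@poems) { "best of: { @poems.join(' | ') }" };

my %result;
sub eval-node($name) {
    # Evaluate the parents first, caching each node's result
    %result{$name} //= %fun{$name}( |%parents{$name}.map({ eval-node($_) }) );
}

say eval-node('judge');
# best of: poem one | poem two
```

No topological sort is needed: the recursion reaches leaf nodes first, and the cache prevents re-evaluation of shared parents.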

Wrapping “string templates”

  • It is convenient to specify LLM functions with “string templates.”
  • Since there are no separate “string template” objects in Raku, subs or blocks are used.
    • For example:
    • sub ($country, $year) {"What is the GDP of $country in $year?"} (sub)
    • {"What is the GDP of $^a in $^b?"} (block)
  • String template subs are wrapped so that the sub is executed first and its result is then LLM-submitted.
  • Since blocks cannot be wrapped, currently “LLM::Graph” refuses to process them.
    • It is planned for later versions of “LLM::Graph” to process blocks.
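A minimal sketch of what the wrapping achieves (plain Raku; the names are illustrative): the template sub is called first to render the prompt string, and the rendered string is what gets LLM-submitted.

```raku
# A "string template" is just a sub that renders a prompt string
my &template = sub ($country, $year) { "What is the GDP of $country in $year?" };

# Step 1 of the wrapper: render the template with the node's inputs
my $prompt = template('France', 2020);
say $prompt;
# What is the GDP of France in 2020?

# Step 2 (not shown): the rendered string is submitted to the LLM,
# e.g. via llm-synthesize($prompt) from "LLM::Functions".
```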

Special graph plotting

  • Of course, it is nice to have the LLM-graphs visualized.
  • Instead of the generic graph visualization provided by the package “Graph” (method dot), a more informative graph plot is produced in which the different types of nodes have different shapes.
    • The graph vertex shapes help distinguish LLM-nodes from just-Raku-nodes.
    • Also, test function dependencies are designated with dashed arrows.
    • The shapes in the graph plot can be tuned by the user.
    • See the Jupyter notebook “Graph-plots-interpretation-guide.ipynb”.

References

Blog posts

[AA1] Anton Antonov, “Parameterized Literate Programming”, (2025), RakuForPrediction at WordPress.

Functions, packages

[AAp1] Anton Antonov, LLM::Functions, Raku package, (2023-2025), GitHub/antononcube.

[AAp2] Anton Antonov, LLM::Prompts, Raku package, (2023-2025), GitHub/antononcube.

[AAp3] Anton Antonov, Graph, Raku package, (2024-2025), GitHub/antononcube.

[WRIf1] Wolfram Research (2025), LLMGraph, Wolfram Language function.

Notebooks

[AAn1] Anton Antonov, “LLM comprehensive summary template for large texts”, (2025), Wolfram Community.

Videos

[WRIv1] Wolfram Research, Inc., “Live CEOing Ep 886: Design Review of LLMGraph”, (2025), YouTube/WolframResearch.

Parameterized Literate Programming

Introduction

Literate Programming (LT), [Wk1], blends code and documentation into a narrative, prioritizing human readability. Code and explanations are interwoven, with tools extracting code for compilation and documentation for presentation, enhancing clarity and maintainability.

LT is commonly employed in scientific computing and data science for reproducible research and open access initiatives, and literate programming tools are in wide use today.

Raku has several LT solutions:

This document (notebook) discusses executable documents parameterization — or parameterized reports — provided by “Text::CodeProcessing”, [AAp1].

Remark: Providing report parameterization had been on my TODO list since the beginning of programming “Text::CodeProcessing”. I finally did it in order to facilitate parameterized Large Language Model (LLM) workflows. See the LLM template “LLM-comprehensive-summary-Raku.md”.

The document has three main sections:

  • Using YAML document header to specify parameters
    • Description and examples
  • LLM templates with parameters
  • Operating System (OS) shell execution with specified parameters

Remark: The programmatically rendered Markdown is put within three-dots separators.


Setup

Load packages:

use Text::CodeProcessing;
use Lingua::NumericWordForms;


YAML front-matter with parameters

For a given text or file we can execute that text or file and produce its woven version using:

  • The sub StringCodeChunksEvaluation in a Raku session
  • The Command Line Interface (CLI) script file-code-chunks-eval in an OS shell

Consider the following Markdown text (of a certain file):

sink my $txt = q:to/END/;
---
title: Numeric word forms generation (template)
author: Anton Antonov
date: 2025-06-19
params:
    sample-size: 5
    min: 100
    max: 10E3
    to-lang: "Russian"
---

Generate a list of random numbers:

```raku
use Data::Generators;

my @ns = random-real([%params<min>, %params<max>], %params<sample-size>)».floor
```

Convert to numeric word forms:

```raku
use Lingua::NumericWordForms;

.say for @ns.map({ $_ => to-numeric-word-form($_, %params<to-lang>) })
```
END

The parameters of that executable document are given in YAML format — similar to “parameterized reports” of R Markdown documents. (Introduced and provided by Posit, formerly RStudio.)

  • Declaring parameters:
    • Parameters are declared using the params field within the YAML header of the document.
    • For example, the text above creates the parameter “sample-size” and assigns it the default value 5.
  • Using parameters in code:
    • Parameters are made available within the Raku environment as a read-only hashmap named %params.
    • To access a parameter in code, call %params<parameter-name>.
  • Setting parameter values:
    • To create a report that uses a new set of parameter values add:
      • %params argument to StringCodeChunksEvaluation
      • --params argument to the CLI script file-code-chunks-eval

Here is the woven (or executed) version of the text:

#% markdown
StringCodeChunksEvaluation($txt, 'markdown')
==> { .subst(/^ '---' .*? '---'/) }()


Generate a list of random numbers:

use Data::Generators;

my @ns = random-real([100, 10000], 5)».floor

# [3925 6533 3215 2983 1395]

Convert to numeric word forms:

use Lingua::NumericWordForms;

.say for @ns.map({ $_ => to-numeric-word-form($_, 'Russian') })

# 3925 => три тысячи девятьсот двадцать пять
# 6533 => шесть тысяч пятьсот тридцать три
# 3215 => три тысячи двести пятнадцать
# 2983 => две тысячи девятьсот восемьдесят три
# 1395 => одна тысяча триста девяносто пять


Remark: To make the results easier to read, the YAML header was removed (with subst).

Here we change parameters — different sample size and language for the generated word forms:

#% markdown
StringCodeChunksEvaluation($txt, 'markdown', params => {:7sample-size, to-lang => 'Japanese'})
==> { .subst(/^ '---' .*? '---'/) }()


Generate a list of random numbers:

use Data::Generators;

my @ns = random-real([100, 10000], 7)».floor

# [8684 5057 7732 2091 7098 7941 6846]

Convert to numeric word forms:

use Lingua::NumericWordForms;

.say for @ns.map({ $_ => to-numeric-word-form($_, 'Japanese') })

# 8684 => 八千六百八十四
# 5057 => 五千五十七
# 7732 => 七千七百三十二
# 2091 => 二千九十一
# 7098 => 七千九十八
# 7941 => 七千九百四十一
# 6846 => 六千八百四十六



LLM application

From an LLM-workflows perspective, parameterized reports can be seen as:

  • An alternative to using LLM functions and prompts, [AAp5, AAp6]
  • A higher-level utilization of LLM-function workflows

To illustrate the former consider this short LLM template:

sink my $llmTemplate = q:to/END/;
---
params:
    question: 'How many sea species?'
    model: 'gpt-4o-mini'
    persona: SouthernBelleSpeak
---

For the question:

> %params<question>

The answer is:

```raku, results=asis, echo=FALSE, eval=TRUE
use LLM::Functions;
use LLM::Prompts;

my $conf = llm-configuration('ChatGPT', model => %params<model>);

llm-synthesize([llm-prompt(%params<persona>), %params<question>], e => $conf)
```
END

Here we execute that LLM template providing different question and LLM persona:

#% markdown
StringCodeChunksEvaluation(
    $llmTemplate, 
    'markdown', 
    params => {question => 'How big is Texas?', persona => 'SurferDudeSpeak'}
).subst(/^ '---' .* '---'/)


For the question:

‘How big is Texas?’

The answer is:

Whoa, bro! Texas is like, totally massive, man! It’s like the second biggest state in the whole USA, after that gnarly Alaska, you know? We’re talking about around 268,000 square miles of pure, wild vibes, bro! That’s like a whole lot of room for the open road and some epic waves if you ever decide to cruise on over, dude! Just remember to keep it chill and ride the wave of life, bro!



CLI parameters

In order to demonstrate CLI usage of parameters, below we:

  • Export the Markdown string into a file
  • Invoke the CLI file-code-chunks-eval
    • In a Raku-Jupyter notebook this can be done with the magic #% bash
    • Alternatively, run and shell can be used
  • Import the woven file and render its content

Export to Markdown file

spurt($*CWD ~ '/LLM-template.md', $llmTemplate)

True

CLI invocation

Specifying the template parameters using the CLI is done with the named argument --params, with a value that is valid Raku hashmap code:

#% bash
file-code-chunks-eval LLM-template.md --params='{question=>"Where is Iran?", persona=>"DrillSergeant"}'

Remark: If the output file is not specified then the output file name is the CLI input file argument with the string ‘_woven’ placed before the extension.

Import and render

Import the woven file and render it (again, remove the YAML header for easier reading):

#% markdown
slurp($*CWD ~ '/LLM-template_woven.md')
==> {.subst(/ '---' .*? '---' /)}()


For the question:

‘Where is Iran?’

The answer is:

YOU LISTEN UP, MAGGOT! IRAN IS LOCATED IN THE MIDDLE EAST, BOUNDED BY THE CASPIAN SEA TO THE NORTH AND THE PERSIAN GULF TO THE SOUTH! NOW GET YOUR HEAD OUT OF THE CLOUDS AND PAY ATTENTION! I DON’T HAVE TIME FOR YOUR LAZY QUESTIONS! IF I SEE YOU SLACKING OFF, YOU’LL BE DOING PUSH-UPS UNTIL YOUR ARMS FALL OFF! DO YOU UNDERSTAND ME? SIR!



References

Packages

[AAp1] Anton Antonov, Text::CodeProcessing Raku package, (2021-2025), GitHub/antononcube.

[AAp2] Anton Antonov, Lingua::NumericWordForms Raku package, (2021-2025), GitHub/antononcube.

[AAp3] Anton Antonov, RakuMode Wolfram Language paclet, (2023), Wolfram Language Paclet Repository.

[AAp4] Anton Antonov, Jupyter::Chatbook Raku package, (2023-2024), GitHub/antononcube.

[AAp5] Anton Antonov, LLM::Functions Raku package, (2023-2025), GitHub/antononcube.

[AAp6] Anton Antonov, LLM::Prompts Raku package, (2023-2025), GitHub/antononcube.

[BDp1] Brian Duggan, Jupyter::Kernel Raku package, (2017-2024), GitHub/bduggan.

Videos

[AAv1] Anton Antonov, “Raku Literate Programming via command line pipelines”, (2023), YouTube/@AAA4prediction.

LLM function calling workflows (Part 3, Facilitation)

Introduction

This document (notebook) shows how to efficiently do streamlined Function Calling workflows with Large Language Models (LLMs) of Gemini.

The Raku package “WWW::Gemini”, [AAp2], is used.

Examples and big picture

The rest of the document gives concrete code showing how to do streamlined multiple-tool function calling with Gemini’s LLMs using Raku. Gemini’s function calling example “Parallel Function Calling”, [Gem1], is followed.

This document belongs to a collection of documents describing how to do LLM function calling with Raku.

Compared to the previously described LLM workflows with OpenAI, [AA1], and Gemini, [AA2], the Gemini LLM workflow in this document demonstrates:

  • Use of multiple tools (parallel function calling)
  • Automatic generation of hashmap (or JSON) tool descriptors
  • Streamlined computation of multiple tool results from multiple LLM requests

The streamlining is achieved by using the following, provided by “LLM::Functions”, [AAp3]:

  • Classes LLM::Tool, LLM::ToolRequest, and LLM::ToolResponse
  • Subs llm-tool-definition and generate-llm-tool-response
    • The former sub leverages Raku’s introspection features.
    • The latter sub matches tools and requests in order to compute tool responses.

Setup

Load packages:

use JSON::Fast;
use Data::Reshapers;
use Data::TypeSystem;
use LLM::Tooling;
use WWW::Gemini;

Choose a model:

my $model = "gemini-2.0-flash";


Workflow

Define a local function

Define a few subs — tools — with sub- and argument descriptions (i.e. attached Pod values, or declarator blocks):

#| Powers the spinning disco ball.
sub power-disco-ball-impl(
    Int:D $power #= Whether to turn the disco ball on or off.
    ) returns Hash {
    return { status => "Disco ball powered " ~ ($power ?? 'on' !! 'off') };
}
#= A status dictionary indicating the current state.

#| Play some music matching the specified parameters.
sub start-music-impl(
    Int:D $energetic, #=  Whether the music is energetic or not.
    Int:D $loud       #= Whether the music is loud or not.
    ) returns Hash {
    my $music-type = $energetic ?? 'energetic' !! 'chill';
    my $volume = $loud ?? 'loud' !! 'quiet';
    return { music_type => $music-type, volume => $volume };
    #= A dictionary containing the music settings.
}

#| Dim the lights.
sub dim-lights-impl(
    Numeric:D $brightness #= The brightness of the lights, 0.0 is off, 1.0 is full.
    ) returns Hash {
    return { brightness => $brightness };
}
#= A dictionary containing the new brightness setting.

Remark: See the corresponding Python definitions in the section “Parallel Function Calling” of [Gem1].

The sub llm-tool-definition can be used to automatically generate the Raku-hashmaps or JSON-strings of the tool descriptors in the (somewhat universal) format required by LLMs:

llm-tool-definition(&dim-lights-impl, format => 'json')

# {
#   "function": {
#     "type": "function",
#     "name": "dim-lights-impl",
#     "strict": true,
#     "description": "Dim the lights.",
#     "parameters": {
#       "required": [
#         "$brightness"
#       ],
#       "additionalProperties": false,
#       "type": "object",
#       "properties": {
#         "$brightness": {
#           "type": "number",
#           "description": "The brightness of the lights, 0.0 is off, 1.0 is full."
#         }
#       }
#     }
#   },
#   "type": "function"
# }

Remark: The sub llm-tool-definition is invoked in LLM::Tool.new. Hence (ideally) llm-tool-definition would not be user-invoked that often.

These are the tool descriptions to be communicated to Gemini:

my @tools =
{
    :name("power-disco-ball-impl"), 
    :description("Powers the spinning disco ball."), 
    :parameters(
        {
            :type("object"),
            :properties( {"\$power" => {:description("Whether to turn the disco ball on or off."), :type("integer")}}), 
            :required(["\$power"]), 
        }), 
},
{
    :name("start-music-impl"), 
    :description("Play some music matching the specified parameters."), 
    :parameters(
        {
            :type("object"),
            :properties({
                "\$energetic" => {:description("Whether the music is energetic or not."), :type("integer")}, 
                "\$loud" => {:description("Whether the music is loud or not."), :type("integer")}
            }), 
            :required(["\$energetic", "\$loud"]), 
        }),
},
{
    :name("dim-lights-impl"), 
    :description("Dim the lights."), 
    :parameters(
        {
            :type("object"),
            :properties({"\$brightness" => {:description("The brightness of the lights, 0.0 is off, 1.0 is full."), :type("number")}}), 
            :required(["\$brightness"]), 
        }), 
};

deduce-type(@tools)

# Vector(Struct([description, name, parameters], [Str, Str, Hash]), 3)

Here are additional tool-mode configurations (see “Function calling modes” of [Gem1]):

my %toolConfig =
  functionCallingConfig => {
    mode => "ANY",
    allowedFunctionNames => <power-disco-ball-impl start-music-impl dim-lights-impl>
  };

# {functionCallingConfig => {allowedFunctionNames => (power-disco-ball-impl start-music-impl dim-lights-impl), mode => ANY}}

First communication with Gemini

Initialize messages:

# User prompt
my $prompt = 'Turn this place into a party!';

# Prepare the API request payload
my @messages = [{role => 'user',parts => [ %( text => $prompt ) ]}, ];

# [{parts => [text => Turn this place into a party!], role => user}]

Send the first chat completion request:

my $response = gemini-generate-content(
    @messages,
    :$model,
    :@tools,
    :%toolConfig
);

deduce-type($response)

# Struct([candidates, modelVersion, responseId, usageMetadata], [Hash, Str, Str, Hash])

The response is already parsed from JSON to Raku. Here is its JSON form:

to-json($response)

# {
#   "candidates": [
#     {
#       "avgLogprobs": -0.0012976408004760742,
#       "content": {
#         "parts": [
#           {
#             "functionCall": {
#               "name": "start-music-impl",
#               "args": {
#                 "$energetic": 1,
#                 "$loud": 1
#               }
#             }
#           },
#           {
#             "functionCall": {
#               "name": "power-disco-ball-impl",
#               "args": {
#                 "$power": 1
#               }
#             }
#           },
#           {
#             "functionCall": {
#               "args": {
#                 "$brightness": 0.5
#               },
#               "name": "dim-lights-impl"
#             }
#           }
#         ],
#         "role": "model"
#       },
#       "safetyRatings": [
#         {
#           "probability": "NEGLIGIBLE",
#           "category": "HARM_CATEGORY_HATE_SPEECH"
#         },
#         {
#           "probability": "NEGLIGIBLE",
#           "category": "HARM_CATEGORY_DANGEROUS_CONTENT"
#         },
#         {
#           "probability": "NEGLIGIBLE",
#           "category": "HARM_CATEGORY_HARASSMENT"
#         },
#         {
#           "probability": "NEGLIGIBLE",
#           "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT"
#         }
#       ],
#       "finishReason": "STOP"
#     }
#   ],
#   "usageMetadata": {
#     "candidatesTokensDetails": [
#       {
#         "tokenCount": 30,
#         "modality": "TEXT"
#       }
#     ],
#     "promptTokensDetails": [
#       {
#         "tokenCount": 113,
#         "modality": "TEXT"
#       }
#     ],
#     "promptTokenCount": 113,
#     "candidatesTokenCount": 30,
#     "totalTokenCount": 143
#   },
#   "responseId": "sOxFaOrFF-SfnvgPgITLqQ8",
#   "modelVersion": "gemini-2.0-flash"
# }

Refine the response with function calls

The following copy of the messages is not required, but it makes repeated experiments easier:

my @messages2 = @messages;

# [{parts => [text => Turn this place into a party!], role => user}]

Let us define an LLM::Tool object for each tool:

my @toolObjects = [&power-disco-ball-impl, &start-music-impl, &dim-lights-impl].map({ LLM::Tool.new($_) });

.say for @toolObjects

# LLMTool(power-disco-ball-impl, Powers the spinning disco ball.)
# LLMTool(start-music-impl, Play some music matching the specified parameters.)
# LLMTool(dim-lights-impl, Dim the lights.)

Make an LLM::Request object for each request from the (first) LLM response:

my @requestObjects = $response<candidates>»<content>»<parts>.&flatten»<functionCall>.map({ LLM::ToolRequest.new( $_<name>, $_<args>) });

.say for @requestObjects

# LLMToolRequest(start-music-impl, :$loud(1), :$energetic(1), :id(Whatever))
# LLMToolRequest(power-disco-ball-impl, :$power(1), :id(Whatever))
# LLMToolRequest(dim-lights-impl, :$brightness(0.5), :id(Whatever))

Using the relevant tool for each request, compute the tool responses (which are LLM::ToolResponse objects):

.say for @requestObjects.map({ generate-llm-tool-response(@toolObjects, $_) })».output

# {music_type => energetic, volume => loud}
# {status => Disco ball powered on}
# {brightness => 0.5}

Alternatively, the LLM::ToolResponse objects can be converted into hashmaps structured according to a particular LLM function calling style (Gemini in this case):

.say for @requestObjects.map({ generate-llm-tool-response(@toolObjects, $_) })».Hash('Gemini')

# {functionResponse => {name => start-music-impl, response => {content => {music_type => energetic, volume => loud}}}}
# {functionResponse => {name => power-disco-ball-impl, response => {content => {status => Disco ball powered on}}}}
# {functionResponse => {name => dim-lights-impl, response => {content => {brightness => 0.5}}}}

Process the response:

  • Make a request object for each function call request
  • Compute the tool results
  • Form corresponding user message with those results
  • Send the messages to the LLM

my $assistant-message = $response<candidates>[0]<content>;
if $assistant-message<parts> {

    # Find function call parts and make corresponding tool objects
    my @requestObjects;
    for |$assistant-message<parts> -> %part {
        if %part<functionCall> {
            @requestObjects.push: LLM::ToolRequest.new( %part<functionCall><name>, %part<functionCall><args> ) 
        }
    }    

    # Add the assistant message
    @messages2.push($assistant-message);

    # Compute tool responses
    my @funcParts = @requestObjects.map({ generate-llm-tool-response(@toolObjects, $_) })».Hash('Gemini');

    # Make and add the user response
    my %function-response =
        role => 'user',
        parts => @funcParts;

    @messages2.push(%function-response);
                
    # Send the second request with function result
    my $final-response = gemini-generate-content(
        @messages2,
        :@tools,
        :$model,
        format => "raku"
    );
                
    say "Assistant: ", $final-response<candidates>[0]<content><parts>».<text>.join("\n");

} else {
    say "Assistant: $assistant-message<content>";
}

# Assistant: Alright! I've started some energetic and loud music, turned on the disco ball, and dimmed the lights to 50% brightness. Let's get this party started!

Remark: Compared to the workflows in [AA1, AA2], the code above is simpler, more universal and robust, and handles all tool requests.


Conclusion

We can observe and conclude that LLM function calling workflows are greatly simplified by:

  • Leveraging Raku introspection
    • This requires documenting the subs and their parameters.
  • Using dedicated classes that represent tools:
    • Definitions (LLM::Tool)
    • Requests (LLM::ToolRequest)
    • Responses (LLM::ToolResponse)
  • Having a sub (generate-llm-tool-response) that automatically matches request objects to tool objects and produces the corresponding response objects.

Raku’s LLM tools automation is similar to Gemini’s “Automatic Function Calling (Python Only)”.

The Wolfram Language LLM tooling functionalities are reflected in Raku’s “LLM::Tooling”, [WRI1].


References

Articles, blog posts

[AA1] Anton Antonov, “LLM function calling workflows (Part 1, OpenAI)”, (2025), RakuForPrediction at WordPress.

[AA2] Anton Antonov, “LLM function calling workflows (Part 2, Google’s Gemini)”, (2025), RakuForPrediction at WordPress.

[AA3] Anton Antonov, “LLM function calling workflows (Part 3, Facilitation)”, (2025), RakuForPrediction at WordPress.

[Gem1] Google Gemini, “Gemini Developer API”.

[WRI1] Wolfram Research, Inc. “LLM-Related Functionality” guide.

Packages

[AAp1] Anton Antonov, WWW::OpenAI Raku package, (2023-2025), GitHub/antononcube.

[AAp2] Anton Antonov, WWW::Gemini Raku package, (2023-2025), GitHub/antononcube.

[AAp3] Anton Antonov, LLM::Functions Raku package, (2023-2025), GitHub/antononcube.

LLM function calling workflows (Part 2, Google’s Gemini)

Introduction

This document (notebook) shows how to do Function Calling workflows with Large Language Models (LLMs) of Google’s Gemini.

The Raku package “WWW::Gemini”, [AAp2], is used.

Examples and big picture

The rest of the document gives concrete code how to do function calling with Gemini’s LLMs using Raku.

There are similar workflows, [AA1], with other LLM providers. (Like, OpenAI.) They follow the same structure, although there are some small differences. (Say, in the actual specifications of tools.)

This document belongs to a collection of documents describing how to do LLM function calling with Raku.

The Gemini LLM workflow in this document is quite similar to the OpenAI workflow described in [AA1]. While there are variations in the tool configurations and in how the elements of the LLM responses are obtained, the overall procedure outline and diagrams in [AA1] also apply to the workflows presented here.


Setup

Load packages:

use WWW::Gemini;
use JSON::Fast;

Choose a model:

my $model = "gemini-2.0-flash";


Workflow

Define a local function

Define the local function (sub) that serves as the “tool” to be communicated to Gemini:

sub get-current-weather(Str:D $location, Str:D $unit = "fahrenheit") returns Str {
    return "It is currently sunny in $location with a temperature of 72 degrees $unit.";
}

Define the function specification (as prescribed in Gemini’s function calling documentation):

my %weather-function = %(
    name => 'get-current-weather',
    description => 'Get the current weather in a given location',
    parameters => %(
        type => 'object',
        properties => %(
            location => %(
                type => 'string',
                description => 'The city and state, e.g., Boston, MA'
            )
        ),
        required => ['location']
    )
);

First communication with Gemini

Initialize messages and tools:

# User prompt
my $prompt = 'What is the weather like in Boston, MA, USA?';

# Prepare the API request payload
my @messages = [{role => 'user',parts => [ %( text => $prompt ) ]}, ];

my @tools = [%weather-function, ];

# [{description => Get the current weather in a given location, name => get-current-weather, parameters => {properties => {location => {description => The city and state, e.g., Boston, MA, type => string}}, required => [location], type => object}}]

Send the first chat completion request:

my $response = gemini-generate-content(
    @messages,
    :$model,
    :@tools
);

The response is already parsed from JSON to Raku. Here is its JSON form:

to-json($response)

# {
#   "usageMetadata": {
#     "totalTokenCount": 50,
#     "promptTokensDetails": [
#       {
#         "tokenCount": 41,
#         "modality": "TEXT"
#       }
#     ],
#     "candidatesTokenCount": 9,
#     "candidatesTokensDetails": [
#       {
#         "tokenCount": 9,
#         "modality": "TEXT"
#       }
#     ],
#     "promptTokenCount": 41
#   },
#   "modelVersion": "gemini-2.0-flash",
#   "candidates": [
#     {
#       "finishReason": "STOP",
#       "safetyRatings": [
#         {
#           "category": "HARM_CATEGORY_HATE_SPEECH",
#           "probability": "NEGLIGIBLE"
#         },
#         {
#           "probability": "NEGLIGIBLE",
#           "category": "HARM_CATEGORY_DANGEROUS_CONTENT"
#         },
#         {
#           "probability": "NEGLIGIBLE",
#           "category": "HARM_CATEGORY_HARASSMENT"
#         },
#         {
#           "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
#           "probability": "NEGLIGIBLE"
#         }
#       ],
#       "content": {
#         "parts": [
#           {
#             "functionCall": {
#               "args": {
#                 "location": "Boston, MA"
#               },
#               "name": "get-current-weather"
#             }
#           }
#         ],
#         "role": "model"
#       },
#       "avgLogprobs": -3.7914659414026473e-06
#     }
#   ],
#   "responseId": "zDpEaIClFpu97dcPpqOWiA8"
# }

Refine the response with function calls

The following copy of the messages is not required, but it makes repeated experiments easier:

my @messages2 = @messages;

# [{parts => [text => What is the weather like in Boston, MA, USA?], role => user}]

Process the response — invoke the tool, give the tool result to the LLM, get the LLM answer:

my $assistant-message = $response<candidates>[0]<content>;
if $assistant-message<parts> {

    for |$assistant-message<parts> -> %part {
        if %part<functionCall> {
            
            @messages2.push($assistant-message);

            my $func-name = %part<functionCall><name>;
            my %args = %part<functionCall><args>;

            
            if $func-name eq 'get-current-weather' {
                my $location = %args<location>;
                my $weather = get-current-weather($location);

                my %function-response =
                            role => 'user',
                            parts => [{ 
                                functionResponse => {
                                    name => 'get-current-weather',
                                    response => %( content => $weather )
                                } 
                            }];

                @messages2.push(%function-response);
                
                # Send the second request with function result
                my $final-response = gemini-generate-content(
                    @messages2,
                    :@tools,
                    :$model,
                    format => "raku"
                );
                
                say "Assistant: ", $final-response<candidates>[0]<content><parts>».<text>.join("\n");

                last
            }
        }
    }
} else {
    say "Assistant: $assistant-message<content>";
}

# Assistant: The weather in Boston, MA is currently sunny with a temperature of 72 degrees Fahrenheit.

Remark: Note that if get-current-weather is applied, the loop above finishes immediately (via the last statement).
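A multi-turn variant keeps querying the LLM until no functionCall parts remain in the answer. Here is a hedged sketch (assuming the setup above; the %dispatch hash, mapping tool names to local subs, is an illustrative addition):

```raku
# Sketch: repeat tool invocation until the model stops requesting function calls.
# %dispatch is a hypothetical helper mapping tool names to local subs.
my %dispatch = 'get-current-weather' => &get-current-weather;
my @chat = @messages;
my $resp = gemini-generate-content(@chat, :$model, :@tools);
while $resp<candidates>[0]<content><parts>.grep(*<functionCall>) -> @calls {
    # Keep the model's tool-request message in the conversation
    @chat.push($resp<candidates>[0]<content>);
    for @calls -> %part {
        my $name = %part<functionCall><name>;
        my %args = %part<functionCall><args>;
        my $result = %dispatch{$name}(%args<location>);
        # Answer with a functionResponse part, as in the workflow above
        @chat.push: %( role => 'user',
                       parts => [ { functionResponse => { :$name, response => %( content => $result ) } } ]);
    }
    $resp = gemini-generate-content(@chat, :$model, :@tools);
}
say "Assistant: ", $resp<candidates>[0]<content><parts>».<text>.join("\n");
```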


References

Articles, blog posts

[AA1] Anton Antonov, “LLM function calling workflows (Part 1, OpenAI)”, (2025), RakuForPrediction at WordPress.

[AA2] Anton Antonov, “LLM function calling workflows (Part 2, Google’s Gemini)”, (2025), RakuForPrediction at WordPress.

Packages

[AAp1] Anton Antonov, WWW::OpenAI Raku package, (2023-2025), GitHub/antononcube.

[AAp2] Anton Antonov, WWW::Gemini Raku package, (2023-2025), GitHub/antononcube.

[AAp3] Anton Antonov, LLM::Functions Raku package, (2023-2025), GitHub/antononcube.

LLM function calling workflows (Part 1, OpenAI)

Introduction

This document (notebook) shows how to do Function Calling workflows with Large Language Models (LLMs) of OpenAI.

The Raku package “WWW::OpenAI”, [AAp1], is used.

Outline of the overall process

The overall process is (supposed to be) simple:

  1. Implement a “tool”, i.e. a function/sub
    • The tool is capable of performing (say, quickly and reliably) certain tasks.
    • More than one tool can be specified.
  2. Describe the tool(s) using a certain JSON format
    • The JSON description is to be “understood” by the LLM.
    • JSON-schema is used for the arguments.
    • Using the description, the LLM figures out when to make requests for computations with the tool and with what parameters and corresponding values.
  3. Make a first call to the LLM using suitably composed messages that have the tool JSON description(s).
  4. Examine the LLM’s response.
  5. If the response indicates that a (local) tool has to be evaluated:
    • Process the tool names and corresponding parameters.
    • Make a new message with the tool result(s).
    • Send the messages to the LLM.
    • Go to step 4.
  6. Otherwise, return that “final” response.
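The steps above can be sketched schematically in Raku (provider-agnostic; the subs llm-request and invoke-tool are hypothetical placeholders, not part of any package):

```raku
# Schematic function calling loop; llm-request and invoke-tool are
# placeholders for a concrete provider's request sub and a local dispatcher.
my @messages = ...;   # initial messages, composed with the tool JSON description(s)
my @tools    = ...;   # tool specification(s)
loop {
    my $response = llm-request(@messages, :@tools);          # steps 3 and 5
    last unless $response<needs-tool-call>;                  # step 4: examine the response
    for |$response<tool-calls> -> %call {                    # step 5: evaluate the tool(s)
        my $result = invoke-tool(%call<name>, %call<args>);
        @messages.push: %( role => 'tool', content => $result );
    }
}
```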

(Currently) OpenAI indicates its tool-evaluation requests with the rule finish_reason => tool_calls in its responses.

Diagram

Here is a Mermaid-JS diagram that shows single-pass LLM-and-tool interaction:

Remark: Instead of a loop — as in the outline above — only one invocation of a local tool is shown in the diagram.

Examples and big picture

The rest of the document gives concrete code showing how to do function calling with OpenAI’s LLMs using Raku.

There are similar workflows with other LLM providers (such as Google’s Gemini). They follow the same structure, although there are some small differences. (Say, in the actual specifications of the tools.)

It would be nice to have:

  • Universal programming interface for those function calling interfaces.
  • Facilitation of deriving tool descriptions.
    • Via Raku’s introspection or using suitable LLM prompts.

This document belongs to a collection of documents describing how to do LLM function calling with Raku.


Setup

Load packages:

use WWW::OpenAI;
use JSON::Fast;

Choose a model:

my $model = "gpt-4.1";


Workflow

Define a local function

This is the “tool” to be communicated to OpenAI. (I.e. define the local function/sub.)

sub get-current-weather(Str $location, Str $unit = "fahrenheit") returns Str {
    return "It is currently sunny in $location with a temperature of 72 degrees $unit.";
}

Define the function specification (as prescribed in OpenAI’s function calling documentation):

my $function-spec = {
    type => "function",
    function => {
        name => "get-current-weather",
        description => "Get the current weather for a given location",
        parameters => {
            type => "object",
            properties => {
                '$location' => {
                    type => "string",
                    description => "The city and state, e.g., San Francisco, CA"
                },
                '$unit' => {
                    type => "string",
                    enum => ["celsius", "fahrenheit"],
                    description => "The temperature unit to use"
                }
            },
            required => ["location"]
        }
    }
};

First communication with OpenAI

Initialize messages and tools:

my @messages =
    {role => "system", content =>  "You are a helpful assistant that can provide weather information."},
    {role => "user", content => "What's the weather in Boston, MA?"}
    ;

my @tools = [$function-spec,];

Send the first chat completion request:

my $response = openai-chat-completion(
    @messages,
    :@tools,
    :$model,
    max-tokens => 4096,
    format => "raku",
    temperature => 0.45
);

# [{finish_reason => tool_calls, index => 0, logprobs => (Any), message => {annotations => [], content => (Any), refusal => (Any), role => assistant, tool_calls => [{function => {arguments => {"$location":"Boston, MA"}, name => get-current-weather}, id => call_ROi3n0iICSrGbetBKZ9KVG4E, type => function}]}}]

Refine the response with function calls

The following copy of the messages is not required, but it makes repeated experiments easier:

my @messages2 = @messages;

Process the response — invoke the tool, give the tool result to the LLM, get the LLM answer:

my $assistant-message = $response[0]<message>;
if $assistant-message<tool_calls> {

    @messages2.push: {
        role => "assistant",
        tool_calls => $assistant-message<tool_calls>
    };

    my $tool-call = $assistant-message<tool_calls>[0];
    my $function-name = $tool-call<function><name>;
    my $function-args = from-json($tool-call<function><arguments>);
    
    if $function-name eq "get-current-weather" {
        my $result = get-current-weather(
            $function-args{'$location'} // $function-args<location>,
            $function-args{'$unit'} // $function-args<unit> // "fahrenheit"
        );
        @messages2.push: {
            role => "tool",
            content => $result,
            tool_call_id => $tool-call<id>
        };
        
        # Send the second request with function result
        my $final-response = openai-chat-completion(
            @messages2,
            :@tools,
            #tool_choice => "auto",
            :$model,
            format => "raku"
        );
        say "Assistant: $final-response[0]<message><content>";
    }
} else {
    say "Assistant: $assistant-message<content>";
}

# Assistant: The weather in Boston, MA is currently sunny with a temperature of 72

Show all messages:

.say for @messages2

# {content => You are a helpful assistant that can provide weather information., role => system}
# {content => What's the weather in Boston, MA?, role => user}
# {role => assistant, tool_calls => [{function => {arguments => {"$location":"Boston, MA"}, name => get-current-weather}, id => call_ROi3n0iICSrGbetBKZ9KVG4E, type => function}]}
# {content => It is currently sunny in Boston, MA with a temperature of 72 degrees fahrenheit., role => tool, tool_call_id => call_ROi3n0iICSrGbetBKZ9KVG4E}

In general, there should be an evaluation loop that checks the finishing reason(s) in the LLM answers and invokes the tools as many times as required. (I.e., there might be several back-and-forth exchanges with the LLM, requiring different tools or different tool parameters.)
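Such a loop might be sketched as follows (a hedged sketch reusing the setup above; the %dispatch hash mapping tool names to local subs is an illustrative addition):

```raku
# Sketch: keep invoking tools while the model's finish_reason is 'tool_calls'.
# %dispatch is a hypothetical helper mapping tool names to local subs.
my %dispatch = 'get-current-weather' => &get-current-weather;
my @chat = @messages;
my $resp = openai-chat-completion(@chat, :@tools, :$model, format => 'raku');
while $resp[0]<finish_reason> eq 'tool_calls' {
    my $msg = $resp[0]<message>;
    @chat.push: { role => 'assistant', tool_calls => $msg<tool_calls> };
    for |$msg<tool_calls> -> $tc {
        my %args = from-json($tc<function><arguments>);
        my $result = %dispatch{$tc<function><name>}(%args{'$location'} // %args<location>);
        @chat.push: { role => 'tool', content => $result, tool_call_id => $tc<id> };
    }
    $resp = openai-chat-completion(@chat, :@tools, :$model, format => 'raku');
}
say "Assistant: $resp[0]<message><content>";
```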


References

[AAp1] Anton Antonov, WWW::OpenAI Raku package, (2023-2025), GitHub/antononcube.

[AAp2] Anton Antonov, WWW::Gemini Raku package, (2023-2025), GitHub/antononcube.

[AAp3] Anton Antonov, LLM::Functions Raku package, (2023-2025), GitHub/antononcube.

Military forces interactions graphs

Introduction

Interesting analogies of Rock-Paper-Scissors (RPS) hand games can be made with military forces interactions; see [AAv1]. Those analogies are easily seen using graphs. For example, the extension of the graph of Rock-Paper-Scissors-Lizard-Spock, [Wv1], into the graph “Chuck Norris defeats all” is analogous to the extension of “older” (say, WWII) military forces interactions graphs with drones.

Here is the graph of Rock-Paper-Scissors-Lizard-Spock-ChuckNorris, [AA1]:

Chuck Norris defeats all

In this document (notebook), we use Raku to create graphs that show how military forces interact. We apply the know-how for making graphs for RPS-games detailed in the blog post “Rock-Paper-Scissors extensions”, [AA1].


Setup

The setup is the same as in [AA1] (notebook).


Convenient LLM function

We can define an LLM function that provides the graph edges dataset for different RPS variants. Here is such an LLM function using “LLM::Functions”, [AAp2], and “LLM::Prompts”, [AAp3]:

my sub rps-edge-dataset($description, Str:D $game-name = 'Rock-Paper-Scissors', *%args) {
    llm-synthesize([
        "Give the edges of the graph for this $game-name variant description.",
        'Give the edges as an array of dictionaries, each with the keys "from", "to", "label",',
        'where "label" has the action of "from" over "to".',
        $description,
        llm-prompt('NothingElse')('JSON')
        ], 
        e => %args<llm-evaluator> // %args<e> // %args<conf> // $conf4o-mini,
        form => sub-parser('JSON'):drop
    )
}

Remark: We reuse the sub definition rps-edge-dataset from [AA1].

Remark: Both “LLM::Functions” and “LLM::Prompts” are pre-loaded in Raku chatbooks.


Rock-Paper-Scissors and its Lizard-Spock extensions

Here is the graph of the standard RPS game and its “Lizard-Spock” extension:

#% html

# Graph edges: LLM-generated and LLM-translated
my @edges-emo =
    { from => '🪨', to => '✂️',   label => 'crushes' },
    { from => '✂️',  to => '📄',  label => 'cuts' },
    { from => '📄', to => '🪨',  label => 'covers' },
    { from => '🪨', to => '🦎',  label => 'crushes' },
    { from => '🦎', to => '🖖',  label => 'poisons' },
    { from => '🖖', to => '✂️',   label => 'smashes' },
    { from => '✂️',  to => '🦎',  label => 'decapitates' },
    { from => '🦎', to => '📄',  label => 'eats' },
    { from => '📄', to => '🖖',  label => 'disproves' },
    { from => '🖖', to => '🪨',  label => 'vaporizes' }
;

# Edge-label rules
my %edge-labels-emo;
@edges-emo.map({ %edge-labels-emo{$_<from>}{$_<to>} = $_<label> });

# RPS-3 Lizard-Spock extension
my $g-emo = Graph.new(@edges-emo, :directed);

# Standard RPS-3 as a subgraph
my $g-rps = $g-emo.subgraph(<🪨 ✂️ 📄>);

# Plot the graphs together
$g-rps.dot(|%opts, edge-labels => %edge-labels-emo, :svg)
~
$g-emo.dot(|%opts, edge-labels => %edge-labels-emo, :svg)


Simple analogy

We consider the following military analogy with RPS:

  • Tanks attack (and defeat) Infantry
  • Guerillas defend against Tanks
  • Infantry attacks Guerillas

Here we obtain the corresponding graph edges using an LLM:

my $war-game = rps-edge-dataset('tanks attack infantry, guerillas defend against tanks, infantry attacks guerillas')

# [{from => Tanks, label => attack, to => Infantry} {from => Guerillas, label => defend, to => Tanks} {from => Infantry, label => attack, to => Guerillas}]

Plotting the graphs together:

#% html
my %edge-labels = Empty; 
for |$war-game -> %r { %edge-labels{%r<from>}{%r<to>} = %r<label> };
Graph.new($war-game, :directed).dot(|%opts-plain, :%edge-labels, :svg)
~
$g-rps.dot(|%opts, edge-labels => %edge-labels-emo, :svg)


Military forces interaction

Here is a Mermaid-JS-made graph of a more complicated military forces interactions diagram; see [NM1]:

Using the diagram’s Mermaid code, the graph edges are LLM-generated here:

#% html
my $mmd-descr = q:to/END/;
graph TD
AT[Anti-tank weapons] --> |defend|Arm[Armor]
Arm --> |attack|IA[Infantry and Artillery] 
Air[Air force] --> |attack|Arm
Air --> |attack|IA
M[Missiles] --> |defend|Air
IA --> |attack|M
IA --> |attack|AT
END

my $war-game2 = rps-edge-dataset($mmd-descr);

$war-game2 ==> to-html(field-names => <from label to>)

Direct assignment (instead of using LLMs):

my $war-game2 = $[
    {:from("Anti-tank weapons"), :label("defend"), :to("Armor")}, {:from("Armor"), :label("attack"), :to("Infantry and Artillery")}, 
    {:from("Air force"), :label("attack"), :to("Armor")}, {:from("Air force"), :label("attack"), :to("Infantry and Artillery")}, 
    {:from("Missiles"), :label("defend"), :to("Air force")}, {:from("Infantry and Artillery"), :label("attack"), :to("Missiles")}, 
    {:from("Infantry and Artillery"), :label("attack"), :to("Anti-tank weapons")}
];

The diagram does not correspond to modern warfare — it is taken from a doctoral thesis, [NM1], discussing reconstruction of historical military data. The corresponding graph can be upgraded with drones in a similar way as the Chuck-Norris-defeats-all upgrade in [AA1].

my $war-forces = Graph.new($war-game2, :directed); 
my $drone = "Air drones";
my $war-game-d = $war-game2.clone.append( $war-forces.vertex-list.map({ %( from => $drone, to => $_, label => 'attack' ) }) );
$war-game-d .= append( ['Missiles', 'Air force'].map({ %(from => $_, to => $drone, label => 'defend') }) );
my $war-forces-d = Graph.new($war-game-d, :directed);

# Graph(vertexes => 6, edges => 14, directed => True)

Here is the corresponding table:

#% html
game-table($war-forces-d, link-value => '⊙', missing-value => '')

|                        | Air drones | Air force | Anti-tank weapons | Armor | Infantry and Artillery | Missiles |
|------------------------|------------|-----------|-------------------|-------|------------------------|----------|
| Air drones             |            | ⊙         | ⊙                 | ⊙     | ⊙                      | ⊙        |
| Air force              | ⊙          |           |                   | ⊙     | ⊙                      |          |
| Anti-tank weapons      |            |           |                   | ⊙     |                        |          |
| Armor                  |            |           |                   |       | ⊙                      |          |
| Infantry and Artillery |            |           | ⊙                 |       |                        | ⊙        |
| Missiles               | ⊙          | ⊙         |                   |       |                        |          |

Here is the graph with different coloring for “attack” edges (gray) and “defend” edges (blue):

#% html
$war-forces-d.vertex-coordinates = ($war-forces-d.vertex-list Z=> Graph::Cycle($war-forces-d.vertex-count).vertex-coordinates{^$war-forces-d.vertex-count}.values).Hash;

my %edge-labels;
$war-game-d.map({ %edge-labels{$_<from>}{$_<to>} = $_<label> });

my %highlight = 
    'SlateBlue' => Graph.new( $war-game-d.grep(*<label> eq 'defend'), :directed).edges;

$war-forces-d.dot(
    :%highlight,
    |merge-hash(%opts-plain, {:9graph-size, node-width => 0.7}),
    :%edge-labels, 
    :svg
)

Remark: The graph above is just an example — real-life military forces interactions are more complicated.


Generalized antagonism

Following the article “The General Lanchester Model Defining Multilateral Conflicts”, [SM1], we can make a graph for multiple, simultaneous conflicts (narrated exposition is given in the presentation “Upgrading Epidemiological Models into War Models”, [AAv1]):

#% html

# Graph edges
my @multi-conflict-edges = 
    %(from=>1, to=>5, label=>'Neutrality',   :!directed), %(from=>1, to=>3, label=>'Commensalism', :directed),
    %(from=>1, to=>4, label=>'Commensalism', :directed),  %(from=>2, to=>1, label=>'Coercion',     :directed),
    %(from=>2, to=>3, label=>'Alliance',     :!directed), %(from=>2, to=>4, label=>'Guerilla war', :directed),
    %(from=>3, to=>4, label=>'Conflict',     :!directed), %(from=>5, to=>3, label=>'Avoidance',    :directed),
    %(from=>5, to=>4, label=>'Alliance',     :!directed), %(from=>5, to=>2, label=>'Adaptation',   :directed);

@multi-conflict-edges .= deepmap({ $_ ~~ Bool:D ?? $_ !! $_.Str });

# Edge-label rules
my %edge-labels;
@multi-conflict-edges.map({ %edge-labels{$_<from>}{$_<to>} = $_<label> });

# Make an empty graph
my $mc = Graph.new;

# Add each edge depending on its direction specification
my @dir-edges;
for @multi-conflict-edges -> %e { 
    $mc.edge-add(%e<from>, %e<to>, :directed);
    if !%e<directed> {
        $mc.edge-add(%e<to>, %e<from>, :directed)
    }
}

# Vertex coordinates via Cycle graph
$mc.vertex-coordinates = ($mc.vertex-list Z=> Graph::Cycle($mc.vertex-count).vertex-coordinates{^$mc.vertex-count}.values).Hash;

# Graph plot
$mc.dot(|merge-hash(%opts, {node-shape => 'square', :4edge-font-size }), :%edge-labels, highlight => { RosyBrown => <1 3 4>, SlateBlue => <2 5> }, :mixed, :svg)

Remark: The graph above is just for illustration. In order to do mathematical modeling, additional interaction data is required; see [AAv1].


References

Articles, books, theses

[AA1] Anton Antonov, “Rock-Paper-Scissors extensions”, (2025), RakuForPrediction at WordPress.

[AJ1] Archer Jones, “The Art of War in the Western World”, (2000), University of Illinois Press. 768 pages, ISBN-10: 0252069668, ISBN-13: 978-0252069666.

[SM1] Sergei Makarenko et al., “Обобщенная модель Ланчестера, формализующая конфликт нескольких сторон”, [Eng. “The General Lanchester Model Defining Multilateral Conflicts”], (2021), Automation of Control Processes № 2 (64), doi: 10.35752/1991-2927-2021-2-64-66-76.

[NM1] Nikolay V. Mityukov, “Математические модели и программные средства для реконструкции военно-исторических данных”, [Eng. “Mathematical models and software tools for the reconstruction of military-historical data”], (2009), disserCat.

Packages

[AAp1] Anton Antonov, Graph Raku package, (2024-2025), GitHub/antononcube.

[AAp2] Anton Antonov, LLM::Functions Raku package, (2023-2024), GitHub/antononcube.

[AAp3] Anton Antonov, LLM::Prompts Raku package, (2023-2024), GitHub/antononcube.

[AAp4] Anton Antonov, Jupyter::Chatbook Raku package, (2023-2024), GitHub/antononcube.

[EMp1] Elizabeth Mattijsen, Text::Emoji Raku package, (2024-2025), GitHub/lizmat.

Videos

[AAv1] Anton Antonov, “Upgrading Epidemiological Models into War Models”, (2024), YouTube/@WolframResearch.

[Wv1] Wozamil, “Rock Paper Scissors Lizard Spock (Extended Cut) ~ The Big Bang Theory ~”, (2012), YouTube/@Wozamil.