Quick reference for the Raku package “Jupyter::Chatbook”. (raku.land, GitHub.)
0) Preliminary steps
Follow the instructions in the README of “Jupyter::Chatbook”:
For installation and setup problems, see the issues (both open and closed) of the package’s GitHub repository.
(For example, this comment.)
1) New LLM persona initialization
A) Create persona with #%chat or %%chat (and immediately send first message)
#%chat assistant1, conf=ChatGPT, model=gpt-4.1-mini, prompt="You are a concise technical assistant."
Say hi and ask what I am working on.
# Hi! What are you working on?
Remark: For all “Jupyter::Chatbook” magic specs both prefixes %% and #% can be used.
Remark: For the prompt argument the following delimiter pairs can be used: '...', "...", «...», {...}, ⎡...⎦.
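For instance, the same kind of persona can be created with the «...» delimiters (the persona ID and message below are illustrative):

```raku
#%chat assistant3, conf=ChatGPT, prompt=«You are a terse assistant.»
What is the capital of France?
```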
B) Create persona with #%chat <id> prompt (create only)
#%chat assistant2 prompt, conf=ChatGPT, model=gpt-4.1-mini
You are a code reviewer focused on correctness and edge cases.
# Chat object created with ID : assistant2.
You can use prompt specs from “LLM::Prompts”, for example:
#%chat yoda prompt
@Yoda
# Chat object created with ID : yoda. Expanded prompt: ⎡You are Yoda. Respond to ALL inputs in the voice of Yoda from Star Wars. Be sure to ALWAYS use his distinctive style and syntax. Vary sentence length.⎦
The Raku package “LLM::Prompts” (GitHub link) provides a collection of prompts and an implementation of a prompt-expansion Domain Specific Language (DSL).
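Prompt specs can also be used directly in chat-cell bodies; a small illustrative example (the question is arbitrary):

```raku
#%chat
@Yoda How do I learn Raku?
```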
2) Notebook-wide chat with an LLM persona
Continue an existing chat object
Render the answer as Markdown:
#%chat assistant1 > markdown
Give me a 5-step implementation plan for adding authentication to a FastAPI app. VERY CONCISE.
Magic cell parameter values can be assigned using the equal sign (“=”):
#%chat assistant1 > markdown
Now rewrite step 2 with test-first details.
Default chat object (NONE)
#%chat
Does vegetarian sushi exist?
# Yes, vegetarian sushi definitely exists! It's a popular option for those who avoid fish or meat. Instead of raw fish, vegetarian sushi typically includes ingredients like:
# - Avocado
# - Cucumber
# - Carrots
# - Pickled radish (takuan)
# - Asparagus
# - Sweet potato
# - Mushrooms (like shiitake)
# - Tofu or tamago (Japanese omelette)
# - Seaweed salad
# These ingredients are rolled in sushi rice and nori seaweed, just like traditional sushi. Vegetarian sushi can be found at many sushi restaurants and sushi bars, and it's also easy to make at home.
Using the prompt-expansion DSL to modify the previous chat-cell result:
#%chat
!HaikuStyled>^
# Rice, seaweed embrace, Avocado, crisp and bright, Vegetarian.
3) Management of personas (#%chat <id> meta)
Query one persona
#%chat assistant1 meta
prompt
# "You are a concise technical assistant."
#%chat assistant1 meta
say
# Chat: assistant1
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# Prompts: You are a concise technical assistant.
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role      : user
# content   : Say hi and ask what I am working on.
# timestamp : 2026-03-14T09:23:01.989418-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role      : assistant
# content   : Hi! What are you working on?
# timestamp : 2026-03-14T09:23:03.222902-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role      : user
# content   : Give me a 5-step implementation plan for adding authentication to a FastAPI app. VERY CONCISE.
# timestamp : 2026-03-14T09:23:03.400597-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role      : assistant
# content   : 1. Install `fastapi` and `python-jose` for JWT handling.
#             2. Define user model and fake user database.
#             3. Create OAuth2 password flow with `OAuth2PasswordBearer`.
#             4. Implement token creation and verification functions.
#             5. Protect routes using dependency injection for authentication.
# timestamp : 2026-03-14T09:23:05.106661-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role      : user
# content   : Now rewrite step 2 with test-first details.
# timestamp : 2026-03-14T09:23:05.158446-04:00
# ⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺⸺
# role      : assistant
# content   : 2. Write tests to verify user data retrieval and password verification; then define user model and fake user database accordingly.
# timestamp : 2026-03-14T09:23:06.901396-04:00
# Bool::True
Query all personas
#%chat all
keys
# NONE assistant1 assistant2 ce gc html latex raku yoda
#%chat all
gist
# {NONE => LLM::Functions::Chat(chat-id = NONE, llm-evaluator.conf.name = chatgpt, messages.elems = 4, last.message = ${:content("Rice, seaweed embrace, \nAvocado, crisp and bright, \nVegetarian."), :role("assistant"), :timestamp(DateTime.new(2026,3,14,9,23,10.770353078842163,:timezone(-14400)))}),
#  assistant1 => LLM::Functions::Chat(chat-id = assistant1, llm-evaluator.conf.name = ChatGPT, messages.elems = 6, last.message = ${:content("2. Write tests to verify user data retrieval and password verification; then define user model and fake user database accordingly."), :role("assistant"), :timestamp(DateTime.new(2026,3,14,9,23,6.901396036148071,:timezone(-14400)))}),
#  assistant2 => LLM::Functions::Chat(chat-id = assistant2, llm-evaluator.conf.name = chatgpt, messages.elems = 0),
#  ce => LLM::Functions::Chat(chat-id = ce, llm-evaluator.conf.name = chatgpt, messages.elems = 0),
#  gc => LLM::Functions::Chat(chat-id = gc, llm-evaluator.conf.name = chatgpt, messages.elems = 0),
#  html => LLM::Functions::Chat(chat-id = html, llm-evaluator.conf.name = chatgpt, messages.elems = 0),
#  latex => LLM::Functions::Chat(chat-id = latex, llm-evaluator.conf.name = chatgpt, messages.elems = 0),
#  raku => LLM::Functions::Chat(chat-id = raku, llm-evaluator.conf.name = chatgpt, messages.elems = 0),
#  yoda => LLM::Functions::Chat(chat-id = yoda, llm-evaluator.conf.name = chatgpt, messages.elems = 0)}
Delete one persona
#%chat assistant1 meta
delete
# Deleted: assistant1
# Gist: LLM::Functions::Chat(chat-id = assistant1, llm-evaluator.conf.name = ChatGPT, messages.elems = 6, last.message = ${:content("2. Write tests to verify user data retrieval and password verification; then define user model and fake user database accordingly."), :role("assistant"), :timestamp(DateTime.new(2026,3,14,9,23,6.901396036148071,:timezone(-14400)))})
Clear message history of one persona (keep persona)
#%chat assistant2 meta
clear
# Cleared messages of: assistant2
# Gist: LLM::Functions::Chat(chat-id = assistant2, llm-evaluator.conf.name = chatgpt, messages.elems = 0)
Delete all personas
#%chat all
drop
# Deleted 8 chat objects with names NONE assistant2 ce gc html latex raku yoda.
#%chat <id>|all meta command aliases / synonyms:
- delete or drop
- keys or names
- clear or empty
4) Regular chat cells vs direct LLM-provider cells
Regular chat cells (#%chat)
- Stateful across cells (conversation memory stored in chat objects).
- Persona-oriented via identifier + optional prompt.
- Backend chosen with conf (default: ChatGPT).
Direct provider cells (#%openai, %%gemini, %%llama, %%dalle)
- Direct single-call access to provider APIs.
- Useful for explicit provider/model control.
- Do not use chat-object memory managed by #%chat.
Examples
#%openai > markdown, model=gpt-4.1-mini
Write a regex for US ZIP+4.
#%gemini > markdown, model=gemini-2.5-flash
Explain async/await in Python using three points, each with fewer than 10 words.
Access llamafile, locally run models:
#%llama > markdown
Give me three Linux troubleshooting tips. VERY CONCISE.
Remark: To run the magic cell above, a llamafile program/model must be running on your computer. (For example, ./google_gemma-3-12b-it-Q4_K_M.llamafile.)
Access Ollama models:
#%chat ollama > markdown, conf=Ollama
Give me three Linux troubleshooting tips. VERY CONCISE.
Remark: To run the magic cell above, an Ollama app must be running on your computer.
Create images using DALL-E:
#%dalle, model=dall-e-3, size=landscape
A dark-mode digital painting of a lighthouse in stormy weather.

5) DALL-E interaction management
For a detailed discussion of the DALL-E interaction in Raku and magic cell parameter descriptions see “Day 21 – Using DALL-E models in Raku”.
Image generation:
#%dalle, model=dall-e-3, size=landscape, style=vivid
A dark-mode digital painting of a lighthouse in stormy weather.
Here we use a DALL-E meta cell to see how many images were generated in a notebook session:
#% dalle meta
elems
# 3
Here we export the second image — using the index 1 — into a file named “stormy-weather-lighthouse-2.png”:
#% dalle export, index=1
stormy-weather-lighthouse-2.png
# stormy-weather-lighthouse-2.png
Here we show all generated images:
#% dalle meta
show
Here we export all images (into file names with the prefix “cheatsheet”):
#% dalle export, index=all, prefix=cheatsheet
6) LLM provider access facilitation
API keys can be passed inline (api-key) or through environment variables.
Notebook-session environment setup
%*ENV<OPENAI_API_KEY> = "YOUR_OPENAI_KEY";
%*ENV<GEMINI_API_KEY> = "YOUR_GEMINI_KEY";
%*ENV<OLLAMA_API_KEY> = "YOUR_OLLAMA_KEY";
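Alternatively, a key can be passed inline with the api-key parameter; an illustrative example with a placeholder key:

```raku
#%chat, api-key=YOUR_OPENAI_KEY
What is 2 + 2?
```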
Ollama-specific defaults:
- OLLAMA_HOST (default host fallback is http://localhost:11434)
- OLLAMA_MODEL (default model if model=... is not given)
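For example, those defaults can be set within a notebook session (the model name below is illustrative):

```raku
%*ENV<OLLAMA_HOST>  = 'http://localhost:11434';
%*ENV<OLLAMA_MODEL> = 'llama3.2';
```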
The magic cells take a base-url argument, which makes it possible to use LLMs that have ChatGPT-compatible APIs. For the magic cell #%ollama, the argument base-url is a synonym of host.
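For instance, an OpenAI-compatible local server could be targeted like this (the endpoint and model name are hypothetical placeholders):

```raku
#%openai, base-url=http://localhost:8080/v1, model=local-model
Summarize the benefits of unit testing in one sentence.
```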
7) Notebook/chatbook session initialization with custom code + personas JSON
Initialization runs when the extension is loaded.
A) Custom Raku init code
- Env var override: RAKU_CHATBOOK_INIT_FILE
- If not set, the first existing file is used, in this order:
  ~/.config/raku-chatbook/init.raku
  ~/.config/init.raku
Use this for imports/helpers you always want in chatbook sessions.
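A minimal sketch of such an init file (the module and helper below are just examples, not required by the package):

```raku
# ~/.config/raku-chatbook/init.raku -- loaded at chatbook session start
use JSON::Fast;                       # a module you always want available
sub hr(Int $n = 60) { say '-' x $n }  # small helper for visual separators
```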
B) Pre-load personas from JSON
- Env var override: RAKU_CHATBOOK_LLM_PERSONAS_CONF
- If not set, the first existing file is used, in this order:
  ~/.config/raku-chatbook/llm-personas.json
  ~/.config/llm-personas.json
The supported JSON shape is an array of dictionaries:
[ { "chat-id": "raku", "conf": "ChatGPT", "prompt": "@CodeWriterX|Raku", "model": "gpt-4.1-mini", "max_tokens": 8192, "temperature": 0.4 }]
Recognized persona spec fields include:
- chat-id
- prompt
- conf (or configuration)
- model, max-tokens, temperature, base-url
- api-key
- evaluator-args (object)
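An illustrative persona entry using more of the recognized fields (all values below are placeholders):

```json
{
  "chat-id": "local-coder",
  "prompt": "You are a careful Raku programmer.",
  "conf": "ChatGPT",
  "model": "local-model",
  "base-url": "http://localhost:8080/v1",
  "api-key": "YOUR_KEY",
  "evaluator-args": { "max-tokens": 2048 }
}
```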
Verify pre-loaded personas:
#%chat all
keys