This repo contains simple, deployable apps that solve specific problems quickly using AI coding assistants, like Augment.
Each application follows these principles:
- Self-contained: All CSS and JavaScript inline or from CDN
- Single HTML file: No external dependencies within the repo
- Responsive: Works on mobile and desktop
- Complete: Standalone functionality with no navigation dependencies
I wanted a quick way to choose the best free-to-use Ollama model to download and experiment with. This app visualises model capability (the Arena score) against public opinion (number of votes) based on the LLMArena leaderboard, breaks text models down into proprietary and open source, shows which are available for download from Ollama, and flags which are 7B or smaller given my 8 GB of installed RAM.
As per Ollama's docs:

> You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
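The RAM guidance above can be sketched as a small helper, hypothetical and not part of the app itself, that maps installed RAM to the largest model tier Ollama's docs recommend:

```javascript
// Map installed RAM (GB) to the largest model size Ollama's docs
// say it can run: 8 GB -> 7B, 16 GB -> 13B, 32 GB -> 33B.
function maxModelTier(ramGb) {
  if (ramGb >= 32) return "33B";
  if (ramGb >= 16) return "13B";
  if (ramGb >= 8) return "7B";
  return null; // below the documented minimum for the 7B models
}

// On an 8 GB machine like mine, 7B models are the practical ceiling.
console.log(maxModelTier(8));
```

On an 8 GB machine this returns `"7B"`, which is why the app filters for models 7B or smaller.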
Use this interactive app to learn the "official" Kimball Group Dimensional Modeling Techniques and the four-step design process.
- Create a new HTML file following the naming pattern: `[app-name]-app.html`
- Make it completely self-contained (all CSS/JS inline or from CDN)
- Include proper meta tags for title, description, and viewport
- Test locally using a simple HTTP server: `python -m http.server 8000`
- Commit and push - GitHub Actions will deploy automatically
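A minimal skeleton for a new app following the steps above might look like this (the title, description, and styles are placeholders):

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta name="description" content="Short description of the app">
  <title>App Name</title>
  <style>
    /* All CSS inline - no external stylesheets within the repo */
    body { font-family: sans-serif; margin: 2rem; }
  </style>
</head>
<body>
  <h1>App Name</h1>
  <script>
    // All JavaScript inline or loaded from a CDN
  </script>
</body>
</html>
```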
To test apps locally:

```shell
# Start a local HTTP server
python -m http.server 8000

# Open in browser
http://localhost:8000/models-app.html
```

This repository uses GitHub Actions for automatic deployment to GitHub Pages. Any HTML file pushed to the main branch will be automatically available at:
https://kimnewzealand.github.io/apps/[filename].html