🚀 We're on Product Hunt! If you find Note67 useful, please consider upvoting us on Product Hunt — it helps others discover the project!
A private, local meeting notes assistant. Capture audio, transcribe locally with Whisper, and generate AI-powered summaries — all on your device.
- Meeting management (create, end, delete)
- SQLite database for local storage
- Audio recording (microphone)
- Local transcription with Whisper
- Speaker distinction (You vs Others) on macOS
- Echo deduplication when using speakers instead of headphones
- Live transcription during recording
- Pause/Resume recording
- Continue recording on existing notes (Listen)
- Voice Activity Detection (VAD) for mic input
- Automatic filtering of blank/noise segments
- Transcript viewer with search and speaker filter
- AI-powered summaries via Ollama
- Export to Markdown
- Settings with Profile, Whisper, Ollama, System tabs
- Dark mode support
- Custom context menus
- System tray support
- Cross-platform system audio (Windows via WASAPI)
- Linux system audio support
| Light Mode | Dark Mode | Settings |
|---|---|---|
| ![]() | ![]() | ![]() |
AI Summary
Note67 can distinguish between your voice and other meeting participants:
| Source | Speaker Label | How it works |
|---|---|---|
| Microphone | "You" | Your voice via mic input |
| System Audio | "Others" | Meeting participants via system audio capture |
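The mapping above can be sketched in Rust as a simple enum-to-label function. This is illustrative only; the type and function names are assumptions, not Note67's actual code.

```rust
// Illustrative sketch: maps an audio source to the speaker label
// shown in the transcript. Names are hypothetical, not Note67's real types.

#[derive(Debug, Clone, Copy, PartialEq)]
enum AudioSource {
    Microphone,  // your voice, captured from the mic
    SystemAudio, // other participants, captured from system output
}

fn speaker_label(source: AudioSource) -> &'static str {
    match source {
        AudioSource::Microphone => "You",
        AudioSource::SystemAudio => "Others",
    }
}

fn main() {
    assert_eq!(speaker_label(AudioSource::Microphone), "You");
    assert_eq!(speaker_label(AudioSource::SystemAudio), "Others");
    println!("labels ok");
}
```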
- macOS 13.0 (Ventura) or later
- Screen Recording permission (System Settings → Privacy & Security → Screen Recording)
- Microphone permission
- Windows 10 or later
- Microphone permission
- No additional permissions needed for system audio (WASAPI loopback)
When using speakers instead of headphones, your microphone picks up audio from your speakers, causing duplicate transcriptions. Note67 handles this with a multi-layer approach:
How it works:
- Voice Activity Detection (VAD) - Mic audio is only transcribed if its RMS energy exceeds a threshold, filtering out silence and ambient noise
- Echo Deduplication - Mic transcripts are compared against a 30-second rolling history of system audio segments
- Text Similarity Matching - If mic text shares 3+ words with overlapping system audio, it's filtered as echo
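The first and third layers above can be sketched as two small checks. This is a minimal sketch under stated assumptions: the threshold value, function names, and case-insensitive word matching are illustrative, not Note67's actual implementation.

```rust
// Hedged sketch of the VAD and text-similarity checks described above.
// Thresholds and data structures are assumptions, not Note67's real code.
use std::collections::HashSet;

/// VAD: accept a mic chunk only if its RMS energy exceeds a threshold,
/// filtering out silence and ambient noise.
fn passes_vad(samples: &[f32], threshold: f32) -> bool {
    if samples.is_empty() {
        return false;
    }
    let mean_sq = samples.iter().map(|s| s * s).sum::<f32>() / samples.len() as f32;
    mean_sq.sqrt() > threshold
}

/// Echo check: treat a mic transcript as echo if it shares 3+ words with
/// any segment in the rolling history of recent system-audio transcripts.
fn is_echo(mic_text: &str, system_history: &[String]) -> bool {
    let mic_words: HashSet<String> = mic_text
        .split_whitespace()
        .map(|w| w.to_lowercase())
        .collect();
    system_history.iter().any(|seg| {
        let overlap = seg
            .split_whitespace()
            .map(|w| w.to_lowercase())
            .filter(|w| mic_words.contains(w))
            .collect::<HashSet<_>>()
            .len();
        overlap >= 3
    })
}

fn main() {
    // A loud chunk passes VAD; near-silence does not.
    let loud = vec![0.5f32; 1600];
    let quiet = vec![0.001f32; 1600];
    assert!(passes_vad(&loud, 0.02));
    assert!(!passes_vad(&quiet, 0.02));

    // Mic text repeating 3+ words from recent system audio is flagged as echo.
    let history = vec!["let's review the quarterly numbers now".to_string()];
    assert!(is_echo("review the quarterly numbers", &history));
    assert!(!is_echo("unrelated remark", &history));
    println!("checks ok");
}
```

In practice the mic transcript would only reach `is_echo` after passing VAD, and the history would be pruned to the 30-second window the text describes.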
For best results:
- Headphones are still recommended for optimal quality
- Works automatically when system audio capture is enabled
| Layer | Technology |
|---|---|
| Frontend | React 18 + TypeScript + Tailwind CSS v4 |
| Backend | Rust (Tauri v2) |
| State | Zustand |
| Database | SQLite (rusqlite) |
| Transcription | whisper-rs (local Whisper models) |
| AI Summaries | Ollama (local LLMs) |
| System Audio | ScreenCaptureKit (macOS), WASAPI loopback (Windows) |
| Echo Handling | VAD + post-processing deduplication |
```sh
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Install Tauri CLI
cargo install tauri-cli

# Install Ollama and pull a model
brew install ollama
ollama pull llama3.2

# Install dependencies
npm install

# Run dev server (opens app window)
npm run tauri dev

# Build for production
npm run tauri build
```

| Command | Description |
|---|---|
| npm run tauri dev | Run Tauri app in dev mode |
| npm run tauri build | Build production app |
| npm run lint | Run ESLint |
| npm run format | Format code with Prettier |
| Permission | Purpose | When prompted |
|---|---|---|
| Microphone | Record your voice | First recording |
| Screen Recording | Capture system audio (others' voices) | When enabling speaker distinction |
| Permission | Purpose | When prompted |
|---|---|---|
| Microphone | Record your voice | First recording |
Note: Windows system audio capture via WASAPI loopback does not require additional permissions.
AGPL-3.0