🌐 Official Website | 🖥️ GitHub | 🤗 Model | 📑 Blog |
Advanced forced alignment and subtitle generation powered by 🤗 Lattice-1 model.
- Core Capabilities
- Installation
- Quick Start
- CLI Reference
- Python SDK Reference
- Advanced Features
- Architecture Overview
- Performance & Optimization
- Supported Formats
- Supported Languages
- Roadmap
- Development
LattifAI provides comprehensive audio-text alignment powered by the Lattice-1 model:
| Feature | Description | Status |
|---|---|---|
| Forced Alignment | Precise word-level and segment-level synchronization with audio | ✅ Production |
| Multi-Model Transcription | Gemini (100+ languages), Parakeet (24 languages), SenseVoice (5 languages) | ✅ Production |
| Speaker Diarization | Automatic multi-speaker identification with label preservation | ✅ Production |
| Audio Preprocessing | Multi-channel selection, device optimization (CPU/CUDA/MPS) | ✅ Production |
| Streaming Mode | Process audio up to 20 hours with minimal memory footprint | ✅ Production |
| Smart Text Processing | Intelligent sentence splitting and non-speech element separation | ✅ Production |
| Universal Format Support | 30+ caption/subtitle formats with text normalization | ✅ Production |
| Configuration System | YAML-based configs for reproducible workflows | ✅ Production |
Key Highlights:
- 🎯 Accuracy: State-of-the-art alignment precision with Lattice-1 model
- 🌍 Multilingual: Support for 100+ languages via multiple transcription models
- 🚀 Performance: Hardware-accelerated processing with streaming support
- 🔧 Flexible: CLI, Python SDK, and Web UI interfaces
- 📦 Production-Ready: Battle-tested on diverse audio/video content
Using pip:

```bash
pip install lattifai
```

Using uv (Recommended - 10-100x faster):

```bash
# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Create a new project with uv
uv init my-project
cd my-project

# Create and activate a virtual environment
uv venv
source .venv/bin/activate

# Install LattifAI
uv pip install lattifai
```

LattifAI API Key (Required)
Get your free API key at https://lattifai.com/dashboard/api-keys

Option A: Environment variable (recommended)

```bash
export LATTIFAI_API_KEY="lf_your_api_key_here"
```

Option B: .env file

```bash
# .env
LATTIFAI_API_KEY=lf_your_api_key_here
```

Gemini API Key (Optional - for transcription)

If you want to use Gemini models for transcription (e.g., gemini-2.5-pro), get your free Gemini API key at https://aistudio.google.com/apikey

```bash
# Add to environment variable
export GEMINI_API_KEY="your_gemini_api_key_here"

# Or add to .env file
GEMINI_API_KEY=your_gemini_api_key_here  # AIzaSyxxxx
```

Note: The Gemini API key is only required if you use Gemini models for transcription. It is not needed for alignment or when using other transcription models.
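If you prefer to keep keys in code rather than in the environment, the LattifAI key can also be passed through `ClientConfig` (covered in the SDK reference below); a minimal sketch:

```python
from lattifai import LattifAI, ClientConfig

# Pass the LattifAI key explicitly instead of relying on LATTIFAI_API_KEY.
client = LattifAI(
    client_config=ClientConfig(api_key="lf_your_api_key_here"),
)
```

The Gemini key can likewise be supplied as `transcription.gemini_api_key` on the CLI or via `TranscriptionConfig` in the SDK.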
```bash
# Align local audio with subtitle
lai alignment align audio.wav subtitle.srt output.srt

# Download and align YouTube video
lai alignment youtube "https://youtube.com/watch?v=VIDEO_ID"
```

```python
from lattifai import LattifAI

client = LattifAI()
caption = client.alignment(
    input_media="audio.wav",
    input_caption="subtitle.srt",
    output_caption_path="aligned.srt",
)
```

That's it! Your aligned subtitles are saved to aligned.srt.
1. Install the web application (one-time setup):

   ```bash
   lai-app-install
   ```

   This command will:
   - Check if Node.js/npm is installed (and install if needed)
   - Install frontend dependencies
   - Build the application
   - Set up the `lai-app` command globally

2. Start the backend server:

   ```bash
   lai-server

   # Custom port (default: 8001)
   lai-server --port 9000

   # Custom host
   lai-server --host 127.0.0.1 --port 9000

   # Production mode (disable auto-reload)
   lai-server --no-reload
   ```

   Backend Server Options:
   - `-p, --port` - Server port (default: 8001)
   - `--host` - Host address (default: 0.0.0.0)
   - `--no-reload` - Disable auto-reload for production
   - `-h, --help` - Show help message

3. Start the frontend application:

   ```bash
   lai-app

   # Custom port (default: 5173)
   lai-app --port 8080

   # Custom backend URL
   lai-app --backend http://localhost:9000

   # Don't auto-open browser
   lai-app --no-open
   ```

   Frontend Application Options:
   - `-p, --port` - Frontend server port (default: 5173)
   - `--backend` - Backend API URL (default: http://localhost:8001)
   - `--no-open` - Don't automatically open browser
   - `-h, --help` - Show help message

The web interface will automatically open in your browser at http://localhost:5173.
Features:
- ✅ Drag-and-Drop Upload: Visual file upload for audio/video and captions
- ✅ Real-Time Progress: Live alignment progress with detailed status
- ✅ Multiple Transcription Models: Gemini, Parakeet, SenseVoice selection
| Command | Description |
|---|---|
| `lai alignment align` | Align local audio/video with caption |
| `lai alignment youtube` | Download & align YouTube content |
| `lai transcribe run` | Transcribe audio/video or YouTube URL to caption |
| `lai transcribe align` | Transcribe audio/video and align with generated transcript |
| `lai caption convert` | Convert between caption formats |
| `lai caption normalize` | Clean and normalize caption text |
| `lai caption shift` | Shift caption timestamps |
```bash
# Basic usage
lai alignment align <audio> <caption> <output>

# Examples
lai alignment align audio.wav caption.srt output.srt
lai alignment align video.mp4 caption.vtt output.srt alignment.device=cuda
lai alignment align audio.wav caption.srt output.json \
  caption.split_sentence=true \
  caption.word_level=true
```

```bash
# Basic usage
lai alignment youtube <url>

# Examples
lai alignment youtube "https://youtube.com/watch?v=VIDEO_ID"
lai alignment youtube "https://youtube.com/watch?v=VIDEO_ID" \
  media.output_dir=~/Downloads \
  caption.output_path=aligned.srt \
  caption.split_sentence=true
```

Perform automatic speech recognition (ASR) on audio/video files or YouTube URLs to generate timestamped transcriptions.
```bash
# Basic usage - local file
lai transcribe run <input> <output>

# Basic usage - YouTube URL
lai transcribe run <url> <output_dir>

# Examples - Local files
lai transcribe run audio.wav output.srt
lai transcribe run audio.mp4 output.ass \
  transcription.model_name=nvidia/parakeet-tdt-0.6b-v3

# Examples - YouTube URLs
lai transcribe run "https://youtube.com/watch?v=VIDEO_ID" output_dir=./output
lai transcribe run "https://youtube.com/watch?v=VIDEO_ID" output.ass output_dir=./output \
  transcription.model_name=gemini-2.5-pro \
  transcription.gemini_api_key=YOUR_GEMINI_API_KEY

# Full configuration with keyword arguments
lai transcribe run \
  input=audio.wav \
  output_caption=output.srt \
  channel_selector=average \
  transcription.device=cuda \
  transcription.model_name=iic/SenseVoiceSmall
```

Parameters:
- `input`: Path to audio/video file or YouTube URL (required)
- `output_caption`: Path for output caption file (for local files)
- `output_dir`: Directory for output files (for YouTube URLs, defaults to current directory)
- `media_format`: Media format for YouTube downloads (default: mp3)
- `channel_selector`: Audio channel selection - "average", "left", "right", or channel index (default: "average")
  - Note: Ignored when transcribing YouTube URLs with Gemini models
- `transcription`: Transcription configuration (model_name, device, language, gemini_api_key)
Supported Transcription Models (More Coming Soon):
- `gemini-2.5-pro` - Google Gemini API (requires API key)
  - Languages: 100+ languages including English, Chinese, Spanish, French, German, Japanese, Korean, Arabic, and more
- `gemini-3-pro-preview` - Google Gemini API (requires API key)
  - Languages: 100+ languages (same as gemini-2.5-pro)
- `nvidia/parakeet-tdt-0.6b-v3` - NVIDIA Parakeet model
  - Languages: Bulgarian (bg), Croatian (hr), Czech (cs), Danish (da), Dutch (nl), English (en), Estonian (et), Finnish (fi), French (fr), German (de), Greek (el), Hungarian (hu), Italian (it), Latvian (lv), Lithuanian (lt), Maltese (mt), Polish (pl), Portuguese (pt), Romanian (ro), Slovak (sk), Slovenian (sl), Spanish (es), Swedish (sv), Russian (ru), Ukrainian (uk)
- `iic/SenseVoiceSmall` - Alibaba SenseVoice model
  - Languages: Chinese/Mandarin (zh), English (en), Japanese (ja), Korean (ko), Cantonese (yue)
- More models will be integrated in future releases
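In the Python SDK, the model is selected through `TranscriptionConfig` (described further under Supported Languages below); a minimal sketch using only the fields documented here (model_name, gemini_api_key):

```python
from lattifai import LattifAI, TranscriptionConfig

# Choose one of the models listed above; the gemini_api_key field is only
# needed for the Gemini models (it can also come from GEMINI_API_KEY).
client = LattifAI(
    transcription_config=TranscriptionConfig(
        model_name="gemini-2.5-pro",
        gemini_api_key="your_gemini_api_key_here",
    )
)
```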
Note: For transcription with alignment on local files, use lai transcribe align instead.
Transcribe audio/video file and automatically align the generated transcript with the audio.
This command combines transcription and alignment in a single step, producing precisely aligned captions.
```bash
# Basic usage
lai transcribe align <input_media> <output_caption>

# Examples
lai transcribe align audio.wav output.srt
lai transcribe align audio.mp4 output.ass \
  transcription.model_name=nvidia/parakeet-tdt-0.6b-v3 \
  alignment.device=cuda

# Using Gemini transcription with alignment
lai transcribe align audio.wav output.srt \
  transcription.model_name=gemini-2.5-pro \
  transcription.gemini_api_key=YOUR_KEY \
  caption.split_sentence=true

# Full configuration
lai transcribe align \
  input_media=audio.wav \
  output_caption=output.srt \
  transcription.device=mps \
  transcription.model_name=iic/SenseVoiceSmall \
  alignment.device=cuda \
  caption.word_level=true
```

Parameters:
- `input_media`: Path to input audio/video file (required)
- `output_caption`: Path for output aligned caption file (required)
- `transcription`: Transcription configuration (model_name, device, language, gemini_api_key)
- `alignment`: Alignment configuration (model_name, device)
- `caption`: Caption formatting options (split_sentence, word_level, etc.)
```bash
lai caption convert input.srt output.vtt
lai caption convert input.srt output.json

# Enable normalization to clean HTML entities and special characters:
lai caption convert input.srt output.json normalize_text=true
```

```bash
lai caption shift input.srt output.srt 2.0   # Delay by 2 seconds
lai caption shift input.srt output.srt -1.5  # Advance by 1.5 seconds
```

```python
from lattifai import LattifAI

# Initialize client (uses LATTIFAI_API_KEY from environment)
client = LattifAI()

# Align audio/video with subtitle
caption = client.alignment(
    input_media="audio.wav",           # Audio or video file
    input_caption="subtitle.srt",      # Input subtitle file
    output_caption_path="output.srt",  # Output aligned subtitle
    split_sentence=True,               # Enable smart sentence splitting
)

# Access alignment results
for segment in caption.supervisions:
    print(f"{segment.start:.2f}s - {segment.end:.2f}s: {segment.text}")
```

```python
from lattifai import LattifAI

client = LattifAI()

# Download YouTube video and align with auto-downloaded subtitles
caption = client.youtube(
    url="https://youtube.com/watch?v=VIDEO_ID",
    output_dir="./downloads",
    output_caption_path="aligned.srt",
    split_sentence=True,
)
```

LattifAI uses a config-driven architecture for fine-grained control:
```python
from lattifai import LattifAI, ClientConfig

client = LattifAI(
    client_config=ClientConfig(
        api_key="lf_your_api_key",  # Or use LATTIFAI_API_KEY env var
        timeout=30.0,
        max_retries=3,
    )
)
```

```python
from lattifai import LattifAI, AlignmentConfig

client = LattifAI(
    alignment_config=AlignmentConfig(
        model_name="Lattifai/Lattice-1",
        device="cuda",  # "cpu", "cuda", "cuda:0", "mps"
    )
)
```

```python
from lattifai import LattifAI, CaptionConfig

client = LattifAI(
    caption_config=CaptionConfig(
        split_sentence=True,            # Smart sentence splitting (default: False)
        word_level=True,                # Word-level timestamps (default: False)
        normalize_text=True,            # Clean HTML entities (default: True)
        include_speaker_in_text=False,  # Include speaker labels (default: True)
    )
)
```

```python
from lattifai import (
    LattifAI,
    ClientConfig,
    AlignmentConfig,
    CaptionConfig,
)

client = LattifAI(
    client_config=ClientConfig(
        api_key="lf_your_api_key",
        timeout=60.0,
    ),
    alignment_config=AlignmentConfig(
        model_name="Lattifai/Lattice-1",
        device="cuda",
    ),
    caption_config=CaptionConfig(
        split_sentence=True,
        word_level=True,
        output_format="json",
    ),
)

caption = client.alignment(
    input_media="audio.wav",
    input_caption="subtitle.srt",
    output_caption_path="output.json",
)
```

```python
from lattifai import (
    # Client classes
    LattifAI,
    # AsyncLattifAI,  # For async support

    # Config classes
    ClientConfig,
    AlignmentConfig,
    CaptionConfig,
    DiarizationConfig,
    MediaConfig,

    # I/O classes
    Caption,
)
```

LattifAI provides powerful audio preprocessing capabilities for optimal alignment:
Channel Selection
Control which audio channel to process for stereo/multi-channel files:
```python
from lattifai import LattifAI

client = LattifAI()

# Use left channel only
caption = client.alignment(
    input_media="stereo.wav",
    input_caption="subtitle.srt",
    channel_selector="left",  # Options: "left", "right", "average", or channel index (0, 1, 2, ...)
)

# Average all channels (default)
caption = client.alignment(
    input_media="stereo.wav",
    input_caption="subtitle.srt",
    channel_selector="average",
)
```

CLI Usage:

```bash
# Use right channel
lai alignment align audio.wav subtitle.srt output.srt \
  media.channel_selector=right

# Use specific channel index
lai alignment align audio.wav subtitle.srt output.srt \
  media.channel_selector=1
```

Device Management
Optimize processing for your hardware:
```python
from lattifai import LattifAI, AlignmentConfig

# Use CUDA GPU
client = LattifAI(
    alignment_config=AlignmentConfig(device="cuda")
)

# Use specific GPU
client = LattifAI(
    alignment_config=AlignmentConfig(device="cuda:0")
)

# Use Apple Silicon MPS
client = LattifAI(
    alignment_config=AlignmentConfig(device="mps")
)

# Use CPU
client = LattifAI(
    alignment_config=AlignmentConfig(device="cpu")
)
```

Supported Formats
- Audio: WAV, MP3, M4A, AAC, FLAC, OGG, OPUS, AIFF, and more
- Video: MP4, MKV, MOV, WEBM, AVI, and more
- All formats supported by FFmpeg are compatible
LattifAI now supports processing long audio files (up to 20 hours) through streaming mode. Enable streaming by setting the streaming_chunk_secs parameter:
Python SDK:
```python
from lattifai import LattifAI

client = LattifAI()

# Enable streaming for long audio files
caption = client.alignment(
    input_media="long_audio.wav",
    input_caption="subtitle.srt",
    output_caption_path="output.srt",
    streaming_chunk_secs=600.0,  # Process in 10-minute chunks
)
```

CLI:

```bash
# Enable streaming with chunk size
lai alignment align long_audio.wav subtitle.srt output.srt \
  media.streaming_chunk_secs=300.0

# For YouTube videos
lai alignment youtube "https://youtube.com/watch?v=VIDEO_ID" \
  media.streaming_chunk_secs=300.0
```

MediaConfig:

```python
from lattifai import LattifAI, MediaConfig

client = LattifAI(
    media_config=MediaConfig(
        streaming_chunk_secs=600.0,  # Chunk duration in seconds (1-1800), default: 600 (10 minutes)
    )
)
```

Technical Details:
| Parameter | Description | Recommendation |
|---|---|---|
| Default Value | 600 seconds (10 minutes) | Good for most use cases |
| Memory Impact | Lower chunks = less RAM usage | Adjust based on available RAM |
| Accuracy Impact | Virtually zero degradation | Our precise implementation preserves quality |
Performance Characteristics:
- ✅ Near-Perfect Accuracy: Streaming implementation maintains alignment precision
- 🚧 Memory Efficient: Process 20-hour audio with <10GB RAM (600-sec chunks)
Enable word_level=True to get precise timestamps for each word:
```python
from lattifai import LattifAI, CaptionConfig

client = LattifAI(
    caption_config=CaptionConfig(word_level=True)
)

caption = client.alignment(
    input_media="audio.wav",
    input_caption="subtitle.srt",
    output_caption_path="output.json",  # JSON preserves word-level data
)

# Access word-level alignments
for segment in caption.alignments:
    if segment.alignment and "word" in segment.alignment:
        for word_item in segment.alignment["word"]:
            print(f"{word_item.start:.2f}s: {word_item.symbol} (confidence: {word_item.score:.2f})")
```

The split_sentence option intelligently separates:
- Non-speech elements (`[APPLAUSE]`, `[MUSIC]`) from dialogue
- Multiple sentences within a single subtitle
- Speaker labels from content

```python
caption = client.alignment(
    input_media="audio.wav",
    input_caption="subtitle.srt",
    split_sentence=True,
)
```

Speaker diarization automatically identifies and labels different speakers in audio using state-of-the-art models.
Core Capabilities:
- 🎤 Multi-Speaker Detection: Automatically detect speaker changes in audio
- 🏷️ Smart Labeling: Assign speaker labels (SPEAKER_00, SPEAKER_01, etc.)
- 🔄 Label Preservation: Maintain existing speaker names from input captions
- 🤖 Gemini Integration: Extract speaker names intelligently during transcription
How It Works:
- Without Existing Labels: System assigns generic labels (SPEAKER_00, SPEAKER_01)
- With Existing Labels: System preserves your speaker names during alignment
  - Formats: `[Alice]`, `>> Bob:`, `SPEAKER_01:`, `Alice:` are all recognized
- Gemini Transcription: When using Gemini models, speaker names are extracted from context
  - Example: "Hi, I'm Alice" → System labels as `Alice` instead of `SPEAKER_00`
Speaker Label Integration:
The diarization engine intelligently matches detected speakers with existing labels:
- If input captions have speaker names → Preserved during alignment
- If Gemini transcription provides names → Used for labeling
- Otherwise → Generic labels (SPEAKER_00, etc.) assigned
- 🚧 Future Enhancement:
  - AI-Powered Speaker Name Inference: an upcoming feature will use large language models combined with metadata (video title, description, context) to intelligently infer speaker names, making transcripts more human-readable and contextually accurate
CLI:
```bash
# Enable speaker diarization during alignment
lai alignment align audio.wav subtitle.srt output.srt \
  diarization.enabled=true

# With additional diarization settings
lai alignment align audio.wav subtitle.srt output.srt \
  diarization.enabled=true \
  diarization.device=cuda \
  diarization.min_speakers=2 \
  diarization.max_speakers=4

# For YouTube videos with diarization
lai alignment youtube "https://youtube.com/watch?v=VIDEO_ID" \
  diarization.enabled=true
```

Python SDK:

```python
from lattifai import LattifAI, DiarizationConfig

client = LattifAI(
    diarization_config=DiarizationConfig(enabled=True)
)

caption = client.alignment(
    input_media="audio.wav",
    input_caption="subtitle.srt",
    output_caption_path="output.srt",
)

# Access speaker information
for segment in caption.supervisions:
    print(f"[{segment.speaker}] {segment.text}")
```
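The CLI examples above also bound the number of speakers with `diarization.min_speakers` / `diarization.max_speakers`. Assuming `DiarizationConfig` exposes matching `min_speakers` and `max_speakers` fields (an assumption inferred from the CLI keys, not verified here), the same constraint in Python might look like this sketch:

```python
from lattifai import LattifAI, DiarizationConfig

# Hypothetical fields: min_speakers/max_speakers are assumed to mirror the
# CLI's diarization.min_speakers / diarization.max_speakers keys.
client = LattifAI(
    diarization_config=DiarizationConfig(
        enabled=True,
        min_speakers=2,
        max_speakers=4,
    )
)

caption = client.alignment(
    input_media="audio.wav",
    input_caption="subtitle.srt",
    output_caption_path="output.srt",
)
```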
Create reusable configuration files (under development):

```yaml
# config/alignment.yaml
model_name: "Lattifai/Lattice-1"
device: "cuda"
batch_size: 1
```

```bash
lai alignment align audio.wav subtitle.srt output.srt \
  alignment=config/alignment.yaml
```

LattifAI uses a modular, config-driven architecture for maximum flexibility:
```
┌────────────────────────────────────────────────────────┐
│                    LattifAI Client                     │
├────────────────────────────────────────────────────────┤
│  Configuration Layer (Config-Driven)                   │
│  ├── ClientConfig (API settings)                       │
│  ├── AlignmentConfig (Model & device)                  │
│  ├── CaptionConfig (I/O formats)                       │
│  ├── TranscriptionConfig (ASR models)                  │
│  └── DiarizationConfig (Speaker detection)             │
├────────────────────────────────────────────────────────┤
│  Core Components                                       │
│  ├── AudioLoader  → Load & preprocess audio            │
│  ├── Aligner      → Lattice-1 forced alignment         │
│  ├── Transcriber  → Multi-model ASR                    │
│  ├── Diarizer     → Speaker identification             │
│  └── Tokenizer    → Intelligent text segmentation      │
├────────────────────────────────────────────────────────┤
│  Data Flow                                             │
│  Input → AudioLoader → Aligner → Diarizer → Caption    │
│              ↓                                         │
│       Transcriber (optional)                           │
└────────────────────────────────────────────────────────┘
```
Component Responsibilities:
| Component | Purpose | Configuration |
|---|---|---|
| AudioLoader | Load audio/video, channel selection, format conversion | MediaConfig |
| Aligner | Forced alignment using Lattice-1 model | AlignmentConfig |
| Transcriber | ASR with Gemini/Parakeet/SenseVoice | TranscriptionConfig |
| Diarizer | Speaker diarization with pyannote.audio | DiarizationConfig |
| Tokenizer | Sentence splitting and text normalization | CaptionConfig |
| Caption | Unified data structure for alignments | CaptionConfig |
Data Flow:
- Audio Loading: `AudioLoader` loads media, applies channel selection, converts to numpy array
- Transcription (optional): `Transcriber` generates transcript if no caption provided
- Text Preprocessing: `Tokenizer` splits sentences and normalizes text
- Alignment: `Aligner` uses Lattice-1 to compute word-level timestamps
- Diarization (optional): `Diarizer` identifies speakers and assigns labels
- Output: `Caption` object contains all results, exported to desired format
Configuration Philosophy:
- ✅ Declarative: Describe what you want, not how to do it
- ✅ Composable: Mix and match configurations
- ✅ Reproducible: Save configs to YAML for consistent results
- ✅ Flexible: Override configs per-method or globally
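As a small illustration of the per-method vs. global point above, here is a sketch that uses only options shown elsewhere in this README (`CaptionConfig.split_sentence` and the `split_sentence` keyword of `client.alignment`); the exact override semantics are taken from the description above:

```python
from lattifai import LattifAI, CaptionConfig

# Global default: keep subtitles as-is.
client = LattifAI(caption_config=CaptionConfig(split_sentence=False))

# Per-call override: enable smart splitting for just this run.
caption = client.alignment(
    input_media="audio.wav",
    input_caption="subtitle.srt",
    output_caption_path="aligned.srt",
    split_sentence=True,
)
```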
Choose the optimal device for your hardware:
```python
from lattifai import LattifAI, AlignmentConfig

# NVIDIA GPU (recommended for speed)
client = LattifAI(
    alignment_config=AlignmentConfig(device="cuda")
)

# Apple Silicon GPU
client = LattifAI(
    alignment_config=AlignmentConfig(device="mps")
)

# CPU (maximum compatibility)
client = LattifAI(
    alignment_config=AlignmentConfig(device="cpu")
)
```

Performance Comparison (30-minute audio):
| Device | Time |
|---|---|
| CUDA (RTX 4090) | ~18 sec |
| MPS (M4) | ~26 sec |
Streaming Mode for long audio:
```python
# Process 20-hour audio with <10GB RAM
caption = client.alignment(
    input_media="long_audio.wav",
    input_caption="subtitle.srt",
    streaming_chunk_secs=600.0,  # 10-minute chunks
)
```

Memory Usage (approximate):
| Chunk Size | Peak RAM | Suitable For |
|---|---|---|
| 600 sec | ~5 GB | Recommended |
| No streaming | ~10 GB+ | Short audio only |
- Use GPU when available: 10x faster than CPU
- WIP: Enable streaming for long audio: Process 20+ hour files without OOM
- Choose appropriate chunk size: Balance memory vs. performance
- Batch processing: Process multiple files in sequence (coming soon)
- Profile alignment: Set `client.profile=True` to identify bottlenecks (see the sketch below)
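Putting these tips together, a minimal sketch for a long recording on a GPU machine, using only options shown earlier in this README (`AlignmentConfig.device`, `streaming_chunk_secs`, and the `client.profile` flag):

```python
from lattifai import LattifAI, AlignmentConfig

# GPU alignment with streaming enabled for a long recording.
client = LattifAI(
    alignment_config=AlignmentConfig(device="cuda")  # ~10x faster than CPU
)
client.profile = True  # enable profiling to identify bottlenecks

caption = client.alignment(
    input_media="long_audio.wav",
    input_caption="subtitle.srt",
    output_caption_path="aligned.srt",
    streaming_chunk_secs=600.0,  # 10-minute chunks keep peak RAM bounded
)
```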
LattifAI supports virtually all common media and subtitle formats:
| Type | Formats |
|---|---|
| Audio | WAV, MP3, M4A, AAC, FLAC, OGG, OPUS, AIFF, and more |
| Video | MP4, MKV, MOV, WEBM, AVI, and more |
| Caption/Subtitle Input | SRT, VTT, ASS, SSA, SUB, SBV, TXT, Gemini, and more |
| Caption/Subtitle Output | All input formats + TextGrid (Praat) |
Tabular Formats:
- TSV: Tab-separated values with optional speaker column
- CSV: Comma-separated values with optional speaker column
- AUD: Audacity labels format with `[[speaker]]` notation
Note: If a format is not listed above but commonly used, it's likely supported. Feel free to try it or reach out if you encounter any issues.
LattifAI supports multiple transcription models with different language capabilities:
Models: gemini-2.5-pro, gemini-3-pro-preview, gemini-3-flash-preview
Supported Languages: English, Chinese (Mandarin & Cantonese), Spanish, French, German, Italian, Portuguese, Japanese, Korean, Arabic, Russian, Hindi, Bengali, Turkish, Dutch, Polish, Swedish, Danish, Norwegian, Finnish, Greek, Hebrew, Thai, Vietnamese, Indonesian, Malay, Filipino, Ukrainian, Czech, Romanian, Hungarian, Swahili, Tamil, Telugu, Marathi, Gujarati, Kannada, and 70+ more languages.
Note: Requires Gemini API key from Google AI Studio
Model: nvidia/parakeet-tdt-0.6b-v3
Supported Languages:
- Western Europe: English (en), French (fr), German (de), Spanish (es), Italian (it), Portuguese (pt), Dutch (nl)
- Nordic: Danish (da), Swedish (sv), Norwegian (no), Finnish (fi)
- Eastern Europe: Polish (pl), Czech (cs), Slovak (sk), Hungarian (hu), Romanian (ro), Bulgarian (bg), Ukrainian (uk), Russian (ru)
- Others: Croatian (hr), Estonian (et), Latvian (lv), Lithuanian (lt), Slovenian (sl), Maltese (mt), Greek (el)
Model: iic/SenseVoiceSmall
Supported Languages:
- Chinese/Mandarin (zh)
- English (en)
- Japanese (ja)
- Korean (ko)
- Cantonese (yue)
```python
from lattifai import LattifAI, TranscriptionConfig

# Specify language for transcription
client = LattifAI(
    transcription_config=TranscriptionConfig(
        model_name="nvidia/parakeet-tdt-0.6b-v3",
        language="de",  # German
    )
)
```

CLI Usage:

```bash
lai transcribe run audio.wav output.srt \
  transcription.model_name=nvidia/parakeet-tdt-0.6b-v3 \
  transcription.language=de
```

Tip: Use Gemini models for maximum language coverage, Parakeet for European languages, and SenseVoice for Asian languages.
Visit our LattifAI roadmap for the latest updates.
| Date | Model Release | Features |
|---|---|---|
| Oct 2025 | Lattice-1-Alpha | ✅ English forced alignment ✅ Multi-format support ✅ CPU/GPU optimization |
| Nov 2025 | Lattice-1 | ✅ English + Chinese + German ✅ Mixed languages alignment ✅ Speaker Diarization ✅ Multi-model transcription (Gemini, Parakeet, SenseVoice) ✅ Web interface with React 🚧 Advanced segmentation strategies (entire/transcription/hybrid) 🚧 Audio event detection ([MUSIC], [APPLAUSE], etc.) |
| Q1 2026 | Lattice-2 | ✅ Streaming mode for long audio 🔮 40+ languages support 🔮 Real-time alignment |
Legend: ✅ Released | 🚧 In Development | 📋 Planned | 🔮 Future
```bash
git clone https://github.com/lattifai/lattifai-python.git
cd lattifai-python

# Using uv (recommended)
curl -LsSf https://astral.sh/uv/install.sh | sh
uv sync
source .venv/bin/activate

# Or using pip
pip install -e ".[test]"
pre-commit install
```

```bash
pytest                      # Run all tests
pytest --cov=src            # With coverage
pytest tests/test_basic.py  # Specific test
```

- Fork the repository
- Create a feature branch
- Make changes and add tests
- Run `pytest` and `pre-commit run`
- Submit a pull request
Apache License 2.0
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Discord: Join our community


