Open source · Building now · 2026

NoteTurbo
AI notebook at full speed

NotebookLM killer. Private, open-source,
10× faster — built on a custom Rust + Flutter engine.
Your docs. Your models. Zero Google.

10×
Faster builds
140ms
Cold start
0
Google servers
5+
AI models

// app preview

noteturbo.app
Notebooks
Research 2026
Q3 Report
ML Papers
Product Docs
Legal Files
Research 2026
PDF
transformer_architecture.pdf · 48 pages · indexed in 0.3s
PDF
llm_benchmarks_2025.pdf · 120 pages · indexed in 0.8s
What's the key difference between GPT-4 and Llama 3.1?
Based on your docs: GPT-4 is a closed-source RLHF-tuned model; Llama 3.1 is an open-weight 405B model. Per the benchmark PDF, Llama 3.1 hits 92.3% MMLU with 40% lower inference cost on local hardware...
Ask anything about your documents...
AI Model
Llama 3.1 · local
Claude 3.5
Gemini Pro
Mistral
100% local · no cloud
Build time
8.2s vs 72s (Google)
Cold start
140ms vs 800ms
// why noteturbo

Everything NotebookLM does.
Without the compromises.

01
Fully private
All processing on your device or your server. Your documents never touch Google's infrastructure. Ever.
02
Turbo backend
Custom Rust + Flutter engine. 8–12s builds vs 60–90s. 140ms cold start. The arena allocator eliminates GC spikes.
03
Any AI model
Switch between Llama, Claude, Gemini, Mistral, and Phi-4 within one notebook. No lock-in. Local or cloud — your call.
04
Infinite context
Load thousands of pages. Stack- and arena-allocated objects keep memory 3× lower than competitors. No OOM crashes.
05
Audio overviews
Two AI voices debate your documents in real time. Near-instant generation via PGO-optimized audio hot paths.
06
Open source
Apache 2.0 licensed. Fork it, self-host it, audit it. No subscriptions. No vendor lock-in. No black boxes.
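The "arena allocator" the Turbo backend advertises boils down to a simple idea: allocate all of a session's objects from one contiguous block by bumping a cursor, then free the whole block at once. A minimal illustrative sketch (this is not NoteTurbo's actual code; `Arena` and its methods are hypothetical names):

```rust
// Bump-arena sketch: O(1) allocation, bulk free, no per-object GC work.
struct Arena {
    buf: Vec<u8>,
    used: usize,
}

impl Arena {
    fn with_capacity(bytes: usize) -> Self {
        Arena { buf: vec![0; bytes], used: 0 }
    }

    /// Hand out `n` bytes by bumping a cursor; no syscalls, no free list.
    fn alloc(&mut self, n: usize) -> Option<&mut [u8]> {
        if self.used + n > self.buf.len() {
            return None; // arena exhausted; caller grows or falls back
        }
        let start = self.used;
        self.used += n;
        Some(&mut self.buf[start..start + n])
    }

    /// Drop everything at once, e.g. when a notebook tab closes.
    fn reset(&mut self) {
        self.used = 0;
    }
}

fn main() {
    let mut arena = Arena::with_capacity(1024);
    let a = arena.alloc(128).expect("fits").len();
    let b = arena.alloc(512).expect("fits").len();
    println!("allocated {} bytes, then reset", a + b);
    arena.reset();
}
```

Because deallocation is a single cursor reset, there are no GC pauses and no fragmentation mid-chat; the trade-off is that individual objects cannot be freed early.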
// benchmark results

The numbers don't lie.

| Metric | NoteTurbo | NotebookLM (Google) | Delta |
|---|---|---|---|
| Build time (release) | 6–12s | 60–90s | ×8–10 faster |
| Hot restart | <1s | 3–5s | ×5 faster |
| Cold start | 120–180ms | 600–900ms | ×5 faster |
| Memory (AI chat) | Stack + arena | High pressure | ×3 lower |
| Binary size | 18–22 MB | 35–45 MB | ×2 smaller |
| Shader stutter | Zero | Occasional | ∞ better |
| Data privacy | 100% local | Google servers | Full control |
// audio overviews

Two AI voices.
One real debate.

Upload a paper. NoteTurbo generates a podcast where two AI personalities explore, challenge, and synthesize your content — in real time, fully on-device.

Powered by PGO-optimized hot paths and an arena allocator. Zero GC pauses during generation. Audio starts streaming in under 200ms.
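Sub-200ms time-to-first-audio comes from chunked streaming: hand each small PCM chunk to the player the moment it is synthesized instead of waiting for the full track. An illustrative sketch under that assumption (not NoteTurbo's real pipeline; `synthesize_chunk` is a stand-in for the TTS step):

```rust
use std::sync::mpsc;
use std::thread;

// 50 ms of audio at 48 kHz; small chunks mean playback starts early.
const CHUNK_SAMPLES: usize = 2_400;

// Stand-in for the real TTS step: emit a silent chunk.
fn synthesize_chunk(_i: usize) -> Vec<i16> {
    vec![0i16; CHUNK_SAMPLES]
}

fn main() {
    let (tx, rx) = mpsc::channel::<Vec<i16>>();

    // Generator thread: pushes each chunk as soon as it is ready.
    let generator = thread::spawn(move || {
        for i in 0..10 {
            tx.send(synthesize_chunk(i)).expect("player hung up");
        }
        // Dropping `tx` closes the channel and ends the stream.
    });

    // "Player" side: the first chunk arrives long before generation ends.
    let mut total = 0usize;
    for chunk in rx {
        total += chunk.len();
    }
    generator.join().unwrap();
    println!("streamed {} samples in 10 chunks", total);
}
```

The channel decouples synthesis from playback, so a slow chunk only delays itself, not the chunks already queued.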

// live audio generation
Voice A
Speaking now...
Voice B
Listening
// open source

Built in public.
Forever free.

Apache 2.0. Fork it, audit it, deploy it yourself. No hidden subscriptions, no data harvesting, no vendor lock-in.

Apache 2.0 · Flutter · Turbo Rust backend · Self-host
Star on GitHub →
// early access

Get in
before launch.

NoteTurbo is in active development.
Join the waitlist — be first when beta drops.

No spam · Open source forever · Apache 2.0