alpha · work in progress
Open source · Built in Rust

The AI database
that reasons

ReasonDB introduces Hierarchical Reasoning Retrieval — LLM-guided tree traversal over your documents. Not chunks. Not embeddings. Structure.

$ brew tap brainfish-ai/reasondb-tap && brew install reasondb
100% · Benchmark accuracy
6.1s · Median latency
90% · Context recall

RAG fails at scale

Flat chunks can't model document hierarchy. Vector similarity can't resolve cross-references. Black-box retrieval can't be audited. Standard RAG wasn't built for complex, regulated documents.

Document structure · hierarchy, subsections, tables
  Standard RAG: flat chunks lose hierarchy
  ReasonDB HRR: hierarchical tree nodes — full structure preserved

Cross-reference chains · §3.3 → §3.1 → Appendix
  Standard RAG: no cross-ref graph
  ReasonDB HRR: cross_ref_node_ids built at ingestion

Domain vocabulary · 'big C' vs 'critical illness'
  Standard RAG: no per-cohort vocab translation
  ReasonDB HRR: per-cohort domain_vocab · auto-extracted

Auditability · regulated industries, compliance
  Standard RAG: black-box, no audit trail
  ReasonDB HRR: 4-phase trace · logged · replayable

Determinism · same query → same answer
  Standard RAG: non-deterministic by default
  ReasonDB HRR: query cache · identical results

Formula handling · financial calculations in docs
  Standard RAG: LLM hallucinates arithmetic
  ReasonDB HRR: formula nodes → Python tool-call

Query accuracy · regulated document benchmarks
  Standard RAG: 55–70% typical
  ReasonDB HRR: 90%+ avg context recall
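To make the comparison concrete, here is a minimal sketch of how the fields named above might fit together on a single tree node. Only `cross_ref_node_ids` and `domain_vocab` come from the feature list; every other field name is an assumption, not ReasonDB's documented schema.

```python
from dataclasses import dataclass, field

# Hypothetical HRR tree node. Only cross_ref_node_ids and domain_vocab
# appear in the comparison above; the other field names are assumed.
@dataclass
class TreeNode:
    node_id: str
    heading: str                 # e.g. "§4.2 Conditions"
    text: str = ""
    children: list = field(default_factory=list)
    cross_ref_node_ids: list = field(default_factory=list)  # resolved at ingestion
    domain_vocab: dict = field(default_factory=dict)        # per-cohort synonym map

# §4.2 links to §3.8, mirroring the example tree
n42 = TreeNode("4.2", "§4.2 Conditions",
               cross_ref_node_ids=["3.8"],
               domain_vocab={"big C": "critical illness"})
```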
tree · policy_doc_47.pdf · 847 nodes · 3 cross-refs
├── §1 General Provisions
│   ├── §1.1 Definitions
│   └── §1.2 Scope
├── §3 Coverage
│   ├── §3.1 Base Benefits        ← cross_ref target
│   └── §3.8 Waiting Periods      ← cross_ref target
├── §4 Eligibility
│   ├── §4.1 Requirements
│   └── §4.2 Conditions           ● beam · confidence: 0.94 · uses §3.1, §3.8
└── Appendix A
Architecture

Structure, not chunks.

Standard RAG flattens documents into chunks ranked by embedding similarity. HRR preserves the original hierarchy as a tree — the LLM navigates it deterministically, with full cross-reference awareness built at ingestion.

  • Sections stored as tree nodes — parent/child relationships intact
  • Cross-references resolved at ingestion: §4.2 knows it links to §3.8
  • LLM traverses the tree via beam search instead of ranking flat chunks
  • Identical structure every run — no embedding drift, no approximation
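The traversal described above can be sketched as a toy beam search over a section tree. This is purely illustrative: the `relevance` dict stands in for the LLM's ranking, and nothing here is ReasonDB's actual implementation.

```python
# Toy beam search over a section tree. score() stands in for the LLM's
# relevance judgment; this is an illustrative sketch, not ReasonDB's code.

def beam_search(root, score, beam_width=2, depth=3):
    """Keep the top-`beam_width` children at each level of the tree."""
    beam = [root]
    for _ in range(depth):
        children = [c for node in beam for c in node.get("children", [])]
        if not children:
            break
        beam = sorted(children, key=score, reverse=True)[:beam_width]
    return beam

tree = {"id": "root", "children": [
    {"id": "§3", "children": [{"id": "§3.1", "children": []},
                              {"id": "§3.8", "children": []}]},
    {"id": "§4", "children": [{"id": "§4.1", "children": []},
                              {"id": "§4.2", "children": []}]},
]}

# Hypothetical scores: the "eligibility" branch wins at each level
relevance = {"§3": 0.3, "§3.1": 0.2, "§3.8": 0.4,
             "§4": 0.9, "§4.1": 0.5, "§4.2": 0.95}
best = beam_search(tree, lambda n: relevance.get(n["id"], 0.0))
# best → the two highest-scoring leaves, [§4.2, §4.1]
```

Because the tree and the scores are fixed, the traversal is deterministic: the same query walks the same path every run.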
trace · query_id: 7f3a9c · 6.1s total
Phase 1 · BM25 candidate selection · 0.2s
  12 docs matched · 0 LLM calls
Phase 2 · tree_grep structural filter · 0.8s
  4 docs · 18 nodes shortlisted · 0 LLM calls
Phase 3 · LLM ranking · 1.4s
  top 2 docs selected by summary + snippets · 1 LLM call
Phase 4 · Beam search · 3.7s
  §4.2 + §3.8 (cross-ref) · confidence 0.94 · 3 LLM calls
Reasoning: "§4.2 defines eligibility. §3.8 cross-ref provides waiting period. Confidence: 0.94"
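The per-phase numbers above add up as you would expect. A sketch of totaling a trace record (the JSON shape is an assumption for illustration, not ReasonDB's documented schema):

```python
# Hypothetical trace record mirroring the four phases shown above.
trace = {
    "query_id": "7f3a9c",
    "phases": [
        {"name": "bm25_candidates",  "seconds": 0.2, "llm_calls": 0},
        {"name": "tree_grep_filter", "seconds": 0.8, "llm_calls": 0},
        {"name": "llm_ranking",      "seconds": 1.4, "llm_calls": 1},
        {"name": "beam_search",      "seconds": 3.7, "llm_calls": 3},
    ],
}

# Totals reconcile with the trace header: 6.1s, 4 LLM calls in all
total_seconds = round(sum(p["seconds"] for p in trace["phases"]), 1)
total_llm_calls = sum(p["llm_calls"] for p in trace["phases"])
```

Note that phases 1 and 2 are pure retrieval with zero LLM calls; only ranking and beam search spend tokens.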
Auditability

Every decision.
Fully visible.

ReasonDB logs every phase of every query. See which nodes were selected, why, and with what confidence. Replay any query identically — critical for regulated industries.

  • Full 4-phase trace per query — stored and replayable
  • Export to OpenTelemetry, Splunk, or any SIEM
  • Query cache ensures deterministic replay
  • Designed for SOC 2, HIPAA, SEC, and FedRAMP audit requirements
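The determinism guarantee above rests on caching by query identity. A toy sketch of that idea (the key scheme and function names are ours, not ReasonDB's API):

```python
import hashlib
import json

# Toy query cache: identical table + query → identical stored result.
# Illustrative only; not ReasonDB's actual cache implementation.
_cache = {}

def cache_key(table, query):
    payload = json.dumps({"table": table, "query": query}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def run_query(table, query, execute):
    key = cache_key(table, query)
    if key not in _cache:
        _cache[key] = execute(table, query)  # first run does the real work
    return _cache[key]                       # replays return the stored result

calls = []
def fake_execute(table, query):
    calls.append(query)                      # count how often the LLM path runs
    return {"answer": "...", "confidence": 0.94}

first  = run_query("policies", "Total disability benefit conditions?", fake_execute)
replay = run_query("policies", "Total disability benefit conditions?", fake_execute)
# replay returns the same cached object; the executor ran exactly once
```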

“We’d spent months trying to solve policy retrieval accuracy internally. RAG wasn’t cutting it — only 60% accurate on insurance benchmarks. ReasonDB is exactly what we’d been trying to build.”

Enterprise Architect
Top 10 global insurer · name withheld
Quick start

Up in minutes

Single Rust binary. Deploy on your own infra. No data leaves your environment.

01 · Install
$ terminal
# Install via Homebrew
brew tap brainfish-ai/reasondb-tap
brew install reasondb

# Or run via Docker
docker run --rm --pull always -p 4444:4444 \
  -e REASONDB_LLM_PROVIDER=openai \
  -e REASONDB_LLM_API_KEY=sk-... \
  brainfishai/reasondb:latest serve
02 · Ingest & query
$ terminal
# Start the server
reasondb serve

# Create a table and ingest a document
curl -X POST localhost:4444/v1/tables \
  -d '{"name":"policies"}'
curl -X POST localhost:4444/v1/tables/policies/ingest/file \
  -F 'file=@policy.pdf'

# Query with full reasoning trace
curl -X POST localhost:4444/v1/tables/policies/query \
  -d "{ \"query\": \"SELECT answer, confidence, trace FROM docs REASON 'Total disability benefit conditions?'\" }"
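From a script, the same query can be built in a few lines. The endpoint and REASON syntax are taken from the curl example above; the helper function itself is ours and hypothetical.

```python
import json

# Sketch of building the query payload from the quick-start example.
# The /v1/tables/{table}/query endpoint and REASON syntax come from the
# curl command above; reason_query() is an illustrative helper, not an SDK.
def reason_query(table, question, host="localhost:4444"):
    url = f"http://{host}/v1/tables/{table}/query"
    sql = f"SELECT answer, confidence, trace FROM docs REASON '{question}'"
    return url, json.dumps({"query": sql})

url, body = reason_query("policies", "Total disability benefit conditions?")
# url  → http://localhost:4444/v1/tables/policies/query
# body → the same JSON payload the curl -d flag sends
```

POST `body` to `url` with any HTTP client to get back the answer, confidence, and full 4-phase trace.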

Ready to go beyond RAG?

Open source. Single binary. Deploy anywhere. Bring your own LLM.