
Where Claude Code actually stores your sessions

5 min read · citizencapet

Claude Code writes every conversation to ~/.claude/projects/ as JSONL. Here is exactly what lives there, what each file contains, and why it is useless by default.

Every Claude Code session you have ever run is already on your disk. Anthropic's CLI silently appends each turn of the conversation to a JSONL file under ~/.claude/projects/. The data is there. It is just shaped in a way that makes it impossible to use.

This post opens the box. We look at the exact path, the exact record types, and then we talk about why a file that has every answer you ever got from Claude is still not useful until something indexes it.

The path

Open a terminal and run this:

ls ~/.claude/projects/

You will see a directory per working directory you have ever used Claude Code in, with the path encoded into the folder name. A repo at /Users/you/code/api becomes -Users-you-code-api. Inside each folder sits one JSONL file per session:

ls ~/.claude/projects/-Users-you-code-api/
# 0f3c9a11-7b2e-4a8d-9f10-2c6b1d4e8a77.jsonl
# 3d8b2c4e-1f5a-4b9c-8d7e-6a3f2b1c9d4e.jsonl
# 7e1a4c6b-9d3f-4a8e-b2c7-1f5d8a9e3b4c.jsonl

The filename is the session UUID. Claude Code assigns it when the session starts and never changes it. That UUID is the only stable identifier for a conversation. Everything Claude Recall does on top of these files (aliases, tags, notes, pins, collections) hangs off that UUID, and never mutates it.
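Based on the example folder name above, the encoding appears to be a straight substitution of path separators with dashes. A minimal sketch (an assumption from the one example shown; how Claude Code handles dots or other special characters is not covered here):

```python
def encode_project_dir(cwd: str) -> str:
    # Replace every path separator with a dash, matching the folder
    # names shown above. Hypothetical reimplementation, not Claude
    # Code's actual source.
    return cwd.replace("/", "-")

print(encode_project_dir("/Users/you/code/api"))  # -Users-you-code-api
```

This also explains why the folder names start with a dash: the leading `/` of an absolute path becomes a leading `-`.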

What lives inside a JSONL

Open one with less and you will see lines that look roughly like this, one JSON object per line:

{"type":"user","uuid":"...","timestamp":"2026-04-18T13:22:04Z","message":{"role":"user","content":"add a migration for the orders table"}}
{"type":"assistant","uuid":"...","timestamp":"...","message":{"role":"assistant","content":[{"type":"text","text":"I will create a new migration..."},{"type":"tool_use","name":"Edit","input":{"file_path":"..."}}]}}
{"type":"tool_result","uuid":"...","timestamp":"...","toolUseID":"...","content":"..."}
{"type":"system","subtype":"init","cwd":"/Users/you/code/api","model":"claude-opus-4-7"}

Five broad record categories show up:

  1. user — what you typed.
  2. assistant — Claude's reply, usually a mixed array of text and tool_use blocks.
  3. tool_result — the output that a tool returned, keyed back to a tool_use by ID.
  4. system — session init, model selection, cwd, and bookkeeping.
  5. summary — compaction checkpoints when the session gets long.

Every record carries a timestamp and a UUID. The assistant records interleave prose and tool calls, so grepping for text without understanding the nesting will miss things or duplicate them. And when a session exceeds the context window, Claude Code writes a summary record and then keeps appending. The file is append-only, but it is not read-friendly.
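The nesting problem is concrete: a user record's content is a plain string, while an assistant record's content is an array mixing text and tool_use blocks. A sketch of extracting just the prose, using the field names from the sample records above (real files may carry more variants):

```python
import json

def turn_texts(jsonl_line: str) -> list[str]:
    """Pull human-readable text out of one record. Field names
    follow the sample records shown earlier in this post."""
    rec = json.loads(jsonl_line)
    content = rec.get("message", {}).get("content", "")
    if isinstance(content, str):
        # user records: content is a plain string
        return [content] if content else []
    # assistant records: content is a mixed array;
    # keep text blocks, skip tool_use blocks
    return [b["text"] for b in content if b.get("type") == "text"]

line = ('{"type":"assistant","message":{"role":"assistant","content":'
        '[{"type":"text","text":"I will create a new migration..."},'
        '{"type":"tool_use","name":"Edit","input":{}}]}}')
print(turn_texts(line))  # ['I will create a new migration...']
```

Naive grep over the raw line would have matched inside the tool_use block too; walking the structure is what keeps text and tool noise apart.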

Why scrolling it hurts

Try this on a real session:

less ~/.claude/projects/-Users-you-code-api/0f3c9a11-7b2e-4a8d-9f10-2c6b1d4e8a77.jsonl

You get a wall of escaped JSON. Newlines inside messages are \n literals, code blocks are escaped, tool results can be 40 KB each. jq helps a little:

jq -r 'select(.type=="user") | .message.content' session.jsonl

But now you have lost the assistant replies. Add them back and the output is still thousands of lines of context-free text, with no search, no filter by project, no filter by date, no way to see "the session where I fixed the race condition in the billing worker". That session is in there. You just cannot find it.

This is the gap Claude Recall fills. Anthropic ships the raw truth. Recall ships the index.

The indexer

recall is a CLI plus a local daemon plus a web UI. It tails every file under ~/.claude/projects/, parses each record, extracts the user and assistant text, strips tool noise, redacts secrets at ingest, and writes structured rows to a local SQLite database. The source JSONLs are never modified. That is one of the project's three hard invariants: read-only on the raw data.

Once it is indexed you can do the things that were impossible a minute ago:

recall search "billing race condition"
recall list -p api --since 7d
recall show 0f3c9a11 --full
recall context 0f3c9a11 --since 2h | claude

That last line is the interesting one. It takes an earlier session, condenses it, and pipes it into a fresh Claude Code invocation as starting context. We cover that workflow in depth in another post.

The shape of a real day

A typical power-user day writes between 5 and 20 JSONLs. Multiply by weeks and you get a corpus that is genuinely large: hundreds of megabytes of your own reasoning, tool calls, and decisions. This is why simple grep-based approaches do not scale. Full-text search with ranking, filters by project, filters by tag, and date windowing are not nice-to-haves once the corpus crosses 500 sessions. They are the difference between the data being an asset and the data being dead weight on your disk.
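To make the grep-versus-index point concrete, here is a minimal sketch of ranked full-text search using SQLite's built-in FTS5. The schema and table names are illustrative only, not Recall's actual schema:

```python
import sqlite3

# Illustrative schema: one row per conversation turn.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE turns USING fts5(session, role, text)")
db.executemany(
    "INSERT INTO turns VALUES (?, ?, ?)",
    [
        ("0f3c9a11", "user", "fix the race condition in the billing worker"),
        ("3d8b2c4e", "user", "add a migration for the orders table"),
    ],
)
# Ranked multi-term search -- the thing grep over escaped JSON cannot do.
rows = db.execute(
    "SELECT session FROM turns WHERE turns MATCH ? ORDER BY rank",
    ("billing race",),
).fetchall()
print(rows)
```

Two terms, no exact phrase required, results ranked by relevance, and the query cost stays flat as the corpus grows, because it hits an index instead of rescanning hundreds of megabytes of JSONL.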

A few things worth noting about the format that affect tooling choices:

  1. Append-only, but not atomic. A partially-written record can appear at the tail of a file if Claude Code crashes mid-write. Any indexer has to handle EOF mid-line gracefully.
  2. Per-session, never merged. Two sessions in the same cwd are two files. There is no "project-level" JSONL.
  3. Messages can be large. A single tool_result from a file-read can be tens of kilobytes. Indexers that load whole files into memory will hit ceilings on a busy day.
  4. Timestamps are wall-clock, not monotonic. Sessions that cross a DST boundary or a clock adjustment can appear to go backward. Sort by UUID-plus-timestamp, not timestamp alone.
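Point 1 above is the one that bites first. A tail-tolerant reader is short to write; this sketch simply stops at the first unparseable line, on the assumption that a later pass will pick it up once the writer finishes it:

```python
import json
import tempfile

def read_records(path: str):
    """Yield parsed records, tolerating a truncated final line
    (a crash mid-write leaves invalid JSON at EOF)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                # Partial record at the tail: stop here and retry
                # on the next indexing pass.
                break

# Simulate a crash mid-write: one complete record, one torn one.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write('{"type":"user","message":{"content":"hi"}}\n')
    f.write('{"type":"assistant","mess')
recs = list(read_records(f.name))
print(len(recs))  # 1
```

The intact record survives, the torn one is skipped instead of crashing the indexer.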

Recall handles all of this on your behalf. The point of the indexer is that you do not think about any of it.

Where to go from here

If you have never looked at these files before, take 60 seconds to ls ~/.claude/projects/ and count them. Most power users are sitting on 500 to 2000 sessions they did not know they had. The raw data is yours. Making it searchable takes one install:

Head to the install page, or jump straight to the quickstart for the two commands that get you searching your own history inside a minute.