entropik.

EzyLegal — a drafting and case-memory platform for Australian litigators.

Extracts facts and builds case context from legal documents, then drafts pleadings and procedural work against that context — the lawyer's edits are the system's memory.

The problem EzyLegal exists to solve

Litigation work drowns in paper. A single matter carries hundreds of emails, contracts, medical reports, and affidavits, and the lawyer's job is to hold the chronology, the causation, and the credibility of each fact in their head while drafting. When a matter sits for six weeks between hearings, that mental model evaporates. Junior lawyers rebuild it from scratch each time, senior lawyers bill for the re-reading, and clients pay for work that was already done.

On the drafting side the problem compounds. Pleadings, amended pleadings, affidavits, and briefing notes run under rules (UCPR, jurisdiction-specific court procedure) that punish drift with struck-out paragraphs and wasted costs. The junior does the scut work: re-reading the brief, chasing the right rule number, particularising each allegation, checking the form. The partner has no time to review until the night before filing. The lawyer's judgement — which facts matter, which cause of action survives, which evidence supports which paragraph — lives in one head and leaves the file the moment attention moves on.

What dies in both cases is the same thing: the case brief the lawyer wrote last month is not connected to the pleading they draft today.

What EzyLegal does differently

EzyLegal runs two loops over the same case, each one propose-then-approve.

The case-context loop turns documents into reviewable facts. During ingestion, agents extract candidate facts from every chunk, each one structured — statement, date, legal theme, status — and cited back to an exact source excerpt. Lawyers work a review surface that shows the suggestion next to the source document; they approve, edit the language, downgrade confidence, or reject. Every decision is recorded against the fact and the prompt version that produced it. A second set of agents — discovery, articulation, validation, timeline, quality — compose approved facts into a chronological narrative and flag gaps where evidence is missing.
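The propose-then-approve fact loop above can be sketched as a pair of records: a candidate fact carrying its citation and prompt version, and a review decision recorded against it. The names here (`CandidateFact`, `ReviewDecision`, `review`) are illustrative assumptions, not EzyLegal's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class CandidateFact:
    statement: str          # the extracted factual claim
    date: Optional[str]     # ISO date, if one was found in the source
    theme: str              # legal theme, e.g. "notice", "causation"
    status: str             # "proposed" | "approved" | "rejected"
    source_excerpt: str     # exact excerpt the fact is cited back to
    prompt_version: str     # which extraction prompt produced it

@dataclass
class ReviewDecision:
    fact: CandidateFact
    action: str             # "approve" | "edit" | "downgrade" | "reject"
    edited_statement: Optional[str] = None

def review(fact: CandidateFact, action: str,
           edited: Optional[str] = None) -> ReviewDecision:
    """Record the lawyer's decision against the fact and its prompt version."""
    return ReviewDecision(fact=fact, action=action, edited_statement=edited)

fact = CandidateFact(
    statement="Defendant received the notice on 3 March 2023.",
    date="2023-03-03",
    theme="notice",
    status="proposed",
    source_excerpt="...we confirm receipt of your letter dated 3 March 2023...",
    prompt_version="extract-v12",
)
decision = review(fact, "edit",
                  edited="The defendant acknowledged receipt on 3 March 2023.")
```

Because the decision keeps a reference to the fact, every approval or edit stays attached to the source excerpt and the prompt version that produced the suggestion.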

The drafting loop turns that context into procedurally correct documents. For each pleading type (SOC, amended SOC, affidavit, briefing note) retrieval pulls document text, case briefs, and person rosters from the case memory, plus the applicable UCPR rules from a scraped, embedded rule corpus; analysis proposes the causes of action and the particulars; generation drafts against the jurisdiction's rule set; a QA stage checks the output against UCPR and case-brief facts before the lawyer sees it. The lawyer is the decision surface — they accept, edit, or reject paragraph-by-paragraph, and those edits are captured as structured deltas against the proposal.
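The retrieval → analysis → generation → QA chain reads naturally as a pipeline of plain functions. A minimal sketch, assuming each stage takes and returns a context dict — the stage names mirror the text, but the bodies are stand-ins, not EzyLegal's implementation:

```python
def retrieve(case_id: str) -> dict:
    # pull document text, case brief, person roster, and applicable rules
    return {"case_id": case_id, "brief": "...", "rules": ["UCPR r 14.8"]}

def analyse(context: dict) -> dict:
    # propose causes of action and particulars from the retrieved context
    return {**context, "causes": ["breach of contract"]}

def generate(analysis: dict) -> dict:
    # draft paragraphs against the jurisdiction's rule set
    return {**analysis,
            "draft": ["1. The plaintiff and defendant entered a contract..."]}

def qa(draft: dict) -> dict:
    # check the draft before the lawyer sees it (stand-in check only)
    draft["qa_passed"] = all(p.strip() for p in draft["draft"])
    return draft

def pleading_pipeline(case_id: str) -> dict:
    result = retrieve(case_id)
    for stage in (analyse, generate, qa):
        result = stage(result)
    return result

proposal = pleading_pipeline("matter-042")
```

The value of the shape is that a bad completion at any stage is visible to the next stage; QA sits between generation and the lawyer, never after.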

Deltas flow back as signal in both loops. Correction patterns tune future retrieval, recalibrate the QA judge, and feed the case brief that grounds the next pleading in the same matter. The lawyer never accepts the agent output as the answer. They correct it, and the corrections are what the system remembers the next time the case is opened.
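One way to picture "structured deltas against the proposal" is a paragraph-level diff that records what the lawyer did to each proposed paragraph. The field names and the `diff_paragraphs` helper are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ParagraphDelta:
    paragraph_id: int
    action: str            # "accept" | "edit" | "reject"
    proposed: str
    final: str
    prompt_version: str

def diff_paragraphs(proposed, final, prompt_version):
    """Compare proposed paragraphs to the lawyer's final text, paragraph by paragraph."""
    deltas = []
    for i, (p, f) in enumerate(zip(proposed, final), start=1):
        if f is None:
            action, f = "reject", ""
        elif p == f:
            action = "accept"
        else:
            action = "edit"
        deltas.append(ParagraphDelta(i, action, p, f, prompt_version))
    return deltas

deltas = diff_paragraphs(
    proposed=["The contract was formed on 1 May.",
              "Damages flow from the breach."],
    final=["The contract was formed on or about 1 May 2022.",
           "Damages flow from the breach."],
    prompt_version="soc-v7",
)
```

Each delta carries the prompt version, so correction patterns can be aggregated per prompt and per retrieval pass, which is the signal the loops compound on.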

// architecture · simplified
intake → harness → output → memory → human loop

Why the architecture matters here

  • events over CRUD — facts, documents, timelines, pleadings, and case briefs are downstream of discrete domain events (FactApproved, DocumentProcessed, SessionEnded, ParagraphAccepted). Modules subscribe through an event bus; the brief regenerates when the facts behind it change; a pleading regenerates when the brief does. Nothing mutates in place without leaving a trail.
  • three-tier context — Redis holds live session state, Qdrant holds conversation summaries and semantic vectors, Postgres holds the permanent case brief, person rosters, and session history, Pinecone holds document chunks and UCPR rules across nine jurisdictions. Each tier is addressed for what it is good at; no tier pretends to be another. A lawyer returning after weeks resumes in under three seconds because the restoration pulls from all tiers at once.
  • skills over controllers — pleading_pipeline, affidavit_agent, timeline_agent, ucpr_scraper, briefing_note_agent are composable skill modules, not REST handlers. A pleading type is a skill graph, not an endpoint.
  • harness over model — retrieval → analysis → generation → QA wraps every drafting LLM call; extraction → articulation → validation wraps every fact-extraction call. A single bad completion is caught before it reaches the lawyer.
  • feedback as interaction — every approved fact, edited phrasing, downgraded status, accepted paragraph, or rejected draft attaches to the prompt version and retrieval pass that produced it. That is the training signal the platform compounds on — quality is measured against lawyer behaviour, not benchmark accuracy.
  • projections over features — the case brief, the timeline, the evidence checklist, the "changes since last visit" panel, and the per-pleading draft history are not stored entities; they are projections rebuilt from facts, events, and session history. A new view is a new projection, not a new table.
  • boundary skills — the UCPR scraper and document importer are isolated at the edge, so corpus changes and upstream schema drift never leak into the drafting pipeline or the case memory.
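The events-over-CRUD and projections-over-features bullets pair naturally: modules subscribe to domain events, and views like the case brief are rebuilt from the event log rather than stored. A toy sketch of that pairing — `EventBus`, `case_brief`, and the event payloads are all illustrative, not EzyLegal's internals:

```python
from collections import defaultdict

class EventBus:
    """Append-only log plus fan-out to subscribers; nothing mutates in place."""
    def __init__(self):
        self.log = []
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        self.log.append((event_type, payload))
        for handler in self.subscribers[event_type]:
            handler(payload)

def case_brief(log):
    # projection: the brief is rebuilt from approved facts, not stored as a table
    return [p["statement"] for t, p in log if t == "FactApproved"]

bus = EventBus()
bus.subscribe("FactApproved", lambda p: None)  # e.g. a brief-regeneration trigger
bus.publish("FactApproved", {"statement": "Notice was served on 3 March 2023."})
bus.publish("DocumentProcessed", {"doc_id": "medical-report-7"})
brief = case_brief(bus.log)
```

A new view is then literally a new function over the same log — the "changes since last visit" panel would be another projection filtering the log by timestamp, with no new storage.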