
The Verifiable Claude Playbook for Australian Financial Services

May 2026 · 6 min read · Industry Guide

[Image: A financial firm's filing room viewed through a doorway, with deep timber shelves stacked with manila folders and bound annual reports]

Your analyst ran the research. The AI returned a figure. The compliance team asked where it came from. Nobody could point at the page.

That is the friction point blocking AI adoption in Australian financial services right now. Not model capability. Not integration complexity. The inability to prove, to an auditor or a risk committee, exactly where each output came from.

What Kepler built in three months

Kepler is a fintech founded in 2025 that built a financial-services AI platform on Claude, indexing 26 million-plus SEC filings, 50 million-plus public documents, 1 million-plus private documents, and 14,000-plus companies across 27 global markets. They did it in under three months.

  • 26 million-plus SEC filings, earnings call transcripts, and IR presentations indexed.

  • 50 million-plus total public documents across 14,000-plus companies in 27 markets.

  • 1 million-plus private documents indexed alongside the public corpus.

  • Every AI output traced to source document, page number, and line item.

The scale matters. What makes it usable for finance is the verification layer: every figure the AI returns is traced back to its exact source document, page number, and line item.

Technically: AWS infrastructure, Rust for the indexing and serving layer, Python for the AI orchestration. The company was founded by Vinoo Ganesh (ex-Palantir, CEO) and John McRaven (CTO), and they published their architecture publicly, which is what makes Kepler worth studying rather than just admiring. For anyone planning a similar build, the pattern to note is the combination of a compiled, memory-efficient language for the data-heavy layers with a flexible scripting language for model orchestration.

Why verifiability is the precondition, not a feature

APRA's CPS 230 sets explicit operational risk requirements for regulated entities, covering material third-party dependencies including AI systems. CPS 234 addresses information security. Both apply to banks, insurers, and superannuation funds. Australia's Privacy Act 1988 and the Australian Privacy Principles add further obligations around data handling, particularly for AI systems that process personal financial information. For Australian FS teams deploying Claude into research, due diligence, or client advisory workflows, these are not background compliance considerations. They are the conditions under which the product either ships or does not.

Compliance teams will not accept 'trust the model' as an audit posture. That is not cynicism on their part. It is their job. An AI system that generates a recommendation without a traceable source is not auditable under Australian financial services law. It does not matter how accurate the model is in testing. If the risk committee cannot follow the chain of evidence from output back to primary source, the answer is no.

The cost of not solving this is calculable. An analyst fully loaded at $120 per hour, spending 20 hours a week on manual due diligence and document research, is costing $124,800 per year on one workflow. That figure sits on every FS CFO's cost sheet already. The AI replacement exists. The missing piece is the audit trail that makes it deployable in a regulated context.
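The arithmetic behind that figure is simple enough to sanity-check in a few lines. A back-of-the-envelope version, assuming a 52-week working year and the rate and hours above:

```python
# Back-of-the-envelope cost of the manual workflow (assumes a 52-week year).
hourly_rate = 120       # fully loaded analyst cost, dollars per hour
hours_per_week = 20     # time spent on manual due diligence and document research
weeks_per_year = 52

annual_cost = hourly_rate * hours_per_week * weeks_per_year
print(f"Annual cost of the manual workflow: ${annual_cost:,}")  # $124,800
```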

The Australian financial services playbook

Kepler's approach distils to three principles. None of them are particularly glamorous. All of them are the difference between an AI deployment that compliance approves and one that stalls in a risk committee for six months.

1. Build the verification layer before the AI feature

Decide your audit format before writing any AI orchestration code: source document, page number, line item, confidence score. Get sign-off from risk and compliance on that format before you build. This feels slow in the planning phase. It is actually the fastest path to production. Retrofitting compliance after a proof of concept is built fails nine times out of ten, because the underlying architecture made assumptions that auditors cannot accept. The classic failure mode is a retrieval pipeline that returns relevant text but loses the exact source reference along the way. Reconstructing those references later is technically possible, but rewiring the architecture after the fact is expensive and usually means starting over.
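As a concrete illustration only (the field names here are ours, not Kepler's), the agreed audit format can be as small as a typed record that every retrieval result must carry before the model is allowed to cite it:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SourceCitation:
    """Audit record attached to every figure the AI returns.

    Field names are illustrative; agree the actual schema with risk and
    compliance before any orchestration code is written.
    """
    document_id: str      # stable identifier of the source filing or report
    document_title: str   # e.g. "FY24 Annual Report"
    page_number: int      # exact page the figure appears on
    line_item: str        # the line item or table row quoted
    confidence: float     # retrieval confidence score, 0.0 to 1.0


@dataclass(frozen=True)
class VerifiedAnswer:
    """An AI output is only reportable if it carries at least one citation."""
    text: str
    citations: tuple[SourceCitation, ...]

    def is_auditable(self) -> bool:
        return len(self.citations) > 0
```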

2. Match the tool to the job

Kepler used Rust for the indexing and serving layer because it gives predictable latency and throughput at financial data scale, with low memory overhead per connection. Python handled the AI orchestration, where ecosystem flexibility matters more than raw speed. The split is deliberate: each language is used where its properties are a genuine advantage. Australian financial services teams that try to do everything in a single stack typically end up with a system that is either too rigid to extend or too slow for production. The engineering trade-off is not about language preference. It is about what the architecture needs at each layer.
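A minimal sketch of what that split can look like from the Python side, assuming a hypothetical search endpoint exposed by the compiled indexing service (the endpoint name and response shape are ours, purely for illustration):

```python
import requests

# Hypothetical HTTP endpoint exposed by the compiled indexing/serving layer.
INDEX_SERVICE_URL = "http://localhost:8080/search"


def retrieve_passages(query: str, top_k: int = 5) -> list[dict]:
    """Ask the indexing service for passages, keeping source metadata intact.

    The heavy lifting (tokenisation, index lookups, ranking) lives in the
    compiled service; Python only orchestrates the request and hands the
    source-referenced passages to the model prompt.
    """
    response = requests.post(
        INDEX_SERVICE_URL,
        json={"query": query, "top_k": top_k},
        timeout=10,
    )
    response.raise_for_status()
    # Each passage is expected to arrive with its citation fields attached,
    # e.g. {"text": ..., "document_id": ..., "page_number": ..., "line_item": ...}
    return response.json()["passages"]
```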

3. Index your private data alongside public data

Kepler's 1 million-plus private documents are indexed alongside 50 million-plus public ones. The ratio is skewed toward public, but the competitive value is in the private. Your firm's actual advantage is not in accessing public data. Every competitor has Bloomberg; every firm has access to the same ASX filings. The advantage is in internal research notes, client portfolio history, proprietary analyst assessments, and the decade of transaction data sitting in systems that nobody has queried with a language model. A Claude deployment that only sees public sources is a commodity product. The one that also indexes your private corpus is a defensible position.
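One sketch of the indexing side of that idea: carry a provenance tag and an access scope on every document, so private material never leaves the firm's permission boundary while still sitting in the same index as the public corpus (field names are illustrative assumptions, not Kepler's schema):

```python
from dataclasses import dataclass
from enum import Enum


class Provenance(str, Enum):
    PUBLIC = "public"     # ASX filings, annual reports, transcripts
    PRIVATE = "private"   # internal research notes, client portfolio history


@dataclass(frozen=True)
class IndexedDocument:
    document_id: str
    provenance: Provenance
    allowed_groups: frozenset[str]  # which internal teams may retrieve it
    text: str


def visible_to(doc: IndexedDocument, user_groups: set[str]) -> bool:
    """Public documents are visible to everyone; private ones only to the
    groups they were indexed for."""
    if doc.provenance is Provenance.PUBLIC:
        return True
    return bool(doc.allowed_groups & user_groups)
```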

[Image: Three-step framework: verify first, match the tool to the layer, index private data alongside public]

When this playbook does not apply

If your AI use case does not touch regulated outputs, a full citation enforcement layer is overkill. This applies more often than you might expect: summarising internal meeting notes, drafting first-pass client communications for human review, generating research summaries for internal consumption. For those workflows, source citation is a useful feature, not a compliance requirement. Match the governance overhead to the actual risk.

Likewise, if your document corpus is under roughly 10,000 items, a lightweight retrieval setup with basic source tracking typically works fine. You do not need a Rust-based indexing pipeline to validate references across a 500-document research library. Build the architecture the problem requires. A $30,000 to $60,000 initial deployment covering one or two high-value workflows will tell you more about what you actually need than six months of architecture planning.
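For a corpus at that scale, something along these lines, a plain keyword scan that never drops the source reference, is often enough to start with (purely illustrative, not a production retriever):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Passage:
    text: str
    source_file: str
    page_number: int


def search(corpus: list[Passage], query: str, top_k: int = 5) -> list[Passage]:
    """Naive keyword-overlap scoring; fine for a few thousand documents."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(p.text.lower().split())), p) for p in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for score, p in scored[:top_k] if score > 0]

# Every returned passage still carries its source_file and page_number,
# so a human reviewer can check the reference in seconds.
```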

Automata AI builds verifiable Claude deployments for Australian financial services, with citation enforcement and APRA-aligned governance built in from the start, not added after a compliance review.

The businesses that get this right will not necessarily be the largest teams in Australian financial services. They will be the ones that made a deliberate decision, early, that verifiability was the feature, not the afterthought. That decision is available to any FS team right now.

Ready to move from AI pilot to production?

We help mid-market Australian businesses deploy AI automations that actually reach production and deliver measurable ROI.