
See Inside Your AI.

We build interpretable AI models that replace the black box. You get a model whose reasoning you can read directly. Not a summary it wrote about itself. Not an approximation. A direct read of what it actually computed.

We work with teams in banking, financial services, and insurance where AI decisions carry real consequences and need to be understood.

Get started: pip install cobalt-ai

The Problem

Your AI works. You just can't prove how.

Organizations across banking, financial services, and insurance are deploying AI in lending, collections, claims, fraud detection, and compliance. The models perform. But when someone asks what actually drove a specific decision, the answer is either silence or a story the model told about itself.

Existing explanation tools were built for an older generation of models. They do not extend faithfully to the architectures being deployed today. As AI moves from pilot to production in high-stakes environments, this gap becomes the difference between a model that ships and one that stalls.

What We Do

From Workflow to Proof to Production

01

Build an Interpretable Model for Your Workflow

We work with your team to understand your current systems and build a model customized to your specific workflow. It bolts onto your existing infrastructure and is architected so its internal reasoning is directly readable. This is not post-hoc explanation. It is built into the model itself.

02

Prove It on Your Data

The model runs alongside your production system in shadow mode. No workflow disruption. No new integrations beyond a data feed. You get a side-by-side accuracy comparison on live data before making any production commitment. We prove improvement in your environment through a controlled comparison.
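The comparison itself is deliberately simple once both models have scored the same live cases. Below is a minimal sketch of what that side-by-side readout can look like, assuming a decision log with eventual ground-truth outcomes; the file name and column names are placeholders for illustration, not our actual evaluation harness.

    # Minimal shadow-mode comparison sketch (file and column names are placeholders).
    # Both models scored the same live cases; neither affected the workflow.
    import pandas as pd
    from sklearn.metrics import accuracy_score

    log = pd.read_csv("decision_log.csv")  # placeholder for the shadow data feed

    prod_acc = accuracy_score(log["outcome"], log["production_decision"])
    shadow_acc = accuracy_score(log["outcome"], log["shadow_decision"])

    print(f"production accuracy: {prod_acc:.3f}")
    print(f"shadow accuracy:     {shadow_acc:.3f}")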

03

Give Your Stakeholders a Way to Interrogate Decisions

Cobalt is our interrogation platform. Business leaders can ask why the model recommended a specific action. Compliance teams can check consistency across similar cases. Technical teams can audit behavior at the level of internal representations. The answers come from what the model actually computed, not from a narrative it generated after the fact.

Industries

Where Interpretability Matters Most

We partner with teams in banking, financial services, and insurance to bring interpretability to the AI workflows where it matters most.

Collections

When agents follow the AI recommendation most of the time, the recommendation is effectively the decision, and its reasoning needs to be documented. We provide the per-decision attribution record that risk and compliance teams need to see.

Lending & Credit

Automated lending decisions need transparent reasoning, particularly when built on non-traditional data. We surface patterns in model behavior that post-hoc tools miss.

Fraud Detection

The operational savings in fraud triage depend on scaling auto-close decisions. That requires a per-case attribution record at the point of disposition. We build the model that produces it.

AML & Compliance

When a model draws a conclusion from months of transaction history, you need confidence the conclusion was driven by the data. We make that reasoning visible and auditable.

Our Approach

Founded in Mathematics

Our approach combines two of the most rigorous methods for understanding AI systems:

Topological Data Analysis

TDA reveals the shape of high-dimensional model behavior without imposing assumptions. It surfaces clusters, transitions, and failure modes that standard evaluation misses.
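To make the idea concrete, here is a toy Mapper construction, one of the standard TDA tools for this kind of analysis, sketched with numpy and scikit-learn. The synthetic blob data stands in for model activations, and the lens, cover, and clustering choices are illustrative assumptions, not our production pipeline.

    # Toy Mapper-style graph over stand-in "activations" (illustrative only).
    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.decomposition import PCA
    from sklearn.cluster import DBSCAN

    acts, _ = make_blobs(n_samples=1000, n_features=64, centers=5, random_state=0)

    # 1. Lens: project the high-dimensional points to one dimension.
    lens = PCA(n_components=1).fit_transform(acts).ravel()

    # 2. Cover: overlapping intervals over the lens values.
    lo, hi, n_bins, overlap = lens.min(), lens.max(), 10, 0.25
    width = (hi - lo) / n_bins
    nodes = []
    for i in range(n_bins):
        a = lo + i * width - overlap * width
        b = lo + (i + 1) * width + overlap * width
        idx = np.where((lens >= a) & (lens <= b))[0]
        if len(idx) == 0:
            continue
        # 3. Cluster within each interval; each cluster becomes a node.
        labels = DBSCAN(eps=12.0, min_samples=5).fit_predict(acts[idx])
        for lab in set(labels) - {-1}:
            nodes.append(set(idx[labels == lab]))

    # 4. Edges connect nodes that share points; the resulting graph is a
    #    compressed picture of the "shape" of the activation space.
    edges = [(i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))
             if nodes[i] & nodes[j]]
    print(len(nodes), "nodes,", len(edges), "edges")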

Mechanistic Interpretability

We decompose model activations into interpretable features using sparse autoencoders and cross-layer transcoders. We map the circuits and concepts inside your model: not just what it predicts, but why, at the level of internal representations.
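As a sketch of the core object, here is a minimal sparse autoencoder in PyTorch. The dimensions, the random stand-in data, and the L1 penalty weight are illustrative assumptions rather than a training recipe, and cross-layer transcoders add structure beyond this single-layer picture.

    # Minimal sparse autoencoder over captured activations (illustrative sketch).
    import torch
    import torch.nn as nn

    d_model, d_features = 512, 4096   # assumed sizes; real values vary by model

    class SparseAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Linear(d_model, d_features)
            self.decoder = nn.Linear(d_features, d_model)

        def forward(self, x):
            f = torch.relu(self.encoder(x))   # sparse, non-negative feature codes
            return self.decoder(f), f

    sae = SparseAutoencoder()
    opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
    acts = torch.randn(256, d_model)          # stand-in for captured activations

    for _ in range(100):
        recon, feats = sae(acts)
        # Reconstruction loss keeps features faithful to the activations;
        # the L1 term keeps only a few features active per input, which is
        # what makes each feature individually interpretable.
        loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()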


Auditable by Design

Together, these methods produce models whose reasoning is structurally readable. The result is interpretability that compliance and risk teams can verify, not just trust.

Cobalt in action: topological analysis of model activations, mechanistic feature mapping, and anomaly detection.
Cobalt

Interrogate Your AI

Cobalt is how your teams inspect, interrogate, and verify AI behavior. Built on topological data analysis and mechanistic decomposition, Cobalt gives business, compliance, and technical stakeholders direct access to a model's internal representations.

Ask a question about a decision. Get an answer grounded in what the model actually computed.

Install: pip install cobalt-ai
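Cobalt's own interface is covered in the docs. As a generic illustration of what an answer grounded in computation looks like (this is not the cobalt-ai API), the sketch below ranks the learned features most active on a single case; the feature labels and values are hypothetical placeholders.

    # Generic illustration, NOT the cobalt-ai API: rank the learned features
    # most active on one decision, given a sparse feature decomposition.
    import torch

    feature_labels = {12: "thin credit file", 87: "recent delinquency",
                      301: "high revolving utilization"}   # hypothetical labels
    case_features = torch.zeros(4096)                      # one case's feature codes
    case_features[[12, 87, 301]] = torch.tensor([0.9, 0.4, 1.3])  # toy values

    top = torch.topk(case_features, k=3)
    for score, idx in zip(top.values, top.indices):
        label = feature_labels.get(int(idx), f"feature {int(idx)}")
        print(f"{label}: {score:.2f}")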

LLM Explorer

Mechanistic Interpretability in Action

Inspect the internals of Qwen3 models. Trace circuits. Map concept evolution across layers. A free, interactive demonstration of an LLM's brain.
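For readers who want to poke at the raw material themselves, here is a minimal sketch of capturing per-layer hidden states from a Qwen3 checkpoint with Hugging Face transformers. The model id and prompt are assumptions, and this is plain activation capture, not the Explorer's circuit tracing.

    # Minimal sketch: capture per-layer hidden states from a Qwen3 model
    # (model id and prompt are assumptions).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "Qwen/Qwen3-0.6B"  # assumed checkpoint; other Qwen3 sizes work too
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    inputs = tok("The loan was declined because", return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)

    # One tensor per layer (plus embeddings): the raw material that sparse
    # autoencoders and circuit tracing decompose into concepts.
    for i, h in enumerate(out.hidden_states):
        print(f"layer {i}: {tuple(h.shape)}")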

Research

From the Lab

Our research drives what we build. Every method we publish is one we are actively applying to real problems with real clients.

All Posts  →

The Team

Sachin Khanna, CEO
Gunnar Carlsson, Founder
Jakob Hansen, Head of Data Science
John Carlsson, Principal Scientist
David Fooshee, Principal Scientist

Founded by Dr. Gunnar Carlsson, one of the inventors of Topological Data Analysis at Stanford. The founding team combines pioneering research in TDA and mechanistic interpretability with decades of enterprise software experience across global organizations.

Our advisory board brings deep credibility in banking, financial services, and insurance (BFSI), spanning tier-1 banking CTOs, AI governance leaders at global financial institutions, and PhD-level expertise in explanation-based AI. We navigate both the scientific complexity of interpretability and the operational reality of deploying AI in high-stakes environments.