About Us

Make Your AI
Auditable.

BluelightAI is an artificial intelligence interpretability company pioneering the application of topological data analysis and mechanistic interpretability to make AI safe for regulated financial environments.

We have launched Cobalt, a Python package for AI interpretability in banking, insurance, and financial services. Available now.

Get started: pip install cobalt-ai

Applied Topology · Mechanistic Interpretability · Financial Services

Current AI evaluation does not treat internal model structure as a first-class object of inspection. To build genuinely auditable AI, topological analysis and mechanistic decomposition must be combined within the interpretability framework itself. We believe this is the most important unsolved problem in AI safety for regulated industries. Several fundamental capabilities are required to achieve it; Cobalt makes them available today.

Our Approach

Founded in Mathematics

BluelightAI is a pioneering AI interpretability company founded in 2022. We are a team of mathematicians and researchers leveraging topology, a branch of abstract mathematics, to deliver genuine mechanistic insight into AI models deployed in high-stakes, regulated environments.

Topological Data Analysis

TDA reveals the shape of high-dimensional model behavior without imposing assumptions. It surfaces clusters, transitions, and failure modes that standard metrics miss, because it captures what the data actually looks like, not what evaluation suites expect.
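The intuition behind "surfacing clusters without imposing assumptions" can be sketched in a few lines. This is not Cobalt's implementation or API; it is a simplified, Mapper-inspired illustration in which points (e.g., model embeddings) are joined to their nearest neighbors and clusters emerge as connected components of that graph, rather than from a preset number of clusters:

```python
import numpy as np

def cluster_by_graph(points, k=5):
    """Connect each point to its k nearest neighbors, then return
    connected-component labels -- a crude stand-in for the clusters
    a Mapper-style TDA pipeline would surface."""
    n = len(points)
    # Pairwise Euclidean distances.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # k nearest neighbors of each point (column 0 is the point itself).
    nbrs = np.argsort(d, axis=1)[:, 1:k + 1]
    # Union-find over the neighbor edges.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in nbrs[i]:
            parent[find(i)] = find(int(j))
    labels = [find(i) for i in range(n)]
    # Relabel components 0..c-1 in order of first appearance.
    seen = {}
    return [seen.setdefault(l, len(seen)) for l in labels]

# Two well-separated blobs of synthetic "embeddings": the graph never
# links them, so they land in different components.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.1, size=(20, 8))
b = rng.normal(5.0, 0.1, size=(20, 8))
labels = cluster_by_graph(np.vstack([a, b]), k=5)
print(sorted(set(labels)))
```

The point of the design is that the cluster structure is read off the data's own connectivity; nothing in the sketch assumes how many groups exist or what shape they take.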

Mechanistic Interpretability

Unlike output-based evaluation, our methods decompose model activations into interpretable features using sparse autoencoders and cross-layer transcoders. We map the circuits and concepts inside your model: not just what it predicts, but why, at the level of internal representations.
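For readers unfamiliar with sparse autoencoders, the core mechanism is small enough to show directly. The weights below are random and hypothetical (a real SAE learns them from model activations by minimizing reconstruction error plus an L1 sparsity penalty), and none of this reflects Cobalt's actual interface:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_feat = 16, 64   # overcomplete: more features than dimensions

# Hypothetical, untrained weights -- training would minimize
#   ||x - x_hat||^2 + lam * ||f||_1   over real model activations.
W_enc = rng.normal(0, 0.1, (d_feat, d_model))
b_enc = np.zeros(d_feat)
W_dec = rng.normal(0, 0.1, (d_model, d_feat))
b_dec = np.zeros(d_model)

def sae(x, lam=1e-3):
    f = np.maximum(W_enc @ x + b_enc, 0.0)   # sparse feature activations
    x_hat = W_dec @ f + b_dec                # reconstruction of x
    loss = np.sum((x - x_hat) ** 2) + lam * np.sum(np.abs(f))
    return f, x_hat, loss

x = rng.normal(size=d_model)                 # one activation vector
f, x_hat, loss = sae(x)
print(f"{(f > 0).mean():.0%} of features active")
```

After training, each nonzero entry of `f` ideally corresponds to an interpretable concept, which is what makes the decomposition useful for audit: the question "why did the model do this" becomes "which features fired."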

Continuous Monitoring

By grounding monitoring in topological baselines, we establish structural fingerprints of model behavior and detect deviations in real time. Guardrails adapt as models evolve, producing evidence your risk and compliance teams can act on.
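A baseline-and-deviation monitor can be illustrated with deliberately simple statistics. This sketch uses a per-dimension z-score of batch means as the "fingerprint" (an assumption for illustration; Cobalt's topological baselines are richer than this), and flags batches that drift from it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Baseline fingerprint: summary statistics of activations collected
# while the model behaved as expected.
baseline = rng.normal(0.0, 1.0, size=(5000, 32))
mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)

def drift_score(batch):
    """Mean absolute z-score of a new batch's mean activation
    against the baseline fingerprint."""
    z = (batch.mean(axis=0) - mu) / sigma
    return float(np.abs(z).mean())

normal_batch = rng.normal(0.0, 1.0, size=(200, 32))
shifted_batch = rng.normal(1.5, 1.0, size=(200, 32))  # distribution shift

print(drift_score(normal_batch))   # stays near zero
print(drift_score(shifted_batch))  # exceeds a threshold -> alert
```

In practice the score would be compared against a threshold calibrated on held-out baseline traffic, and the fingerprint refreshed as the model is retrained.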

Cobalt: topological analysis of model activations
Cobalt: mechanistic feature mapping
Cobalt: anomaly detection
Product

Introducing Cobalt

Cobalt gives teams the ability to iteratively inspect, interrogate, and verify AI behavior through the single most rigorous mechanism available: direct access to internal representations.

Leveraging topology and mechanistic decomposition, two of the fundamental building blocks of interpretability science, we enable model risk officers, data scientists, and compliance teams to build audit-ready oversight for high-stakes AI systems on top of any model.

Install: pip install cobalt-ai


LLM Explorer

An open, interactive environment for inspecting the internal representations of the Qwen3 family of models via cross-layer transcoders and topological analysis. See how concepts evolve across layers. Trace circuits. A working demonstration of mechanistic interpretability in practice.

Stanford, CA & San Francisco

The Team.

Sachin Khanna, CEO

Gunnar Carlsson, Founder

Jakob Hansen, Head of Data Science

John Carlsson, Principal Scientist

David Fooshee, Principal Scientist

BluelightAI is a world-class team of mathematicians, AI researchers, and domain experts, with roots at Stanford University and offices in San Francisco. We are united by deep experience in building mathematically grounded AI systems, bringing expertise from Stanford Mathematics, academic interpretability research, and financial services.

Founded by Gunnar Carlsson, whose research created the field of Topological Data Analysis, BluelightAI was built on a simple conviction: that the most powerful tools for understanding AI are mathematical ones. If you've been kept awake wondering how to make AI genuinely auditable, we want to hear from you.