About Noether

We are a research lab focused on understanding reliability in AI-native systems.

AI systems are moving into production environments that behave less like static software and more like evolving processes. Failures have structure. Incidents follow trajectories. Most tools still treat them as logs and dashboards. We're modeling the underlying dynamics—reconstructing system trajectories and turning incident behavior into verified, learnable structure.

Research Culture

Right now, we are thinking about:

  • What the true causal structure of a failure looks like
  • How much of system reliability can be learned from data
  • What "safe automation" means when systems are non-deterministic
  • How to represent state for counterfactual reasoning about incidents

These questions drive the experiments we run. When a hypothesis breaks, we update. When something surprising emerges from data, we lean into it. We take the foundational questions seriously and follow ideas until the system tells us otherwise.

There are no house beliefs. If two people on the team disagree about an idea, that's a feature of the research environment. Progress depends on multiple lines of thought exploring different edges of the space.

Research Prioritization

The rule of thumb is: if the idea won't matter in five years, it doesn't matter today. The anchor is always the central problem: reconstructing system trajectories and turning incident dynamics into verified, learnable structure.

We are always on the lookout for exceptional people who can add to our team. So if you are interested in working with us but don't see a role that's a fit, email crew@noether.one.

We might reach out if you seem like an unusually good fit, or if we open a hiring round for a relevant role.