Thaddeus

Talking with Simulations

Using a chatbot to explore a simulation

Published on 5/28/2025

#physics #simulation #MDX #AG-UI

Embedding Apps in Writing

I recently found this insightful piece of writing on the transitional period we're in, where we build products with AI but mimic the old ways of doing things, much like how early motor cars were designed to look like horse carriages.

This post relates to that, but what really got me started was how the author embedded mini apps throughout the post to illustrate his points. A demo speaks a million words.

How did he do it?! TL;DR: MDX, or Markdown + JSX, is how he did it.
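
For context, MDX lets you import React components and drop them straight between paragraphs of markdown. A minimal sketch of the pattern; the ThreeBodySimulation component, its path, and its props are hypothetical placeholders, not the author's actual component:

```mdx
import { ThreeBodySimulation } from '../components/ThreeBodySimulation'

Ordinary markdown prose setting up the idea...

<ThreeBodySimulation bodies={3} showTrails />

...and the prose picks up again right after the live demo.
```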

Now, I have always wondered how to make simulations more accessible to the layperson, which leads to the question:

What if we could actually talk, interact, and modify live simulations in ordinary language?

Three Body Problem Example

Below is an example simulation of the 3-body problem. You can manipulate it in two ways:

  1. Graphical UI buttons, the conventional approach
  2. Chatbox approach (a wiring sketch follows the demo below)
    • Provide your own test API key by clicking the "Chat with Simulations" button.
    • Click on the chatbox at the bottom right.
[Embedded 3-body simulation. On-canvas controls: drag bodies to move them, click empty space to pause/play, adjust parameters above. The "Simulation Assistant" chatbox suggests commands like "Pause the simulation", "Reset the simulation", "Double the mass of the red body", or "Double the size of the blue body"; speak in your normal language and it will help you out.]
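
If you're wondering how the chatbox gets onto the page in the first place, the wiring is roughly a CopilotKit provider around the simulation plus a popup chat component in the corner. A minimal sketch assuming CopilotKit's React packages; the runtime endpoint and page structure here are placeholders, not the exact code behind this demo:

```tsx
import type { ReactNode } from "react";
import { CopilotKit } from "@copilotkit/react-core";
import { CopilotPopup } from "@copilotkit/react-ui";
import "@copilotkit/react-ui/styles.css";

// Sketch: wrap the simulation page so the popup chat (bottom right)
// can call whatever actions the simulation registers.
export function SimulationPage({ children }: { children: ReactNode }) {
  return (
    // runtimeUrl is a placeholder backend endpoint for the chat agent
    <CopilotKit runtimeUrl="/api/copilotkit">
      {children /* the 3-body simulation canvas and controls live here */}
      <CopilotPopup
        labels={{
          title: "Simulation Assistant",
          initial: "Need help with the simulation?",
        }}
      />
    </CopilotKit>
  );
}
```

The chat window itself is generic; the interesting part is which simulation controls get exposed to the agent as actions, which I come back to below.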

I was hoping to build a direct voice-to-simulation interface, but the CopilotKit SDK doesn't support it yet, and this quick demo did not warrant the time. Maybe next time.

Simulations for the Layman

When I say layman, I just mean someone without domain expertise. Let's call him John. John could be a manager, boss, peer, or even your own curious child.

The Conventional Way

To develop and use a simulation model effectively, modelers have to cycle through the following:

Model Development Cycle

After going through this process multiple times, the modeler presents their results to John. Sometimes, John is happy with the conclusion presented, but other times John wants to explore the model and its results for himself.

The latter requires the modeler to:

  1. abstract away unnecessary complexities of the model
  2. repackage the UI in a context appropriate to John

In a commercial product, step 2 is no longer bespoke. Instead, the UI has to be generally understandable to as many customers as possible. This becomes challenging as one tries to strike a compromise between model simplicity and optionality.

When I was with a commercial simulation company, there was a team of Customer Excellence (CE) Engineers to help customers navigate the products and simultaneously provide feedback to the technical developers. A lot of the feedback related to model and UI/UX improvements. Most of the time, customers had an idea of what they wanted to do, but learning the UI and the model at the same time was a source of friction.

The knowledge path in this case ran from customer to CE Engineer to technical developer, and back again:

The way I think about AI taking away jobs is whether or not it displaces org structures previously required for useful information transfer. In this case, the CE Engineer acts as a two-way informational filter, passing on the information needed to make and sustain sales, while siloing enough information to avoid jeopardizing organizational functionality on both sides (company secrets, etc.).

A New Way

IMO, AI stands to gain the most in areas that shorten the knowledge path to a successful sale (why is this product useful for you?), which is possibly why I keep seeing teams of AI sales agents as an application.

A simulation that you talk with augments that informational pathway in parallel, potentially short-circuiting the graphical UI (i.e., buttons) altogether. Diagrammatically:

With simulations, there really are a lot of prerequisites before one can actually understand or build one: trigonometry, linear algebra, tensors, quaternions, calculus, and programming, for example. All of these used to take years of learning, or months if one is highly talented and driven.

A New Learning Approach

Generative AI really flips this paradigm. I was able to vibe code 80% of this 3-body simulation in 10 minutes, with the last 20% taking a few more hours as I fine-tuned the user experience. The fidelity is not the best, but the fact that John is now able to "reproduce" a phenomenon and then learn by navigating the concept stack as he builds is very powerful for horizontally scaling scientific development.
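
To give a flavour of what that 80% is, the core of a 3-body simulation is just pairwise Newtonian gravity plus a small time step. A rough sketch, with the constants, units, and integration scheme chosen arbitrarily for illustration rather than matching the demo's actual code:

```ts
type Body = { mass: number; x: number; y: number; vx: number; vy: number };

const G = 1; // arbitrary gravitational constant for a toy simulation

// Advance all bodies by one time step using simple Euler integration.
function step(bodies: Body[], dt: number): void {
  const ax = bodies.map(() => 0);
  const ay = bodies.map(() => 0);

  // Accumulate pairwise gravitational accelerations.
  for (let i = 0; i < bodies.length; i++) {
    for (let j = 0; j < bodies.length; j++) {
      if (i === j) continue;
      const dx = bodies[j].x - bodies[i].x;
      const dy = bodies[j].y - bodies[i].y;
      const distSq = dx * dx + dy * dy + 1e-6; // softening to avoid blow-ups
      const dist = Math.sqrt(distSq);
      const a = (G * bodies[j].mass) / distSq;
      ax[i] += (a * dx) / dist;
      ay[i] += (a * dy) / dist;
    }
  }

  // Update velocities, then positions.
  for (let i = 0; i < bodies.length; i++) {
    bodies[i].vx += ax[i] * dt;
    bodies[i].vy += ay[i] * dt;
    bodies[i].x += bodies[i].vx * dt;
    bodies[i].y += bodies[i].vy * dt;
  }
}
```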

Better yet, if one is able to use ordinary language to probe the model, one effectively:

  • skips jargon-gated learning
  • skips the graphical UI learning barrier
  • enables a new entry point into the learning process

The last point is actually what I'm excited about!

Context Windows as Short Term Memory

It is not just chatbot functionality. Since LLMs come with a context window, that window effectively serves as a short-term memory bank for the combination of things you have done with the simulation. For example, even though the implementation in the simulation above only has the atomic abilities to:

  1. Modify the size of the bodies
  2. Toggle the boundaries of the simulation
  3. Reset the simulation to default parameters

I'm able to tell the chatbot to "reset the simulation with the current parameters" and have it do the right combination of actions. There are limitations of course, but nothing that can't be solved with more engineering, imo.
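
To make "atomic" concrete, here is a sketch of how those abilities could be exposed, again using CopilotKit hooks; the state shape, setters, and action names are my own placeholders, not the demo's actual implementation. The agent only sees small, single-purpose actions plus a readable snapshot of the current parameters, and the LLM does the chaining:

```tsx
import { useCopilotAction, useCopilotReadable } from "@copilotkit/react-core";

// Hypothetical simulation state, for illustration only.
type SimState = { sizes: number[]; boundariesOn: boolean };

export function useAtomicSimulationActions(
  sim: SimState,
  setSim: (next: SimState) => void,
  defaults: SimState
) {
  // The agent can "see" the current parameters (context, not an action).
  useCopilotReadable({
    description: "Current simulation parameters",
    value: sim,
  });

  // Atomic ability 1: modify the size of the bodies.
  useCopilotAction({
    name: "setBodySizes",
    description: "Set the radius of each body in the simulation",
    parameters: [
      { name: "sizes", type: "number[]", description: "radii, one per body" },
    ],
    handler: async ({ sizes }) => setSim({ ...sim, sizes }),
  });

  // Atomic ability 2: toggle the boundaries.
  useCopilotAction({
    name: "toggleBoundaries",
    description: "Turn the simulation boundaries on or off",
    handler: async () => setSim({ ...sim, boundariesOn: !sim.boundariesOn }),
  });

  // Atomic ability 3: reset to the default parameters.
  useCopilotAction({
    name: "resetSimulation",
    description: "Reset the simulation to its default parameters",
    handler: async () => setSim(defaults),
  });
}
```

In this sketch, a request like "reset the simulation with the current parameters" would map to the model reading the current values from the readable context, calling resetSimulation, and re-applying the sizes it just read, without that combination being hard-coded anywhere.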

Accelerating Model Improvements

I'm still trying to follow developments in this area, but generally, it seems like we're trending towards an agent-to-agent (A2A) world. The "agent" I have here helps you with navigating and manipulating the simulations. There are other agents that submit code and yet other agents that help review the code as well.

Connecting the pieces, it does seem like a user's feature requests, made directly through the chatbot, could be chained through this agent-to-agent ecosystem and ultimately result in code contributions in both open- and closed-source environments. Exciting times ahead!