Motor logs. Force feedback. The causal structure of physics.
Every robot moving in the real world today — Atlas, Optimus, Figure, 1X — has one thing in common: they're trained on embodied data. Teleoperation. Motor logs. Physical feedback loops.
Not video. Not observation. Interaction.
Why? Because motor logs capture what cameras can't: force, resistance, torque, friction — the full causal structure of the physical world.
When you crack an egg, a camera sees fingers move and a shell break.
It doesn't see the grip force in the fingers, the resistance of the shell, the torque at the wrist, or the instant friction gives way.
Video shows what physics looks like. Not what physics is.
And robots need to understand what physics is.
The field keeps asking: How do we get more data?
We're asking a different question: How do we make our data physics-aware?
The path forward isn't more volume. It's more richness. Data that encodes forces, contacts, materials, and dynamics.
Data that captures what the world demands, not just what it looks like.
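To make the contrast concrete, here is a minimal sketch of what one timestep of physics-aware data might contain alongside a video frame. All field names and values are hypothetical, chosen for illustration rather than taken from any real robot's log format:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical schema for one timestep of a physics-aware sample.
# Field names are illustrative, not any actual robot's log format.
@dataclass
class PhysicsAwareSample:
    timestamp_s: float                 # time since episode start, seconds
    joint_torques_nm: List[float]      # measured torque per joint, newton-meters
    contact_forces_n: List[float]      # force at each fingertip contact, newtons
    material_label: str                # e.g. "eggshell"
    surface_friction_mu: float         # estimated friction coefficient
    rgb_frame_path: str = ""           # video stays, but as one channel among many

# One frame of cracking an egg: the camera frame is a single field;
# the forces and friction the camera can't see fill out the rest.
sample = PhysicsAwareSample(
    timestamp_s=0.033,
    joint_torques_nm=[0.8, 1.2, 0.5],
    contact_forces_n=[2.1, 1.9],
    material_label="eggshell",
    surface_friction_mu=0.4,
)
```

The point of the sketch: the RGB frame is one field out of six. The rest is the causal structure — what the world demands — that observation-only data leaves out.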