LikePhys: Evaluating Intuitive Physics Understanding in Video Diffusion Models via Likelihood Preference

1University of Oxford   2MBZUAI   3University of Chicago   4UWE Bristol
LikePhys Results Overview

We benchmark 9 state-of-the-art video diffusion models across 12 physics scenarios. Our evaluation metric, Plausibility Preference Error (PPE), demonstrates strong alignment with human preferences.

Abstract

Intuitive physics understanding in video diffusion models plays an essential role in building general-purpose, physically plausible world simulators, yet accurately evaluating this capability remains challenging due to the difficulty of disentangling physics correctness from visual appearance in generation. To this end, we introduce LikePhys, a training-free method that evaluates intuitive physics in video diffusion models by distinguishing physically valid and impossible videos, using the denoising objective as an ELBO-based likelihood surrogate on a curated dataset of valid-invalid pairs. Testing on our constructed benchmark of twelve scenarios spanning four physics domains, we show that our evaluation metric, Plausibility Preference Error (PPE), demonstrates strong alignment with human preference, outperforming state-of-the-art evaluator baselines. We then systematically benchmark intuitive physics understanding in current video diffusion models. Our study further analyses how model design and inference settings affect intuitive physics understanding and highlights domain-specific variations in capability across physical laws. Empirical results show that, although current models struggle with complex and chaotic dynamics, physics understanding improves clearly as model capacity and inference settings scale.

Method Overview

LikePhys Method Overview

LikePhys evaluates intuitive physics understanding by computing likelihood preferences between physically valid and invalid video pairs using the denoising objective as an ELBO-based surrogate. This training-free approach enables direct assessment of physics comprehension without requiring additional model training or fine-tuning.
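The preference computation above can be sketched in a few lines. This is a minimal NumPy sketch under a toy denoiser interface: the function names (`denoising_loss`, `plausibility_preference_error`), the noise schedule, and the toy denoiser are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def denoising_loss(denoiser, video, sigmas, rng):
    # Monte-Carlo estimate of the denoising objective (ELBO-based
    # likelihood surrogate): noise the video at several levels and
    # score the model's noise prediction; lower loss = higher likelihood.
    losses = []
    for sigma in sigmas:
        eps = rng.standard_normal(video.shape)
        noisy = video + sigma * eps
        pred = denoiser(noisy, sigma)
        losses.append(np.mean((pred - eps) ** 2))
    return float(np.mean(losses))

def plausibility_preference_error(denoiser, pairs, n_sigmas=8, seed=0):
    # PPE: fraction of valid/invalid pairs where the model prefers
    # (assigns at least as high a likelihood to) the invalid video.
    rng = np.random.default_rng(seed)
    sigmas = np.linspace(0.2, 1.0, n_sigmas)
    errors = 0
    for valid, invalid in pairs:
        l_valid = denoising_loss(denoiser, valid, sigmas, rng)
        l_invalid = denoising_loss(denoiser, invalid, sigmas, rng)
        errors += l_invalid <= l_valid  # lower loss means preferred
    return errors / len(pairs)

# Toy check: a denoiser that perfectly models the "valid" video should
# always prefer it, giving a PPE of 0.
rng = np.random.default_rng(1)
ref = rng.standard_normal((4, 8, 8))          # stand-in for a valid video
def toy_denoiser(noisy, sigma):
    return (noisy - ref) / sigma              # exact noise prediction for ref
pairs = [(ref, ref + 0.5 * rng.standard_normal(ref.shape)) for _ in range(5)]
ppe = plausibility_preference_error(toy_denoiser, pairs)
```

In practice the denoiser is a pretrained video diffusion model and the loss is averaged over its own noise schedule; the key point is that no training or fine-tuning is needed, only forward denoising passes on each valid-invalid pair.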

Dataset

LikePhys Dataset Overview

Our benchmark covers 12 physics scenarios across 4 domains: Rigid Body Dynamics (ball drop, collision, pendulum, block slide, pyramid collapse), Fluid Dynamics (fluid droplet, faucet flow, river flow), Deformable Materials (cloth drape, flag motion), and Optics (shadow casting, camera motion). Each scenario includes physically valid videos and carefully designed violations to test physics understanding.
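For iterating over the benchmark programmatically, the taxonomy above can be written out as a mapping. The dictionary below mirrors this page's listing; the keys and scenario names are illustrative labels, not necessarily the dataset's official directory names.

```python
# Benchmark taxonomy: 4 physics domains, 12 scenarios in total,
# transcribed from the dataset description on this page.
SCENARIOS = {
    "Rigid Body Dynamics": [
        "ball drop", "collision", "pendulum", "block slide", "pyramid collapse",
    ],
    "Fluid Dynamics": ["fluid droplet", "faucet flow", "river flow"],
    "Deformable Materials": ["cloth drape", "flag motion"],
    "Optics": ["shadow casting", "camera motion"],
}

n_domains = len(SCENARIOS)
n_scenarios = sum(len(v) for v in SCENARIOS.values())
```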

📦 Download the Dataset:

huggingface-cli download JianhaoDYDY/LikePhys-Benchmark --repo-type dataset --local-dir ./data

Or visit: 🤗 Hugging Face Dataset Page

Example Video Pairs

Below, each domain is illustrated with one physically valid video and one carefully designed violation.

🔴 Rigid Body Dynamics (Valid)

Stacked spheres with realistic rigid-body dynamics

🔴 Rigid Body Dynamics (Violation)

Physics violation: sphere fusion

🟠 Fluid Dynamics (Valid)

Faucet flow with realistic fluid dynamics

🟠 Fluid Dynamics (Violation)

Physics violation: fracturing fluid

🟢 Optics (Valid)

Correct shadow casting and light behavior

🟢 Optics (Violation)

Physics violation: missing shadow

🔵 Deformable Materials (Valid)

Cloth draping with realistic deformation

🔵 Deformable Materials (Violation)

Physics violation: cloth penetration through surface

Each domain shows a valid-invalid pair to demonstrate physics understanding evaluation.

BibTeX

@article{yuan2025likephys,
  title={LikePhys: Evaluating Intuitive Physics Understanding in Video Diffusion Models via Likelihood Preference},
  author={Yuan, Jianhao and Pizzati, Fabio and Pinto, Francesco and Kunze, Lars and Laptev, Ivan and Newman, Paul and Torr, Philip and De Martini, Daniele},
  journal={arXiv preprint arXiv:2510.11512},
  year={2025}
}