May 11 – 14, 2026
Maskawa Building for Education and Research
Asia/Tokyo timezone

Forward chaos and backward diffusion in multi-layer perceptrons

May 13, 2026, 2:20 PM
40m
Maskawa Hall (Maskawa Building for Education and Research)

Kitashirakawa Oiwakecho, Sakyo-ku, Kyoto 606-8502 Japan

Speaker

Hajime Yoshino (Osaka University)

Description

Multi-layer perceptrons (MLPs) are feed-forward neural networks that operate deterministically. This forward deterministic process becomes chaotic when the randomness and the non-linearity are strong enough [1]. In this talk we discuss the corresponding backward stochastic process in MLPs. Using tools from statistical mechanics, including the replica method, we find that the forward and backward processes exhibit very similar statistical properties. We discuss the implications of this result for machine learning with MLPs [2,3].

[1] B. Poole et al., "Exponential expressivity in deep neural networks through transient chaos", Advances in Neural Information Processing Systems 29 (2016).
[2] H. Yoshino, "From complex to simple: hierarchical free-energy landscape renormalized in deep neural networks", SciPost Phys. Core 2, 005 (2020).
[3] H. Yoshino, "Spatially heterogeneous learning by a deep student machine", Phys. Rev. Research 5, 033068 (2023).
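The forward chaos of [1] can be illustrated with a minimal numerical sketch (not taken from the talk): propagate two nearby inputs through the same random MLP and watch their distance grow layer by layer. The architecture, widths, and the specific values of the weight and bias variances (`sigma_w`, `sigma_b`) below are illustrative choices in the spirit of the mean-field setup of [1], not parameters from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
depth, width = 30, 500
# Illustrative values: a large weight std sigma_w (with small bias std)
# places the random network in the chaotic phase of [1].
sigma_w, sigma_b = 2.0, 0.05

# One fixed realisation of the random network:
# W_l ~ N(0, sigma_w^2 / width), b_l ~ N(0, sigma_b^2).
Ws = [rng.normal(0.0, sigma_w / np.sqrt(width), size=(width, width))
      for _ in range(depth)]
bs = [rng.normal(0.0, sigma_b, size=width) for _ in range(depth)]

def forward(x):
    """Deterministic forward pass: x_{l+1} = tanh(W_l x_l + b_l)."""
    states = [x]
    for W, b in zip(Ws, bs):
        x = np.tanh(W @ x + b)
        states.append(x)
    return states

x0 = rng.normal(size=width)
traj_a = forward(x0)
traj_b = forward(x0 + 1e-8 * rng.normal(size=width))  # nearby input

# Layer-by-layer distance between the two trajectories; in the chaotic
# phase this grows (roughly exponentially) before saturating.
dists = [np.linalg.norm(a - b) for a, b in zip(traj_a, traj_b)]
print(f"initial distance: {dists[0]:.2e}, final distance: {dists[-1]:.2e}")
```

In the ordered phase (small `sigma_w`) the same experiment shows the distance shrinking instead, which is the transition studied in [1].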
