Announcing Awesome Memory VLA
We are releasing Awesome Memory VLA, a curated and continually updated collection of research on memory-augmented Vision-Language-Action (VLA) models. The list focuses on methods for long-horizon, partially observable, and embodied decision-making, bringing together recent work at the intersection of vision-language models, control, and reinforcement learning.
The repository tracks papers, benchmarks, datasets, and other resources relevant to building VLA systems that integrate memory for robust generalization and long-term reasoning.
Community contributions are welcome. If you know of a relevant paper or resource, please open a pull request.