Lili Chen

I'm a Ph.D. student in the Machine Learning Department at Carnegie Mellon University, advised by Deepak Pathak. My research is generously supported by the Department of Defense NDSEG Fellowship.

Previously, I received my B.A. in Computer Science from UC Berkeley. I was an undergraduate researcher at the Robot Learning Lab, where I was advised by Pieter Abbeel and Kimin Lee.

Email  /  Google Scholar  /  GitHub  /  LinkedIn  /  Twitter

Selected Papers

I'm broadly interested in reinforcement learning and its applications to language models. See more papers here.

Self-Questioning Language Models
Lili Chen, Mihir Prabhudesai, Katerina Fragkiadaki, Hao Liu, Deepak Pathak
arXiv preprint, 2025.
pdf / website / code

We explore whether LLMs can improve without any external data via asymmetric self-play: the model generates its own questions as a proposer and answers them as a solver.
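Roughly, one round of the loop looks like the sketch below; the `llm_generate` helper is a hypothetical stand-in for an actual model call, and the RL update that scores both roles is omitted.

```python
# Hypothetical sketch of one asymmetric self-play round.
# `llm_generate` is a placeholder, not the paper's actual API.

def llm_generate(prompt: str) -> str:
    """Stand-in for an LLM call; returns a dummy string here."""
    return f"response to: {prompt}"

def self_questioning_round(topic: str) -> tuple[str, str]:
    # Proposer: the model writes its own question about the topic.
    question = llm_generate(f"Pose a challenging question about {topic}.")
    # Solver: the same model attempts to answer its own question.
    answer = llm_generate(f"Answer this question: {question}")
    return question, answer

if __name__ == "__main__":
    q, a = self_questioning_round("algebra")
    print(q, a, sep="\n")
```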

Maximizing Confidence Alone Improves Reasoning
Mihir Prabhudesai*, Lili Chen*, Alex Ippoliti*, Katerina Fragkiadaki, Hao Liu, Deepak Pathak
arXiv preprint, 2025.
pdf / website / code

We present an unsupervised reinforcement learning method that improves LLM reasoning performance by using the model's own confidence as a reward.
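As a rough illustration, confidence can be taken to be the negative mean token entropy of a generated answer; this formulation and the dummy tensors below are assumptions for the sketch, not necessarily the paper's exact reward.

```python
# Sketch: the model's own confidence as an unsupervised reward,
# measured here (an assumption) as negative mean token entropy.

import torch
import torch.nn.functional as F

def confidence_reward(logits: torch.Tensor) -> torch.Tensor:
    """logits: (seq_len, vocab_size) for one generated answer."""
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    token_entropy = -(probs * log_probs).sum(dim=-1)  # (seq_len,)
    return -token_entropy.mean()  # higher reward = more confident

logits = torch.randn(10, 32000)  # dummy logits for a 10-token answer
print(confidence_reward(logits))
```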

PlayFusion: Skill Acquisition via Diffusion from Language-Annotated Play
Lili Chen*, Shikhar Bahl*, Deepak Pathak
Conference on Robot Learning (CoRL), 2023.
pdf / website

We present a language-conditioned diffusion model that learns visuomotor policies from language-annotated play data.
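A toy sketch of how the conditioning might look for one denoising step: the `ConditionalDenoiser` module, its dimensions, and the concatenation scheme are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of a language-conditioned denoising network for actions.
# Shapes and conditioning scheme are illustrative assumptions.

import torch
import torch.nn as nn

class ConditionalDenoiser(nn.Module):
    def __init__(self, act_dim=7, obs_dim=512, lang_dim=512, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(act_dim + obs_dim + lang_dim + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, noisy_action, obs_emb, lang_emb, t):
        # Predict the noise added to the action, conditioned on the visual
        # observation embedding, the language embedding, and the timestep.
        x = torch.cat([noisy_action, obs_emb, lang_emb, t], dim=-1)
        return self.net(x)

model = ConditionalDenoiser()
eps_hat = model(torch.randn(1, 7), torch.randn(1, 512),
                torch.randn(1, 512), torch.ones(1, 1))
print(eps_hat.shape)  # torch.Size([1, 7])
```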

Decision Transformer: Reinforcement Learning via Sequence Modeling
Lili Chen*, Kevin Lu*, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas*, Igor Mordatch*
Neural Information Processing Systems (NeurIPS), 2021.
pdf / website / code / video (by Yannic Kilcher)

We propose to replace traditional offline RL algorithms with a simple transformer model trained with an autoregressive prediction loss on sequences of returns, states, and actions.
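The core data layout can be sketched in a few lines: compute returns-to-go as reward suffix sums, then interleave (return, state, action) embeddings into one sequence for a causal transformer to predict actions from. The `interleave` helper and the dimensions are illustrative.

```python
# Sketch of the Decision Transformer input layout. Dimensions are toy.

import torch

def returns_to_go(rewards: torch.Tensor) -> torch.Tensor:
    """Suffix sums: R_t = r_t + r_{t+1} + ... + r_T."""
    return torch.flip(torch.cumsum(torch.flip(rewards, [0]), 0), [0])

def interleave(rtg_emb, state_emb, act_emb):
    # Each input: (T, d). Output: (3T, d) in order R_1, s_1, a_1, R_2, ...
    # A causal transformer then predicts a_t from the tokens up to s_t.
    T, d = state_emb.shape
    return torch.stack([rtg_emb, state_emb, act_emb], dim=1).reshape(3 * T, d)

rewards = torch.tensor([1.0, 0.0, 2.0])
print(returns_to_go(rewards))  # tensor([3., 2., 2.])
seq = interleave(torch.randn(3, 8), torch.randn(3, 8), torch.randn(3, 8))
print(seq.shape)  # torch.Size([24, 8])
```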

State Entropy Maximization with Random Encoders for Efficient Exploration
Younggyo Seo*, Lili Chen*, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee
International Conference on Machine Learning (ICML), 2021.
pdf / website / code

We tackle exploration in high-dimensional observation spaces using a k-NN state entropy estimator in the low-dimensional representation space of a randomly initialized CNN.
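A compact sketch of the idea: a frozen, randomly initialized CNN embeds observations, and the log of the distance to the k-th nearest neighbor in that representation space serves as the intrinsic reward. The network sizes below are illustrative.

```python
# Sketch of a RE3-style intrinsic reward with a frozen random encoder.
# Architecture and sizes are illustrative.

import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(50),
)
for p in encoder.parameters():
    p.requires_grad_(False)  # the encoder is never trained

@torch.no_grad()
def intrinsic_reward(obs_batch: torch.Tensor, k: int = 3) -> torch.Tensor:
    z = encoder(obs_batch)                  # (N, 50) representations
    dists = torch.cdist(z, z)               # pairwise distances
    # topk with largest=False sorts ascending; index 0 is the point itself
    # (distance 0), so [:, -1] is the k-th nearest neighbor.
    knn = dists.topk(k + 1, largest=False).values[:, -1]
    return torch.log(knn + 1.0)

obs = torch.rand(8, 3, 64, 64)  # a batch of 8 RGB observations
print(intrinsic_reward(obs))
```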

Teaching

I hope to improve the accessibility of computer science education at all levels.

(CMU) 10-716: Advanced Machine Learning (PhD)
Teaching Assistant: Spring 2024
(CMU) 16-831: Introduction to Robot Learning
Teaching Assistant: Fall 2023
(UC Berkeley) CS 70: Discrete Mathematics and Probability Theory
Head Teaching Assistant: Spring 2021, Fall 2020
Teaching Assistant: Spring 2020
Reader: Fall 2019
(UC Berkeley) Computer Science Mentors [Website]
Mentor: Fall 2019, Spring 2019
(UC Berkeley) Berkeley ANova [Website]
Mentor: Spring 2019, Fall 2018, Spring 2018

Website template from here.