Lili Chen

I'm a Ph.D. student in the Machine Learning Department at Carnegie Mellon University, advised by Deepak Pathak. My research is generously supported by the Department of Defense NDSEG Fellowship.

Previously, I received my B.A. in Computer Science from UC Berkeley, where I was an undergraduate researcher in the Robot Learning Lab, advised by Pieter Abbeel and Kimin Lee.

Email  /  Google Scholar  /  GitHub  /  LinkedIn  /  Twitter

Research

I'm broadly interested in reinforcement learning, self-supervised learning, and LLM reasoning.

Self-Questioning Language Models
Lili Chen, Mihir Prabhudesai, Katerina Fragkiadaki, Hao Liu, Deepak Pathak
arXiv preprint, 2025.
pdf / website / code

We explore whether LLMs can improve without any external data via asymmetric self-play: the model generates its own questions as a proposer and answers them as a solver.

Maximizing Confidence Alone Improves Reasoning
Mihir Prabhudesai*, Lili Chen*, Alex Ippoliti*, Katerina Fragkiadaki, Hao Liu, Deepak Pathak
arXiv preprint, 2025.
pdf / website / code

We present an unsupervised reinforcement learning method that improves LLM reasoning performance by using the model's own confidence as a reward.
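One natural way to instantiate such a confidence signal is the negative mean entropy of the model's next-token distributions over its generated answer. A minimal sketch, assuming illustrative names and shapes (not necessarily the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def confidence_reward(logits, answer_mask):
    # logits: (T, vocab_size) for the generated tokens; answer_mask: (T,) bool.
    logp = F.log_softmax(logits, dim=-1)
    token_entropy = -(logp.exp() * logp).sum(dim=-1)  # per-token entropy
    return -token_entropy[answer_mask].mean()         # more peaked -> higher reward

reward = confidence_reward(torch.randn(20, 32000), torch.ones(20, dtype=torch.bool))
```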

PlayFusion: Skill Acquisition via Diffusion from Language-Annotated Play
Lili Chen*, Shikhar Bahl*, Deepak Pathak
Conference on Robot Learning (CoRL), 2023.
pdf / website

We present a language-conditioned diffusion model that learns visuomotor policies from language-annotated play data.

Affordances from Human Videos as a Versatile Representation for Robotics
Shikhar Bahl*, Russell Mendonca*, Lili Chen, Unnat Jain, Deepak Pathak
Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
pdf / website / code

We train a visual affordance model on human videos to estimate how a human is likely to interact with objects, and deploy this model in robotic control tasks.

Decision Transformer: Reinforcement Learning via Sequence Modeling
Lili Chen*, Kevin Lu*, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas*, Igor Mordatch*
Neural Information Processing Systems (NeurIPS), 2021.
pdf / website / code / video (by Yannic Kilcher)

We propose to replace traditional offline RL algorithms with a simple transformer model trained on sequences of returns, states, and actions with an autoregressive prediction loss.
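A minimal sketch of the return-conditioned sequence format: trajectories are flattened into (return-to-go, state, action) tokens and a causal transformer predicts each action. Module names and sizes below are illustrative assumptions, not the released implementation:

```python
import torch
import torch.nn as nn

class ToyDecisionTransformer(nn.Module):
    def __init__(self, state_dim, act_dim, d_model=128, n_heads=4, n_layers=3, max_len=1024):
        super().__init__()
        self.embed_rtg = nn.Linear(1, d_model)            # return-to-go token
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        B, T = states.shape[:2]
        # Interleave tokens as (R_1, s_1, a_1, R_2, s_2, a_2, ...).
        tokens = torch.stack(
            [self.embed_rtg(rtg), self.embed_state(states), self.embed_action(actions)], dim=2
        ).reshape(B, 3 * T, -1)
        tokens = tokens + self.pos(torch.arange(3 * T))
        causal = nn.Transformer.generate_square_subsequent_mask(3 * T)
        h = self.backbone(tokens, mask=causal)
        return self.predict_action(h[:, 1::3])            # predict a_t from the s_t token

# Autoregressive training on offline trajectories (random tensors for shape-checking).
model = ToyDecisionTransformer(state_dim=17, act_dim=6)
rtg, s, a = torch.randn(2, 10, 1), torch.randn(2, 10, 17), torch.randn(2, 10, 6)
loss = nn.functional.mse_loss(model(rtg, s, a), a)
```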

State Entropy Maximization with Random Encoders for Efficient Exploration
Younggyo Seo*, Lili Chen*, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee
International Conference on Machine Learning (ICML), 2021.
pdf / website / code

We tackle exploration for high-dimensional observation spaces using a k-NN state entropy estimator in the low-dimensional representation space of a randomly initialized CNN.
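A minimal sketch of the intrinsic reward, with illustrative names and sizes (not the released code): a frozen, randomly initialized CNN embeds observations, and each state is rewarded by the log distance to its k-th nearest neighbor in that embedding space.

```python
import torch
import torch.nn as nn

random_encoder = nn.Sequential(               # weights stay at their random init
    nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
    nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
    nn.Flatten(), nn.Linear(64 * 9 * 9, 50),  # 64*9*9 matches 84x84 inputs
).requires_grad_(False)

def knn_intrinsic_reward(embeddings, k=3):
    dists = torch.cdist(embeddings, embeddings)            # pairwise L2 distances
    kth = dists.topk(k + 1, largest=False).values[:, -1]   # skip self (distance 0)
    return torch.log(kth + 1.0)                            # entropy-style bonus

obs = torch.rand(16, 3, 84, 84)                    # a batch of image observations
r_int = knn_intrinsic_reward(random_encoder(obs))  # added to the task reward
```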

Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings
Lili Chen, Kimin Lee, Aravind Srinivas, Pieter Abbeel
Neural Information Processing Systems (NeurIPS), 2021.
pdf / code

We present a compute- and memory-efficient modification of off-policy visual RL methods by freezing lower layers of CNN encoders and storing low-dimensional embeddings.
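A sketch of the idea, with illustrative names (not the paper's implementation): freeze the lower convolutional layers and store their outputs, so replayed observations skip recomputing the expensive early convolutions.

```python
import torch
import torch.nn as nn

frozen_lower = nn.Sequential(                 # frozen lower encoder layers
    nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
    nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
).requires_grad_(False)
trainable_upper = nn.Sequential(nn.Flatten(), nn.Linear(64 * 9 * 9, 50))

replay_buffer = []                            # holds embeddings instead of raw pixels
obs = torch.rand(1, 3, 84, 84)
with torch.no_grad():
    replay_buffer.append(frozen_lower(obs))   # computed once at collection time

features = trainable_upper(replay_buffer[0])  # replay: only upper layers run/backprop
```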

Ising Model Optimization Problems on a FPGA Accelerated Restricted Boltzmann Machine
Saavan Patel, Lili Chen, Philip Canoza, Sayeef Salahuddin
arXiv preprint, 2020.
pdf

We demonstrate that RBMs can efficiently solve NP-hard optimization problems by mapping the RBM onto a reconfigurable FPGA.

Teaching

I hope to improve the accessibility of computer science education at all levels.

(CMU) 10-716: Advanced Machine Learning (PhD)
Teaching Assistant: Spring 2024
(CMU) 16-831: Introduction to Robot Learning
Teaching Assistant: Fall 2023
(UC Berkeley) CS 70: Discrete Mathematics and Probability Theory
Head Teaching Assistant: Spring 2021, Fall 2020
Teaching Assistant: Spring 2020
Reader: Fall 2019
(UC Berkeley) Computer Science Mentors [Website]
Mentor: Fall 2019, Spring 2019
(UC Berkeley) Berkeley ANova [Website]
Mentor: Spring 2019, Fall 2018, Spring 2018

Website template from here.