Tianjin University
Deep Reinforcement Learning Laboratory
Our lab has several open Ph.D. and Master's positions. If you are interested in our research, please send us your CV (jianye.hao@tju.edu.cn / yanzheng@tju.edu.cn).

The lab is always open to outstanding students for visits and exchanges, as well as to students who wish to pursue a Master's or Ph.D. degree with us. Students interested in the summer camp activities of the School (Faculty) are also welcome to contact us by email!
News
Sept 26, 2024 - Nine papers accepted by NeurIPS 2024:
"PERIA: Perceive, Reason, Imagine, Act via Holistic Language and Vision Planning for Manipulation", "iVideoGPT: Interactive VideoGPTs are Scalable World Models", "Iteratively Refined Behavior Regularization for Offline Reinforcement Learning", "FlexPlanner: Flexible 3D Floorplanning via Deep Reinforcement Learning in Hybrid Action Space with Multi-Modality Representation", "Unlock the Intermittent Control Ability of Model Free Reinforcement Learning", "DiffuserLite: Towards Real-time Diffusion Planning", "The Ladder in Chaos: Improving Policy Learning by Harnessing the Parameter Evolving Path in A Low-dimensional Space", "Towards Next-Generation Logic Synthesis: A Scalable Neural Circuit Generation Framework", "CleanDiffuser: An Easy-to-use Modularized Library for Diffusion Models in Decision Making"
May 2, 2024 - Twelve papers accepted by ICML 2024:
"HarmonyDream: Task Harmonization Inside World Models","Rethinking Decision Transformer via Hierarchical Reinforcement Learning","Sample-Efficient Multiagent Reinforcement Learning with Reset Replay","Improving Generalization in Offline Reinforcement Learning via Adversarial Data Splitting","Imagine Big from Small: Unlock the Cognitive Generalization of Deep Reinforcement Learning from Simple Scenarios","EvoRainbow: Combining Improvements in Evolutionary Reinforcement Learning for Policy Search","A Circuit Domain Generalization Framework for Efficient Logic Synthesis in Chip Design","KISA: A Unified Keyframe Identifier and Skill Annotator for Long-Horizon Robotics Demonstrations","Value-Evolutionary-Based Reinforcement Learning","Reinforcement Learning within Tree Search for Fast Macro Placement","Towards General Algorithm Discovery for Combinatorial Optimization: Learning Symbolic Branching Policy from Bipartite Graph", "A Hierarchical Adaptive Multi-Task Reinforcement Learning Framework for Multiplier Circuit Design"
Apr 21, 2024 - Two papers accepted by IJCAI 2024:
"ENOTO: Improving Offline-to-Online Reinforcement Learning with Q-Ensembles", "vMFER: von Mises-Fisher Experience Resampling Based on Uncertainty of Gradient Directions for Policy Improvement"
Feb 28, 2024 - Two papers accepted by CVPR 2024:
"Generate Subgoal Images before Act: Unlocking the Chain-of-Thought Reasoning in Diffusion Model for Robot Manipulation with Multimodal Prompts","Improving Unsupervised Hierarchical Representation with Reinforcement Learning"
Jan 15, 2024 - Four papers accepted by ICLR 2024:
"Uni-RLHF: Universal Platform and Benchmark Suite for Reinforcement Learning with Diverse Human Feedback","Sample-Efficient Quality-Diversity by Cooperative Coevolution","AlignDiff: Aligning Diverse Human Preferences via Behavior-Customisable Diffusion Model","Rethinking Branching on Exact Combinatorial Optimization Solver: The First Deep Symbolic Discovery Framework"
READ MORE
Recent Research
Multiagent Gumbel MuZero: Efficient Planning in Combinatorial Action Spaces
2024-05-05: To address the exponential growth of the joint action space and the resulting blow-up of the search space, this paper extends the AlphaZero and MuZero algorithms to more complex multi-agent Markov decision processes, proposing two algorithms: Multiagent Gumbel AlphaZero and Multiagent Gumbel MuZero. These algorithms build on resettable environment simulators and multi-agent world models constructed with neural networks, enabling efficient search and decision-making. An efficient without-replacement top-$k$ sampling algorithm is proposed for the exponential action space, along with a high-quality policy improvement operator adapted to it, reducing the complexity and computational burden of Monte Carlo tree search while enhancing exploration. In addition, we propose a Centralized Planning with Decentralized Execution (CPDE) paradigm, which uses centralized planning to accelerate policy learning while enabling decentralized execution at deployment. Compared with model-free multi-agent reinforcement learning algorithms, the proposed algorithms achieve the best performance in test environments such as StarCraft, with a 10-fold improvement in sample efficiency while maintaining win rates.
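The without-replacement sampling step can be illustrated with the standard Gumbel-Top-k trick that Gumbel MuZero-style planners build on: perturb the log-probabilities with i.i.d. Gumbel noise and keep the k largest, which is equivalent to sampling k actions sequentially without replacement. The sketch below is a minimal NumPy illustration of that mechanism, not the paper's implementation; the function name and toy logits are ours.

```python
import numpy as np

def gumbel_top_k(logits: np.ndarray, k: int, rng=None) -> np.ndarray:
    """Sample k distinct action indices without replacement.

    Adding i.i.d. Gumbel(0, 1) noise to the logits and taking the
    top-k is equivalent to sampling k actions sequentially from the
    softmax distribution without replacement (the Gumbel-Top-k trick).
    """
    rng = rng or np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    return np.argsort(logits + gumbel)[-k:][::-1]

# Example: expand only 4 of 10 candidate joint actions at the search root.
logits = np.log(np.array([0.05, 0.20, 0.10, 0.05, 0.15,
                          0.05, 0.10, 0.10, 0.15, 0.05]))
print(gumbel_top_k(logits, k=4))
```

Restricting the tree search to such a sampled subset is what keeps Monte Carlo tree search tractable when the joint action space grows exponentially with the number of agents.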
Rethinking Decision Transformer via Hierarchical Reinforcement Learning
2024-05-05: In this work, we introduce a general sequence modeling framework for studying sequential decision making through the lens of hierarchical RL. At decision time, a high-level policy first proposes an ideal prompt for the current state; a low-level policy then generates an action conditioned on the given prompt. We show that the Decision Transformer (DT) emerges as a special case of this framework under certain choices of high-level and low-level policies, and discuss the potential failure modes of these choices. Inspired by these observations, we study how to jointly optimize the high-level and low-level policies to enable the stitching ability, which further leads to the development of new offline RL algorithms.
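A minimal sketch of this two-level decision loop, assuming toy MLP policies and illustrative dimensions (the class and dimension names are ours, not the paper's models): the high-level policy proposes a prompt for the current state, and the low-level policy generates an action conditioned on it.

```python
import torch
import torch.nn as nn

class HierarchicalDecisionAgent(nn.Module):
    """High-level policy: state -> prompt; low-level policy:
    (state, prompt) -> action. In Decision Transformer, the prompt
    role is roughly played by the return-to-go token; here it is a
    learned vector."""

    def __init__(self, state_dim: int, prompt_dim: int, action_dim: int):
        super().__init__()
        self.high_level = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, prompt_dim))
        self.low_level = nn.Sequential(
            nn.Linear(state_dim + prompt_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        prompt = self.high_level(state)  # propose a prompt for this state
        return self.low_level(torch.cat([state, prompt], dim=-1))

agent = HierarchicalDecisionAgent(state_dim=8, prompt_dim=4, action_dim=2)
print(agent(torch.randn(1, 8)).shape)  # torch.Size([1, 2])
```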
What About Inputting Policy in Value Function: Policy Representation and Policy-extended Value Function Approximator
2023-10-21: We study the Policy-extended Value Function Approximator (PeVFA) in Reinforcement Learning (RL), which extends the conventional value function approximator (VFA) to take as input not only the state (and action) but also an explicit policy representation. Such an extension enables PeVFA to preserve the values of multiple policies at the same time and brings an appealing characteristic, i.e., value generalization among policies.
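A minimal sketch of the PeVFA idea, assuming the policy representation is already available as a fixed-size vector (how that vector is encoded, e.g., from policy parameters or state-action pairs, is left abstract); the architecture and dimensions are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class PeVFA(nn.Module):
    """Policy-extended value function V(s, chi_pi): the conventional
    state input is concatenated with an explicit policy representation
    chi_pi, so a single network can preserve the values of many
    policies and generalize across them."""

    def __init__(self, state_dim: int, policy_repr_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + policy_repr_dim, 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, state: torch.Tensor, policy_repr: torch.Tensor):
        return self.net(torch.cat([state, policy_repr], dim=-1))

v = PeVFA(state_dim=8, policy_repr_dim=16)
values = v(torch.randn(32, 8), torch.randn(32, 16))  # one value per (s, pi) pair
```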
ERL-Re2: Efficient Evolutionary Reinforcement Learning with Shared State Representation and Individual Policy Representation
2023-09-10: Deep Reinforcement Learning (Deep RL) and Evolutionary Algorithms (EA) are two major paradigms of policy optimization with distinct learning principles, i.e., gradient-based vs. gradient-free. An appealing research direction is integrating Deep RL and EA to devise new methods by fusing their complementary advantages. However, existing works on combining Deep RL and EA have two common drawbacks: 1) the RL agent and EA agents learn their policies individually, neglecting efficient sharing of useful common knowledge; 2) parameter-level policy optimization guarantees no semantic level of behavior evolution on the EA side. In this paper, we propose Evolutionary Reinforcement Learning with Two-scale State Representation and Policy Representation (ERL-Re^2), a novel solution to the aforementioned two drawbacks. The key idea of ERL-Re^2 is two-scale representation: all EA and RL policies share the same nonlinear state representation while maintaining individual linear policy representations. The state representation conveys expressive common features of the environment learned collectively by all the agents; the linear policy representation provides a favorable space for efficient policy optimization, where novel behavior-level crossover and mutation operations can be performed. Moreover, the linear policy representation allows convenient generalization of policy fitness with the help of the Policy-extended Value Function Approximator (PeVFA), further improving the sample efficiency of fitness estimation. Experiments on a range of continuous control tasks show that ERL-Re^2 consistently outperforms strong baselines and achieves state-of-the-art (SOTA) performance.
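A minimal sketch of the two-scale representation, assuming a toy shared encoder and linear policy heads (names and dimensions are ours): every individual acts through the same nonlinear state representation, keeps only a small linear head of its own, and crossover can recombine the weights governing whole action dimensions rather than mixing raw parameters arbitrarily. This illustrates the idea, not the paper's exact operators.

```python
import torch
import torch.nn as nn

# Nonlinear state representation shared by the RL agent and all EA individuals.
shared_encoder = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())

def make_linear_policy(feat_dim: int = 64, action_dim: int = 2) -> nn.Linear:
    # Each individual keeps only a small linear head over the shared features.
    return nn.Linear(feat_dim, action_dim)

def act(policy: nn.Linear, state: torch.Tensor) -> torch.Tensor:
    return torch.tanh(policy(shared_encoder(state)))

def crossover(parent_a: nn.Linear, parent_b: nn.Linear,
              action_dim: int = 2) -> nn.Linear:
    """Behavior-level crossover sketch: the child inherits the linear
    weights controlling each action dimension wholesale from one
    parent or the other."""
    child = make_linear_policy(action_dim=action_dim)
    with torch.no_grad():
        for d in range(action_dim):
            src = parent_a if torch.rand(()) < 0.5 else parent_b
            child.weight[d] = src.weight[d]
            child.bias[d] = src.bias[d]
    return child

pa, pb = make_linear_policy(), make_linear_policy()
child = crossover(pa, pb)
print(act(child, torch.randn(1, 8)).shape)  # torch.Size([1, 2])
```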
READ MORE