Tianjin University
Deep Reinforcement Learning Laboratory
Our lab has several open Ph.D. and Master's positions. If you are interested in our research, please send us your CV (jianye.hao@tju.edu.cn / yanzheng@tju.edu.cn).

The lab continually welcomes outstanding students for visits and exchanges, as well as students who wish to join us for Master's or Ph.D. studies. Students interested in the school (faculty) summer camp activities are also welcome to contact us by email!
News
Dec 5, 2024 - Congratulations to Li Pengyi (Ph.D. student) on being funded by the 2024 National Natural Science Foundation of China Youth Student Basic Research Project (for Ph.D. students)
Congratulations to Li Pengyi (Ph.D. student) on being funded by the 2024 National Natural Science Foundation of China Youth Student Basic Research Project (for Ph.D. students), with a grant of 300,000 RMB
Nov 4, 2024 - Congratulations to Yuan Yifu (Ph.D. student) on receiving the 2024 (inaugural) Chinese Institute of Electronics–Tencent Doctoral Research Incentive Program award
Congratulations to Yuan Yifu (Ph.D. student) on receiving the 2024 (inaugural) Chinese Institute of Electronics–Tencent Doctoral Research Incentive Program award; only 17 students were selected nationwide, each receiving a 100,000 RMB research incentive grant
campaign
Sep 26, 2024 - Nine papers accepted by NeurIPS 2024:
"PERlA: Perceive, Reason lmadine, Act via Holistic l anouage and vision Planning for Manipulation", "ivideoGpT: nteractive videoGPTs areScalable World Models", "iteraively Refined Behavior Reqularization for Offine Reinforcement Learning", "FlexPlanner: flexible 3DFloorplanning via Deep Reinforcement Learningin Hybrid Action Space with Multi-Modaity Representation", "Unlock the intermittent ControlAbility of ModelFree Reinforcement Learning", "Difuserlite: Towards Real-time Difusion Planning", "The Ladder in Chaos: mproving PolicyLearning by Harnessing the Parameter Evoling Path in A Low-dimensional Space","owards Next-Generation Locic Synthesis: A scalableNeural circuit Generation Framework", CleanDiffuser: An Easy-to-use Modularized library for Diffusion Models in Decision Making"
May 2, 2024 - Twelve papers accepted by ICML 2024:
"HarmonyDream: Task Harmonization Inside World Models","Rethinking Decision Transformer via Hierarchical Reinforcement Learning","Sample-Efficient Multiagent Reinforcement Learning with Reset Replay","Improving Generalization in Offline Reinforcement Learning via Adversarial Data Splitting","Imagine Big from Small: Unlock the Cognitive Generalization of Deep Reinforcement Learning from Simple Scenarios","EvoRainbow: Combining Improvements in Evolutionary Reinforcement Learning for Policy Search","A Circuit Domain Generalization Framework for Efficient Logic Synthesis in Chip Design","KISA: A Unified Keyframe Identifier and Skill Annotator for Long-Horizon Robotics Demonstrations","Value-Evolutionary-Based Reinforcement Learning","Reinforcement Learning within Tree Search for Fast Macro Placement","Towards General Algorithm Discovery for Combinatorial Optimization: Learning Symbolic Branching Policy from Bipartite Graph", "A Hierarchical Adaptive Multi-Task Reinforcement Learning Framework for Multiplier Circuit Design"
Recent Research
Multiagent Gumbel MuZero: Efficient Planning in Combinatorial Action Spaces
2024-05-05: To address the exponential growth of the joint action space and the resulting explosion of the search space, this paper extends the AlphaZero and MuZero algorithms to more complex multi-agent Markov decision processes, proposing two algorithms: Multiagent Gumbel AlphaZero and Multiagent Gumbel MuZero. These algorithms build on resettable environment simulators and multi-agent world models constructed by neural networks, enabling efficient search and decision-making. An efficient without-replacement top-$k$ sampling algorithm is proposed for the exponential action space, along with a high-quality policy improvement operator adapted to it, reducing the complexity and computational burden of Monte Carlo tree search while enhancing exploration. In addition, we propose a Centralized Planning with Decentralized Execution (CPDE) paradigm, which uses centralized planning to accelerate policy learning while enabling decentralized execution at deployment. Compared with model-free multi-agent reinforcement learning algorithms, the proposed algorithms achieve the best performance in test environments such as StarCraft, with a 10-fold improvement in sample efficiency while maintaining win rates.
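For context on the sampling component mentioned above, here is a minimal sketch of the standard Gumbel-top-$k$ trick, which draws $k$ distinct actions by perturbing log-probabilities with Gumbel noise and keeping the $k$ largest scores. This is only the generic trick that Gumbel MuZero-style search builds on, not the paper's algorithm; the joint-action factorization and the policy improvement operator are omitted, and the function name is ours.

```python
import numpy as np

def gumbel_top_k(logits: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Sample k distinct action indices without replacement from the
    categorical distribution defined by `logits` (the Gumbel-top-k trick)."""
    gumbel_noise = rng.gumbel(size=logits.shape)   # g_i ~ Gumbel(0, 1)
    scores = logits + gumbel_noise                 # perturbed log-probabilities
    return np.argsort(-scores)[:k]                 # indices of the k largest scores

# Toy usage: pick 4 distinct joint actions out of 16 candidates.
rng = np.random.default_rng(0)
logits = rng.normal(size=16)
print(gumbel_top_k(logits, k=4, rng=rng))
```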
Rethinking Decision Transformer via Hierarchical Reinforcement Learning
2024-05-05: In this work we introduce a general sequence modeling framework for studying sequential decision making through the lens of Hierarchical RL. When making a decision, a high-level policy first proposes an ideal prompt for the current state, and a low-level policy subsequently generates an action conditioned on the given prompt. We show that DT emerges as a special case of this framework under certain choices of high-level and low-level policies, and discuss the potential failure modes of these choices. Inspired by these observations, we study how to jointly optimize the high-level and low-level policies to enable the stitching ability, which further leads to the development of new offline RL algorithms.
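Below is a minimal sketch of the two-level decision structure described above, assuming a plain feed-forward parameterization; the actual framework is a sequence model built on the Decision Transformer, and all module names and sizes here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalPolicy(nn.Module):
    """Toy two-level policy: a high-level module proposes a prompt for the
    current state; a low-level module maps (state, prompt) to an action."""

    def __init__(self, state_dim: int, prompt_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.high = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, prompt_dim))
        self.low = nn.Sequential(nn.Linear(state_dim + prompt_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, action_dim))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        prompt = self.high(state)                            # high-level: propose a prompt
        return self.low(torch.cat([state, prompt], dim=-1))  # low-level: act on (state, prompt)

policy = HierarchicalPolicy(state_dim=8, prompt_dim=4, action_dim=2)
print(policy(torch.randn(1, 8)).shape)  # torch.Size([1, 2])
```

In this view, the return-to-go that a Decision Transformer conditions on can be read as one particular choice of high-level prompt, which is what makes DT a special case of the framework.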
What About Inputting Policy in Value Function: Policy Representation and Policy-extended Value Function Approximator
2023-10-21: We study the Policy-extended Value Function Approximator (PeVFA) in Reinforcement Learning (RL), which extends the conventional value function approximator (VFA) to take as input not only the state (and action) but also an explicit policy representation. Such an extension enables a PeVFA to preserve the values of multiple policies at the same time and brings an appealing characteristic, i.e., *value generalization among policies*.
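A minimal sketch of the PeVFA idea follows, assuming the policy representation is already given as a vector; how that representation is learned (e.g., from policy parameters or state-action pairs) is the paper's contribution and is not shown, and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class PeVFA(nn.Module):
    """Policy-extended value function V(s, chi_pi): conditions on an explicit
    policy representation chi_pi in addition to the state."""

    def __init__(self, state_dim: int, policy_repr_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + policy_repr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, policy_repr: torch.Tensor) -> torch.Tensor:
        # One network can preserve the values of many policies and
        # generalize across them by varying chi_pi.
        return self.net(torch.cat([state, policy_repr], dim=-1))

pevfa = PeVFA(state_dim=17, policy_repr_dim=64)
values = pevfa(torch.randn(32, 17), torch.randn(32, 64))  # batch of (state, policy) pairs
```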
ERL-Re2: Efficient Evolutionary Reinforcement Learning with Shared State Representation and Individual Policy Representation
2023-09-10: Deep Reinforcement Learning (Deep RL) and Evolutionary Algorithms (EA) are two major paradigms of policy optimization with distinct learning principles, i.e., gradient-based vs. gradient-free. An appealing research direction is integrating Deep RL and EA to devise new methods by fusing their complementary advantages. However, existing works on combining Deep RL and EA have two common drawbacks: 1) the RL agent and EA agents learn their policies individually, neglecting efficient sharing of useful common knowledge; 2) parameter-level policy optimization guarantees no semantic level of behavior evolution for the EA side. In this paper, we propose Evolutionary Reinforcement Learning with Two-scale State Representation and Policy Representation (ERL-Re^2), a novel solution to the aforementioned two drawbacks. The key idea of ERL-Re^2 is two-scale representation: all EA and RL policies share the same nonlinear state representation while maintaining individual linear policy representations. The state representation conveys expressive common features of the environment learned by all the agents collectively; the linear policy representation provides a favorable space for efficient policy optimization, where novel behavior-level crossover and mutation operations can be performed. Moreover, the linear policy representation allows convenient generalization of policy fitness with the help of the Policy-extended Value Function Approximator (PeVFA), further improving the sample efficiency of fitness estimation. Experiments on a range of continuous control tasks show that ERL-Re^2 consistently outperforms advanced baselines and achieves state-of-the-art (SOTA) performance.
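A minimal sketch of the two-scale representation, assuming an MLP encoder and tanh-squashed continuous actions; the behavior-level crossover/mutation operators and the PeVFA-based fitness estimation are not shown, and all names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class SharedStateEncoder(nn.Module):
    """Nonlinear state representation shared by all EA and RL policies."""
    def __init__(self, state_dim: int, feat_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, feat_dim), nn.ReLU())

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

class LinearPolicy(nn.Module):
    """Individual linear policy head on top of the shared features; its small
    weight matrix is the space where behavior-level variation operators act."""
    def __init__(self, feat_dim: int, action_dim: int):
        super().__init__()
        self.head = nn.Linear(feat_dim, action_dim)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.head(features))

encoder = SharedStateEncoder(state_dim=17, feat_dim=64)   # shared across the population
population = [LinearPolicy(64, 6) for _ in range(5)]      # individual linear policies
state = torch.randn(1, 17)
actions = [pi(encoder(state)) for pi in population]       # each policy acts on shared features
```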