[2502.00850] Dual Alignment Maximin Optimization for Offline Model-based RL


Authors: Chi Zhou and 5 other authors


Abstract: Offline reinforcement learning agents face significant deployment challenges due to the synthetic-to-real distribution mismatch. While most prior research has focused on improving the fidelity of synthetic sampling and incorporating off-policy mechanisms, this directly integrated paradigm often fails to keep policy behavior consistent between biased models and the underlying environmental dynamics, a mismatch that inherently arises from discrepancies between the behavior and learning policies. In this paper, we first shift the focus from model reliability to policy discrepancies while optimizing for expected returns, and then self-consistently incorporate synthetic data, deriving a novel actor-critic paradigm, Dual Alignment Maximin Optimization (DAMO). DAMO is a unified framework that ensures both model-environment policy consistency and compatibility between synthetic and offline data. The inner minimization performs dual conservative value estimation, aligning policies and trajectories to avoid out-of-distribution states and actions, while the outer maximization ensures that policy improvements remain consistent with the inner value estimates. Empirical evaluations demonstrate that DAMO effectively ensures model and policy alignment, achieving competitive performance across diverse benchmark tasks.
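Only the abstract is available here, but the maximin structure it describes (an inner, conservative value-estimation step over mixed offline and model-generated data, followed by an outer policy-improvement step against that estimate) can be illustrated with a short sketch. The snippet below is a hypothetical PyTorch rendering of that loop, not the authors' implementation: the class name `DamoSketch`, the network sizes, and the single-penalty form of conservatism are assumptions made purely for illustration.

```python
# Hypothetical sketch of a maximin actor-critic loop in the spirit of the abstract.
# Inner step: conservative value estimation that pushes down values of synthetic
# (model-generated) state-action pairs. Outer step: policy improvement against
# that conservative estimate. Names and hyperparameters are illustrative only.
import torch
import torch.nn as nn


class MLP(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)


class DamoSketch:
    def __init__(self, obs_dim, act_dim, gamma=0.99, penalty=1.0, lr=3e-4):
        self.critic = MLP(obs_dim + act_dim, 1)
        self.actor = MLP(obs_dim, act_dim)
        self.gamma, self.penalty = gamma, penalty
        self.critic_opt = torch.optim.Adam(self.critic.parameters(), lr=lr)
        self.actor_opt = torch.optim.Adam(self.actor.parameters(), lr=lr)

    def q(self, s, a):
        return self.critic(torch.cat([s, a], dim=-1))

    def inner_min(self, offline, synthetic):
        # Inner minimization: fit Q on offline transitions while lowering the
        # values of synthetic state-action pairs, a simple stand-in for the
        # "dual conservative value estimation" described in the abstract.
        s, a, r, s2 = offline
        with torch.no_grad():
            target = r.reshape(-1, 1) + self.gamma * self.q(s2, torch.tanh(self.actor(s2)))
        bellman = ((self.q(s, a) - target) ** 2).mean()
        s_syn, a_syn = synthetic
        conservatism = self.q(s_syn, a_syn).mean() - self.q(s, a).mean()
        loss = bellman + self.penalty * conservatism
        self.critic_opt.zero_grad(); loss.backward(); self.critic_opt.step()

    def outer_max(self, states):
        # Outer maximization: improve the policy against the conservative inner
        # value estimate, so policy improvement stays consistent with it.
        actions = torch.tanh(self.actor(states))
        loss = -self.q(states, actions).mean()
        self.actor_opt.zero_grad(); loss.backward(); self.actor_opt.step()


if __name__ == "__main__":
    obs_dim, act_dim, batch = 11, 3, 32
    agent = DamoSketch(obs_dim, act_dim)
    offline = (torch.randn(batch, obs_dim), torch.randn(batch, act_dim),
               torch.randn(batch, 1), torch.randn(batch, obs_dim))
    synthetic = (torch.randn(batch, obs_dim), torch.randn(batch, act_dim))
    agent.inner_min(offline, synthetic)   # conservative critic step
    agent.outer_max(offline[0])           # policy-improvement step
```

In this reading, the inner step plays the role of the conservative value estimation over offline and synthetic data, and the outer step maximizes that same estimate so the actor's improvement remains aligned with the critic's (pessimistic) view; the paper's dual alignment of policies and trajectories is reduced here to a single penalty term for brevity.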

Submission history

From: Chi Zhou
[v1] Sun, 2 Feb 2025 16:47:35 UTC (21,151 KB)
[v2] Sat, 10 May 2025 04:42:40 UTC (16,218 KB)


