Rough Terrain Navigation Using Divergence Constrained Model-Based Reinforcement Learning
Autonomous navigation of wheeled robots in rough terrain environments has been a long-standing challenge. In these environments, predicting the robot's trajectory is difficult due to the complexity of terrain interactions and the divergent dynamics that cause model uncertainty to compound. This inhibits the robot's long-horizon decision-making capabilities and often leads to short-sighted navigation strategies. We propose a model-based reinforcement learning algorithm for rough terrain traversal that trains a probabilistic dynamics model to account for the propagating effects of uncertainty. Our method increases prediction accuracy and precision by using a tracking controller and by using constrained optimization to find trajectories with low divergence. Using this method, wheeled robots can find non-myopic control strategies to reach destinations with a higher probability of success. We show results on simulated and real-world robots navigating through rough terrain environments.
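To make the core idea concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of divergence-constrained planning: a sampling-based planner queries an ensemble of probabilistic dynamics models, uses the spread between ensemble rollouts as a proxy for trajectory divergence, and discards candidate action sequences whose divergence exceeds a threshold before minimizing cost. The toy linear dynamics, ensemble size, horizon, and `max_div` threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble of slightly perturbed linear dynamics models;
# disagreement between members stands in for trajectory divergence.
ENSEMBLE = [
    np.array([[1.0, 0.1], [0.0, 1.0]]) + 0.05 * rng.standard_normal((2, 2))
    for _ in range(5)
]
B = np.array([[0.0], [0.1]])  # toy control matrix (assumed)


def rollout(A, x0, actions):
    """Roll a single ensemble member forward over an action sequence."""
    xs, x = [], x0
    for u in actions:
        x = A @ x + (B @ u).ravel()
        xs.append(x)
    return np.array(xs)


def divergence(x0, actions):
    """Total ensemble spread along the horizon (divergence proxy)."""
    trajs = np.array([rollout(A, x0, actions) for A in ENSEMBLE])
    return trajs.std(axis=0).sum()


def plan(x0, goal, horizon=10, samples=256, max_div=2.0):
    """Random-shooting planner that rejects high-divergence candidates."""
    best, best_cost = None, np.inf
    for _ in range(samples):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, 1))
        if divergence(x0, actions) > max_div:  # divergence constraint
            continue
        mean_traj = np.mean([rollout(A, x0, actions) for A in ENSEMBLE], axis=0)
        cost = np.linalg.norm(mean_traj[-1] - goal)
        if cost < best_cost:
            best, best_cost = actions, cost
    return best, best_cost


actions, cost = plan(np.zeros(2), np.array([1.0, 0.0]))
```

In a full system, the random-shooting step would typically be replaced by a gradient-based or cross-entropy optimizer, and a low-level tracking controller would follow the selected low-divergence trajectory, which is what keeps the compounding model error in check.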