FrontierNet: Learning Visual Cues to Explore

RA-L 2025

ETH Zürich, Technical University of Munich, Microsoft, University of Bonn

FrontierNet learns to detect frontiers (the known-unknown boundary) and predict their information gains from visual appearance, enabling highly efficient autonomous exploration of unknown environments.

Abstract

Exploration of unknown environments is crucial for autonomous robots; it allows them to actively reason and decide what new data to acquire for different tasks, such as mapping, object discovery, and environmental assessment. Existing solutions, such as frontier-based exploration approaches, rely heavily on 3D map operations, which are limited by map quality and, more critically, often overlook valuable context from visual cues. This work aims to leverage 2D visual cues for efficient autonomous exploration, addressing the limitations of extracting goal poses from a 3D map. We propose a visual-only frontier-based exploration system, with FrontierNet as its core component. FrontierNet is a learning-based model that (i) proposes frontiers and (ii) predicts their information gain, from posed RGB images enhanced by monocular depth priors. Our approach provides an alternative to existing 3D-dependent goal-extraction approaches, achieving a 15% improvement in early-stage exploration efficiency, as validated through extensive simulations and real-world experiments.
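To make the frontier-based pipeline concrete, the sketch below shows a generic goal-selection loop for this class of exploration systems. This is an illustrative toy example, not the paper's implementation: the `Frontier` fields, the utility function (predicted gain discounted by travel cost), and all numeric values are assumptions for demonstration.

```python
# Illustrative frontier-based goal selection (toy sketch, not FrontierNet's code).
# Each frontier carries a predicted information gain (in the paper, predicted by
# FrontierNet from posed RGB images) and a travel cost from the current pose.

from dataclasses import dataclass
from typing import Optional, Sequence


@dataclass
class Frontier:
    position: tuple       # goal location on the known-unknown boundary
    info_gain: float      # predicted unknown space revealed by visiting it
    travel_cost: float    # path length from the robot's current pose


def select_goal(frontiers: Sequence[Frontier]) -> Optional[Frontier]:
    """Pick the frontier with the best utility: gain discounted by travel cost."""
    if not frontiers:
        return None  # no frontiers left: the environment is fully explored
    return max(frontiers, key=lambda f: f.info_gain / (1.0 + f.travel_cost))


# A nearby frontier with moderate gain can beat a distant high-gain one.
frontiers = [
    Frontier((2.0, 1.0), info_gain=5.0, travel_cost=1.0),  # utility 2.5
    Frontier((8.0, 3.0), info_gain=9.0, travel_cost=8.0),  # utility 1.0
]
best = select_goal(frontiers)
print(best.position)  # (2.0, 1.0)
```

The discounting of gain by cost reflects the "early-stage efficiency" objective: among competing boundaries, the robot prefers goals that reveal the most unknown space per unit of travel.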

Video

More Results

Simulated Exploration

Real-world Exploration

BibTeX

@article{boysun2025frontiernet,
    author={Sun, Boyang and Chen, Hanzhi and Leutenegger, Stefan and Cadena, Cesar and Pollefeys, Marc and Blum, Hermann},
    journal={IEEE Robotics and Automation Letters}, 
    title={FrontierNet: Learning Visual Cues to Explore}, 
    year={2025},
    volume={10},
    number={7},
    pages={6576-6583},
    doi={10.1109/LRA.2025.3569122}
}