Multi-robot mapping is an increasingly demanded capability for a wide variety of problems, such as site inspection, human search and rescue, and orchard monitoring, thanks to the flexibility of having multiple robotic agents cooperating on the same task. In this view, the robots need to communicate with one another to share information about their relative positions and their surroundings. Integrating this information yields a global overview of the environment and of the progress of the mission. One of the simplest examples of shared information is the local map built by each robot during its operation. In the literature, the integration of multiple maps is usually cast either as a map merging problem, in which maps are represented as occupancy grids and stitched together by searching for overlapping parts, or as a multi-robot Simultaneous Localisation and Mapping (SLAM) problem, where the pose graphs of the individual robots are built and connected using graph theory. Despite the effort devoted to robot localisation over the last twenty years, this remains an interesting and open research problem. Unlike the single-robot use case, multi-robot map integration requires a higher level of abstraction to identify which elements are common across multiple maps, so as to make the integration efficient and real-time. In particular, within this project, which we call "Ground-Aerial maps Integration for increased Autonomy Outdoors" (GAIA), we target the scenario of a fleet of heterogeneous robots characterised by complementary behaviours, movements and perception capabilities, which makes the map integration problem even harder than with a homogeneous fleet (e.g., one using only ground robots). To solve this problem, this project focuses on exploiting a human-like understanding of the scene to integrate multi-perspective robotic maps.
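To make the occupancy-grid flavour of map merging mentioned above concrete, the sketch below searches a small set of candidate rotations and translations and scores how well the jointly observed cells of two grids agree. The function names, the grid encoding (0 = free, 1 = occupied, 0.5 = unknown) and the brute-force search are illustrative assumptions for this proposal, not a description of GAIA's eventual method; practical systems replace the exhaustive search with feature matching or correlative scan matching.

```python
# A minimal sketch of occupancy-grid map merging, assuming two robots
# provide same-shape 2D grids with 0 = free, 1 = occupied, 0.5 = unknown.
# The brute-force transform search is illustrative only.
import numpy as np
from scipy.ndimage import rotate, shift

def agreement(a, b):
    """Score agreement on cells that both maps have actually observed."""
    known = (a != 0.5) & (b != 0.5)
    return int(np.sum(a[known] == b[known]) - np.sum(a[known] != b[known]))

def merge_grids(map_a, map_b,
                angles=range(0, 360, 10), offsets=range(-10, 11, 2)):
    """Search rotations/translations aligning map_b onto map_a, then fuse."""
    best_score, best_tf = -np.inf, None
    for ang in angles:
        # cval=0.5 pads cells that leave the grid with "unknown"
        rot_b = rotate(map_b, ang, reshape=False, order=0, cval=0.5)
        for dy in offsets:
            for dx in offsets:
                cand = shift(rot_b, (dy, dx), order=0, cval=0.5)
                score = agreement(map_a, cand)
                if score > best_score:
                    best_score, best_tf = score, (ang, dy, dx)
    ang, dy, dx = best_tf
    aligned = shift(rotate(map_b, ang, reshape=False, order=0, cval=0.5),
                    (dy, dx), order=0, cval=0.5)
    # Keep map_a wherever it is known; fill its unknown cells from map_b.
    return np.where(map_a != 0.5, map_a, aligned), best_tf
```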
In particular, we plan to use the semantic information in robotic observations to gain a better understanding of the scene, to identify which entities are present in it, and to leverage such information to integrate multi-perspective observations into a single map representation. More specifically, within GAIA and the scope of this call, we tackle the problem of multi-robot map integration in the agricultural domain, where robotic solutions can be a game-changing technology. The possibility of deploying autonomous agents in the field to assist, if not replace, human workers in monitoring and harvesting tasks paves the way for a revolution centred on precision agriculture and sustainability. Indeed, robots equipped with dedicated software and hardware can assist farmers by collecting data on rainfall, soil moisture and soil composition, helping them make more targeted interventions. More specifically, the Unmanned Ground Vehicle (UGV) offers a closer and more detailed inspection viewpoint of the crops, while the Unmanned Aerial Vehicle (UAV) can observe a larger portion of the field in a shorter time. The UAV offers active sensing capabilities to complete and update a partial map on demand, while improving its level of confidence (how much we trust the map) by, for example, mapping human workers or other agriculture-related tools (e.g., tractors or trolleys) located in the fields. This updated information can then be exploited by the UGV's path planner to make the deployment of the ground platform more efficient by avoiding obstructed paths.
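As a minimal illustration of the semantic integration idea, the sketch below aligns two maps using only labelled landmarks (e.g., trees, tractors, workers) observed by both robots, recovering the rigid transform between the UGV and UAV frames in closed form (Kabsch/Umeyama alignment). All names and the greedy class-based pairing are hypothetical simplifications introduced here, not GAIA's final pipeline; a deployed system would need robust data association and outlier rejection.

```python
# A minimal sketch of semantics-driven map alignment, assuming each map
# exposes labelled landmarks as (class, (x, y)) tuples in its own frame.
# All names here are hypothetical; this is not GAIA's final pipeline.
import numpy as np
from collections import defaultdict

def pair_by_class(lm_a, lm_b):
    """Pair landmarks sharing a class label, in observation order.
    A deployed system would solve a proper data-association problem."""
    remaining = defaultdict(list)
    for cls, pt in lm_b:
        remaining[cls].append(pt)
    pairs = []
    for cls, pt in lm_a:
        if remaining[cls]:
            pairs.append((pt, remaining[cls].pop(0)))
    return pairs

def align_by_semantics(lm_a, lm_b):
    """Estimate the 2D rigid transform (R, t) mapping map B's frame onto
    map A's, so that a ~= R @ b + t for paired landmarks (Kabsch)."""
    pairs = pair_by_class(lm_a, lm_b)
    if len(pairs) < 2:
        raise ValueError("need at least two shared landmarks")
    A = np.array([a for a, _ in pairs], dtype=float)
    B = np.array([b for _, b in pairs], dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (B - cb).T @ (A - ca)          # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ca - R @ cb
    return R, t
```

For instance, with landmarks [("tree", (0, 0)), ("tractor", (4, 1))] in the UGV map and [("tree", (2, 3)), ("tractor", (6, 4))] in the UAV map, the function recovers the identity rotation and the translation (-2, -3), which would let the UGV's path planner reason in its own frame about obstacles mapped by the UAV.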