What is simultaneous localization and mapping?
Simultaneous localization and mapping, or SLAM, is an important technique in the world of robotics. The method allows a robot to use information from its sensors to create a map of its surroundings while simultaneously keeping track of where it is in that environment.
SLAM software has seen widespread use in applications that require robots to operate autonomously or semi-autonomously in previously unknown environments without constant human input – household cleaning robots, for instance, use it to build and then reuse maps of the homes they work in.
There are multiple approaches to solving simultaneous localization and mapping. Broadly, these fall into filtering methods – such as particle filter (PF) based algorithms, which maintain a probability distribution over the robot's possible states and update it as each new measurement arrives – and smoothing methods, which estimate the robot's full trajectory by finding the configuration of poses that best explains all of the measurements at once. In this article, we will take a deeper look at the latter – specifically, a method known as ‘Graph SLAM’.
How Graph SLAM Works
In graph SLAM, the problem is represented as a graph whose nodes are the robot's poses at successive points in time, along with the positions of any landmarks it has observed. The edges encode spatial constraints between nodes: an edge between two consecutive poses comes from odometry, while an edge between a pose and a landmark comes from a sensor observation of that landmark. Each edge stores both a relative measurement (for example, “the robot moved roughly one meter forward”) and an uncertainty describing how much that measurement can be trusted.
The process starts as the robot moves. Each odometry reading adds a node for the newest pose and an edge linking it to the previous one; each landmark observation adds an edge between the current pose and that landmark. Crucially, when the robot recognizes a place it has visited before, it adds a loop-closure edge tying the current pose back to the earlier one – it is these extra constraints that later allow accumulated drift to be corrected.
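As a concrete sketch, here is how such a graph might be represented and built up in code. Positions are 1D scalars and each edge carries a single weight for brevity; a real system would store 2D or 3D poses and a per-edge information matrix, and all names here are illustrative rather than taken from any particular library:

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    """A spatial constraint between two nodes in the pose graph."""
    i: int              # index of the first node (e.g., an earlier pose)
    j: int              # index of the second node (a later pose or a landmark)
    measurement: float  # measured offset from node i to node j (1D here)
    weight: float = 1.0 # confidence in the measurement (inverse variance)

@dataclass
class PoseGraph:
    """Nodes are scalar position estimates; edges are relative measurements."""
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)

    def add_node(self, initial_estimate: float) -> int:
        self.nodes.append(initial_estimate)
        return len(self.nodes) - 1

    def add_edge(self, i: int, j: int, measurement: float, weight: float = 1.0):
        self.edges.append(Edge(i, j, measurement, weight))

# Build a tiny graph: three poses connected by odometry edges,
# plus a loop-closure edge relating the last pose back to the first.
g = PoseGraph()
p0 = g.add_node(0.0)
p1 = g.add_node(1.0)      # initial guesses come from dead reckoning
p2 = g.add_node(2.0)
g.add_edge(p0, p1, 1.0)   # odometry: moved ~1.0 between poses 0 and 1
g.add_edge(p1, p2, 1.0)   # odometry: moved ~1.0 between poses 1 and 2
g.add_edge(p0, p2, 2.1)   # loop closure: pose 2 measured ~2.1 from pose 0
```

Note that the odometry edges and the loop-closure edge disagree slightly (2.0 vs. 2.1) – exactly the kind of inconsistency the optimization step is there to resolve.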
The next step is to use this graph to estimate the most likely configuration of all poses and landmarks. Each edge defines an error term: the difference between the measurement it stores and the value that measurement would take under the current node estimates. Graph SLAM searches for the node configuration that minimizes the sum of these uncertainty-weighted squared errors, typically with an iterative nonlinear least-squares solver such as Gauss–Newton or Levenberg–Marquardt.
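The estimation step can be illustrated with a minimal 1D least-squares solve over three poses with one loop closure. In one dimension the measurement equations x_j − x_i = z are linear, so a single solve of the normal equations suffices; real systems linearize and iterate a nonlinear solver instead. The graph values below are illustrative:

```python
import numpy as np

# Edges: (i, j, measured offset from node i to node j), 1D for simplicity.
edges = [(0, 1, 1.0),   # odometry between poses 0 and 1
         (1, 2, 1.0),   # odometry between poses 1 and 2
         (0, 2, 2.1)]   # loop closure: pose 2 measured ~2.1 from pose 0
n = 3

# Build the normal equations H x = b. Each edge contributes its
# measurement equation x_j - x_i = z to the least-squares system.
H = np.zeros((n, n))
b = np.zeros(n)
H[0, 0] += 1.0           # anchor the first pose at 0 to fix the gauge
for i, j, z in edges:
    H[i, i] += 1; H[j, j] += 1
    H[i, j] -= 1; H[j, i] -= 1
    b[i] -= z; b[j] += z

x = np.linalg.solve(H, b)
# x[0] stays at 0; x[1] and x[2] settle between what the odometry chain
# (total 2.0) and the loop closure (2.1) each claim on their own.
```

Spreading the 0.1 disagreement across all three edges is the essence of graph optimization: no single measurement is trusted outright, and drift is distributed over the trajectory.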
This approach has several advantages over other localization and mapping techniques. One of the most notable is its accuracy, which improves as more sensor data is fed into the system: because the graph retains all past constraints, each new measurement can be used to re-optimize the whole trajectory and adjust for factors that can throw off localization, such as wheel slip or changing lighting conditions. It also degrades gracefully – if the robot enters a previously unencountered area, it simply extends the graph with new nodes, and it can re-localize against the existing map as soon as it returns to familiar territory.
This type of SLAM system is used not only on household robots but has also been applied to aerial drones with limited computational resources, allowing them to determine their position without relying on external sensors or communication signals. In that case, the drone builds a map of its surroundings limited to the landmarks visible from its onboard camera, feeding new observations back into the graph as it moves.
The algorithm can also be scaled down for indoor localization with relative ease, by choosing an appropriate set of features or by modifying how connections between nodes are created based on the size and shape of the space being mapped.
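One simple way such a connection rule might look in code is a range gate: only add a pose–landmark edge when the landmark falls within the sensor's reliable range. The function and threshold below are hypothetical, purely to illustrate the idea – shrinking the range for a small indoor space keeps the graph sparse and the optimization cheap:

```python
import math

def should_connect(pose_xy, landmark_xy, max_range=5.0):
    """Hypothetical gating rule: create a pose-landmark edge only when the
    landmark lies within max_range of the current pose estimate."""
    dx = landmark_xy[0] - pose_xy[0]
    dy = landmark_xy[1] - pose_xy[1]
    return math.hypot(dx, dy) <= max_range

near = should_connect((0.0, 0.0), (3.0, 4.0))             # distance 5.0: edge created
far = should_connect((0.0, 0.0), (4.0, 4.0), max_range=5.0)  # distance ~5.66: skipped
```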