Monte-Carlo Localization in Gazebo Using a Laser Range Sensor

Jonas N. Schwertfeger <js@cs.brown.edu>
Brown University


Abstract:

As a building block for the final Roomba Pac-Man project we extended the reactive controller described in [1] with a Monte-Carlo Localization (MCL) algorithm. Currently the algorithm runs in a simulated Gazebo environment and uses laser range sensor data to estimate pose likelihoods. Moving to the real-world environment and to vision-based localization is part of the upcoming Path Planning project. We present our approach to MCL and show experimentally that it converges quickly and is robust.

INTRODUCTION

In [1] we focused on a purely reactive approach to the Roomba Pac-Man problem and navigated towards power-ups and food pellets as they appeared in the robot's field of view. Once the robot lost sight of these objects it returned to a wandering state and drove straight ahead until it hit a wall. It then turned left or right and proceeded with the wandering. As a consequence of this haphazard navigation the robot did not systematically explore its environment. Moreover, the robot had neither a memory of visited objects nor a plan of objects it would like to visit.

A much more sophisticated and promising approach to robot control is a deliberative controller. Such a controller has a notion of state and plans ahead instead of basing its actions solely on the current sensory input. In this project we concentrate on one of the basic prerequisites for a deliberative controller: localization. In order to plan ahead and make strategic decisions, a robot needs to be able to estimate its location based on a map of the environment and sensory input data.

APPROACH

Monte-Carlo Localization

This project uses Monte-Carlo Localization (MCL) [2], a very popular sampling-based localization algorithm, to estimate the robot's current pose from laser range data. We run the reestimation process as often as computationally possible; each iteration yields a set of hypothetical robot poses with associated likelihoods, from which we pick the pose with the highest likelihood as the most likely pose.
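To make the reestimation loop concrete, the following minimal sketch shows one MCL iteration in Python; the function names `sample_motion` and `likelihood` are illustrative placeholders, not the project's actual API:

```python
import random

def mcl_step(particles, odometry, scan, sample_motion, likelihood):
    """One MCL iteration: motion update, weighting, importance resampling."""
    # Motion update: propagate every particle through the noisy motion model.
    moved = [sample_motion(p, odometry) for p in particles]
    # Measurement update: weight each hypothetical pose by its scan likelihood.
    weights = [likelihood(p, scan) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # The most likely pose is the particle carrying the highest weight.
    best = max(zip(moved, weights), key=lambda pw: pw[1])[0]
    # Importance resampling: draw particles in proportion to their weights.
    resampled = random.choices(moved, weights=weights, k=len(moved))
    return resampled, best
```

The same loop structure carries over regardless of which sensor supplies the likelihood function.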


Motion Model

Most modern robot platforms provide odometric feedback, that is, information about the distances the robot traveled and the angles it turned. Odometry is usually obtained by integrating wheel encoder information at the rotation axes over time. Even though measuring the actual amount a wheel turned might suggest perfect knowledge of the distance traveled, one cannot rely on these measurements because they suffer from drift and slippage. However, by explicitly modeling the noise in odometry we can incorporate it into MCL.

Our motion model implements the odometry motion model shown in [3] where the relative distance traveled between two odometric measurements is decomposed into a sequence of three steps: a rotation ( $ \delta_{\rm {rot1}}$ ), followed by a translation ( $ \delta_{\rm {trans}}$ ), and another rotation ( $ \delta_{\rm {rot2}}$ ). The motion model then assumes that each of these three parameters is corrupted by a certain amount of noise. In contrast to the simplest possible motion model, where one just adds some amount of noise to the current odometric position in 2D space, this method allows us to model the estimated rotational and translational error more realistically. Fig. 1 illustrates the motion decomposition we just explained. The upper-left drawing shows the decomposed motion and the remaining three drawings are examples where different amounts of Gaussian white noise were added to the three parameters. In the upper-right example some noise is added to the rotational components and a bit less to the translational component. In the lower-left example much noise is added to the translation and in the lower-right example much noise to the rotation.

Figure 1: Upper left figure: Decomposition of the difference of two odometric poses into a rotation, a translation and another rotation. Remaining figures: Different amounts of Gaussian noise added to the rotational and the translational components. These figures are taken from [3].
\includegraphics[width=0.15\paperwidth]{obsmodel1} \includegraphics[width=0.15\paperwidth]{obsmodel2s}  
\includegraphics[width=0.15\paperwidth]{obsmodel3s} \includegraphics[width=0.15\paperwidth]{obsmodel4s}  

By comparing true robot poses, as given by the Gazebo simulator, and odometric measurements returned by the wandering robot we found that in our case the rotational error approximately resembles a Gaussian distribution and that there is nearly no translational error. In order to estimate the robot/environment-specific Gaussian noise parameters $ (\alpha_1,\alpha_2,\alpha_3,\alpha_4)$ proposed in [3] we applied gradient ascent on the parameter likelihood surface. As expected, the parameters $ \alpha_2$ and $ \alpha_3$ that account for translational error are practically zero, that is, they have no influence on the standard deviation of the Gaussian noise.
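A minimal sketch of sampling from this odometry motion model, assuming the parameterization of [3] (the function and variable names are ours, not taken from the project code):

```python
import math
import random

def sample_motion_model_odometry(pose, odom_prev, odom_cur, a1, a2, a3, a4):
    """Sample a new pose from two consecutive odometry readings.

    The relative motion is decomposed into rot1 -> trans -> rot2 and each
    component is perturbed with zero-mean Gaussian noise whose standard
    deviation depends on the alpha parameters (odometry model of [3])."""
    x, y, theta = pose
    xo, yo, to = odom_prev
    xn, yn, tn = odom_cur
    trans = math.hypot(xn - xo, yn - yo)
    rot1 = math.atan2(yn - yo, xn - xo) - to
    rot2 = tn - to - rot1
    # Perturb each component; the sigmas follow the alpha parameterization.
    rot1_h = rot1 - random.gauss(0, a1 * abs(rot1) + a2 * trans)
    trans_h = trans - random.gauss(0, a3 * trans + a4 * (abs(rot1) + abs(rot2)))
    rot2_h = rot2 - random.gauss(0, a1 * abs(rot2) + a2 * trans)
    return (x + trans_h * math.cos(theta + rot1_h),
            y + trans_h * math.sin(theta + rot1_h),
            theta + rot1_h + rot2_h)
```

With all alphas set to zero the sample reduces to the deterministic odometry update, which is a useful sanity check.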

Measurement Model

In this project the measurement likelihood function uses only distance information from a laser range sensor. For each hypothetical pose (particle), the likelihood function calculates the distance information that would be returned by the laser range sensor if the robot were actually at that pose. This is done by virtually sending out rays along the laser sensor's field of view for each particle. We follow the rays pixel by pixel and stop each ray when it hits an object or when the maximum range of the sensor has been reached. The distances the rays traveled give us a hypothetical distance landscape for each particle, which we can then compare to the actual laser scan results.
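A pixel-by-pixel ray march over an occupancy grid can be sketched as follows; the grid representation and the half-cell step size are assumptions for illustration, not details of the project's implementation:

```python
import math

def cast_ray(grid, x, y, angle, max_range):
    """March along a ray cell by cell until an occupied cell or max_range.

    `grid` is a 2D list of 0 (free) and 1 (occupied); coordinates are in
    cell units. Returns the distance traveled, i.e. the range reading the
    laser would hypothetically report from (x, y) along `angle`."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    step = 0.5  # sub-cell step so thin obstacles are not skipped
    while dist < max_range:
        cx, cy = int(x + dist * dx), int(y + dist * dy)
        if not (0 <= cy < len(grid) and 0 <= cx < len(grid[0])):
            break  # left the map: treat like a maximum-range reading
        if grid[cy][cx]:
            return dist  # hit an occupied cell
        dist += step
    return min(dist, max_range)
```

Running this once per beam and per particle yields the hypothetical distance landscape described above.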

We define the range information of each particle, as well as the real scan results, to be distributions for which we want to calculate a similarity value. This similarity is calculated using a variant of the Earth Mover's Distance (EMD) [4]. EMD is a distance measure that reflects the minimal amount of work that must be performed to transform one distribution into the other by moving ``distribution mass'' around, and it captured the similarities between hypothetical and real scans well.
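For one-dimensional histograms of equal total mass, EMD reduces to the accumulated absolute difference of the running sums, which the following sketch exploits; the exact variant used in the project may differ, and equal-mass normalization is assumed here:

```python
def emd_1d(hist_a, hist_b):
    """Earth Mover's Distance between two 1-D distributions of equal mass.

    In one dimension the optimal transport plan simply pushes surplus mass
    to the neighboring bin, so EMD equals the L1 distance between the
    running sums (the CDFs) of the two histograms."""
    assert len(hist_a) == len(hist_b)
    carry, work = 0.0, 0.0
    for a, b in zip(hist_a, hist_b):
        carry += a - b        # mass that must still be moved past this bin
        work += abs(carry)    # moving it one bin over costs |carry|
    return work
```

Identical histograms yield a distance of zero, and the distance grows with how far mass has to travel, which is exactly the behavior wanted when comparing a hypothetical scan to the real one.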

Adaptive Random Particle MCL

It is common practice to inject a certain number of random particles into the MCL importance sampling step in order to recover from global localization failures or from the (usually unlikely) kidnapped robot problem. In the simplest case one just adds a fixed number of random particles at each iteration. [3] proposes a method where the percentage of random particles is based on an estimate of the localization performance: the better we localize, the fewer random particles are added. The algorithm we implemented is adaptive in the sense that it keeps track of the short-term and the long-term average of the measurement likelihood and samples random particles based on the ratio between them. The percentage of random particles added in each iteration is given by $ \max\{0.1, 1-w_{\rm {short}}/w_{\rm {long}}\}$ . If the short-term likelihood $ w_{\rm {short}}$ is greater than or equal to the long-term likelihood $ w_{\rm {long}}$ , then the quotient $ w_{\rm {short}}/w_{\rm {long}}$ is $ \geq 1$ and only the minimum percentage of random particles (10% in our case) is added. However, if the short-term likelihood is worse than the long-term likelihood, random samples are added in proportion to how far the quotient falls below one. The average likelihoods are updated after each iteration according to

$\displaystyle w_{\rm {long}}$ $\displaystyle =w_{\rm {long}}+\alpha_{\rm {long}}(w_{\rm {average}}-w_{\rm {long}})$    
$\displaystyle w_{\rm {short}}$ $\displaystyle =w_{\rm {short}}+\alpha_{\rm {short}}(w_{\rm {average}}-w_{\rm {short}}) \;$,    

where $ w_{\rm {average}}$ is the average measurement likelihood of all the particles in the current iteration and $ \alpha_{\rm {long}}$ and $ \alpha_{\rm {short}}$ are decay rates; they are $ 0.05$ and $ 0.2$ in our case, respectively.
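These updates, together with the random-particle fraction, can be sketched as follows; the floor of 10% is enforced with a max, following the augmented-MCL rule of [3], and the dictionary-based state is an illustrative choice, not the project's data structure:

```python
def update_random_fraction(w_avg, state, a_short=0.2, a_long=0.05, floor=0.1):
    """Update exponential averages of the measurement likelihood and return
    the fraction of random particles to inject this iteration.

    `w_avg` is the average measurement likelihood of all particles;
    `state` holds w_short and w_long across iterations."""
    state["w_long"] += a_long * (w_avg - state["w_long"])
    state["w_short"] += a_short * (w_avg - state["w_short"])
    # When recent likelihoods drop below the long-term trend, inject more
    # random particles; never fall below the 10% floor.
    return max(floor, 1.0 - state["w_short"] / state["w_long"])
```

Because `a_short > a_long`, the short-term average reacts faster to a sudden drop in likelihood, which is what triggers the extra random particles.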

Software Architecture

The general MCL algorithm is implemented in the abstract class MCLocalizer; this includes importance sampling, the motion model, and the methods for reestimating the poses as well as fetching the hypothetical poses. Depending on the robot on which MCL is run, different sensors are available and thus different measurement likelihood functions are required. For that reason there exist two child classes of MCLocalizer, namely GazeboLocalizer and RoombaLocalizer. The former implements the laser-based measurement model whereas the latter will be used for the vision-based measurement model in the next project.

Localization is clearly a perceptual task and thus belongs to the perception module of the robot control loop. Therefore, an instance of MCLocalizer is encapsulated inside MPerceptor and all localization queries are delegated through MPerceptor to MCLocalizer (MPerceptor is a child class of Perceptor that is enhanced with localization methods).
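A Python sketch of this class hierarchy (the class names follow the report, but the method signatures and placeholder likelihoods are our assumptions):

```python
from abc import ABC, abstractmethod

class MCLocalizer(ABC):
    """Generic MCL: importance sampling, motion model, pose reestimation.
    Subclasses supply the sensor-specific measurement likelihood."""

    def reestimate(self, odometry, measurement):
        # Motion update, weighting and importance resampling would live
        # here; omitted in this sketch.
        ...

    @abstractmethod
    def measurement_likelihood(self, pose, measurement):
        """Sensor-specific likelihood of `measurement` at `pose`."""

class GazeboLocalizer(MCLocalizer):
    """Laser-based measurement model used in the Gazebo simulation."""
    def measurement_likelihood(self, pose, measurement):
        return 1.0  # placeholder: compare ray-cast ranges to the real scan

class RoombaLocalizer(MCLocalizer):
    """Vision-based measurement model planned for the real robot."""
    def measurement_likelihood(self, pose, measurement):
        return 1.0  # placeholder for the upcoming vision model
```

Keeping the likelihood function abstract is what allows the same MCL core to serve both the simulated and the real robot.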

Visualization

For visualization purposes we draw the map, the particles, the robot's true pose and the most likely particle in a window after each MCL iteration. Particles are drawn as spiky triangles where the spike indicates the particle's bearing. The color of a triangle reflects the particle's likelihood relative to all the other particles: the redder a triangle, the higher the likelihood; the bluer, the lower. Around the most likely particle we draw a red circle. The robot's true pose is indicated by a transparent green arrow where the arrow shows the robot's bearing. Fig. 2 shows an example of this visualization.

Figure 2: Visualization of MCL in Gazebo. Spiky triangles are particles, where the color indicates likelihood of a particle relative to all the other particles; blue being an unlikely particle and red being a likely one. The red circle indicates the most likely particle and the robot's true pose is denoted by the green transparent arrow.
\includegraphics[width=0.35\paperwidth]{vis}

EVALUATION

In order to evaluate the performance of our localizer we ran MCL in Gazebo and let the robot wander around randomly. While the robot was moving and the localizer was producing pose estimates we recorded the Euclidean distance between the robot's true position and the most likely estimated pose as well as the difference in bearing between these two poses. At the same time we continuously plotted the true position and the estimated position on the map. This allowed us to visually judge the accuracy of our method and identify potential weaknesses. Figs. 3 and 4 show the true paths, the estimated paths and the estimation errors of a run with 150 particles and 1500 particles, respectively. On these maps the robot's true path is drawn as green ``train tracks'' and the estimated path as red lines.

Notice how it takes more MCL iterations to converge when the size of the particle set is small. Even though MCL runs faster with fewer particles, that is, it performs more iterations per second, the overall time MCL needed to converge was longer with 150 particles than with 1500 particles.

The results of these two runs are not very representative of the overall performance of MCL because of the stochastic nature of the algorithm. Therefore we present results of five runs with 1500 and 150 particles, respectively, in Sec. VI-B. If we define ``correct pose estimation'' as an estimation error of $ \leq 1$ meter in position and $ \leq 5$ degrees in bearing, then our MCL implementation localized correctly during 84.74% of the first 800 iterations with 1500 particles and during 74.55% of the first 1600 iterations with 150 particles. It is reasonable to assume that these percentages would be even higher if we continued recording the estimation errors in our experiments, because the influence of the initial convergence phase on the overall performance decreases as more measurements are recorded.
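The correctness percentage used above can be computed from the recorded error log as in this sketch; the tuple-based log format is an assumption for illustration:

```python
def percent_correct(errors, pos_tol=1.0, ang_tol=5.0):
    """Percentage of iterations whose pose estimate counts as correct:
    position error <= pos_tol meters AND bearing error <= ang_tol degrees.

    `errors` is a list of (position_error_m, bearing_error_deg) pairs,
    one per MCL iteration."""
    if not errors:
        return 0.0
    good = sum(1 for d, a in errors if d <= pos_tol and abs(a) <= ang_tol)
    return 100.0 * good / len(errors)
```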

Figure 3: Top: Map of the simulated environment with the robot's true path drawn as green ``train tracks'' and the estimated path as red lines. Bottom: Plots of the pose estimation error with 1500 particles over subsequent MCL iterations. The errors in the plot correspond to the run shown on the map.
\includegraphics[width=0.35\paperwidth]{MclDemo_1500s_0690}  
\includegraphics[width=0.35\paperwidth]{1500s}  

Figure 4: Top: Map of the simulated environment with the robot's true path drawn as green ``train tracks'' and the estimated path as red lines. Bottom: Plots of the pose estimation error with 150 particles over subsequent MCL iterations. The errors in the plot correspond to the run shown on the map.
\includegraphics[width=0.35\paperwidth]{MclDemo_0150s_2294}  
\includegraphics[width=0.35\paperwidth]{150s}  

DISCUSSION

The general idea of MCL is relatively simple to understand and implement. We found that getting ``some'' localizer working is not that difficult, but getting it to work most of the time and with high precision involves a lot of parameter tweaking and optimization.

Motion Model Parameter Estimation

The first set of parameters we needed to find were the motion model parameters $ (\alpha_1,\alpha_2,\alpha_3,\alpha_4)$ . They are specific to the robot and, to some degree, to the particular environment the robot is acting in. In Sec. VI-A we describe how these parameters can be found by maximizing a mathematical expression. According to this maximization we found the parameters to be $ \alpha_1=5.651$ , $ \alpha_2=0.002$ , $ \alpha_3=0.0$ and $ \alpha_4=0.05$ . We experimented with these values in Gazebo and concluded that $ \alpha_1$ is far too large in practice. The values that gave the best results experimentally were $ \alpha_1=0.05$ and $ \alpha_3=0.15$ .

Percentage of Random Particles

The number of added random particles is mainly determined by the decay rates $ \alpha_{\text{long}}$ and $ \alpha_{\text{short}}$ . If a decay rate is high, MCL reduces the number of random particles rapidly when the recent average likelihood is good (which is desirable), but at the same time it becomes sensitive to measurement noise and adds random particles too rashly when the recent likelihood is bad. Hence, by choosing values for $ \alpha_{\text{long}}$ and $ \alpha_{\text{short}}$ we face a trade-off between low sensitivity with slow convergence and fast convergence with high sensitivity. A good combination of $ \alpha_{\text{long}}$ and $ \alpha_{\text{short}}$ reduces this effect so that MCL converges relatively fast but is not sensitive to short-term measurement noise. Deciding on these values turned out to be difficult, especially because of the non-deterministic nature of MCL. We experimentally found that $ \alpha_{\text{long}}=0.05$ and $ \alpha_{\text{short}}=0.2$ work reasonably well for us.

Computational Costs

Another problem we faced was the computational cost of calculating the measurement likelihood for a large number of particles. We implemented two optimizations in the likelihood function that reduced the amount of computation, but we still did not achieve more than 3-4 MCL iterations per second on a dual-core P4 3GHz computer.

Particle Groups

The computations inside the likelihood function could be reduced further by exploiting the fact that, once MCL has converged, we often calculate likelihoods for particles that are so close to each other that their likelihoods barely differ. We could assign the likelihood of one particle to its very close neighbors, thus reducing the number of measurement likelihoods that are actually calculated.
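One way to realize this grouping is to bucket particles on a coarse pose grid and compute the expensive likelihood only once per bucket; the cell sizes below are illustrative, not tuned values from the project:

```python
def group_likelihoods(particles, likelihood, cell=0.25, ang_cell=0.17):
    """Compute measurement likelihoods once per pose bucket and share the
    value among near-identical particles.

    Particles are (x, y, theta) tuples; particles falling into the same
    position/angle cell reuse the first cell member's likelihood, which
    avoids repeating the expensive ray-casting for near-duplicates."""
    cache = {}
    weights = []
    for x, y, theta in particles:
        key = (round(x / cell), round(y / cell), round(theta / ang_cell))
        if key not in cache:
            cache[key] = likelihood((x, y, theta))  # one ray-cast per bucket
        weights.append(cache[key])
    return weights
```

After convergence most particles share a handful of buckets, so the number of likelihood evaluations drops far below the particle count.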

Adaptive Particle Set Size

Another approach to speed optimization consists of adapting the size of the particle set over time. This extension to MCL has already been proposed in the literature [5] and is known as KLD-sampling, where KLD stands for Kullback-Leibler divergence.
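As a sketch, KLD-sampling chooses the particle count from the number $k$ of histogram bins the particles currently occupy, using the Wilson-Hilferty approximation given in [5]; the error bound `epsilon` and the normal quantile `z` are free parameters:

```python
import math

def kld_sample_bound(k, epsilon=0.05, z=2.326):
    """Number of particles needed so that, with the confidence encoded by
    the normal quantile z (z ~ 2.326 for 99%), the KL divergence between
    the sampled distribution and the true posterior stays below epsilon.

    `k` is the number of histogram bins occupied by the particle set;
    fewer occupied bins (a converged, compact set) means fewer particles."""
    if k <= 1:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return int(math.ceil((k - 1) / (2.0 * epsilon)
                         * (1 - a + math.sqrt(a) * z) ** 3))
```

The bound grows with the spread of the particle set, so a globally uncertain filter automatically gets many particles while a converged one runs with few.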

CONCLUSION

In this work we developed a Monte-Carlo localizer based on laser range data and tested it in the Gazebo simulator. Our tests showed that the MCL implementation localizes the robot very accurately most of the time. When it loses the correct pose, MCL converges again after a short time. The convergence time is closely related to the size of the particle set; the larger the number of particles, the faster it converges.

APPENDIX


Finding the Motion Model Parameters

As noted in Sec. II-B, we observed from the odometry error measurements that the rotational error resembles a Gaussian distribution and that there is nearly no translational error. Following the odometry motion model in [3], we add noise to the three move operations in the following way:

$\displaystyle \hat{\delta}_{\text{rot1}}$ $\displaystyle = \delta_{\text{rot1}} - \nu_{\text{rot1}}$    
$\displaystyle \hat{\delta}_{\text{trans}}$ $\displaystyle = \delta_{\text{trans}} - \nu_{\text{trans}}$    
$\displaystyle \hat{\delta}_{\text{rot2}}$ $\displaystyle = \delta_{\text{rot2}} - \nu_{\text{rot2}}$    

where

$\displaystyle \nu_{\text{rot1}}$ $\displaystyle \sim \mathcal{N}(\nu_{\text{rot1}},0,\alpha_1\mid \delta_{\text{rot1}}\mid +\alpha_2 \delta_{\text{trans}})$    
$\displaystyle \nu_{\text{trans}}$ $\displaystyle \sim \mathcal{N}(\nu_{\text{trans}},0,\alpha_3\delta_{\text{trans}}+\alpha_4\left(\mid \delta_{\text{rot1}}\mid+\mid \delta_{\text{rot2}}\mid\right))$    
$\displaystyle \nu_{\text{rot2}}$ $\displaystyle \sim \mathcal{N}(\nu_{\text{rot2}},0,\alpha_1\mid \delta_{\text{rot2}}\mid +\alpha_2 \delta_{\text{trans}})$    

and $ \mathcal{N}(\nu,0,\sigma)$ is the zero-mean Gaussian (normal) distribution with standard deviation $ \sigma$ . We want to find values for $ (\alpha_1,\alpha_2,\alpha_3,\alpha_4)$ that result in Gaussian distributions that best fit our observed rotational and translational errors. Formally we want to maximize

\begin{multline*}
\operatornamewithlimits{argmax}_{\alpha_1,\alpha_2} \prod^N_{i=1} \mathcal{N}(\epsilon^{(i)}_{\text{rot1}},0,\alpha_1\mid \delta^{(i)}_{\text{rot1}}\mid +\alpha_2 \delta^{(i)}_{\text{trans}})\\
\cdot\,\mathcal{N}(\epsilon^{(i)}_{\text{rot2}},0,\alpha_1\mid \delta^{(i)}_{\text{rot2}}\mid +\alpha_2 \delta^{(i)}_{\text{trans}})
\end{multline*}

and

$\displaystyle \operatornamewithlimits{argmax}_{\alpha_3,\alpha_4} \prod^N_{i=1} \mathcal{N}(\epsilon^{(i)}_{\text{trans}},0,\alpha_3\delta^{(i)}_{\text{trans}}+\alpha_4\left(\mid \delta^{(i)}_{\text{rot1}}\mid+\mid \delta^{(i)}_{\text{rot2}}\mid\right))$    

where $ N$ is the number of collected odometry error measurements and $ \epsilon^{(i)}_{\text{rot1}}$ , $ \epsilon^{(i)}_{\text{trans}}$ and $ \epsilon^{(i)}_{\text{rot2}}$ are the $ i$ -th error measurements of the respective move operation.
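As an illustration of this maximization, the following sketch fits $\alpha_1$ alone by a simple grid search over candidate values, assuming $\alpha_2 = 0$ as observed in our measurements; gradient ascent would replace the grid search in practice:

```python
import math

def fit_alpha1(errors, deltas, candidates):
    """Pick the alpha_1 maximizing the likelihood of the observed rotation
    errors, assuming alpha_2 = 0 so that sigma_i = alpha_1 * |delta_i|.

    `errors` are the measured rotation errors, `deltas` the corresponding
    odometric rotations, `candidates` the alpha_1 values to try."""
    def log_lik(a):
        total = 0.0
        for e, d in zip(errors, deltas):
            sigma = a * abs(d)
            if sigma <= 0:
                return float("-inf")  # degenerate Gaussian: reject
            # log of the zero-mean Gaussian density N(e; 0, sigma)
            total += (-0.5 * math.log(2 * math.pi * sigma ** 2)
                      - e ** 2 / (2 * sigma ** 2))
        return total
    return max(candidates, key=log_lik)
```

On synthetic data whose errors scale linearly with the rotation, the search recovers the generating proportionality constant.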


Performance of MCL over several runs

The plots in Fig. 5 and 6 show the pose estimation error (Euclidean distance between positions and difference in bearing) over subsequent MCL iterations of five runs with 1500 and 150 particles, respectively. In each run the robot started off at a different, random position. We observe that with 150 particles the algorithm takes longer to converge than with 1500 particles. The average over only five runs is misleading in the sense that if the algorithm localized correctly in four runs over a specific period of iterations but poorly in one, the average error in this period is still high. In the upper plot of Fig. 5 we can observe this effect between iterations $ \approx$ 480 and $ \approx$ 630. For this reason we also plot the median, which gives a less distorted picture of the estimation errors.

Figure 5: Top: Pose estimation errors over five runs of MCL with 1500 particles where the errors are averaged per iteration. Bottom: Same underlying data as in the top plot but in this case the median is shown instead of the average.
\includegraphics[width=0.35\paperwidth]{1500s_mean_of5_to800}  
\includegraphics[width=0.35\paperwidth]{1500s_median_of5_to800}  

Figure 6: Top: Pose estimation errors over five runs of MCL with 150 particles where the errors are averaged per iteration. Bottom: Same underlying data as in the top plot but in this case the median is shown instead of the average.
\includegraphics[width=0.35\paperwidth]{150s_mean_of5_to1600}  
\includegraphics[width=0.35\paperwidth]{150s_median_of5_to1600}  

Bibliography

1
Jonas N. Schwertfeger.
Monte-Carlo localization in Gazebo using a laser range sensor, 2006.

2
D. Fox, W. Burgard, F. Dellaert, and S. Thrun.
Monte carlo localization: Efficient position estimation for mobile robots.
In Proceedings of the National Conference on Artificial Intelligence, 1999.

3
Sebastian Thrun, Wolfram Burgard, and Dieter Fox.
Probabilistic Robotics.
MIT Press, 2005.

4
Y. Rubner, C. Tomasi, and L. J. Guibas.
A metric for distributions with applications to image databases.
In Proceedings of the 1998 IEEE International Conference on Computer Vision, pages 59-66, 1998.

5
D. Fox.
KLD-sampling: Adaptive particle filters.
In Advances in Neural Information Processing Systems 14. MIT Press, 2001.
