i.e., wheel velocities) to act as the metric reference. The general diagram of the algorithm proposed in this paper is shown in Figure 1. It shows a clear division of the processes involved in obtaining the pose of the robot. First, we denote as "Initialization of Pose and Geometry" those processes necessary to start up the system, namely obtaining the 3D model of the robot and the initial pose it occupies. The initialization consists of a batch processing algorithm in which the robot is commanded to follow a certain trajectory so that the camera can track some points of the robot's structure under different viewpoints, jointly with the recording of the odometry information. All this information is combined to give the 3D model of the robot and the initial pose it occupies; a sketch of this batch computation is given below.

Figure 1. General diagram of the proposed localization system using a vision sensor and odometry readings.
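As a rough illustration of the kind of batch computation the initialization performs, the sketch below triangulates one point of the robot's structure from its tracked projections under several robot poses known from odometry. It assumes a calibrated pinhole camera with intrinsics K and per-frame camera-from-robot transforms already composed from the fixed camera pose and the odometry readings; the function name and the DLT formulation are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch: triangulate a point rigidly attached to the robot
# from its image projections under several robot poses known from odometry.
# Because the camera is static and the robot moves, each frame yields a
# different camera-from-robot transform, so standard DLT triangulation applies.
import numpy as np

def triangulate_dlt(K, cam_from_robot_list, pixels):
    """Recover a 3D point expressed in the robot's body frame.

    K                  : 3x3 camera intrinsics (assumed calibrated).
    cam_from_robot_list: list of 3x4 [R|t] transforms, one per frame,
                         combining the fixed camera pose with the
                         odometry-derived robot pose at that frame.
    pixels             : list of (u, v) projections tracked in the images.
    """
    rows = []
    for T, (u, v) in zip(cam_from_robot_list, pixels):
        P = K @ T  # 3x4 projection matrix for this frame
        # DLT constraints: u*(P[2]@X) - P[0]@X = 0 and likewise for v
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space vector: homogeneous point
    return X[:3] / X[3]        # dehomogenize: point in robot coordinates
```

Repeating this for every tracked point would yield the robot's 3D model; the initial pose could then be refined jointly with the structure, e.g., by nonlinear least squares.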
Given the initialization information, the second group of processes, named "Sequential Localization", provides the pose of the robot in a sequential manner. It is composed of a pose estimator driven by the odometry readings and a pose correction block that combines the estimated pose with image measurements to give a pose coherent with those measurements. This algorithm operates entirely on-line, so the pose is available at each time sample.

Both groups of processes are supplied with two main sources of information (a sketch of how they are combined follows the list):

Image measurements: the projections onto the camera's image plane of certain points of the robot's 3D structure. The measurement process is in charge of searching for coherent correspondences across images under the perspective changes caused by the movement of the robot.

Motion estimation of the robot: the odometry sensors on board the robot supply the localization system with a motion estimation that is accurate over short trajectories but prone to cumulative errors over long ones.
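To make the predictor-corrector structure concrete, here is a minimal sketch assuming a planar pose (x, y, theta), a unicycle odometry model, and an EKF-style correction against the image projection of one known point of the robot's structure. The choice of an EKF, the function names, and the fixed camera extrinsics (R_cw, t_cw) are assumptions for illustration, not necessarily the paper's estimator.

```python
# Minimal EKF-style predict/correct loop for a planar robot pose,
# illustrating the "Sequential Localization" block. All names are
# illustrative; the paper's actual estimator may differ.
import numpy as np

def predict(mu, Sigma, v, w, dt, Q):
    """Propagate the pose with odometry (v: linear, w: angular velocity)."""
    x, y, th = mu
    mu_pred = np.array([x + v*dt*np.cos(th), y + v*dt*np.sin(th), th + w*dt])
    F = np.array([[1.0, 0.0, -v*dt*np.sin(th)],
                  [0.0, 1.0,  v*dt*np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return mu_pred, F @ Sigma @ F.T + Q

def project_point(mu, p_robot, K, R_cw, t_cw):
    """Project a point rigidly attached to the robot into the image."""
    x, y, th = mu
    R_wr = np.array([[np.cos(th), -np.sin(th), 0.0],
                     [np.sin(th),  np.cos(th), 0.0],
                     [0.0, 0.0, 1.0]])
    p_world = R_wr @ p_robot + np.array([x, y, 0.0])
    p_cam = R_cw @ p_world + t_cw          # world -> camera frame
    uvw = K @ p_cam                        # pinhole projection
    return uvw[:2] / uvw[2]

def correct(mu, Sigma, z, p_robot, K, R_cw, t_cw, R_meas, eps=1e-6):
    """EKF update with a numerical Jacobian of the projection model."""
    h0 = project_point(mu, p_robot, K, R_cw, t_cw)
    H = np.zeros((2, 3))
    for i in range(3):                     # forward-difference Jacobian
        d = np.zeros(3); d[i] = eps
        H[:, i] = (project_point(mu + d, p_robot, K, R_cw, t_cw) - h0) / eps
    S = H @ Sigma @ H.T + R_meas
    Kg = Sigma @ H.T @ np.linalg.inv(S)    # Kalman gain
    return mu + Kg @ (z - h0), (np.eye(3) - Kg @ H) @ Sigma
```

In a loop, predict would run at every odometry sample and correct whenever a new image measurement arrives, matching the on-line operation described above.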
1.1. Previous Works

Despite the inherent potential of using external cameras to localize robots, relatively few attempts to do so can be found compared to the approach that places the camera on board the robot [4, 5]. However, some examples of robot localization with external cameras can be found in the literature, where the robot is equipped with artificial landmarks, either active [6, 7] or passive ones [8, 9].
In other works a model of the robot, either geometric or appearance-based [10, 11], is learnt prior to the tracking task. In [12, 13], the positions of static and dynamic objects are obtained by fusing multiple cameras inside an occupancy grid; an appearance model is used afterwards to ascertain which object corresponds to each robot. Regardless of the technique used for tracking, the common point of many of the proposals on this topic is that rich prior knowledge is nearly always obtained before tracking, in a supervised task.