Cooperative Robots Architecture For An Assistive Scenario
L. Ciuccarelli∗ , A. Freddi∗ , S. Longhi∗ , A. Monteriù∗ , D. Ortenzi† and D. Proietti Pagnotta∗
∗ Dipartimento di Ingegneria dell’Informazione, Università Politecnica delle Marche, Ancona, Italy
Email: {a.freddi, sauro.longhi, a.monteriu}@univpm.it, d.proietti@pm.univpm.it
† Dipartimento di Ingegneria dell’Energia Elettrica e dell’Informazione, Università di Bologna, Bologna, Italy
Email: davide.ortenzi@unibo.it
III. METHODS
A. SWC Localization Algorithm

This phase is articulated in two steps. Firstly, an Unscented Kalman Filter (UKF) is applied to remove the noise and to determine the position of the SWC with respect to an absolute coordinate reference frame. The filter carries out a sensor fusion of the data gathered from the encoders, the IMU and the webcam. In detail, the IMU is exploited to improve the heading estimation, while the webcam is used to compensate the drift in the pose estimate obtained from the proprioceptive sensors (encoders and IMU). QR codes placed on the ceiling are used as landmarks: each one encodes its true pose with respect to the absolute reference frame in the form X#Y#Z#room. When a code is framed by the webcam, this information feeds the UKF, which corrects the SWC pose estimate. Secondly, a Monte Carlo localization algorithm employs the data from the laser scanner and the output of the UKF to localize the SWC in the map.
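To make the correction step concrete, the following minimal sketch shows how a decoded ceiling landmark could feed a UKF update. The X#Y#Z#room payload format comes from the paper; everything else (the FilterPy library, the planar state [x, y, theta], the unicycle motion model and all noise values) is an illustrative assumption, not the SWC implementation.

    import numpy as np
    from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

    def parse_qr_payload(payload):
        """Decode a ceiling landmark of the form 'X#Y#Z#room' (format from the paper)."""
        x, y, z, room = payload.split("#")
        return float(x), float(y), float(z), room

    def fx(state, dt, v=0.0, w=0.0):
        # Illustrative unicycle odometry prediction; state = [x, y, theta].
        x, y, th = state
        return np.array([x + v * np.cos(th) * dt,
                         y + v * np.sin(th) * dt,
                         th + w * dt])

    def hx(state):
        # The QR landmark provides an absolute (x, y) position measurement.
        return state[:2]

    points = MerweScaledSigmaPoints(n=3, alpha=0.1, beta=2.0, kappa=0.0)
    ukf = UnscentedKalmanFilter(dim_x=3, dim_z=2, dt=0.1, hx=hx, fx=fx, points=points)
    ukf.x = np.zeros(3)           # initial pose estimate
    ukf.P = np.eye(3) * 0.5       # illustrative initial covariance
    ukf.R = np.eye(2) * 0.05      # illustrative QR measurement noise

    ukf.predict(v=0.3, w=0.05)    # propagate with encoder/IMU odometry
    x, y, z, room = parse_qr_payload("2.5#4.0#3.0#lab")
    ukf.update(np.array([x, y]))  # drift correction from the absolute landmark

In the real system the prediction step is driven by the encoder and IMU odometry, while the Monte Carlo stage then matches the laser scan against the map starting from the corrected pose.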
B. SWC Navigation Algorithm
The aim of the navigation algorithm is to compute the velocity commands to send to the wheels controller. The inputs are the estimated pose of the SWC from the localization algorithm, the transformations between the considered frames, and the laser scanner data. The algorithm is based on the Dynamic Window Approach (DWA), which associates the scan map to a grid, where each cell holds a value identifying the probability of its occupation. The trajectory is generated considering the original global path, the presence of obstacles and the location of the goal.
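As a rough illustration of the DWA velocity selection described above, the sketch below samples (v, w) pairs inside a dynamic window, forward-simulates short unicycle trajectories, rejects those crossing probably-occupied grid cells, and returns the lowest-cost command. Window limits, sampling resolutions and cost weights are illustrative assumptions rather than the SWC parameters.

    import numpy as np

    def rollout(pose, v, w, dt=0.1, steps=15):
        """Forward-simulate a unicycle trajectory for one (v, w) candidate."""
        x, y, th = pose
        traj = []
        for _ in range(steps):
            x += v * np.cos(th) * dt
            y += v * np.sin(th) * dt
            th += w * dt
            traj.append((x, y))
        return np.array(traj)

    def dwa_step(pose, goal, occupancy, v_max=0.6, w_max=1.2, occ_thresh=0.5):
        """Pick the (v, w) command with the best goal-distance/speed score."""
        best, best_cost = (0.0, 0.0), np.inf
        for v in np.linspace(0.05, v_max, 8):        # sampled dynamic window
            for w in np.linspace(-w_max, w_max, 15):
                traj = rollout(pose, v, w)
                # discard trajectories that cross likely-occupied cells
                if any(occupancy(x, y) > occ_thresh for x, y in traj):
                    continue
                goal_cost = np.hypot(*(traj[-1] - goal))  # distance to goal at rollout end
                speed_cost = v_max - v                    # prefer faster forward motion
                cost = goal_cost + 0.1 * speed_cost
                if cost < best_cost:
                    best, best_cost = (v, w), cost
        return best   # velocity command for the wheels controller

    # Usage: occupancy(x, y) returns the occupation probability of the grid
    # cell containing (x, y), as built from the laser scan.
    # v, w = dwa_step(pose=(0.0, 0.0, 0.0), goal=np.array([2.0, 1.0]),
    #                 occupancy=lambda x, y: 0.0)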
C. RW Pick and Place Algorithm

A visual servoing pick and place is carried out by the Baxter arm once the SWC has completed the navigation. In detail, the algorithm is articulated in the following steps. Firstly, the object location is estimated in the camera frame from the image data acquired via the camera mounted at the end of the Baxter arm: in order to find the object center of gravity, the RGB image is converted into a black and white one, the object contours are extracted, and the center of gravity is then computed via the moments calculus. Secondly, the position of the object is expressed in the Baxter base frame, by knowing the arm joint angles, its height from the object and the camera calibration factor, using Eq. (1) in [8]. Finally, the desired object is picked by the arm and placed on the user's hand. This is accomplished by means of the external Kinect sensor, which is able to track online the skeleton of the user, and thus the location of his hand.
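A possible OpenCV realization of the centroid step is sketched below: the image is thresholded to black and white, the largest contour is extracted, and the center of gravity follows from the raw image moments (cx = m10/m00, cy = m01/m00). The final pixel-to-plane mapping shown here is a generic pinhole stand-in for Eq. (1) in [8]; the threshold value, calibration factor k and principal point are hypothetical parameters.

    import cv2
    import numpy as np

    def object_centroid(bgr_image, thresh=127):
        """Center of gravity of the largest blob via image moments."""
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        found = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        contours = found[0] if len(found) == 2 else found[1]  # OpenCV 4 vs. 3
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)
        m = cv2.moments(largest)
        if m["m00"] == 0:
            return None
        return m["m10"] / m["m00"], m["m01"] / m["m00"]       # (cx, cy) in pixels

    def pixel_to_plane(cx, cy, height, k, cx0, cy0):
        """Illustrative pinhole back-projection onto the object plane.
        k is a hypothetical calibration factor (meters per pixel per meter
        of camera height); (cx0, cy0) is the principal point.
        Stand-in for Eq. (1) in [8]."""
        return (cx - cx0) * k * height, (cy - cy0) * k * height

The point obtained in the camera frame is then brought into the Baxter base frame through the arm forward kinematics, exploiting the joint angles as the paper notes.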
IV. EXPERIMENTAL TRIAL

The proposed system is evaluated both in simulation and in a real scenario. Regarding the simulation, the 3D model of the SWC is developed in the Gazebo 7 simulator, complete with the webcam, IMU and laser scanner models. Moreover, 9 QR codes have been positioned in the simulated laboratory, with a density of about 0.18 codes/m² (Figure 1(a)). A graphical user interface has been realized, by which the user can choose the desired object (Figure 1(b)). Once the object is chosen, the SWC autonomously navigates to the desired RW, where the robotic arm picks the user-selected object and places it in the neighborhood of the wheelchair. The user can then choose a new point on the navigation map. Finally, the real system is tested in our laboratory, as shown in Figure 2.

REFERENCES

[1] D. Feil-Seifer and M. J. Mataric, “Defining socially assistive robotics,” in IEEE 9th International Conference on Rehabilitation Robotics (ICORR), 2005, pp. 465–468.
[2] A. Sciutti and G. Sandini, “Interacting with robots to investigate the bases of social interaction,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 12, pp. 2295–2304, 2017.
[3] L. Ciabattoni, F. Ferracuti, G. Foresi, A. Freddi, A. Monteriù, and D. P. Pagnotta, “Real-time fall detection system by using mobile robots in smart homes,” in IEEE 7th International Conference on Consumer Electronics-Berlin (ICCE-Berlin), 2017, pp. 15–16.
[4] G. Chance, A. Camilleri, B. Winstone, P. Caleb-Solly, and S. Dogramadzi, “An assistive robot to support dressing-strategies for planning and error handling,” in IEEE 6th International Conference on Biomedical Robotics and Biomechatronics (BioRob), 2016, pp. 774–780.
[5] H. H. Le, M. J. Loomes, and R. C. Loureiro, “Group interaction through a multi-modal haptic framework,” in IEEE 12th International Conference on Intelligent Environments (IE), 2016, pp. 62–67.
[6] F. Achic, J. Montero, C. Penaloza, and F. Cuellar, “Hybrid BCI system to operate an electric wheelchair and a robotic arm for navigation and manipulation tasks,” in IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), 2016, pp. 249–254.
[7] G. Foresi, A. Freddi, A. Monteriù, D. Ortenzi, and D. P. Pagnotta, “Improving mobility and autonomy of disabled users via cooperation of assistive robots,” in IEEE International Conference on Consumer Electronics (ICCE), 2018, pp. 1–2.
[8] G. Foresi, A. Freddi, S. Iarlori, A. Monteriù, D. Ortenzi, and D. P. Pagnotta, “Human-robot cooperation via brain computer interface,” in IEEE 7th International Conference on Consumer Electronics-Berlin (ICCE-Berlin), 2017, pp. 1–2.