Modelling and Control of a Hexapod Robot


Intelligent Autonomous Systems

8th Semester
Fredrik Bajers Vej 7
Telephone 96 35 86 90
http://es.aau.dk

Title:
Modelling and Control of a Hexapod Robot

Theme:
Modelling and Control

Project period:
Spring 2009

Project group:
09gr832

Group members:
Mads Jensen
Rasmus Pedersen
Jeppe Juel Petersen
Casper Lyngesen Mogensen
Henrik Dalsager Christensen

Supervisor:
Ph.D. scholar Mads Sølver Svendsen

Publications: 7
Pages: 75 (113 including appendix)
Finished: 2nd of June, 2009

Abstract:
Robots with wheels or tracks are currently used for exploring dangerous areas in search and rescue missions. To improve the ability to traverse such terrain, it is proposed to use a walking robot. A randomly generated terrain is set up to represent a search and rescue terrain, and a hexapod walking robot is applied. A kinematic model of the hexapod robot is set up, and it is determined how the robot can be moved while sustaining static stability. A gait generator for tripedal walk is developed using artificial potential functions, and a trigonometric inverse kinematics solver is used for translating the gait into servo angles. Collisions with the surface are detected using a 3D model, and a system is set up to avoid unwanted terrain contacts. The robot is controlled in open loop to evaluate the models. While traversing the terrain, singularities can appear when solving the inverse kinematics. It was attempted to modify the gait to avoid these singularities; however, this was not achieved satisfactorily.
Preface
This report concerns the modelling and control of a hexapod robot. The project was carried out in the period from the 1st of February to the 2nd of June 2009 at Aalborg University by five 8th semester students in the Section of Automation and Control at the Department of Electronic Systems. The project proposal was given by Ph.D. scholar Mads Sølver Svendsen from the Mobile Robotics Group at Aalborg University.

The report is written as a research paper intended to support further development on the subject of modelling and controlling hexapod robots.

The source code for both the report and the developed software can be found on the enclosed CD (see Appendix H), together with other materials used in the making of the project.

Throughout the project, MATLAB has been used for data processing and presenting results. Simulink and Real Time Target have been used for implementing, modelling and simulating the developed models.

In Chapter 7 a complete list of the acronyms used in the report is presented.

Group 09gr832, Aalborg University

Casper Lyngesen Mogensen Mads Jensen

Rasmus Pedersen Henrik Dalsager Christensen

Jeppe Juel Petersen


Contents

1 Introduction 1
1.1 Preliminary Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

2 Preliminary Knowledge 3
2.1 Search and Rescue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.2 Robot Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.3 Preliminary Analysis of Six-Legged Gaits . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

3 Objectives and Delimitations 9


3.1 System Functionality Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

4 Modelling 13
4.1 Introduction to Modelling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4.2 Terrain Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4.3 Selection of Coordinate Frames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.4 Kinematic Model of Hexapod Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.5 Dynamic Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.6 Test of Inverse Kinematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

5 System Design 29
5.1 System Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
5.2 Gait Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.3 Collision Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.4 Verifying Collision Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
5.5 Avoiding Collisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

6 Epilogue 73
6.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
6.2 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

Bibliography 75

7 Acronyms 77

Appendices 79
A Measurement and Simulation of Worst-Case ZMP Movement 81

B Finding the Center of Mass of Each Link 83

C Test of Inverse Kinematics 85

D User interface 95

E Motion Tracking Lab 99

F Robot limitations 103

G Configuration space 107

H CD 113
Chapter 1
Introduction

Throughout history, disasters, both naturally occurring and inflicted by humans, have troubled the population of the Earth. When a disaster strikes, urban regions are often struck the hardest, since the density of people is higher in urban than in rural regions. Buildings, skyscrapers and other structures can be turned into rubble and debris with people trapped inside, potentially making urban Search and Rescue (SAR) operations far more dangerous than rural SAR missions.

Estimates indicate that the urbanization of the world will continue, meaning more and more people will move into confined areas. Hence the need for urban SAR, due to both natural and human-inflicted disasters, will remain. Estimates show that the percentage of urban dwellers passed the 50% mark somewhere in 2008-2009 and will increase rapidly in the years to come [1].

Robots pose an attractive solution to the problem of finding missing persons in disaster areas while keeping the rescue personnel safe. However, using robots for SAR operations requires a precise model of, and advanced control mechanisms for, the robot, especially if the robots are to reach some degree of autonomy.

1.1 Preliminary Problem

A premapping of the environment improves the robot's chances of traversing the area in a potential search and rescue mission. It allows the control system of the robot to perform some sort of route planning through debris and rubble, but it is also the first step in making the robot more autonomous. The proposed scenario of the project is shown in Figure 1.1, with an autonomous robot facing a mapped/known environment.

To allow the robot to traverse the area, further knowledge of the robot, the environment, and general issues concerning both SAR and general robotics is analysed.

Figure 1.1: Robot and mapped landscape


Chapter 2
Preliminary Knowledge

In the following chapter an analysis of the elements used in the project is carried out. General knowledge of SAR is needed in order to understand the environment the robot works in, and general knowledge about locomotion is needed for moving the robot.

The robot used throughout the project is presented. Following the robot presentation, the knowledge of locomotion is used to determine a suitable gait to use when traversing the area.

The knowledge obtained in this chapter is used to determine a set of objectives which are to be reached during this project.

2.1 Search and Rescue

Shortly after a disaster, the primary objective is to find survivors hidden or trapped in the debris; however, the task of searching through the debris is both tedious and dangerous. The idea of letting robots perform the task has been suggested several times, and in recent times robots have actually been used. After the World Trade Center collapse in 2001, several robots took part in the search for survivors, and the backpack-sized robots used proved to have a unique capability for searching the debris for trapped people [2][3].

One of the robots' advantages in searching debris is that, due to their size, they can dive into the debris and search inside a pile of rubble where humans have no chance of going. Robots can work for several hours without getting tired, whereas human endurance is limited. When traversing unknown terrain, however, a robot needs an advanced system guiding it through the debris, otherwise it might get stuck or damaged. Hence the primitive robots in use today must often have at least one, and often several, persons working alongside them, placing humans in the danger zone [2].

Recent studies, trying to reduce the human-to-robot ratio, aim to improve the autonomy of the robots while integrating them in a larger network with an overall objective. The solutions often result in a human being the limiting factor in the search, as the human determines what is to be done and where [2][3].

2.1.1 Human-Robot Collaborative Search

While the optimal solution might be to have a large number of autonomous robots searching an area, systematically or at random, for people hidden in debris, a more likely system would be a joint operation between humans and robots, making the search and rescue operation robot-assisted.

Suppose the search area consists of an even surface of e.g. 20 by 20 meters. This area could be mapped into a computer and thereby given to the individual robots. Placing a robot in a fixed position somewhere in the map as a start position would be the beginning of a search operation. One possibility could be to provide the robot with some limitations indicating where it can go, hence constraining its movement to be inside the search area, as shown in Figure 2.1 (Left).

Now suppose that two points of interest are placed in the search area, as illustrated in Figure 2.1 (Center), and the goal for the robot is to find the two points. The robot could be left inside the area and allowed to search at random, over time finding the points. However, the operator could have some knowledge of where it might be wiser to begin the search, and perhaps a subarea in the search area is inaccessible to the robot. This could result in a map of the search area as shown in Figure 2.1 (Right).

Figure 2.1: (Left) A robot placed in a search area (Center) Simple route planning (Right) Complex route planning

The degree of autonomy implemented on the robot could include a navigational system, using the map of the
area and the constraints given by the operator to choose the fastest route to the point of interest.

When changing the scenario from a two-dimensional terrain to a three-dimensional one, navigation becomes more difficult. In the two-dimensional example the constraints given by the operator were fairly simple, but as the third dimension is added, the constraints must also be brought into a three-dimensional map. A terrain in three dimensions is more realistic as an environment involved in a SAR operation. Here the elevation of the terrain, caused by debris and rubble, can change with each step the robot and/or a person takes into the terrain. Furthermore the stability of the debris is easily compromised; when a robot traverses debris, each step is a potential terrain change waiting to happen.

In order to plan a path through uneven terrain, knowledge of the robot must be defined, since the possible paths depend on both the size of the robot and the type of locomotion it uses. The locomotion is important, and the right type of locomotion should always be chosen with consideration of the environment surrounding the robot.

2.1.2 Robot Locomotion

In order to traverse a given terrain the robot needs to be equipped with a system which provides locomotion. This can be done in many different ways, which can roughly be divided into three types [4].

• Rotational devices, such as wheels and tracks

• Legs, similar to those observed on animals

• Articulated bodies, similar to the body of a snake

The different types of locomotion are used in different fields. Wheels and tracks are preferred where the terrain is even; however, when the terrain becomes more uneven, the advantages of wheeled locomotion become somewhat obsolete [4]. Instead the advantages of legs make walking more useful in uneven terrain. The legged robot's advantage is the ability to position the legs with high precision. The joints of the legs give the robot more Degrees of Freedom (DoF), which can give the robot a better or more precise placement of a foot.

However, some terrains can be too uneven even for a legged robot, since the ground clearance below the robot is limited by the length of the legs. In such a case a robot with an articulated body could be preferable. The articulated body makes the robot able to move much like a snake, and the composition of the servos allows it to raise its Center of Mass (CoM) and improve its chances of traversing objects blocking its way [4].

The environment in a disaster zone is, much like a substantial part of the Earth, inaccessible to wheeled or tracked robots. Locomotion using legs or an articulated body is in most cases preferred; however, the articulated body often proves difficult to analyze and control compared to legged locomotion, which has been analyzed and constructed since some of the first machines were built more than 2000 years ago [5].

2.2 Robot Description

This section describes the robot available for this project, which is produced by Lynxmotion [6] and is of the type "AH3-R Walking Robot". It is a symmetric hexapod walker with a cylinder-shaped base chassis and 6 identical legs evenly distributed along the circumference, as illustrated in Figure 2.2.

The individual parts the robot consists of are called members or links. Links on a robotic system are typically connected through joints [7].

Each leg consists of 3 joints and 3 links. Each joint can be manipulated by one servo, providing 3 DoF for each leg and a total of 18 DoF for the robot. One of the servos connects the entire leg to the base chassis through a vertical axis, allowing the leg to rotate sideways in relation to the body. The two other servos manipulate the two other joints of the leg, with rotation about horizontal axes, one located close to the body and the other further away.

In Figure 2.3 the naming convention for the joints and the location of the rotational axes are indicated. All possible movements are caused directly by the servos. The names are adopted from insects, as their legs have a similar structure. The innermost joint is dubbed the coxa joint, as it is the joint which attaches the coxa link. The middle joint is dubbed the femur joint, and the outermost joint is named the tibia joint.

Figure 2.2: Illustration of the robot, showing its basic geometry. The body is circular (red in the figure), with legs attached at equal spacing.

Figure 2.3: Illustration of the robotic leg, with rotational axes added. The green axis (left) is named the coxa joint, the red axis (center) the femur joint, and the blue axis (right) the tibia joint.

2.2.1 The servos on the robot

The servo type (HS-645MG) used on the robot has an on-board controller which translates a Pulse Width Modulated (PWM) signal to a position, and each servo can be updated with new positions at 50 Hz. The general parameters of the servos can be seen in Table 2.1. With a total of 18 servos the current draw accumulates to 8.1 A when moving all servos at minimum load, and up to 36 A if all servos are stalled [8].

Servo parameters:
Operating voltage:              4.8 - 6 [V]
Torque provided:                9.6 [kg·cm]
Current consumption (idle):     9.1 [mA]
Current consumption (no load):  450 [mA]
Current consumption (stall):    2 [A]

Table 2.1: General servo parameters.

The electronics of the robot are also delivered by Lynxmotion and consist of the SSC-32 servo controller, which provides a low-level servo interface [6]. The SSC-32 provides 32 PWM output connections, intended for attaching servos or similar output devices. The interface for the SSC-32 is an RS232 connection which can be set at baud rates up to 115.2 kbit/s [6].
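As an illustration of this low-level interface, the MATLAB sketch below formats a position command for a single servo channel. The "#<channel>P<pulse width>T<time>" command syntax follows the SSC-32 documentation, while the port name, the 1500 µs center pulse and the 10 µs-per-degree scaling are assumptions made for this sketch, not values taken from the project; the sketch also uses MATLAB's classic serial interface.

    % Sketch: command one servo channel through the SSC-32 over RS232.
    % Assumed: 'COM1' as port, 1500 us = servo center, ~10 us per degree.
    function ssc32_set_angle(channel, angle_deg)
        pulse = round(1500 + 10 * angle_deg);            % angle -> pulse width [us]
        cmd   = sprintf('#%dP%dT500\r', channel, pulse); % move over 500 ms
        port  = serial('COM1', 'BaudRate', 115200);      % RS232 link to the SSC-32
        fopen(port);
        fprintf(port, '%s', cmd);                        % e.g. '#5P1600T500'
        fclose(port); delete(port);
    end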

2.3 Preliminary Analysis of Six-Legged Gaits

The purpose of this section is to analyze what kind of gait should be used to enable the hexapod robot to walk.

A gait is a description of how the robot moves its legs, wheels or whatever means of locomotion it uses. An important aspect of how a gait is constructed is whether it is statically and/or dynamically stable. Static stability implies that the CoM must lie within the Polygon of Stability (PoS) at all times [9, p. 285-286].

When discussing dynamic stability it is necessary to introduce the Zero Moment Point (ZMP). The ZMP can be defined as: "That point on the ground at which the net moment of the inertial forces and the gravity forces has no component along the horizontal axes" [10, p. 8]. To ensure dynamic stability the ZMP must lie within the PoS at all times and must not lie on the edges of the PoS [11, p. 91].

2.3.1 Six-legged gaits

When planning a gait and a trajectory for a six-legged robot, a natural place to start is by looking at insects. An insect's normal way of walking is a tripedal walk, where the legs are moved three at a time. When an insect moves its legs, the front and rear legs on one side are moved at the same time as the middle leg on the other side (illustrated in Figure 2.4).

The tripedal walk has the advantage that three legs are touching the ground at all times, which gives a triangular PoS. The advantage of having a triangular PoS is that it is possible to obtain a gait that is stable at all times. With only two point feet on the ground, the PoS is reduced to a line, which makes it very difficult to achieve stability.

Figure 2.4: Illustration of an insect gait. The legs colored red are touching the ground and the legs colored green are lifted. The dashed triangles are the PoS.

Another option for six-legged locomotion is the wave gait, where one leg is moved at a time. When using this gait, a six-legged robot will have five end points on the ground at all times, which implies that the PoS will be defined by the five end points, usually causing the PoS area to increase and reducing the risk of instability. Even though the wave gait makes it easier to ensure stability, the disadvantage is that it is a slower walk compared to the tripedal walk, as only one leg is moved at a time. It is also possible to construct other, arbitrary gaits. Examples could be an ad hoc gait where the legs are placed according to the shape of the terrain, or where the robot is required to climb over obstacles.
Chapter 3
Objectives and Delimitations

The project proposal presents the problem of making a hexapod robot able to traverse an unstructured terrain. The motivation of the project comes from the fact that SAR robots often need to traverse rubble and other unstructured terrain.

While modern SAR robots use wheels or tracks for locomotion, it is argued in Section 2.1 that legs are better suited for this type of terrain traversal. On the basis of the project proposal it is decided to focus on the locomotion of the robot through a known terrain, guided by directional commands from a human.

From this consideration the main project objective was found to be:

Design a system for the Lynxmotion hexapod robot which enables it to traverse an artifi-
cially created, unstructured terrain on the basis of directional commands from a human.

The problem gives rise to a range of solutions. To specify the problem and narrow the focus of the project, the following sub-objectives are proposed.

The robot must be able to traverse the entire length of the test terrain only by the help
of directional commands from a human.

Only the end points of the robot legs can be used to support the weight of the robot, no
other part of the robot is allowed to collide with the terrain.

3.0.2 Delimitation

The task of designing a control system that enables a SAR robot to traverse an uneven terrain consisting of debris and rubble is in this project delimited to designing a control system that enables the robot to traverse a known, artificially created terrain.

The terrain is created from 7.7 by 7.7 cm wooden beams of different lengths, mounted on four 75 by 75 cm plywood boards which can be arranged in different patterns. The artificial terrain can be mapped digitally, and by selecting the lengths of the wooden beams at random, the terrain will still be unstructured, simulating debris and rubble. The task of controlling the robot in an environment which can change under the weight of the robot, or is wet or slippery, is not considered. However, the composition of the map should be sufficiently complex to impose some constraints on the robot's movement.
set some constraints to the robots movement.

Another problem not taken into account during the project is how far the robot can see in the terrain. To give the robot more autonomy, cameras could be mounted on the robot. However, that solution is computationally intensive, and the process of creating the map "on-the-fly" is considered a project in itself. The project proposal suggests that a complete mapping of the environment is carried out before the robot is introduced. This makes two different solutions possible: one is to give the robot full knowledge of the map, since the data is present; another is to give the robot limited knowledge of the map, allowing it to see e.g. 1 meter in each direction.
For the purpose of this project it is chosen to give the robot complete knowledge of the mapped terrain.

To further narrow the range of solutions, the following delimitations are presented. These points are essentially specifications of the important aspects of the project and delimit the project from what is considered less relevant to the main objective.

The terrain will be mapped digitally and stored for use in the controller. No online
mapping of the terrain will be conducted.

The sensors used for the system will be the Motion Tracking Lab (MTL) cameras.

The system developed in the project is specially designed for the Lynxmotion Hexapod
robot, described in Section 2.2.

The robot will move only on the basis of human input, not autonomously.

No control routines will run on the robot hardware itself, the control software will run
on a separate PC.

The robot will not carry any payload.

3.1 System Functionality Outline

To complete the objectives stated above, while considering the project delimitations, some preliminary system functionalities can be outlined. The system needs to have complete kinematic awareness of the robot to be able to position the robot body and legs. A gait that can handle uneven terrain, and functionality enabling the robot to avoid collisions with the terrain, need to be developed.

It can be argued that the gait should by itself be able to ensure that the robot does not collide with the terrain, rendering any collision detection algorithm redundant. In short, the system should be able to supply the following functionality.

• Kinematic model including Inverse Kinematics (IK) for control of the robot in a global coordinate system.

• Gait generating algorithm based on directional inputs from a human.

• Collision detection and/or avoidance functionality.

How the system features are designed and implemented is described in the following parts of this report. First the kinematic modelling of the robot and terrain is described; then the actual system design regarding gait generation and collision detection and avoidance is described, along with other sub-features that were found necessary due to the design of the main features.
Chapter 4
Modelling

4.1 Introduction to Modelling

To be able to control the robot, models of both the robot and the environment it operates in need to be developed.

First coordinate frames are defined for all important parts of the system. The defined coordinate systems will be
used throughout the report. To be able to coordinate the movement of the robot in relation to the environment,
a kinematic model is created. The kinematic model consists of both the kinematic description of the robot and
its manipulators and an IK solution for the manipulators and the robot body itself. This makes it possible to
calculate the joint angles for the robot legs, for a given leg and robot configuration.

To be able to determine the stability of the robot, the dynamic properties of the robot must be considered. The stability of the robot is determined by the position of the CoM and the ZMP relative to the PoS. It is assumed that the robot will move so slowly that the vertical projection of the CoM on the ground and the ZMP will be roughly the same. Therefore only the CoM will be used to determine the stability of the robot. By using an assumed maximum movement of the ZMP, a minimum distance from the CoM to the edge of the PoS can be calculated, hence ensuring stability at all times. The assumed maximum movement of the ZMP is calculated on the basis of an assumed worst-case acceleration of the robot, as explained in Appendix A.

4.2 Terrain Model

The environment, which a robot would encounter in a SAR situation, is hard to replicate. Rubble will often be
present, and the ground can be unstable. For simplicity it is chosen to consider only a stable terrain with no
loose rubble or slippery surfaces.

It is desirable to use a terrain and a corresponding map which are reconfigurable, as this allows for having one terrain for development and another terrain for testing. Due to the size of the laboratories, the terrain is designed to be installed on a square board measuring 1.5 meters on each side. The unstructured terrain is simulated using blocks cut from 7.7 by 7.7 cm logs of wood. The wooden blocks are cut in 4 different heights: 5, 10, 15, and 20 cm. As a result the map consists of 5 discrete elevations ranging from 0 to 20 cm.

The distribution of the blocks can be used to build a discrete model of many terrain variations. As an example, a stair can be built by making the block height proportional to the position along an axis, and a wall can be created by aligning the tall blocks in a row. For this project it is chosen to create a uniform distribution of the blocks, as this models many different scenarios in a very confined area.

The uniform distribution is approximated by dividing the area into 7.7 by 7.7 cm squares and then iterating through each line of the map, assigning a band-limited discrete random value to each element. This produces a map as illustrated in Figure 4.1, and as the figure illustrates, the terrain is quite versatile in the challenges it presents for the robot. Around the unstructured area of the simulated terrain, an area of flat ground is mapped. This maps the flat lab floor around the terrain and allows for testing on flat ground.

Figure 4.1: Illustration of the terrain created by a MATLAB script assigning a bounded discrete random height to each element in the terrain.
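A minimal MATLAB sketch of such a map generation is shown below. The grid size and the five height levels follow the description above; the plain uniform draw is a simplification of the band-limited assignment used in the project.

    % Sketch: random block terrain, 1.5 x 1.5 m on a 7.7 x 7.7 cm grid,
    % heights drawn uniformly from {0, 5, 10, 15, 20} cm.
    cell_size = 77;                                      % grid cell size [mm]
    n         = floor(1500 / cell_size);                 % cells per side
    levels    = [0 50 100 150 200];                      % discrete heights [mm]
    map       = levels(randi(numel(levels), n, n));      % one height per cell
    surf((0:n-1) * cell_size, (0:n-1) * cell_size, map); % visualize the terrain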

4.3 Selection of Coordinate Frames

Before any modelling of the robot can begin, the coordinate systems for all parts of the robot and terrain need to be identified and properly defined. All coordinate systems used will be Cartesian and will from now on be referred to as frames.

4.3.1 Global Frame

The global frame is the frame that all other frames will be defined relative to. The global frame is rigidly attached to the lower left corner of the terrain model so that the z-axis is vertical and the xy-plane is aligned with the floor surface. The x-axis and y-axis are parallel to the walls of the MTL. The x- and y-axes of the global frame are illustrated together with the artificial terrain model in Figure 4.1. The z-axis of the global frame points upwards.

4.3.2 Robot Body Frame

The origin of the robot coordinate frame will be attached to the center of the bottom plane of the central robot
structure with the z-axis pointing up, the x-axis pointing left and the y-axis pointing forward. This is illustrated
in Figure 4.2.

In this project the roll, pitch and yaw angles (α, β and γ) of the robot body frame relative to the global frame
are defined as:

Roll: Rotation about the y-axis.

Pitch: Rotation about the x-axis.

Yaw: Rotation about the z-axis.

Figure 4.2: Location of the robot body frame relative to the robot hardware.

4.3.3 Leg Frames and Notations

The coordinate frames for the robot legs are assigned as shown in Figure 4.3. The assignment of link frames follows the Denavit Hartenberg notation [12, p. 200]. The robot leg is made of links and joints as noted in Figure 4.3. As stated in Section 2.2 the different links of the robot legs are called coxa, femur and tibia.

The robot leg frames start with link 0, which is the point on the robot where the leg is attached (attachment link); link 1 is the coxa, link 2 is the femur and link 3 is the tibia. Link 4 is the end point of the leg, and its frame coincides with frame 3. The joints are located at the inner end of their respective links, while the link frames are attached to the outer end of their respective links. This means that joint 2 rotates about the z-axis of frame 1. The y-axes of the link frames are not shown in Figure 4.3 as they are not relevant here.

(a) (b)

Figure 4.3: Illustration of leg frame and link frames. Figure (a) shows a 3D rendering of one of the robot legs with its rotational axes.
Figure (b) shows the robot leg in a isometric view with all link frames and rotational axes. The y-axes of the link frames are not shown as
these are irrelevant for the given illustration.

4.4 Kinematic Model of Hexapod Robot

When all the coordinate frames are defined, it is possible to describe the kinematic model of the legs of the hexapod robot. All distances are in millimeters and all angles in degrees if nothing else is explicitly stated.

4.4.1 Robot Leg Parameters

The robot legs can be described by the following set of parameters which complies with the Denavit Hartenberg
notation [12, p. 200].

As described in Section 4.3 the legs are placed in a local leg frame with a vertical z-axis through the rotational
joint, which connects the leg to the body. The x-axis of the leg frame is defined to be perpendicular to the robot
body, pointing away from the center of the robot. In the leg frame, link frames are assigned to each link in the
leg, as described in Figure 4.3.

The Denavit Hartenberg parameters are denoted $\alpha_i$, $a_i$, $\theta_i$ and $d_i$.

• $\alpha_i$ is the twist, i.e. the angle between the $z_{i-1}$-axis and the $z_i$-axis about the $x_i$-axis.

• $a_i$ is the kinematic length of link i, i.e. the distance between the $z_{i-1}$-axis and the $z_i$-axis along the $x_i$-axis.

• $d_i$ is the link offset, i.e. the distance from the $x_{i-1}$-axis to the $x_i$-axis along the $z_{i-1}$-axis.

• $\theta_i$ is the joint angle, or the joint variable. For the rotational joints in the robot legs, this is the angle between the $x_{i-1}$-axis and the $x_i$-axis about the $z_{i-1}$-axis.

The Denavit Hartenberg parameters for the first and last frames are not listed, as the remaining link frames are assigned in such a way that the twist and distance from these frames to their respective nearest frames are zero, meaning $\alpha_0 = a_0 = 0$ and $\alpha_4 = a_4 = 0$.

link/parameter   $\alpha_i$   $a_i$        $d_i$      $\theta_i$
1 (Coxa)         90°          38.5 [mm]    45 [mm]    $\theta_1$
2 (Femur)        180°         56.5 [mm]    0 [mm]     $\theta_2$
3 (Tibia)        0°           143.5 [mm]   0 [mm]     $\theta_3$

Table 4.1: Denavit Hartenberg parameters for one robot leg.

The rotational axis of joint 1 is rotated 90 degrees relative to the rotational axis of joint 2, which again is rotated 180 degrees relative to the rotational axis of joint 3. The 180 degree twist $\alpha_2$ between joints 2 and 3 is due to the way the servos are mounted, rotated 180 degrees relative to each other. The value could also be set to zero and the rotational direction reversed at a lower software level, though that solution is not chosen.

4.4.2 Forward Kinematic Equations for Robot Leg

This section describes the forward kinematic equations, also called the direct kinematic equations, for one robot
leg. The coordinate frames are as described in Section 4.3 and the leg parameters are as described in Section
4.4.1.

The forward kinematic equations are a set of equations composing a transformation matrix, transforming coordinates in one link frame to coordinates in another link frame. Multiplying the transformation matrices for all link pairs gives the complete forward kinematic transformation matrix, transforming coordinates in frame N to coordinates in frame 0. The general form of the transformation matrix from link i to link i−1 is given in Equation 4.1 [12, p. 208].

 
$$
T^{i-1}_i = \begin{bmatrix}
\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
\sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{bmatrix} \tag{4.1}
$$

The transformation matrix is a series of transformations:

1. Translate $d_i$ along the $z_{i-1}$-axis

2. Rotate $\theta_i$ about the $z_{i-1}$-axis

3. Translate $a_i$ along the $x_{i-1}$-axis

4. Rotate $\alpha_i$ about the $x_{i-1}$-axis

The specific leg transformation matrices, transforming the coordinates from one link frame to the previous, are shown in Equations 4.2-4.5.

$$T^0_3 = T^0_1 T^1_2 T^2_3 \tag{4.2}$$

$$
T^2_3 = \begin{bmatrix}
\cos\theta_3 & -\sin\theta_3\cos 0^\circ & \sin\theta_3\sin 0^\circ & 143.5\cos\theta_3 \\
\sin\theta_3 & \cos\theta_3\cos 0^\circ & -\cos\theta_3\sin 0^\circ & 143.5\sin\theta_3 \\
0 & \sin 0^\circ & \cos 0^\circ & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
= \begin{bmatrix}
\cos\theta_3 & -\sin\theta_3 & 0 & 143.5\cos\theta_3 \\
\sin\theta_3 & \cos\theta_3 & 0 & 143.5\sin\theta_3 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix} \tag{4.3}
$$

$$
T^1_2 = \begin{bmatrix}
\cos\theta_2 & -\sin\theta_2\cos 180^\circ & \sin\theta_2\sin 180^\circ & 56.5\cos\theta_2 \\
\sin\theta_2 & \cos\theta_2\cos 180^\circ & -\cos\theta_2\sin 180^\circ & 56.5\sin\theta_2 \\
0 & \sin 180^\circ & \cos 180^\circ & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
= \begin{bmatrix}
\cos\theta_2 & \sin\theta_2 & 0 & 56.5\cos\theta_2 \\
\sin\theta_2 & -\cos\theta_2 & 0 & 56.5\sin\theta_2 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix} \tag{4.4}
$$

$$
T^0_1 = \begin{bmatrix}
\cos\theta_1 & -\sin\theta_1\cos 90^\circ & \sin\theta_1\sin 90^\circ & 38.5\cos\theta_1 \\
\sin\theta_1 & \cos\theta_1\cos 90^\circ & -\cos\theta_1\sin 90^\circ & 38.5\sin\theta_1 \\
0 & \sin 90^\circ & \cos 90^\circ & 45 \\
0 & 0 & 0 & 1
\end{bmatrix}
= \begin{bmatrix}
\cos\theta_1 & 0 & \sin\theta_1 & 38.5\cos\theta_1 \\
\sin\theta_1 & 0 & -\cos\theta_1 & 38.5\sin\theta_1 \\
0 & 1 & 0 & 45 \\
0 & 0 & 0 & 1
\end{bmatrix} \tag{4.5}
$$
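Equation 4.1 translates directly into a small MATLAB function, and the leg chain of Equation 4.2 is then a product of three such matrices with the parameters from Table 4.1. The sketch below, with arbitrary example joint angles, is one possible implementation, not the project code.

    % Sketch: DH transform (Equation 4.1) and the leg chain (Equation 4.2).
    theta1 = 0; theta2 = 30; theta3 = 60;         % example joint angles [deg]
    T01 = dh_transform( 90,  38.5, 45, theta1);   % coxa  (Table 4.1)
    T12 = dh_transform(180,  56.5,  0, theta2);   % femur
    T23 = dh_transform(  0, 143.5,  0, theta3);   % tibia
    T03 = T01 * T12 * T23;                        % leg frame 3 -> frame 0

    function T = dh_transform(alpha, a, d, theta) % angles in degrees
        T = [cosd(theta) -sind(theta)*cosd(alpha)  sind(theta)*sind(alpha) a*cosd(theta);
             sind(theta)  cosd(theta)*cosd(alpha) -cosd(theta)*sind(alpha) a*sind(theta);
             0            sind(alpha)              cosd(alpha)             d;
             0            0                        0                       1];
    end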

Position of Link Center of Mass

To be able to derive the dynamic model, the transformation matrices to the CoM of the individual links are
presented here.

The CoM of each link is positioned relative to the link frame by a position vector $p_{CoM} = [x_{CoM_i}\; y_{CoM_i}\; z_{CoM_i}]^T$. The general homogeneous representation of the CoM position is $p^i_{CoM_i} = [x_{CoM_i}\; y_{CoM_i}\; z_{CoM_i}\; 1]^T$. The appended 1 is the scale factor and is needed for the position representation to be compatible with the previously found homogeneous transformation matrices.

To find the position of the CoM of the individual links relative to the leg frame, the CoM coordinates ($p^i_{CoM_i}$) are multiplied with the Denavit Hartenberg transformations of Equation 4.2. This gives the CoM positions as shown in Equation 4.6.

$$
p^0_{CoM_i} = T^0_i\, p^i_{CoM_i} \tag{4.6}
$$

4.4.3 Position of Robot Legs on Robot Body and Resulting Kinematic Transformations

After defining the leg frames and constructing the individual leg transformation matrices, the positions of the legs on the body are defined. The leg frames are already moved upwards along the z-axis so that the point z = 0 in the leg frame equals z = 0 in the robot body frame. This means that the leg frames only need to be transformed in the xy-plane to be positioned correctly in the robot body frame. The leg frames are positioned on the robot body as illustrated in Figure 4.4. As seen in Figure 4.4, each leg frame needs a rotation about its z-axis and a translation along its x-axis to be positioned correctly in the robot body frame.

Figure 4.4: This figure illustrates the position of the leg frames relative to the robot body frame. The notations (rm), (rf), (lm) and so on are shorthand names for the leg positions, e.g. rm is right middle and lf is left front. The numeration of the legs is also shown in the figure.

The general leg to body transformation $T^B_L$ is written in Equation 4.7. When referring to a specific leg, the transformations are denoted $T^B_{lf}$ for the transformation from the front left (lf) leg frame to the body (B), $T^B_{rm}$ for the transformation from the right middle (rm) frame to the body frame, and so on. The lf, rm and rr notations all refer to the base leg frame of the leg in question, meaning frame 0 as found in Section 4.3.3. In addition to the letter indices, the legs are numbered 1-6 in the positive direction starting from the left front leg.

The legs are located on the circumference of a circle with a radius of 137 mm, with an angle of 60 degrees between them. This angle can be seen as a local yaw angle for the individual leg ($\gamma_k$).

$$
T^B_L = \begin{bmatrix}
\cos\gamma_k & -\sin\gamma_k & 0 & 137\cos\gamma_k \\
\sin\gamma_k & \cos\gamma_k & 0 & 137\sin\gamma_k \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix} \tag{4.7}
$$

Where:
$\gamma_k$ is the yaw angle of the k'th leg, relative to the body frame.

The transformation matrices from the leg end point frames to the robot body frame can now be written as in Equations 4.8-4.13. The 3 in the indices indicates that the transformation transforms coordinates in frame 3, in the leg frame, to the body frame.

$$T^B_{rm3} = T^B_{rm} \cdot T^0_3 \tag{4.8}$$
$$T^B_{rf3} = T^B_{rf} \cdot T^0_3 \tag{4.9}$$
$$T^B_{rr3} = T^B_{rr} \cdot T^0_3 \tag{4.10}$$
$$T^B_{lm3} = T^B_{lm} \cdot T^0_3 \tag{4.11}$$
$$T^B_{lf3} = T^B_{lf} \cdot T^0_3 \tag{4.12}$$
$$T^B_{lr3} = T^B_{lr} \cdot T^0_3 \tag{4.13}$$

4.4.4 Transformation from Robot Body Frame to Global Frame

As defined in Section 4.3.2, the roll, pitch and yaw angles (α, β and γ) rotate the body about the y-axis, the x-axis and the z-axis respectively. The rotation of the body frame thus consists of three rotations, one about each axis. In this case the rotations occur in the yxz (roll-pitch-yaw) order.

The transformation from the robot body frame to the global frame is defined as in Equation 4.14, written as a general homogeneous transformation.

 
" # cos α cos γ − sin α sin β sin γ − cos β sin γ cos γ sin α + cos α sin β sin γ X
RG dG 
cos α sin γ + cos γ sin α sin β cos β cos γ sin α sin γ − cos α cos γ sin β Y
TG
B =
B 
 
0 1  − cos β sin α sin β cos α cos β Z
0 0 0 1
(4.14)

Where:
α is the roll angle.
β is the pitch angle.
γ is the yaw angle.
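The same matrix can be obtained by composing elementary rotations in the yxz order stated above. A short MATLAB sketch, where the angles and the position are placeholder example values:

    % Sketch: body-to-global transform (Equation 4.14). Roll (alpha) is
    % about y, pitch (beta) about x, yaw (gamma) about z, composed in
    % yxz order. All values below are placeholders.
    alpha = 5; beta = -3; gamma = 20;  X = 500; Y = 400; Z = 120;
    Ry = @(a) [cosd(a) 0 sind(a); 0 1 0; -sind(a) 0 cosd(a)];
    Rx = @(b) [1 0 0; 0 cosd(b) -sind(b); 0 sind(b) cosd(b)];
    Rz = @(g) [cosd(g) -sind(g) 0; sind(g) cosd(g) 0; 0 0 1];
    R_GB = Rz(gamma) * Rx(beta) * Ry(alpha);   % matches Equation 4.14
    T_GB = [R_GB [X; Y; Z]; 0 0 0 1];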

Now the transformation from the leg end points to the global frame can be written as in Equation 4.15, and the transformations of the CoM of the individual links are shown in Equation 4.16. These equations are for the right middle (rm) leg; transformations for the rest of the legs are found in the same way by replacing $T^B_{rm3}$ with the appropriate leg position transformation.

$$T^G_{rm3} = T^G_B\, T^B_{rm3} \tag{4.15}$$

$$T^G_{CoM_i} = T^G_B\, T^B_L\, T^0_{CoM_i} \tag{4.16}$$

Using these transformations, the positions of the leg end points and the CoM of the links can be expressed as in Equations 4.17-4.18.

$$
p^G_3 = T^G_B\, T^B_L\, T^0_3 \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \tag{4.17}
$$

$$
p^G_{CoM_i} = T^G_B\, T^B_L\, p^0_{CoM_i} \tag{4.18}
$$

4.4.5 Inverse Kinematics

As seen in Section 4.4.2, a kinematic chain can be set up for each leg in relation to the robot frame. A chain describing the leg contains only links and rotational joints, and a rotation at a joint will orient the attached links accordingly. This can be used to determine how the entire leg is positioned, and consequently where the end point is located.

The reverse operation is often of interest: if it is possible for the leg end point to reach a position in space, it is also possible to determine the angles of all the joints for the given position.

To be able to find the angles of all the joints on the robot, it is necessary to know the position of the end points, as well as the pitch, yaw, roll and position of the robot body in the global frame.

In general, solving the IK equations can present some challenges. Some positions cannot be reached at all, as the physical system is unable to get there, e.g. the position could be too far away from the robot; such configurations are related to kinematic singularities. Some end point positions can have more than one solution, and not all solutions are equally desirable. Many proposals have been suggested for solving these issues; some revolve around minimizing the torques required for moving to the position, while other methods simply depend on choosing the solution which is closest to the current configuration. There is as such no simple generic method for optimally solving the IK problem.

4.4.6 Transforming from Global Frame to Leg Frame

Before the IK can be solved for the individual legs, the leg end point coordinates, which are referenced in the global frame, need to be transformed to the individual leg frames.

This inverse transformation is the inverse of the leg to body transformation $T^B_L$ and the body to global frame transformation $T^G_B$. The inverse of such a transformation is found by transposing the rotational part and recombining it with the correspondingly transformed translational part. The inverses of $T^G_B$ and $T^B_L$ are shown in Equations 4.19 and 4.20 [12, p. 145].

" #
G
(RG
B)
T
−(RG T
B ) · dB
TB G −1
G =(TB ) = (4.19)
0 1
" #
B
(RB
L)
T
−(RB T
L ) · dL
TL
B =(TB
L)
−1
= (4.20)
0 1

Where:
$R^G_B$ is the rotational transformation from the body to the global frame.
$d^G_B$ is the translational transformation from the body to the global frame.
$R^B_L$ is the rotational transformation from the leg to the body frame.
$d^B_L$ is the translational transformation from the leg to the body frame.
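In code, this inversion is a one-liner once the rotational and translational parts are split out. A MATLAB sketch:

    % Sketch: invert a homogeneous transform (Equations 4.19-4.20) by
    % transposing the rotation and transforming the translation.
    function Tinv = invert_transform(T)
        R    = T(1:3, 1:3);            % rotational part
        d    = T(1:3, 4);              % translational part
        Tinv = [R' -R'*d; 0 0 0 1];
    end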

4.4.7 Solving IK for Each Leg Geometrically

For this particular system it is decided to solve the IK equations for each leg separately, as this makes it possible to solve them geometrically by setting up some constraints. The first constraint for solving the IK equations is given by the fact that all of the robot's joints only allow rotation about one axis. The second constraint is that the femur and tibia joints always rotate about parallel axes. The third set of constraints arises from the physical limitations of each joint, giving an angular interval for each joint in which the servos can actually rotate the link. In Figure 4.5 the limited angles of movement are shown.

Figure 4.5: Illustration of the possible angles which the leg joints are confined to rotate within.

After transforming the leg end point coordinates from the global frame to the leg frame, the coxa joint angle $\theta_1$ can be found using the atan2(y, x) function¹. The relation between the coxa angle and the robot body is illustrated in Figure 4.6. This figure also shows how the coxa angle can be found directly from the leg end point coordinates in the leg frame.

Figure 4.6: Illustration of the coxa joint angle in the leg frame. It is equivalent to determining the angle of the end point relative to the x-axis of the leg frame.

The situation where the end point is positioned directly below the coxa joint poses a special case, as it results in infinitely many solutions for the coxa joint angle. This is due to the fact that the rotation of the coxa joint, in this specific case, has no effect on the solution for the outer joints. In this case some other information should be used to choose the coxa joint angle; one option is to re-use the last known angle found for the coxa joint.

End point positions closer to the center of the robot than the coxa joint will be interpreted, by the atan2(y, x) function, as an attempt to rotate the leg inwards through the chassis. Thus if positions underneath the robot are desired, caution must be taken when rotating, to avoid inversion of the x-axis. Therefore, when the x-component of the leg end point position in the leg frame is negative, 180° must be added to ensure that the angle chosen is within the constraints. Now the transformation mentioned in the beginning of this section can be achieved. The final equation for the coxa joint angle is shown in Equation 4.21.

¹ To avoid confusing the sign of the angle, atan2 is used instead of tan⁻¹, as it solves the arctan problem by determining the quadrant. It always returns positive angles (0 to 180) in the upper half plane and negative angles (0 to −180) in the lower half plane. atan2 calculates angles in radians, but as that is a trivial conversion it is assumed in this section that it returns angles in degrees.



$$
\theta_1(k) = \begin{cases}
\theta_1(k-1) & ,\; x_L = y_L = 0 \\
\operatorname{atan2}(y_L, x_L) + 180^\circ & ,\; x_L < 0 \\
\operatorname{atan2}(y_L, x_L) & ,\; \text{otherwise}
\end{cases} \tag{4.21}
$$

Where:
$\theta_1(k)$ is the coxa joint angle at time k
$x_L$ is the x-component of the position of the leg end point in the leg frame [mm]
$y_L$ is the y-component of the position of the leg end point in the leg frame [mm]
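A direct MATLAB transcription of Equation 4.21 could look as follows; atan2d is used so the result is in degrees, matching the convention of this chapter.

    % Sketch of Equation 4.21: coxa angle from the end point position in
    % the leg frame, reusing the previous angle in the singular case.
    function th1 = coxa_angle(xL, yL, th1_prev)
        if xL == 0 && yL == 0
            th1 = th1_prev;             % end point directly below the coxa joint
        elseif xL < 0
            th1 = atan2d(yL, xL) + 180; % avoid rotating the leg through the chassis
        else
            th1 = atan2d(yL, xL);
        end
    end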

To find the femur and tibia angles, the leg end point coordinates are transformed to the coxa frame by the transformation in Equation 4.23. This way the angles can be found by looking at the angles in the triangle with vertices in the origins of the coxa, femur and tibia frames. The triangle lies in the xy-plane of the coxa link. The location of the coxa xy-plane is illustrated in Figure 4.7, and the triangle spanned by the coxa, femur and tibia links is shown in Figure 4.8.

$$T^1_0 = (T^0_1)^{-1} \tag{4.22}$$

$$
T^1_0 = \begin{bmatrix}
(R^0_1)^T & -(R^0_1)^T\, d^0_1 \\
0 & 1
\end{bmatrix} \tag{4.23}
$$

Figure 4.7: Illustration of the coxa frame. It is always oriented so the x-axis is parallel with the coxa link, and the y-axis is parallel with
the z-axis of the robot body frame.

In Figure 4.8 an illustration of the triangle and the location of the angles used in the IK solution is presented. Notice that the origin of the xy-plane in the coxa frame is placed at the femur joint.

The angle $\phi_2$, which is the angle relating to the femur servo position, can be derived directly from the triangle, making Equation 4.24 applicable:

$$\theta_2 = \phi_2 \tag{4.24}$$

Figure 4.8: Illustration of the 2D triangle with vertices in the coxa, femur and tibia link frame origins. The angles $\phi_1$, $\phi_2$, $\phi_3$, the lengths $l_2$ and $l_3$ and the length b are all used in the IK solution.

The angle $\phi_1$ can be found by looking at the angle between the line b and the x-axis, as shown in Equation 4.25.

$$\phi_1 = \operatorname{atan2}(y_3, x_3) \tag{4.25}$$

Where:
$x_3$ and $y_3$ are the x- and y-components of the leg end point coordinates in the coxa frame.

If an angle $\phi_t$ is defined as the angle at the femur corner of the triangle, between the line b and the femur link, it follows from trigonometry that this angle can be described by the femur link length, the tibia link length and b. $\phi_t$ is found as shown in Equation 4.27.

$$b = \sqrt{x_3^2 + y_3^2} \tag{4.26}$$

$$\phi_t = \arccos\left(\frac{l_2^2 + b^2 - l_3^2}{2\, l_2\, b}\right) \tag{4.27}$$

Where:
$x_3$ and $y_3$ are the x- and y-components of the leg end point coordinates in the coxa frame [mm]
$l_2$, $l_3$ and b are the lengths shown in Figure 4.8 [mm]

Note that this solution for $\phi_t$ is in principle also valid if the triangle is mirrored about the x-axis. For this particular leg design, however, the constraint on the tibia joint prohibits it from rotating upwards. To ensure that the tibia joint always rotates downwards, $\phi_t$ is defined as a positive angle.

Now $\phi_2$ can be determined by relating $\phi_1$ and $\phi_t$:

$$\phi_2 = \phi_t + \phi_1 \tag{4.28}$$

Using the same formula as in Equation 4.27, $\phi_3$ can be found:

$$\phi_3 = \arccos\left(\frac{l_3^2 + l_2^2 - b^2}{2\, l_3\, l_2}\right) \tag{4.29}$$

$\phi_3$ relates directly to $\theta_3$, the tibia joint angle, as shown in Equation 4.31.

$$\theta_3 = 180^\circ - \phi_3 \tag{4.31}$$

4.4.8 Summary of Inverse Kinematic Solution

The equations below summarize the formulas needed to find the individual joint angles.

$$
\theta_1(k) = \begin{cases}
\theta_1(k-1) & ,\; x_L = y_L = 0 \\
\operatorname{atan2}(y_L, x_L) + 180^\circ & ,\; x_L < 0 \\
\operatorname{atan2}(y_L, x_L) & ,\; \text{otherwise}
\end{cases} \tag{4.32}
$$

$$
\theta_2(k) = \arccos\left(\frac{l_2^2 + b^2 - l_3^2}{2\, l_2\, b}\right) + \operatorname{atan2}\left(y_3(k), x_3(k)\right) \tag{4.33}
$$

$$
\theta_3(k) = 180^\circ - \arccos\left(\frac{x_3(k)^2 + y_3(k)^2 - l_2^2 - l_3^2}{-2\, l_3\, l_2}\right) \tag{4.34}
$$

Where:
$x_L$ and $y_L$ are the x- and y-components of the position of the leg end point in the leg frame [mm]
$x_3$ and $y_3$ are the x- and y-components of the leg end point coordinates in the coxa frame [mm]
$l_2$, $l_3$ and b are the lengths shown in Figure 4.8 [mm]
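As a sketch, Equations 4.33-4.34 reduce to a few lines of MATLAB once the end point has been transformed to the coxa frame. The example coordinates below are placeholders, and a reachability check on the acos arguments (related to the singularities discussed in Section 4.4.5) is omitted.

    % Sketch of Equations 4.33-4.34 (lengths in mm, angles in degrees).
    l2 = 56.5; l3 = 143.5;              % femur and tibia link lengths
    x3 = 120;  y3 = -80;                % example end point in the coxa frame
    b      = sqrt(x3^2 + y3^2);         % femur joint -> end point distance
    theta2 = acosd((l2^2 + b^2 - l3^2) / (2*l2*b)) + atan2d(y3, x3);
    theta3 = 180 - acosd((l3^2 + l2^2 - b^2) / (2*l3*l2));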

4.5 Dynamic Considerations

As stated in the introduction of this chapter and in Section 2.3, dynamic considerations must be made in order to be able to ensure static and dynamic stability. The goal of these considerations is to find the position of the CoM and an expression for where the ZMP is for a given acceleration.

4.5.1 The Center of Mass

The CoM of the robot must be found in order to check whether static stability is ensured. As stated in Section 2.3 the CoM must lie within the PoS at all times. The CoM can be defined as the average position of all the links weighted by their masses, expressed as:

$$
C^G = \frac{\sum m_i\, r^G_i}{\sum m_i} \tag{4.35}
$$

Where $m_i$ is the mass and $r^G_i$ is the position of the i'th object, given in the global frame. In this case there are 19 objects: the 18 link objects in the legs, plus the body. The CoM is represented in the global frame, as are the coordinates of the PoS. For the purpose of determining static stability, the CoM can be projected onto the ground perpendicular to the global z-axis, hence it is possible to discard the z-component. As the robot is symmetrical, if all the legs are placed with the same angles on each joint, the projected CoM will be placed at the center of the robot. This assumption could suffice in many cases, but it is considered necessary to develop a model of how the CoM moves during e.g. a tripod gait.

When using the kinematics from Section 4.4 the CoM can be described as in Equation 4.36.

$$
C^G = \frac{\sum m_i\, p^G_{CoM_i}}{\sum m_i} = \frac{\sum m_i\, T^G_B T^B_L T^0_i\, p^i_{CoM_i}}{\sum m_i} \tag{4.36}
$$

In order to get a 2D representation of $C^G$, it must be projected perpendicularly onto the ground.
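A MATLAB sketch of Equation 4.36 and the projection is shown below; the positions and masses are placeholders for the values produced by the kinematic model and Appendix B.

    % Sketch of Equation 4.36: mass-weighted average of the 19 link
    % positions (18 leg links + body), then projected onto the ground.
    p_G    = rand(3, 19) * 100;               % placeholder positions [mm]
    masses = 50 * ones(1, 19);                % placeholder masses [g]
    C_G    = (p_G * masses') / sum(masses);   % 3x1 CoM in the global frame
    C_2D   = C_G(1:2);                        % vertical projection: drop z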

4.5.2 The Zero Moment Point

As introduced in Section 2.3 it is necessary to investigate the ZMP. As stated previously, the ZMP must lie within the PoS at all times to ensure stability; if the ZMP reaches the edge of the PoS, the moving robot will start to overturn, making the robot unstable. As explained later in this section, the ZMP depends on the acceleration, the mass and the position of each link. One way to avoid dynamic instability is to monitor how the ZMP behaves by measuring and calculating it online. This process can however require many calculations, as it requires knowledge of the acceleration of all links. Another way is to investigate some sample gaits and steps, and investigate how the ZMP behaves offline. By doing this offline simulation it is possible to find a margin on the PoS that the CoM must always respect. The ZMP is derived in [13] as:

$$
x_{ZMP} = \frac{\displaystyle\sum_{i=1}^{n}\left[ m_i\left(x_i(\ddot z_i + g_z) - \ddot x_i z_i\right) - I_{iy}\,\dot\omega_{iy}\right]}{\displaystyle\sum_{i=1}^{n} m_i(\ddot z_i + g_z)} \tag{4.37}
$$

$$
y_{ZMP} = \frac{\displaystyle\sum_{i=1}^{n}\left[ m_i\left(y_i(\ddot z_i + g_z) - \ddot y_i z_i\right) - I_{ix}\,\dot\omega_{ix}\right]}{\displaystyle\sum_{i=1}^{n} m_i(\ddot z_i + g_z)} \tag{4.38}
$$

Where:
$m_i$ is the mass of link i [g]
$x_i$, $y_i$ and $z_i$ are the position coordinates of link i
$\ddot x_i$, $\ddot y_i$ and $\ddot z_i$ are the accelerations of link i
$I_i$ is the inertia tensor of link i
$g_z$ is the gravitational acceleration
$\dot\omega_i$ is the angular acceleration of link i

This can be simplified if the mass of each link is considered a point mass at the CoM of the link [14]:

$$
x_{ZMP} = \frac{\displaystyle\sum_{i=1}^{n} m_i\left(x_i(\ddot z_i + g_z) - \ddot x_i z_i\right)}{\displaystyle\sum_{i=1}^{n} m_i(\ddot z_i + g_z)} \tag{4.39}
$$

$$
y_{ZMP} = \frac{\displaystyle\sum_{i=1}^{n} m_i\left(y_i(\ddot z_i + g_z) - \ddot y_i z_i\right)}{\displaystyle\sum_{i=1}^{n} m_i(\ddot z_i + g_z)} \tag{4.40}
$$
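A point-mass ZMP evaluation along these equations is straightforward; the sketch below uses placeholder link states.

    % Sketch of Equations 4.39-4.40 with point masses. Placeholder data.
    n   = 19;  gz = 9.81e3;                   % gravity [mm/s^2]
    m   = 50 * ones(1, n);                    % link masses [g]
    p   = rand(3, n) * 100;                   % link positions [mm]
    acc = zeros(3, n);                        % link accelerations [mm/s^2]
    den   = sum(m .* (acc(3,:) + gz));
    x_zmp = sum(m .* (p(1,:) .* (acc(3,:) + gz) - acc(1,:) .* p(3,:))) / den;
    y_zmp = sum(m .* (p(2,:) .* (acc(3,:) + gz) - acc(2,:) .* p(3,:))) / den;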

Maximum Movement of ZMP

The maximum movement of the ZMP is calculated on the basis of an assumed worst-case acceleration of the robot. The worst case is assumed to be the situation where the robot throws the body forwards by moving all leg end points backwards as fast as possible. This acceleration can be found by using the MTL to record the position of the robot body and then differentiating twice to obtain the acceleration. As shown in Appendix A, the ZMP is able to move 8.8 mm in the worst case. This allows a margin to be set on the PoS, which should be more than 8.8 mm, hence avoiding dynamic instability.
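The double differentiation of the MTL position samples can be sketched as below; the sample time, the smoothing window and the position trace are assumptions made for the sketch.

    % Sketch: body acceleration from sampled MTL positions by double
    % numerical differentiation, with a moving average to suppress noise.
    Ts  = 0.01;                                   % assumed sample time [s]
    pos = cumsum(randn(1, 500));                  % placeholder position trace [mm]
    vel = gradient(pos, Ts);                      % velocity [mm/s]
    acc = gradient(vel, Ts);                      % acceleration [mm/s^2]
    acc_smooth = filter(ones(1, 15)/15, 1, acc);  % 15-sample moving average
    acc_max    = max(abs(acc_smooth));            % worst-case estimate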

4.6 Test of Inverse Kinematics

A test of the IK is described in Appendix C on page 85. The results from the test and the expected results are
shown in Table 4.2.

Subject   Initial value   Stop value   Total movement   Expected movement   Difference
x         1.26            -52.51       53.77 mm         50 mm               3.77 mm
y         1.66            52.34        50.68 mm         50 mm               0.68 mm
z         71.93           122.9        50.937 mm        50 mm               0.937 mm
Roll      -1.65°          9.28°        10.93°           10°                 0.93°
Pitch     -0.76°          -11.21°      10.45°           10°                 0.45°
Yaw       -4.47°          5.33°        9.8°             10°                 0.2°

Table 4.2: Results of the inverse kinematics test.

Only small deviations were measured, and the maximum deviation from the expected results was 3.77 mm. The deviations are attributed to the large amount of mechanical play in the gears and motors of the servos on the robot.
Chapter 5
System Design

Throughout this chapter the control software is described. First the overall system is described, and then each subsystem in detail.

The system for controlling the robot relies on the interaction of four elements.

• The operator, who gives directions to the robot.

• The physical robot, described in Section 2.2

• The Vicon camera system, which measures where the robot and terrain are, as described in Appendix E

• The software developed in this project, responsible for moving the robot in the direction provided by the operator.

The interaction between the elements is illustrated in Figure 5.1.

Figure 5.1: The system is based on the interaction of four elements. The terrain position is also sampled by the camera.

The robot's position is continuously sampled by the MTL and transmitted to the software, which makes it possible to analyze the motion and position of the robot. For moving the robot it is necessary to have a reference direction given in the global xy-plane; this is provided by the operator.

The software receives both the current robot coordinates from the MTL and the direction vector from the operator.

5.1 System Structure

The task of the control software is to move the robot through the terrain. The sensor feedback available is the robot's body position and Euler angles. The user input allows omnidirectional movement of the body in the plane.

The directional input vector is interpreted by the system, which generates angles for the servos. If the angles result in some unfortunate properties, e.g. making the legs collide with each other, this must be detected and avoided before the servos are actuated. The proposed controller structure is shown in Figure 5.2.

Figure 5.2: Concept of the controller structure.

The gait generator, presented in Figure 5.2, plans how to move the legs through the terrain and where to place the end points to sustain stability. The gait generator can be divided into two blocks, where one performs triangle searching and one handles the positioning of the leg end points. The triangle search is done with the criteria of maintaining stability and avoiding possible collisions with the terrain. The gait generator block determines how the leg end points and robot body should be manipulated to achieve the gait. To prevent the end points from colliding with the terrain, the Artificial Potential Function (APF) algorithm is used, as described in Section 5.2.5.

The IK solver computes the servo angles from the leg end point and robot body positions. If it is not possible to calculate the servo angles for the given robot configuration, it will try to recover by moving the body to another position. If the IK solver does not encounter a problem, or if it is able to recover from it, the angles of the legs are passed to the error detection and handling block.

An extended overview of the entire structure can be seen in Figure 5.3.

Figure 5.3: Illustration of the extended overall controller structure.

The IK solver block can be divided in two, as in Figure 5.3, where one block calculates the servo angles for a given end point position, if possible, and if not, the other block will try to move the body in order to recover, as described in Section 5.2.8.

The error detection and handling block investigates if the solution determined by the IK solver will cause collisions with either the terrain or the robot itself. When the system has verified the angles, they are passed to the robot via the serial communication interface. However, if it is determined that the servo angles will cause a collision, the collision avoider block will attempt to solve the problem as described in Section 5.5.

5.2 Gait Generation

In this section the generation of the robot gait is described. It is chosen to focus on a tripedal gait, as this is assumed to provide the fastest movement speed, as described in Section 2.3. First some general considerations regarding the gait are presented together with a general description of the gait. Next it is explained how the different processes in the gait are performed.

5.2.1 Definition of Gait Cycle

The gait to be generated is based on the tripedal gait. This gait is based on a triangularly shaped PoS as
described in Section 2.3.

The gait can be separated into two states. State one is when end points 1, 3 and 5 are free while end points 2, 4 and 6 are support end points. State two is the reverse, where end points 2, 4 and 6 are free and 1, 3 and 5 are support end points. The movement of the robot happens when the CoM is moved from one PoS to the next. The gait cycle is illustrated in Figure 5.4.

Figure 5.4: Illustration of the PoS and CoM during the gait cycles. The dots represent the leg end points, where the ones with arrows
pointing to them represent the free legs. The edges drawn between the end points touching the ground represent the PoS.

5.2.2 Gait Limitations

The gaits possible for this robot are limited by its physical properties. The rotation of the robot body frame relative to the global frame also limits the possible gaits. E.g. it might not be possible for some legs to reach the terrain if the roll or pitch angles are too large, and the legs that can actually reach the terrain might have problems providing enough torque output, as they have to lift a larger load.

These problems are considered when designing the gait generating algorithm, and it is assumed that the robot servos will always have enough torque output for the generated gait.

5.2.3 Gait Performance Measures

To be able to generate an optimal gait for a given terrain, it is necessary to determine what exactly defines a
good gait versus a bad gait. In this project the performance of a gait is chosen to be measured on the movement
speed of the robot and its stability. In other situations it might be important to keep the robot body aligned to
some vertical or horizontal axis, or to not generate large accelerations. Though for simplicity and to keep the
focus on the problem of traversing an uneven terrain effectively, only robot movement speed and stability are
chosen as performance measures.

As described in Section 4.5, the stability of the robot is determined online by evaluating the position of the CoM relative to the PoS.

The movement speed of the robot is a function of the step length of the gait and the gait cycle frequency.

Stability of Gait

As mentioned in Section 4.5, the PoS and the position of the CoM are necessary to determine the static stability
of the robot. A PoS with a large area results in a larger region wherein the CoM can move with ensured stability.
As the robot legs are limited in range and torque, there is an upper limit to the PoS area. To maximize the PoS
area it should have equally sized angles and equally sized edges (for a triangle if one is true so is the other).
This suggests that, for the tripod gait, the PoS should approach an equilateral triangle.

Regardless of the size of the PoS, a good gait with regard to stability is one that maximizes the minimum distance from the CoM to the edges of the PoS. This can be described as maximizing the Static Stability Margin (SSM). The SSM can be calculated as in Equation 5.1, which expresses the minimum distance from the CoM to any of the edges of the PoS.

SSM = \min_{\forall i} \left( \frac{\left\| (p^G_{CoM} - p^G_i) \times (p^G_{i-1} - p^G_i) \right\|}{\left\| p^G_i - p^G_{i-1} \right\|} \right) \qquad (5.1)

Where:
p^G_i is the position vector of corner i of the PoS.
p^G_CoM is the position vector of the CoM.
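As an illustrative sketch (function and variable names are hypothetical), Equation 5.1 can be evaluated by computing the point-to-edge distance for each edge of the PoS and taking the minimum:

    import numpy as np

    def static_stability_margin(corners, p_com):
        """Minimum distance from the CoM to the edges of the PoS (Equation 5.1).
        corners: (n, 3) PoS corner positions in the global frame, in order.
        p_com: (3,) CoM position in the global frame."""
        dists = []
        for i in range(len(corners)):
            a, b = corners[i - 1], corners[i]   # edge from corner i-1 to corner i
            cross = np.cross(p_com - b, a - b)  # numerator of Equation 5.1
            dists.append(np.linalg.norm(cross) / np.linalg.norm(b - a))
        return min(dists)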

It can be reasoned that the point in the PoS with the largest distance to all edges of the PoS is the center of the largest circle that fits in the PoS (the incircle). To find the center of the incircle one can use the angle bisectors: their intersection is the optimal position of the CoM inside the PoS with regard to stability.

The center of the incircle in a triangle defined by three vectors, as shown in Figure 5.5, can be found as shown
in Equation 5.2.

p_4 = \frac{b \cdot p_1 + c \cdot p_2 + a \cdot p_3}{a + b + c} \qquad (5.2)

Figure 5.5: Illustration of triangle and its incircle. The vector names correspond to the names used in Equation 5.2 that finds the center of
the incircle.
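A direct transcription of Equation 5.2 could look as follows; the side labelling (each vertex weighted by the length of the opposite side) is assumed to match Figure 5.5:

    import numpy as np

    def incircle_center(p1, p2, p3):
        """Incircle center of a triangle (Equation 5.2)."""
        b = np.linalg.norm(p3 - p2)  # side opposite p1 (assumed labelling)
        c = np.linalg.norm(p1 - p3)  # side opposite p2
        a = np.linalg.norm(p2 - p1)  # side opposite p3
        return (b * p1 + c * p2 + a * p3) / (a + b + c)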

Step Length Considerations

To be able to determine how long one step should be, some approximations and assumptions are made, while considering the limitations given in the previous sections.

The average PoS is considered an equilateral triangle with 41 cm sides. In Appendix A it is calculated that the PoS should be narrowed by 0.88 cm in order to ensure that the ZMP stays within the PoS. This gives the PoS_modified a side length of approximately 39 cm.

It is chosen that the step length of the robot should be equal to the radius of the incircle of the PoS_modified. This way it is possible to obtain a length which can be used when generating the gait. This method is chosen to ensure that even if the robot moves all its weight forward from a standing position, static stability is maintained. The radius can be calculated as:

r_{in} = \sqrt{\frac{(s-a)(s-b)(s-c)}{s}}

Where:
s = (a + b + c)/2
a, b and c are the sides of the triangle.

Since the PoS_modified triangle is considered equilateral, the formula can be reduced to:

r_{in} = \sqrt{\frac{\left(\frac{3a}{2} - a\right)^3}{\frac{3a}{2}}} = \sqrt{\frac{a^2}{12}} = 11.26 \text{ cm} \qquad (5.3)

So by using a step length of approximately 11 cm, it is, in this approximation, possible to ensure static stability while moving forward from a centered position. It should be noted that this step length approximation is only based on stability criteria; the kinematics of the robot, which might limit the step length, are not considered.

Movement Speed

The movement speed of the robot depends on the step length and the gait cycle frequency. As mentioned in Section 5.2.2, there are limits to where the robot legs can move and how much torque the actuators can deliver. This limits the step length and the gait cycle frequency. The maximum velocities of the leg end points are related to the gait cycle frequency but will vary depending on the robot configuration. For simplicity it is chosen to only measure the goodness of the gait, related to movement speed, by the step length.

5.2.4 Support Polygon Search Algorithm

In this section the actual gait generating algorithm is described. Many of the considerations in the previous section are taken into account, though the solutions for the individual problems regarding the gait generation are kept as simple as possible.

The overall gait cycle is as described in Figure 5.4. While the robot is standing on the three supporting feet, the
algorithm finds the next support points and moves the feet, body and consequently the CoM there.

For each step the state changes and a new destination support triangle is found on the terrain. The vertices of
the destination support triangle are the goal destination for the free leg end points. Each time the free end points
reach their destination a state change is triggered. The process of triggering the state change and finding the
new support triangle is illustrated in Figure 5.6.

The algorithm that searches the terrain for acceptable support triangles uses the considerations in Section 5.2.3 and known constraints of the robot. The process of finding acceptable support triangles is one of two central aspects of the gait generation algorithm, the other being the process of navigating the free end points to their destinations.

The parameters determining if a support triangle is accepted are:

• Terrain edges.
Reason: To avoid placing the leg end points in areas with high risk of foot slip.

• Height difference of the triangle vertices.


Reason: To avoid support triangles with steep angles that might introduce kinematic singularities.

• Side lengths of the triangle.


Reason: To keep the leg end points within reasonable distance of each other and consequently the body.
To avoid kinematic singularities and possible torque limitations of the servos.

• Terrain height in and around the triangle.


Reason: To keep a reasonable amount of free space beneath the robot body, to avoid collisions with the
terrain.

The triangle search pattern is illustrated in Figure 5.7. The search pattern is made of increasingly larger circles
around the vertices of the immediately found triangle. The points on the circles are tested as possible vertex
locations in the positive direction around the immediate triangle vertex. The search circles are increased in
radius until an acceptable triangle is found, or the maximum radius is reached. The search circle radius is
increased in steps and the circles are searched for vertex positions with some angular step. The maximum
search circle radius and the step sizes are adjusted to find an acceptable trade-off between the number of points
to be investigated, the time it takes to find an acceptable triangle, and the probability of actually finding an acceptable triangle.

Figure 5.6: Flowchart illustrating how the triangle search algorithm is executed.

Figure 5.7: Illustration of the search pattern used to search the terrain for support triangles. First the immediate triangle is found in the
movement direction, here illustrated by the triangle with dotted sides. If this triangle is not acceptable, the search is initiated. The search
is performed by searching positions on circles around the original triangle. If no points are found on the first circle the radius is increased
and the search restarted.

5.2.5 Leg End Point Navigation

When an appropriate support triangle has been found, the next task is to move the leg end points to their destinations. The leg end points are navigated to their destinations using APFs. The main idea behind the APF method is to attach artificial positive charges to the leg end points and the obstacles, while attaching negative charges to the goal positions. The leg end points can then be guided around the obstacles to the goal position by following the gradient of the APF created by the charges [15, Chp. 4]. A simple example involving a simple obstacle and a 3 DoF leg is shown in Figure 5.8.

The APF is divided into two parts: the repulsive potential and the attractive potential. The attractive potential is divided into two different functions: one conic and one quadratic. The conic potential works farther away from the goal to avoid large velocities, while the quadratic potential works closer to the goal to avoid oscillations due to overshooting and to make sure the potential in the goal position is well defined. Figure 5.9 illustrates the attractive potential and its gradient in two dimensions. The attractive APF is written in Equation 5.4 and its gradient in Equation 5.5 [15, p. 82].

Figure 5.8: Illustration of the APF method. The positively charged obstacle repels the robot leg end point while the negative charge attracts it.

(a) (b)

Figure 5.9: Figure (a) shows an example of the attractive potential. In this example the threshold distance d∗goal is 10 mm meaning that the
APF is conic more than 10 mm away from the goal position (40,20) and quadratic closer than 10 mm to the goal position. From Figure
(a) this can be seen from the slope of the function. Figure (b) shows the gradient of the APF. From Figure (b) it can also be seen that the
gradient vectors decrease in magnitude when the distance to the goal position is less than 10 mm.

U_{att}(q) = \begin{cases} \frac{1}{2} \zeta d^2(q, q_{goal}) & , d(q, q_{goal}) \leq d^*_{goal} \\ d^*_{goal} \zeta d(q, q_{goal}) - \frac{1}{2} \zeta (d^*_{goal})^2 & , d(q, q_{goal}) > d^*_{goal} \end{cases} \qquad (5.4)

\nabla U_{att}(q) = \begin{cases} \zeta (q - q_{goal}) & , d(q, q_{goal}) \leq d^*_{goal} \\ \frac{d^*_{goal} \zeta (q - q_{goal})}{d(q, q_{goal})} & , d(q, q_{goal}) > d^*_{goal} \end{cases} \qquad (5.5)

Where:
U is the APF.
∇U is the gradient of the APF.
ζ is a scaling factor for the attractive potential. [-]
d(q, q_goal) is the distance from the point q to the goal position q_goal. [mm]
d^*_goal is a threshold distance that determines whether the APF is conic or quadratic. [mm]

The repulsive potential consists of the sum of several repulsive potentials in the immediate proximity of the leg
end point. The individual repulsive potentials are quadratic, meaning they increase rapidly when approaching
the obstacle. The repulsive APF and its gradient are presented in Equations 5.6 and 5.7.

  2
n 1 1 1
X η 2 di (q) − Q∗ , di (q) ≤ Q∗
Urep (q) = (5.6)

i=1 0 , di (q) > Q∗
n
(  
X η Q1∗ − di1(q) d21(q) ∇di (q) , di (q) ≤ Q∗
∇Urep (q) = i (5.7)
i=1 0 , di (q) > Q∗

Where:
η is a scaling factor for the repulsive potential. [-]
di (q) is the distance from the point q to the obstacle. [mm]
Q∗ is a threshold distance of influence for the obstacles. [mm]
n is the number of obstacles considered. [-]
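The following sketch collects the two gradients (Equations 5.5 and 5.7) into code form; the default parameter values mirror the examples in this section but are otherwise illustrative:

    import numpy as np

    def grad_attractive(q, q_goal, zeta=1.0, d_star=10.0):
        """Gradient of the attractive APF (Equation 5.5)."""
        d = np.linalg.norm(q - q_goal)
        if d <= d_star:
            return zeta * (q - q_goal)           # quadratic region near the goal
        return d_star * zeta * (q - q_goal) / d  # conic region far from the goal

    def grad_repulsive(q, obstacles, eta=1000.0, q_star=40.0):
        """Gradient of the summed repulsive APF (Equation 5.7)."""
        grad = np.zeros_like(q)
        for p_obs in obstacles:
            d = np.linalg.norm(q - p_obs)
            if 0.0 < d <= q_star:         # only obstacles within the distance of influence
                grad_d = (q - p_obs) / d  # gradient of the distance d_i(q)
                grad += eta * (1.0 / q_star - 1.0 / d) * grad_d / d**2
        return grad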

In Figure 5.10 the distance from the leg end point to each obstacle and the obstacles' distance of influence are illustrated. Depending on the position of the leg end point relative to the repulsive charges, the number of repulsive charges considered changes.

Figure 5.10: Illustration of three repulsive obstacle points with their distance of influence indicated by the dashed line, and the individual distances to the leg end point. The leg end point is only influenced by the repulsive potentials of obstacle points i and i + 1, while it is outside the distance of influence of obstacle point i + 2.

When both the attractive and repulsive potentials have been calculated, a resulting APF is found by summing
the attractive and the repulsive potentials. An example of a complete APF can be seen in Figure 5.11 where
repulsive charges have been placed in (20,60), (40,60) and (60,60).


Figure 5.11: This figure shows an example of the APF with both attractive and repulsive potentials. The figure is the same as Figure 5.9 (a), but with the addition of repulsive charges at (20,60), (40,60) and (60,60). For this example the distance of influence for the repulsive charges has been set to 40 mm, while the repulsive scaling factor η has been set to 1000.

The desired position of the leg end points is calculated using a gradient descent algorithm as shown below.


while q(k) − q goal > ǫ do
q(k + 1) = q(k) − α∇U (q(k))
k =k+1
end while

The gradient descent algorithm runs for each of the free leg end points until they are within some distance ε of their destinations. The constant α scales the step size, i.e. the distance the leg end points travel each sample.
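In code the descent loop might look like this sketch, where grad_u is the gradient of the total APF (for instance the sum of the attractive and repulsive gradients sketched above), and α and ε are tuning parameters:

    import numpy as np

    def descend_to_goal(q, q_goal, grad_u, alpha=1.0, eps=1.0, max_iter=10000):
        """Gradient descent towards the destination, with an iteration cap for safety."""
        for _ in range(max_iter):
            if np.linalg.norm(q - q_goal) <= eps:
                break                  # within eps of the destination
            q = q - alpha * grad_u(q)  # step against the potential gradient
        return q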

Example of APF Functionality

The terrain map is covered with repulsive charges. The spacing of these points and the magnitude of their individual potential fields need to be selected in a way that preserves the mobility of the robot legs while still avoiding collisions. For simplicity it is chosen to cover the terrain surface with a grid of equally distributed and equally strong repulsive charges. Figure 5.12 shows how the repulsive charges are placed on the terrain, and how a theoretical leg end point would move through the terrain.

Figure 5.12: The repulsive charges are equally distributed over the terrain surface. This is a section of the artificial terrain. The red points are the repulsive obstacle points. The yellow and red triangle points to the destination of the leg end point, while the green and black triangles indicate the stepwise position of the leg end point.

State Change Problem

When a state change is triggered in the gait, a problem arises. As a free leg end point approaches its destination, the repulsive charges near the destination are not considered. When the state change happens and the supporting legs become free legs, the repulsive charges near the formerly supporting leg end points might cause those end points to move unacceptably fast. This is handled by setting an upper limit on the repulsive potential gradient. By limiting the gradient, it is possible to limit the distance travelled by an end point each sample. The upper limit should be selected with consideration of the sample time; in practice the limit was chosen empirically to achieve smooth movements at all times.
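One possible realization of such a limit is to clamp the magnitude of the repulsive gradient before it is used, as in this sketch (the limit g_max is a tuning parameter):

    import numpy as np

    def clamp_gradient(grad, g_max):
        """Limit the magnitude of a repulsive gradient vector to g_max."""
        norm = np.linalg.norm(grad)
        if norm > g_max:
            return grad * (g_max / norm)
        return grad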

Local Minima Problem

The APF method is prone to local minimum problems, where the moving point gets caught in an area with a potential gradient of zero that is not the destination. The characteristics of the terrain, and the way the repulsive points are distributed on it, create an environment where local minimum problems are likely to arise. An example where the leg end point is caught at a point with close to zero potential gradient, and does not reach its destination, is illustrated in Figure 5.13 (a). Figure 5.13 (b) is a simplified illustration of the local minimum problem. The problem arises when the forces pulling the end point towards the destination and pushing it away from obstacles cancel each other.

(a) (b)

Figure 5.13: Figure (a) shows a case of the local minimum problem that prevents the leg end point from reaching its destination. The yellow lines are the sides of the support polygon, ending in the end point destination. The green and black triangles are the end point positions at each sample. The red and black triangles indicate that a local minimum problem was detected. Figure (b) is an illustration of a simplified version of the problem. In both cases the resulting force on the leg end point is zero.

To solve the local minimum problem, the area that contains the local minimum can be filled with repulsive charges. A simplified example is illustrated in Figure 5.14. Repulsive charges are placed between the leg end point and the destination, repelling the leg from the local minimum. If adding one repulsive charge is not sufficient, and the end point is still caught or gets caught in a new local minimum, continually adding repulsive charges will eventually fill up any areas that contain local minima, and the leg end point can continue to its destination.

To detect if a leg end point is caught in a local minimum of the APF, the magnitude of the APF gradient can be used. If the gradient is very small, the leg end point is either close to its destination or caught in a local minimum. Because of the discrete time nature of the system, the gradient will rarely be exactly zero. Therefore a threshold is used to detect if the local minimum problem occurs. If the magnitude of the gradient vector is less than some threshold, and the leg end point is not considered near its destination, a Boolean value is set to true and passed on in the system, so extra repulsive charges can be placed appropriately. The local minimum problem is detected and solved individually for each leg.

As mentioned the extra repulsive charges are placed between the leg end point and the end point destination.
The charges are placed 2 mm below the vector pointing from the leg end point to its destination, at a constant
distance of 5 mm. This way the leg end point is forced upwards, usually away from the terrain. The repulsive
charges are placed as shown in Equation 5.8.

(a) (b)

Figure 5.14: Illustration of the principle of adding extra charges to avoid local minima. The extra charge is placed between the leg end
point and the destination. This way the leg end point will be repelled from the local minimum. Figure (a) shows the same case as in Figure
5.13 (a), but here the local minimum is escaped by the use of extra charges, indicated in the figure as cyan and magenta triangles. The
yellow lines are the sides of the support polygon, ending in the end point destination. The green and black triangles are the end point
positions at each sample. The red and black triangles indicate that a local minimum problem was detected. Figure (b) shows a simplified
version of how the extra repulsive charge is positioned to avoid the local minimum.

 
p_c = p_{ep} + 5 \frac{p_d - p_{ep}}{\left\| p_d - p_{ep} \right\|} - \begin{pmatrix} 0 \\ 0 \\ 2 \end{pmatrix} \qquad (5.8)

Where:
p_c is the position vector for the extra repulsive charge in the global frame.
p_ep is the position vector for the leg end point in the global frame.
p_d is the position vector for the leg end point destination in the global frame.

The extra repulsive charges are fed back to the APF block via a buffer that can only hold a limited number of
coordinates. This way old charges will get overwritten by new ones and the next time the robot passes the same
area on the map, the extra charges should be gone.
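A sketch combining the detection threshold, the charge placement of Equation 5.8 and the overwriting buffer; the buffer size and thresholds are assumptions, not values from the implementation:

    import numpy as np
    from collections import deque

    extra_charges = deque(maxlen=50)  # old charges fall out as new ones are appended

    def handle_local_minimum(grad, p_ep, p_dest, grad_thresh=1e-3, goal_thresh=5.0):
        """Place an extra repulsive charge (Equation 5.8) if a local minimum is detected."""
        near_goal = np.linalg.norm(p_dest - p_ep) <= goal_thresh
        if np.linalg.norm(grad) < grad_thresh and not near_goal:
            direction = (p_dest - p_ep) / np.linalg.norm(p_dest - p_ep)
            p_c = p_ep + 5.0 * direction - np.array([0.0, 0.0, 2.0])  # 5 mm ahead, 2 mm below
            extra_charges.append(p_c)
            return True   # the Boolean flag passed on in the system
        return False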

5.2.6 Robot Body Position and Orientation

In Sections 5.2.4–5.2.5 it was explained how the robot gait is generated and how the leg end points are moved
to their destinations. When moving the leg end points, the robot body needs to move with them. In this section
it is described how the robot body movement is done, and how it is ensured that the robot is stable at all times.

The desired robot body position and orientation depend on the desired leg end point positions. If the robot body is positioned or oriented inappropriately, the desired leg end point positions might result in kinematic singularities. The problem with kinematic singularities is therefore dependent on proper support triangle generation. It is assumed that the vertices of the support triangles are always reachable by the corresponding leg end points.

Two triangles can be drawn with the leg end points as vertices. One of these is the support triangle described in Section 5.2.4; the other is the free triangle with the free leg end points as vertices. It is chosen to place the robot body along a triangle representing an average of the support triangle and the free triangle. The vertices of the average triangle are calculated as in Equation 5.9.

p_{a,i} = \frac{p_{s,i} + p_{f,i}}{2} \qquad (5.9)

Where:
p_{a,i} is the coordinates of the i'th vertex of the average triangle.
p_{s,i} is the coordinates of the i'th vertex of the supporting triangle.
p_{f,i} is the coordinates of the i'th vertex of the free triangle.

An example of possible support, free and average triangles is shown in Figure 5.15. As long as the robot is considered stable, the robot body is continually moved towards the incenter (in the xy-plane) of the average triangle, and the body's roll and pitch angles are aligned with the average triangle. If the robot is marginally stable, meaning the CoM passes the threshold distance to the PoS, the robot body is moved towards the incenter (in the xy-plane) of the supporting triangle. The height of the robot body is set to a constant distance above the incenter of either the average triangle or the supporting triangle.

(a) (b)

Figure 5.15: Illustration of the support triangle (green), the free triangle (red) and the average of the two (blue). The stars indicate the
incenters of each triangle. Figure (a) and (b) show the triangles from two different angles.

The body is moved a constant distance towards the desired position each sample. As the leg end points move
towards their destinations, the free triangle moves and consequently the average triangle moves. By setting the
distance which the body travels each sample, according to the distance the average triangle travels each sample,
it can be ensured that the body does not trail too far behind the legs during the gait cycle.

The next position for the robot body is calculated in each sample, as shown in Equation 5.10, where the vector p_dest is the incenter of either the average triangle or the supporting triangle, depending on the stability of the robot. A factor b_s scales the distance the body travels each sample; if b_s = 1 the distance travelled during one sample is 1 mm.

p_b(k+1) = p_b(k) + b_s \frac{p_{dest}(k) - p_b(k)}{\left\| p_{dest}(k) - p_b(k) \right\|} \qquad (5.10)

Where:
p_b(k) is the coordinates of the body frame at time k.
p_dest(k) is the desired destination coordinates for the robot body at time k.
b_s is a constant scaling the distance the body moves each sample. [-]
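A transcription of Equation 5.10 as a sketch, with a small guard added so the body does not overshoot its destination:

    import numpy as np

    def body_step(p_b, p_dest, b_s=1.0):
        """Move the body b_s mm towards its destination (Equation 5.10)."""
        direction = p_dest - p_b
        dist = np.linalg.norm(direction)
        if dist <= b_s:
            return p_dest  # guard: do not step past the destination
        return p_b + b_s * direction / dist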

To align the robot’s roll and pitch angles to the average or supporting triangle the angle between the robot and
these triangles needs to be calculated. This is done by calculating the angle of a vector normal to the triangle,
relative to vectors pointing along and perpendicular to the robot body heading. These angles are found as shown
in Equation 5.11 and 5.12 .

v_r = -\left( \text{acos}(p_n \bullet p_{headp}) - 90^\circ \right) \qquad (5.11)

v_p = \text{acos}(p_n \bullet p_{head}) - 90^\circ \qquad (5.12)

Where:
v_r is the roll angle of the triangle relative to the robot.
v_p is the pitch angle of the triangle relative to the robot.
p_n is the normal vector of the triangle.
p_head is a unit vector indicating the robot body heading in the xy-plane.
p_headp is a unit vector perpendicular to the robot body heading, always pointing to the right side of the robot.

To make sure the robot body does not change its orientation too fast, a unit vector is generated from the difference between the robot orientation and the relative triangle orientation. This unit vector is added to the vector containing the robot roll, pitch and yaw angles. The derivation of the unit angle vector and its addition to the robot orientation vector are shown in Equations 5.13 to 5.15.

 
a_b(k) = \begin{pmatrix} \alpha(k) \\ \beta(k) \\ \gamma(k) \end{pmatrix} \qquad (5.13)

a_t(k) = \begin{pmatrix} v_r(k) \\ v_p(k) \\ 0 \end{pmatrix} \qquad (5.14)

a_b(k+1) = a_b(k) + \frac{a_t(k+1) - a_b(k)}{\left\| a_t(k+1) - a_b(k) \right\|} \qquad (5.15)

Where:
vr (k) is the roll angle of the triangle relative to the robot at time k.
vp (k) is the pitch angle of the triangle relative to the robot at time k.
ab (k) is a vector containing the roll, pitch and yaw angles of the robot.
at (k) is a vector containing the relative roll and pitch angles of the triangle relative to the robot.

5.2.7 Preliminary Simulation Example of Gait

In Figure 5.16 a simulated gait with 60 mm step length is illustrated. The figure illustrates both the support triangles, found by the triangle search algorithm, and the sample-wise leg end point positions generated with the APF method. Figure 5.17 shows the same gait as illustrated in Figure 5.16, but executed on the robot in open loop. The simulation is carried out on flat ground because the kinematic limitations of the robot make it impossible for the robot to walk in the uneven terrain. It is evaluated that the gait generating algorithm can in principle enable the robot to walk in the uneven terrain, but that requires some means of recovering from kinematic singularities. This is further considered in Section 5.2.8.

(a) (b) (c)

(d) (e) (f)

(g) (h) (i)

Figure 5.16: Time lapse illustration of the robot gait on flat ground. The robot is illustrated by lines representing the kinematic links. In Figure (a) the robot has just started a gait cycle. The yellow triangle is the destination support triangle for the now free leg end points. In Figure (b) the leg end points have travelled some of the distance. The green points indicate the leg end points when they are free. The pink points indicate the positions of the leg end points when they are supporting the robot. In Figure (c) the robot has finished one step and a state change has been triggered. The green triangle indicates the new destination for the now free legs. Figures (d) to (i) illustrate how several state changes take place as the robot travels across the flat ground.

(a) (b) (c)

(d) (e) (f)

(g) (h) (i)

Figure 5.17: Photo time lapse illustration of the same gait sequence as in Figure 5.16. The robot walks from right to left.

5.2.8 Inverse Kinematics Problem Handling

The leg end point positions created by the APF method can, in combination with the body coordinates, result in scenarios where the IK cannot be solved. An example is depicted in Figure 5.18.

Figure 5.18: In this case, one of the legs is planned to be placed on top of a 10 cm tall box. The path to the top cannot be solved for the entire span of time using the IK, resulting in a halt. The red lines indicate that the inverse kinematics for the legs cannot be solved.

To escape the IK deadlock caused by the robot's movements, it is necessary to manipulate either the path of the end point or the robot body. If the leg causing the error is a supporting leg, the leg end point cannot be moved; instead the body must be manipulated. If the leg is a free leg and the path of the end point is manipulated, only that particular leg is affected, however it is not always obvious how to manipulate the path.

The body can often be manipulated, as it is by default placed using constraints which should minimize kinematic problems. Manipulating the body has the potential to solve most of the IK problems. However, as the IK chain relating to a leg can seldom be isolated from the rest of the robot's kinematics, the problem might not always be solvable this way.

The case shown in Figure 5.18 is one of the situations where manipulating the body's position was not found to solve the problem. However, in other cases it was found that moving the body away from the legs causing the IK problem enabled the system to move the end points closer to the target, and thus a method for manipulating the body is devised.

If the initial IK block cannot calculate angles for a leg, it outputs the last found angles for that leg instead. While this is a good method to ensure that the other legs are still solved properly, it is not desirable if the leg which cannot be solved is one of the supporting legs. As a dynamical model is not devised, the result of "lifting" an end point of a supporting leg is not easily predicted. Instead it is decided to ensure that this never happens.

To avoid manipulating end point positions determined by the APF algorithm for legs where the IK could be solved, the body is moved as little as possible while still ensuring that the supporting legs touch the ground. The distance to move the body is determined by combining the body position and body angles with the forward kinematics matrices applied to the joint angles output by the initial IK block. This results in new end point coordinates as shown in Equation 5.16.

 
p^G_{ep} = T^G_{leg3} \cdot \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} \qquad (5.16)

Where:
p^G_ep is the end point position in global frame coordinates.
T^G_leg3 is the transformation matrix from the third leg frame to global coordinates.

Calculating the vectors from the resulting leg end point positions to the old end point positions, which were properly placed on the ground, provides a vector that describes how to move the body to ensure that the leg is properly solved.

v_i = p^G_{ep,old,i} - p^G_{ep,i} \qquad (5.17)

v = \sum_{i=1}^{n} v_i \qquad (5.18)

Where:
p^G_ep,old is the end point coordinates in global frame coordinates from the last controller sample.
n is the number of supporting legs which could not be solved by the IK without singularities.

When the vector v is added to the body position, the IK is solved again, and if the supporting legs are solved
properly, this is chosen as a better solution than the one first proposed by the initial IK solver.

If any IK singularities are still present after the vector has been applied, the movement of the body position is enhanced by utilizing the property that the control system is run iteratively. Thus the change applied to the body can be enhanced by adding the body change vector from the previous steps to the one applied in this step.

v_{new} = v + v_{old} \qquad (5.19)

The flow of the IK singularity solving attempt can be seen in Figure 5.19.

In this manner, if any of the changes aid in eliminating the IK singularities, the changes are amplified until the problem is solved. If this does not solve the problem, the change will reach a maximum and stay there. If the inverse kinematics can be solved without errors at some point, the built-up vector is cleared.
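The body recovery of Equations 5.16 to 5.19 could be sketched as follows; forward_kinematics here stands in for the report's kinematic model and is an assumption, as are the other names:

    import numpy as np

    def recovery_vector(p_body, angles, old_endpoints, failed_legs,
                        forward_kinematics, v_old):
        """Body offset that keeps unsolvable supporting legs on the ground
        (Equations 5.16-5.19)."""
        v = np.zeros(3)
        for leg in failed_legs:
            p_ep = forward_kinematics(p_body, angles[leg])  # Equation 5.16
            v += old_endpoints[leg] - p_ep                  # Equations 5.17 and 5.18
        v_new = v + v_old  # Equation 5.19: amplify the change over samples
        return v_new       # added to the body position before re-solving the IK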

If neither of the two approaches can provide a solution where all the supporting legs are solved properly, the body position and body angles from the last sample are re-used, to ensure that the supporting legs are never moved.

If the manipulations of the body have not solved all the IK problems, charges are added to the potential map to manipulate the future trajectory of the unsolved free leg sideways. As the stall function ensures that the end point is always manipulated upwards when it enters a local minimum, it is chosen also to manipulate the path upwards. However, as this will not necessarily solve the problem, the charge is placed on one of the sides of the end point. While it may not be the best solution, the charge is always placed to the right of the end point, for simplicity.

Figure 5.19: The flow of the singularity handler, which attempts to ensure that the end points of supporting legs are not moved due to singularities.

The position of the charge is found by creating a vector which is perpendicular, in the global xy-plane, to the vector between the end point and the body coordinates.

\vec{v} = p^G_{ep} - p^G_{body} \qquad (5.20)

\vec{w} = \begin{pmatrix} v_y \\ -v_x \\ v_z \end{pmatrix} \qquad (5.21)

p^G_{charge} = p^G_{ep} + \frac{2 \cdot \vec{w}}{\sqrt{v_x^2 + v_y^2 + v_z^2}} + \begin{pmatrix} 0 \\ 0 \\ -2 \end{pmatrix} \qquad (5.22)

Where:
p^G_body is the body coordinates in global frame coordinates.
p^G_charge is the extra repulsive charge coordinates in global frame coordinates.
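As a sketch, Equations 5.20 to 5.22 translate directly into a few lines (assuming the sign convention reconstructed above):

    import numpy as np

    def side_charge(p_ep, p_body):
        """Extra repulsive charge beside a free leg whose IK cannot be solved
        (Equations 5.20-5.22)."""
        v = p_ep - p_body                  # Equation 5.20
        w = np.array([v[1], -v[0], v[2]])  # Equation 5.21: perpendicular in the xy-plane
        return p_ep + 2.0 * w / np.linalg.norm(v) + np.array([0.0, 0.0, -2.0])  # Equation 5.22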

When it is possible to solve the IK, the servo angles are found, and can be updated to move the robot. If it is
not possible to solve IK for all the legs, the supporting legs are given a higher priority than the free legs.

Result of Inverse Kinematic Problem Handling

Even though several steps were taken to avoid kinematic singularities while walking in the terrain, none of them successfully solved all the kinematic problems. Hence the robot will not, at this stage in the development, be able to traverse the entire length of the terrain model. The robot will, however, walk on flat ground. It is evaluated that further work on the gait generating algorithm and the inverse kinematic problem solver can eventually enable the robot to traverse the terrain model. This evaluation is based on manual manipulation of the robot and its legs.

5.3 Collision Detection

When a part of the robot is moving, it is important to avoid collisions, as they can reduce the ability to reach the designated position of the end points. Collisions can happen with the robot itself and with the terrain. As an example, consider the situation where the end point of the leg is planned to be moved outwards and to a lower elevation. Then one of the terrain edges closer to the body can be found to collide somewhere with the leg, which is the situation depicted in Figure 5.20.

Figure 5.20: The leg is given a new position which cannot be reached due to a collision with the terrain.

If collisions are not avoided, this could result in pushing or lifting the robot with the part of the leg which is unintentionally touching the surface. In the best case scenario it just results in stalling a servo, as the leg cannot be manipulated due to the collision. If a leg collides with a vertical surface, however, it can result in pushing the robot away from the surface, and horizontally off a support surface, causing it to tilt.

Depending on the situation in which the collision is detected, the trajectory of the leg must be modified to avoid the collision, or the desired end point might not be reached at all. Figure 5.20 also illustrates a situation where the two joints' movements alone do not reveal that the leg cannot be positioned at the desired location. In this case some other body path or gait must be planned to move the robot in a different direction.

The detection of collisions is a task which can only be ignored if a gait generating algorithm exists that ensures all collisions are avoided. The implemented APF algorithm does not ensure this, hence it is necessary to detect collisions. One way of determining a collision is to analyze if anything is close to the surfaces of the robot.

A generic approach to surface collision gives the ability to determine collisions between the robot and the terrain, and between the robot's legs.

Representing the surface as polygons allows for manipulation of the polygon model by the kinematic models, as the vertices of the polygons can be positioned by applying the forward kinematics transformation matrices.

Figure 5.21: The left leg is approximated in a 3D environment using very few triangles. The right leg is rendered from a more detailed 3D model.

The left leg model in Figure 5.21 illustrates the result of applying the forward kinematic matrices on a box model of the leg links. In comparison with the actual robot's shape, the box model has some inaccuracies. They can be made smaller by using more polygons to build the model, as shown in the right rendering in Figure 5.21.

Adding more polygons will however also add more vertices which must be transformed, hence adding more polygons requires more calculations. A generic test for collisions between two or more triangles can be applied to all the surfaces described in this polygon topology.

5.3.1 Generic Collision Test on Triangular Surfaces in 3D

If all objects in an environment are built from a set of triangles, it is only necessary to test the triangles against other objects' triangles to determine if the objects collide. Three different situations, which are listed below and illustrated in Figure 5.22, can be set up.

1. All of the vertices of another object are on the same side of the triangle's surface, which implies that no collision is occurring between the triangle and that object.

2. Some part of another object's surface is located on the surface of the triangle, which implies that there is a collision.

3. There are vertices from another object on both sides of the triangle, which implies that a collision has already occurred.

In the following description two triangles are tested for a collision. The two triangles are dubbed S1 and S2, for surface 1 and surface 2, to distinguish between them.

Figure 5.22: The three situations which can occur are presented. The left triangle has all its vertices on the same side of the colored triangle; the second triangle shares some surface with the colored triangle and thus collides; the third triangle has vertices on both sides of the colored triangle, and thus collides.

These three cases can be investigated by describing the plane in which the triangle S1 is located, and then evaluating the vertex coordinates of S2 against the plane equation. For the cases with one or more S2 vertices in the plane (see the second situation in Figure 5.22), or with S2 vertices on both sides of the plane (see the third situation in Figure 5.22), it is necessary to determine if the intersection with the plane occurs within the boundaries of the S1 triangle.

Before the S1 triangle boundaries are taken into account, it is described how to determine if there is any
intersection between S2 and the S1 plane.

Triangle and Plane Collision

While a plane can be described by a scalar d and a normal vector n as in Equation 5.23, the three vertices of S1
can also be directly used for describing a plane.

d = -n \bullet p
n_x x + n_y y + n_z z + d = 0 \qquad (5.23)

Where:
p is a point in the plane
n is the normal vector of the plane

To be able to evaluate S2 against the plane, it is desired to transform the three S1 vertices to the standard plane equation (Equation 5.23). This can be done using Equation 5.24.

n = (p_2 - p_1) \times (p_3 - p_1)
p = p_1
d = -n \bullet p \qquad (5.24)

Where:
p_1, p_2 and p_3 are the three vertices of S1, each holding 3 coordinates (x, y, z).

The plane equation allows algebraic evaluation of the distances from the S2 vertices to the plane. Equation 5.25 determines on which side of the plane a vertex of S2 is located, and thus it can be used for determining if any intersections with the plane occur.

L = n \bullet p_a + d \qquad (5.25)

Where:
p_a is a vertex of S2
L is a scalar which is positive for one side, negative for the other, and zero when exactly in the plane.

L reveals which vertices are on which side of the plane, and thus which triangles collide with the plane. This is not the same as a collision with S1, as the plane stretches infinitely. However, it can be used to sort out all triangles which do not intersect the plane, and thus not the triangle S1.
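A sketch of Equations 5.24 and 5.25, building the plane from the S1 vertices and classifying a vertex against it:

    import numpy as np

    def plane_from_triangle(p1, p2, p3):
        """Plane (n, d) through the triangle S1 (Equation 5.24)."""
        n = np.cross(p2 - p1, p3 - p1)
        d = -np.dot(n, p1)
        return n, d

    def side_of_plane(n, d, p):
        """The scalar L of Equation 5.25: positive on one side of the plane,
        negative on the other, zero exactly in the plane."""
        return np.dot(n, p) + d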

In the case of a detected collision with the plane, it is also necessary to test if the intersection with the plane is within the boundaries of the S1 triangle. This is illustrated in Figure 5.23, where only one of the plane-intersecting triangles also intersects with S1.

Figure 5.23: A triangle S1 has been used for generating a plane equation; a set of triangles intersect the plane, but only S2 intersects S1.

Boundary Check of the Intersection in the Plane

Determining if the intersection with the plane happens within the boundaries of S1 can be formulated as a 2D
problem. The following situations on the resulting 2D description can be stated:

• If only a single S2 vertex is in the plane, and all others are on the same side, the intersection with the
plane will appear as a single point in the 2D map.

• If exactly two S2 vertices are located in the plane, or if the S2 vertices are spread on both sides of the
plane, the intersection will appear as a line between 2 points in the 2D map.

• Should the entire set of vertices in S2 be located in the plane, it will appear as a triangular shape between
3 points in the 2D map.

To transform points located in the 3D plane to a simpler 2D description, a transformation matrix can be formed. The matrix can be multiplied with intersection points to describe them and the triangle S1 in a 2D environment. To assure that this transformation matrix works as intended, a local x-y-z frame is aligned with the plane. The origin of the frame is chosen to be the vertex p1.

If the x-axis is chosen as the line between the vertices p1 and p2 in S1, the y-axis is placed in the plane when the cross product of the x-axis unit vector and the y-axis unit vector is equal to the plane normal vector n. This is illustrated in Figure 5.24, and expressed in Equation 5.26.

Figure 5.24: A local frame can be placed using the vertices, and the normal vector of the plane.

n = x \times y \qquad (5.26)

As n, x and y form a right-handed orthogonal set, the following must also be valid:

y = n \times x \qquad (5.27)

A frame based on these three vectors can be used to determine the three angles, which makes a transformation
matrix applicable. If the point p1 is subtracted from all the points of intersection, the transformation matrix
needed is only a rotation matrix. Determining such a rotation matrix can be done in the following manner:

1. Create a copy of the frame, used for temporary manipulations. In Figure 5.25 the copied frame is illustrated together with the global frame.

2. Rotate the copied frame about the global z-axis to place the frame's x-axis in the global xz-plane. See Equation 5.28 and Figure 5.26.

3. Rotate the frame about the y-axis to align the frame's x-axis exactly with the global x-axis. See Equation 5.29 and Figure 5.27.

4. Determine the rotation matrix for rotating the frame about the x-axis, to align the frame's y-axis exactly with the global y-axis. See Equation 5.30; Figure 5.28 shows the angle found.

5. Multiply the rotation matrices used for rotating the copied frame to get the 2D rotation matrix. See Equation 5.31; in Figure 5.29 the result of applying the last matrix found is shown.

Figure 5.25: A local frame and the global frame, shown with common origins.

 
x_{u1} = \frac{1}{\sqrt{x_1^2 + x_2^2}} \begin{pmatrix} x_1 \\ x_2 \\ 0 \end{pmatrix}

\theta_z = \cos^{-1} \left( x_{u1} \bullet \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \right)

R_z = \begin{pmatrix} \cos\theta_z & -\sin\theta_z & 0 \\ \sin\theta_z & \cos\theta_z & 0 \\ 0 & 0 & 1 \end{pmatrix}

x_{ux} = R_z \cdot x, \quad y_{ux} = R_z \cdot y, \quad z_{ux} = R_z \cdot n \qquad (5.28)

Where:
x denotes the vector representing the local frame x-axis (spanning from p1 to p2 )
y denotes the vector representing the local frame y-axis.
n denotes the vector representing the local frame z-axis (and the plane normal vector)

Figure 5.26: Determining the angle between the local frame x-axis and the global frame xz-plane allows rotation about the z-axis

x_{un} = \frac{x_{ux}}{\left| x_{ux} \right|}

\theta_y = \cos^{-1} \left( x_{un} \bullet \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \right)

R_y = \begin{pmatrix} \cos\theta_y & 0 & \sin\theta_y \\ 0 & 1 & 0 \\ -\sin\theta_y & 0 & \cos\theta_y \end{pmatrix}

x_{uy} = R_y \cdot x_{ux}, \quad y_{uy} = R_y \cdot y_{ux}, \quad z_{uy} = R_y \cdot z_{ux} \qquad (5.29)

Figure 5.27: A rotation matrix formed with the angle shown in Figure 5.26 has been used to rotate about the z-axis. Thus the angle
between the local frame x-axis and the global frame x-axis can be found.

y_{un} = \frac{y_{uy}}{\left| y_{uy} \right|} \qquad (5.30)

\theta_x = \cos^{-1} \left( y_{un} \bullet \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \right)

R_x = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_x & -\sin\theta_x \\ 0 & \sin\theta_x & \cos\theta_x \end{pmatrix}

R_{2D} = R_x \cdot R_y \cdot R_z \qquad (5.31)

As it is known that the points of interest always lie in the plane, multiplying plane intersection coordinates with the transformation matrix results in a z component of 0 (or very close to it, where small deviations are due to computational errors). Thus the intersection and the triangle are now described by x and y components within the plane.
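A compact sketch of the same transformation: since the local axes are orthonormal, stacking them as rows gives the rotation of Equation 5.31 directly as a change of basis, avoiding the three intermediate angle rotations (a deliberate simplification of the procedure above):

    import numpy as np

    def plane_to_2d_matrix(p1, p2, p3):
        """Rotation mapping points in the triangle's plane onto the global xy-plane
        (equivalent to R2D of Equation 5.31)."""
        x = p2 - p1
        x = x / np.linalg.norm(x)       # local x-axis along p1 -> p2
        n = np.cross(p2 - p1, p3 - p1)
        n = n / np.linalg.norm(n)       # local z-axis: the plane normal
        y = np.cross(n, x)              # local y-axis in the plane (Equation 5.27)
        return np.vstack((x, y, n))     # rows are the local axes

    # Usage: subtract p1, then rotate; the z component becomes (numerically) zero.
    # q_2d = plane_to_2d_matrix(p1, p2, p3) @ (q - p1)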

When mapped to the 2D description, the three possible shapes (a point, a line, or a triangle) present different test cases, which can be investigated in the following manner:

Figure 5.28: A rotation matrix formed with the angle shown in Figure 5.27 has been used to rotate about the y-axis. Thus the angle
between the local frame y-axis and the global frame y-axis can be found.

Figure 5.29: The angles determined in Figures 5.26, 5.27 and 5.28 have been used to build the rotation matrix, resulting in the perfect alignment of the local and the global frame.

• Point

– The intersection point lies inside the S1 triangle or on its circumference, and there is a collision.
– The intersection point lies outside the S1 triangle, and there is no collision.

• Line

– The intersection line crosses one or more of the triangle sides, hence there is a collision.
– None of the lines cross, but the two end points of the intersection line are detected to cause a collision when tested as points, which implies a collision.
– The entire intersection line lies outside the S1 triangle, and there is no collision.

• Triangular shape

– One or more lines of the triangle cause a collision when tested as lines, and thus a collision is occurring.
– The intersection triangle lines all lie outside the S1 triangle but encapsulate S1, and there is a collision.
– The intersection triangle lines all lie outside the S1 triangle and do not encapsulate S1, and there is no collision.

From the test cases it is possible to extract two tests which can be used for testing all the situations. One test determines if two lines cross, and the other determines if a point is located within a triangle. One way of testing each of the situations is presented in the following.

Point in Triangle Test


Collisions can sometimes only be detected by an algorithm which tests if a point is located inside a triangle. Many algorithms which can detect this exist, however many of them depend on using acos or the square root. Neither acos nor the square root is desirable, as they are time consuming to compute, and this algorithm is expected to run at least once for each terrain polygon that intersects with an infinite plane. One plane is created for each robot polygon.

An alternative method called the "barycentric technique" avoids the use of the square root and acos, and is described in the following [16].

The basic idea of this method is to use a variant of the barycentric coordinate system. Usually this coordinate system is used to determine the CoM, as it marks the center of a polygon based on the masses in each vertex. For a triangle with unit masses, it defines the corners as (1,0,0), (0,1,0) and (0,0,1), and the CoM is located at (1/3, 1/3, 1/3).

In this way any point p placed within the triangle can be described as a set of barycentric coordinates p = (λ1, λ2, λ3), where λ1 + λ2 + λ3 = 1. This is illustrated in Figure 5.30, and it can be interpreted in the following manner.

• The point lies inside the triangle if 0 < λi < 1, for all i = 1,2,3

• The point is placed on the rim, if 0 ≤ λi ≤ 1, for all i = 1,2,3

• Otherwise the point lies outside the triangle

Figure 5.30: A point P in a triangle and the corresponding barycentric coordinates

Determining barycentric coordinates in general is a matter of solving linear equations, but for the specific case of a triangle a simplified approach exists. First, the task is simplified by computing vectors with origin in one of the vertices. Here v_1 is used, see Equations 5.32 and 5.33.

s_0 = v_3 - v_1 \qquad (5.32)
s_1 = v_2 - v_1, \quad s_p = p - v_1 \qquad (5.33)

As illustrated in Figure 5.31, an axis w is chosen such that it is aligned with s_1, and similarly an axis u is placed along s_0. The point p can then be described in components of u and w, as p = (u, w).

As the properties of the barycentric coordinates are still valid in this representation, it results in the following
interpretation.

Figure 5.31: Axes u and w are created from s_1 and s_0; the corresponding side lengths of the triangle are scaled to 1 on u and 1 on w.

• If 0 ≤ w and 0 ≤ u and u + w ≤ 1, then the point is located within the triangle

• Otherwise it is outside the triangle

In this representation the barycentric coordinates can be found by applying dot products:

u = \frac{(s_1 \bullet s_1)(s_0 \bullet s_p) - (s_0 \bullet s_1)(s_1 \bullet s_p)}{(s_0 \bullet s_0)(s_1 \bullet s_1) - (s_0 \bullet s_1)(s_0 \bullet s_1)} \qquad (5.34)

w = \frac{(s_0 \bullet s_0)(s_1 \bullet s_p) - (s_0 \bullet s_1)(s_0 \bullet s_p)}{(s_0 \bullet s_0)(s_1 \bullet s_1) - (s_0 \bullet s_1)(s_0 \bullet s_1)} \qquad (5.35)

A simple logical comparison can then determine whether the point is located within the triangle.
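A sketch of the complete test of Equations 5.32 to 5.35, using only dot products and one division:

    import numpy as np

    def point_in_triangle(p, v1, v2, v3):
        """Barycentric point-in-triangle test (Equations 5.32-5.35)."""
        s0, s1, sp = v3 - v1, v2 - v1, p - v1  # Equations 5.32 and 5.33
        d00, d01, d11 = np.dot(s0, s0), np.dot(s0, s1), np.dot(s1, s1)
        d0p, d1p = np.dot(s0, sp), np.dot(s1, sp)
        denom = d00 * d11 - d01 * d01
        u = (d11 * d0p - d01 * d1p) / denom    # Equation 5.34
        w = (d00 * d1p - d01 * d0p) / denom    # Equation 5.35
        return u >= 0.0 and w >= 0.0 and u + w <= 1.0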

Line in Triangle Test


For the case where a line appears in the 2D map, it seldom happens that the line's 2D end points are formed directly from S2 vertices. In many cases the end points of the intersection line need to be derived before the 2D transformation is applied. In such cases the point of intersection can be found by shooting a "ray" between two of the S2 vertices which are located on different sides of the plane. The point of intersection between the ray and the plane will be the coordinate of the end point of the line in the 2D plane.

Determining the equation for a ray:

S(t) = p_a + (p_b - p_a)t, \quad t \in \mathbb{R} \qquad (5.36)

Where:
pa and pb are vertices in S2
t is a scalar parameter along the ray

Intersection between a ray and a plane occurs when the ray and the plane equations are equal:

-d = n \bullet (p_a + (p_b - p_a)t) \qquad (5.37)

This can be used to find t, which is inserted into the ray equation to determine the point in space where the ray intersects the plane.

t = -\frac{p_a \bullet n + d}{(p_b - p_a) \bullet n} \qquad (5.38)

Notice that if the direction of the ray is perpendicular to the plane normal, the denominator will be zero. This equation should therefore never be used on rays which are parallel with the plane.
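A sketch of the ray-plane intersection of Equations 5.36 to 5.38, with the parallel-ray case guarded explicitly:

    import numpy as np

    def ray_plane_intersection(p_a, p_b, n, d, eps=1e-9):
        """Point where the ray through p_a and p_b crosses the plane (n, d)
        (Equations 5.36-5.38); returns None for rays parallel to the plane."""
        denom = np.dot(p_b - p_a, n)
        if abs(denom) < eps:
            return None                    # ray parallel to the plane
        t = -(np.dot(p_a, n) + d) / denom  # Equation 5.38
        return p_a + (p_b - p_a) * t       # Equation 5.36 evaluated at t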

When all intersection points are found (usually two per polygon), the transformation to 2D can be applied to the determined intersection points, and a line description y = αx + b between the two intersection points can be derived.

To detect if the resulting line segment crosses the triangle in 2D, the triangle is also described as three lines of the form y = αx + b. Then, by combining two line equations (one side of the triangle at a time, together with the intersection line), the x-coordinate of the line-to-line intersection is given by:

x = \frac{b_2 - b_1}{\alpha_1 - \alpha_2} \qquad (5.39)

The resulting x must then lie between the x-coordinates of the triangle side's vertices for a collision to have occurred, which is also illustrated in Figure 5.32. If a triangle line is vertical, the y component is investigated instead.

Figure 5.32: Intersection lines only intersect with the triangle if the intersection point is between the triangle vertices. The green line intersection point does not cause a collision; the red intersection points do, however, mark a collision.

5.3.2 Implementation Considerations

When no collision has been detected between S1 and S2, S1 needs to be tested against all other triangles. This loads the processor, as both the robot and the terrain map hold many polygons. More specifically, the terrain used in this project contains 7072 polygons.

To minimize the load, a linear sorting of the terrain polygons is done, so that collisions are only investigated on
surfaces close to the robot.

To detect which polygons are close to the robot, it is either necessary to choose a maximum size of the polygons representing the terrain, as this allows for using axis aligned bounding boxes for sorting, or it is necessary to perform a collision test between the surface of a bounding object and the terrain to select the objects for a more detailed collision test afterwards.

Of the two methods the first one is by far the fastest, as an axis aligned bounding box can be formed by logic comparators. In Figure 5.33 the polygons selected by a bounding box are presented, and it is illustrated that adding a margin ensures that all polygons of interest are included. If polygons are allowed to be larger than the bounding box, the box can be crossed by a polygon which has all its vertices outside the bounding box, and which is thus not considered, resulting in an undetected collision. By using half the length of the largest polygon as margin, it is ensured that this never happens.

A bounding box around the robot plus the margin selects approximately 600 terrain polygons, depending on the location of the robot.
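As a sketch, the bounding box sorting amounts to a few comparisons per vertex; the polygon representation used here (one (3, 3) array of vertices per triangle) is an assumption:

    import numpy as np

    def filter_polygons(terrain_polys, robot_vertices, margin):
        """Keep terrain polygons with at least one vertex inside the axis aligned
        bounding box around the robot, grown by the margin."""
        lo = robot_vertices.min(axis=0) - margin
        hi = robot_vertices.max(axis=0) + margin
        kept = []
        for poly in terrain_polys:
            inside = np.all((poly >= lo) & (poly <= hi), axis=1)  # per-vertex test
            if inside.any():
                kept.append(poly)
        return kept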

One way of reducing the overall calculations is to reduce the number of polygons used for representing the robot. Though for curved surfaces, the difference between the real surface and the polygon model increases when the polygon count decreases, hence a compromise between load and accuracy must be found. The model used for the implementation is shown in Figure 5.34 and contains 312 polygons.

Figure 5.33: To minimize calculations, only polygons that have one or more vertices inside the bounding box are tested for collisions. In practice a margin is added to the bounding box to ensure that all polygons are included.


Figure 5.34: The low polygon 3D model used for representing the robot

A cylinder, such as the one used for the end point of the leg, is represented as a box, and the body is assumed to be a solid cylinder. The cylinder of the body is approximated by triangles spanning from the center to the rim. A box-shaped leg may report a collision with the surface millimeters before the leg actually touches the ground. This is acceptable when the only purpose is to avoid collisions, but it gives rise to problems when trying to use the leg for support, as collisions will be detected where the end point of the supporting leg touches the ground.

Such a model thus introduces a need for ignoring collisions where contact is actually wanted. A small bounding box can be created around the desired place of the foot and used for removing the collisions found near the end points of the supporting legs.

The flowchart describing the collision detection algorithm is illustrated in Figure 5.35.

The collision test described is sufficient for non-moving objects, but as the robot moves, and the motion is calculated in discrete time steps, it is not known what happens between time steps. An end point of a leg can move from one side of an obstacle's corner to the other within a time step; in such cases it is also necessary to avoid the obstacle between time steps.

[Figure 5.35 flowchart: from the leg angles and robot coordinates, the robot polygons are built and transformed to global coordinates. For each robot polygon, each terrain polygon (pre-selected by the bounding box) is tested: the distances from all terrain polygon vertices to the robot polygon's plane are determined; if all vertices lie on the same side of the plane, the polygon is skipped, otherwise the ray intersection points are determined, transformed to 2D, and stored as collision coordinates if they fall inside the robot triangle. Finally the stored collisions are transformed back to 3D.]

Figure 5.35: Illustration of the flow of the robot-to-terrain collision detection algorithm

Many methods for avoiding collisions between time steps exist. One method is to create a "solid" object connecting a polygon with its position in the next step, creating virtual surfaces between the two locations. This is a linear approximation to the motion between time steps and allows the collision test to be re-used on the virtually created surfaces, thus detecting what happens in between time steps. Unfortunately it also requires many extra calculations, as the virtual polygons need to be formed, resulting in six new polygons for each existing polygon.

Another solution is to expand the surface representation in relation to the step size. With this method, a very slow movement yields a collision detection scheme which allows the robot to be very close to other objects, as the virtual representation stays close to the real object. A fast movement, however, requires a greater distance to the objects it can collide with, as the virtual representation needs to be expanded until it connects with itself in the next frame. If the movement between time steps is sufficiently small, the surface needs very little expansion, if any.

For this project, it is assumed that the robot moves so slowly that surfaces always remain connected with the old representation without adding new polygons. With a leg thickness of 1.5 cm at its thinnest point (near the end point), this limits the end point to movements of less than 1.5 cm per time step.

5.3.3 Inter-leg Collisions

The robot can physically collide with itself, creating a situation where two or more legs hit each other. The gait generator can also produce a trajectory that forces the robot to collide with itself, because the path of each leg is found separately, without considering where the other legs are. To prevent such a collision from damaging the robot or causing a deadlock situation, this type of collision must also be detected.

Using the polygon-to-polygon test on all the legs in the model will detect such a collision. To reduce calculations, it is desirable to sort out as many polygons as possible.

One way of sorting out some of the polygons is to check if any of the legs are in the vicinity of each other. If the robot movements are described in the robot frame, each leg appears as a line segment in the xy-plane. The length of the line segment is given by the femur and tibia joint angles, and the angle of the line is determined by the coxa joint.

In the 2D topology the distance between the leg line segments must be less than the widest point of the legs before a collision is possible. If line segments are further apart than the leg width they cannot collide; if they are closer, it is necessary to incorporate the z-axis as well to verify that they are not at different altitudes. This can be done properly by running the collision detection on the polygons of the two legs near the area of the small distance.

Illustrations of the 2D segment representation of the legs are shown in Figure 5.36.

Figure 5.36: Legs are represented as line segments in 2D, and it is only necessary to test for collisions where the segments are closer than the maximum leg width.

If the legs are defined as line segments, the shortest distance between them depends on the angles of the line segments. Evaluating the distance from the end points to the other leg's line segment is not sufficient, as the end point and the point which in 2D is farthest away from the leg's base can be different points.

Thus it is necessary to find the end points of the line segment first. Each of the four link frame origins represents a point on the line segment in 2D, and any one of them can end up in the middle (depending on the joint angles), so this requires a few calculations. One way of determining which points are the 2D line segment end points is to calculate the length of the segments between all the joints. Defining Lv as the length of the vector spanning between two points, the length is found using Pythagoras:

v = a − b
Lv = √(vx² + vy²)   (5.40)

Calculating Lv for all possible combinations of points and selecting the points giving the longest Lv yields the correct end points. From here on the chosen points are dubbed a and b.

As the two legs' line segments can have different angles, and thus different end points selected, the most general way of determining the shortest distance between them is to look at the distance between the two line segments. In a similar way the points c and d describe the end points of the other line segment.

If the line segments intersect, the shortest distance between them is 0. Where the intersection happens is determined by forming infinite line equations and then testing whether the intersection lies between the points a and b.

Figure 5.37: Legs are represented as line segments in 2D, end points are not necessarily the closest point on both lines.

In Figure 5.37 different cases are shown where the line segments do not intersect; in some situations a line between end points gives the shortest distance, and in others a point on a line segment provides the shortest distance. This illustrates that eight lengths need to be calculated in the generic case to ensure that the shortest distance is always found. The lengths are denoted L and are drawn as dotted lines in Figure 5.38:

• Lac = |ac| is the length between the points a and c

• Lad = |ad| is the length between the points a and d

• Lbc = |bc| is the length between the points b and c

• Lbd = |bd| is the length between the points b and d

• Lqa = |qa| is the length between the point q on the segment cd and the point a

• Lqb = |qb| is the length between the point q on the segment cd and the point b

• Lpc = |pc| is the length between the point p on the segment ab and the point c

• Lpd = |pd| is the length between the point p on the segment ab and the point d

The lengths between the end points are always calculated. The remaining four lengths only need to be calculated if an end point of the other line segment can be reached by a line perpendicular to the line segment in question. This can be investigated by perpendicular projection of the vector spanning from an end point (e.g. a) to the point investigated (e.g. c). If the lines are scaled so that ab becomes a unit vector, a scalar t can be used to evaluate whether a point lies perpendicularly between the end points of a line segment; see Figure 5.39 for the relation between t and the point c.

t = (ac • ab) / Lab²   (5.41)
Where:
t = 0 is equivalent to placing p on a, and then the length is the same as Lac .

Figure 5.38: The eight lengths span between the end points and the points on the segments which are closest to the other line. The shortest distance in this example is Lpc.

Figure 5.39: If t becomes larger than 1, it indicates that the point cannot form a perpendicular line on the line segment.

t = 1 is equivalent to placing p on b, and then the length is the same as Lbc.
t < 0 locates p on the backward extension of ab, outside the end points.
t > 1 locates p on the forward extension of ab, outside the end points.
0 < t < 1 indicates that p lies between a and b and is the point closest to c.

Only if 0 < t < 1 should the point p be calculated. That is done by:

p = a + t(b − a) (5.42)

As this takes place in 2D, and Lab is already known, the distance between p and c can be calculated through:

s = ((ay − cy) · (bx − ax) − (ax − cx) · (by − ay)) / Lab²   (5.43)
Where:
s < 0 indicates that c is left of ab
s > 0 indicates that c is right of ab
s = 0 indicates that c is exactly on ab

Finally the distance Lpc from c to p is found as |s| · Lab. When the shortest distance is less than the maximum leg width, polygon collision detection must be run to investigate whether the legs collide. If the shortest distance is larger than the maximum leg width, no further calculations are needed to conclude that no collision occurs. The robot's legs can physically collide with the neighboring legs and with the legs next to the neighbors. All legs can thus be checked for collisions by checking each against its right neighbor and the leg to the right of that, which amounts to 12 line segment distance calculations.
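Equations 5.41 to 5.43 combine into the following sketch of the 2D segment-to-segment distance; this is our own Python rendering of the procedure above, and the function names are ours:

import numpy as np

def point_segment_distance(a, b, c):
    # Distance from c to segment ab: project c with the scalar t of
    # Equation 5.41; if 0 < t < 1 use the signed ratio s of Equation 5.43,
    # otherwise the closest point is one of the end points.
    ab = b - a
    L2 = float(np.dot(ab, ab))
    t = np.dot(c - a, ab) / L2
    if t <= 0.0:
        return float(np.linalg.norm(c - a))
    if t >= 1.0:
        return float(np.linalg.norm(c - b))
    s = ((a[1] - c[1]) * (b[0] - a[0]) - (a[0] - c[0]) * (b[1] - a[1])) / L2
    return abs(s) * np.sqrt(L2)          # L_pc = |s| * L_ab

def segments_intersect(a, b, c, d):
    # Proper crossing test: c and d on opposite sides of ab, and vice versa.
    def side(p, q, r):
        return np.sign((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))
    return side(a, b, c) * side(a, b, d) < 0 and side(c, d, a) * side(c, d, b) < 0

def segment_segment_distance(a, b, c, d):
    # Intersecting segments are at distance 0; otherwise take the minimum
    # over the candidate lengths of Figure 5.38 (the end-point cases fall
    # out of the t <= 0 / t >= 1 branches above).
    if segments_intersect(a, b, c, d):
        return 0.0
    return min(point_segment_distance(a, b, c),
               point_segment_distance(a, b, d),
               point_segment_distance(c, d, a),
               point_segment_distance(c, d, b))

A leg pair then only needs the full polygon test when segment_segment_distance is below the maximum leg width.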

5.4 Verifying Collision Detection

To verify that collisions are detected properly, a simulation is run where the polygon model of the robot is placed at two different locations in the terrain. The first location should not cause any collisions, as the robot is standing on the three supporting legs with the other legs lifted from the surface. The second location should cause collisions, as the robot is placed such that the free legs touch the surface; additionally, one of the supporting legs is forced into the terrain to verify that collisions are detected outside the bounding box of the foot.

The robot's polygons are plotted with the terrain, and the detected collisions are marked with yellow triangles to illustrate where the algorithm has detected that the surfaces collide. The collisions are detected at the bottom vertex of the yellow triangles.

The first situation is illustrated in Figure 5.40; no yellow triangles appear, which indicates that no collisions are detected.

The second situation is illustrated in Figure 5.41; here it is visually confirmed that collisions are detected. The locations of the detected collisions imply that the algorithm detects collisions at probable locations when the terrain polygons and the robot polygons intersect.

Figure 5.40: The only legs touching the surface are the supporting legs, which are allowed to collide near the end point, and no collisions are detected.

Figure 5.41: All the legs are touching the surface and collisions are detected; one of the supporting legs is also pushed into the terrain to test whether collisions are detected above the allowed end point area.

5.5 Avoiding Collisions

The purpose of the collision avoidance block is to modify the motion proposed by the APF algorithm and the IK solver in order to avoid collisions. One way of achieving this modification of the paths is to add extra repulsive charges to the potential map used in the APF algorithm; this changes the output from the gait generation block.

One consequence of this method is that charges added in one sample also affect the following samples, which can be an advantage if they are placed correctly. It can be a complex task to predict the outcome of adding multiple charges, and poorly placed charges can give undesired results. However, the concept is in harmony with the APF algorithm, and once the problem is solved the charges will only give rise to new problems if the same area of terrain is traversed again. This is solved by removing charges after some time.

An advantage of adding charges is that the existing map and target are also used to define the new path. This means that there is no need to consider where the target is located when modifying the path, as the existing APF algorithm attempts to move the end point closer to the target. This is expected to aid in selecting a reasonably modified path, and thus this method is chosen for avoiding errors where possible.

5.5.1 Inter-leg Collision Avoidance

When an inter-leg collision is detected, the purpose of the error handler is to ensure that the legs move away from each other again. The data available from the detector is knowledge about which legs are colliding and where in space the collision happened. It is physically possible for the robot to collide three legs at a time; however, as the legs are moved in tripedal gait, and thus usually in the same general direction, this should never happen. Instead the focus is on solving the problem arising when two legs collide. Note that in the case where three legs have collided, solving for two of them at a time will also result in some manipulation of the third leg. The result of this effect is not investigated further.

If a repulsive charge is added to the potential map directly between the legs' end points, it forces the end points to move away from each other in the APF algorithm. This type of collision is shown as the right inter-leg collision in Figure 5.42. Even though this will often solve the problem, it is not the optimal solution if the legs are somehow crossed, or if the collision happens far away from the end points. In the crossed-legs case it could even worsen the problem; in the left inter-leg collision in Figure 5.42 the legs are crossed. The point of collision, in relation to the legs' links, can therefore be used to position a charge even better.

Figure 5.42: The free legs have been moved but hit the supporting legs, causing collisions. Collisions are marked by red lines.

It is assumed that one of the two legs is used for support; if the legs are neighboring legs, this will always be the case in tripedal walk. The only situation where this does not hold is assumed to be solved by the fact that the leg end points are moving towards different targets.

The end point of the supporting leg cannot be moved, so collision avoidance must be done by manipulating the free leg. To do this, a location for the new charge is found from the angles of impact. The angles are representable as a vector, and a vector can be found which describes the direction from the free leg's end point to the collision point. Such a vector can be added to the end point of the movable leg to provide a position for the point charge. The resulting placement in the potential map should move the end point in the opposite direction of the impact direction, which should make the leg avoid a collision for at least the next step.

Describing the kinematic links of a leg as line segments between joints makes it possible to create vectors which describe the shortest distance from the collision point to each of the links on the movable leg. Sometimes one of these vectors will be shorter than the others and perpendicular to one of the links; in such a case this vector alone describes the impact.

In the other cases, the collision has occurred near one of the joints and none of the distance vectors are perpendicular to any of the links. In that case two of the vectors, found between the joint and the collision, should be similar in length, and either of them can be chosen to place the charge.

Figure 5.43: A collision is found, and the shortest distance to each of the links is found. The vector describing the shortest of the distances found, here marked in red, is used for positioning a charge.

To create the distance vectors, a method for describing the distance from a point to a line segment in 3D is needed. Recalling the ray equation:

v = a + (b − a) · t (5.44)

Where:
a and b are 3D coordinates
t is a scalar defining the position along the ray relative to its origin a.

The scalar t of the point on the ray closest to the point investigated is found analogously to Equation 5.41:

t = −((a − c) • (b − a)) / |b − a|²   (5.45)
Where:
c is the coordinate of the point investigated

Inserting t into the ray equation provides the point p where the shortest distance occurs. If p lies between a and b, the vector wanted for placing the charge is p − c; otherwise it is a − c or b − c, depending on which end point is closer to the collision point. The charge coordinates are then given as the end point coordinates plus this vector.
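A sketch of the charge placement follows; the list-of-link-pairs representation and the function names are our assumptions:

import numpy as np

def closest_point_on_segment(a, b, c):
    # Ray equation p(t) = a + (b - a)*t (Equation 5.44) with the projection
    # scalar of Equation 5.45, clamped to the segment end points.
    ab = b - a
    t = np.dot(c - a, ab) / np.dot(ab, ab)
    return a + ab * np.clip(t, 0.0, 1.0)

def charge_position(end_point, links, collision):
    # links: list of (joint_i, joint_j) 3D coordinate pairs along the leg.
    # The shortest of the collision-to-link vectors (p - c) is applied to
    # the leg end point to position the repulsive charge.
    vectors = [closest_point_on_segment(a, b, collision) - collision
               for a, b in links]
    shortest = min(vectors, key=np.linalg.norm)
    return end_point + shortest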

5.5.2 Terrain Collision Avoidance

When a collision with the terrain is detected, a set of error data is generated. The error data holds information
about where the collision happened in space, and which part of the robot was investigated when it was detected.
This data can be used to resolve the problem.

The collision can occur on the robot body or on the robot legs. As movement of the supporting legs can only be caused by body movements, a collision found on a supporting leg indicates that the path of the robot body must be reevaluated. If the collision instead occurs on a leg being moved towards a new target, the path of the leg itself can sometimes be modified to avoid the obstacle, in which case the body position and orientation do not need to be changed. In general, replanning the body's path affects the movements of all the legs, with the risk of creating new collisions. Thus, if the leg itself can be manipulated, this is preferable.

In the situation depicted in Figure 5.44, moving the leg by adding new charges will solve the problem, and the rest of the robot remains unaffected. In such a case, the method described for avoiding inter-leg collisions can be adopted to help a leg avoid the terrain.

Figure 5.44: The moving leg will collide before the end point has reached its target if it follows the original path. In this case, moving the end point away from the collision when collisions happen will aid in solving the problem.

However, in a situation where the leg end point is being moved downwards and is unable to reach its destination due to collisions, moving only the leg is not enough, and it is necessary to move the body as well. If the body needs to be manipulated, a more complex problem arises, as it affects not only the leg with collisions but also the rest of the kinematics. This can lead to kinematic problems or collisions elsewhere.

Manipulating the body's path optimally to avoid collisions depends on the kinematics and the location in the terrain. Calculating the current movement and detecting whether it generates collisions is time consuming, and to avoid exceeding the real-time deadlines, a method which can change the path of the robot within the time frame of the current iteration is needed. Otherwise, the robot must be paused in its movements while iterative attempts using the kinematics and collision detection are run.

An approach for changing the body's path is chosen which does not consider the consequences with respect to kinematics or new collisions. Instead, the robot is paused for the next iteration, and the inverse kinematics solver and the collision detection calculations must then complete in that iteration before the robot is set in motion again.

The basic concept of this solver is to divide the problem into subproblems that, again, are solved without considering their influence on each other. The first subproblem is to avoid collisions on the body itself. The other subproblems are similar in type and consider collisions on a leg.

It is assumed that if a complex problem arises which is not solvable by applying one of the subproblems alone, the iterative nature of the controller will converge towards a solution; otherwise the system is halted until the user inputs a new direction.

Avoiding Collisions on the Body

If the body collides with the terrain, the problem is reduced to considering only how a free-floating body could be manipulated to avoid the collision. One answer could be that the body should always be lifted directly upwards until collisions are avoided.

However, to minimize the risk of introducing new kinematic errors, it is decided that the minimum change upwards is preferred. This means that if the body can avoid a collision simply by tilting, tilting is preferred, as this will affect the legs less than moving the body directly upwards.

If all the collision coordinates are transformed to the robot frame, two distinct types of collisions can occur on the body: either a collision is placed below the robot, or it is placed on the side of the cylindric body.

Collisions on the sides are easily recognized, as they have a z-component larger than zero.

To avoid collisions on the sides of the body, movement in the robot frame xy-plane makes more sense than lifting or rotating the robot. If the x and y components of a collision coordinate are considered a vector describing the offset from the center of the robot, simply inverting the vector indicates the direction that will clear the detected collision. If more than one collision occurs at once, summing the vectors will avoid points on the same part of the body. Opposite collisions will cancel each other out and will not be avoided by this method.

It is chosen to always move the body a single step length of 2 mm in the direction indicated by the vector sum, to avoid moving the body too fast and thereby compromising stability.
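A sketch of this step computation (names and tolerance are ours):

import numpy as np

STEP_MM = 2.0  # fixed body step length chosen above

def side_collision_step(collisions_xy):
    # Sum the inverted (x, y) collision vectors in the robot frame and
    # scale the result to the 2 mm step; opposite collisions cancel out
    # and yield no step, as noted above.
    direction = -np.sum(np.asarray(collisions_xy, dtype=float), axis=0)
    norm = np.linalg.norm(direction)
    if norm < 1e-9:
        return None
    return direction / norm * STEP_MM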

Collisions on the bottom are treated after the body has been moved in the robot's own xy-plane. This has the advantage that if some side collisions are still located within the circumference of the robot, they can be treated as collisions with the bottom and thus be avoided by lifting the robot.

Collisions with the bottom of the body are avoided by tilting and lifting the body. As there is no knowledge of how deeply the robot is penetrated at a collision, it is assumed that movements of 2 mm will avoid the collision, and this value is assigned as the new z-component of such collisions. Remaining side collisions already contain information about the z-component, which is used directly.

Tilting the robot about a point on the rim can be done in many ways, and the resulting manipulation is to be translated into pitch, roll, and a new z-component for the robot's body. Figure 5.45 shows in 3D how the final orientation is built from iterative changes.

Figure 5.45: Final angles and lifts are found by avoiding one point at a time.

The following iterative method is used for tilting the body. An iteration starts by defining the current plane equation as a plane normal vector and a scalar. In Figure 5.46 a 2D example is illustrated where two points are avoided. The plane equations are represented as lines, and the resulting angles (a and b) are to be translated into roll, pitch, and a lift of the center.

From the plane equation, the perpendicular distance to the collision point is found as L. In the situation in Figure 5.46 this is a vertical line for the first iteration, but not for the second. The calculation is as shown in Equation 5.25 in Section 5.3.

Figure 5.46: Final angles and lifts are found by avoiding one point at a time.

The distance from the collision point to the rotation point can be found by creating a vector between the points and determining its length. This distance lies in the desired future plane of the cylinder. The distance found is dubbed D, and it can be combined with L to determine the angle of rotation.

a = asin(L/D) (5.46)

This angle a denotes the upward rotation perpendicular to the direction vector from the point of rotation to the collision point. Before the next iteration, a must be converted to pitch and roll.

The x and y components of the direction vector reveal the relationship between pitch and roll which will form the angle correctly in 3D. If the x and y components are represented by a unit vector, the two components can be multiplied with the angle to provide the delta pitch and delta roll; the deltas are added to the existing pitch and roll, and a new set of angles is found.

When the vector used for determining pitch and roll is aligned with one of the robot frame axes, the 2D situation depicted in Figure 5.46 is produced. It can be seen from the illustration that just rotating the plane about the center will not avoid the collisions, so an appropriate z-value must be found. This is done by forming the new plane equation, given by the new pitch and roll angles and the center point, which may have been lifted by previous iterations. The new plane equation is then lifted so that it is above the collision point, and the step is completed.

If the next collision point is investigated in the next step, based on this new plane equation, the angle b in the figure is found, and the resulting placement can be determined.
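One iteration of the tilt solver can be sketched as below. The function name is ours, and which xy-component maps to pitch and which to roll depends on the frame conventions; the split shown is illustrative only.

import numpy as np

def tilt_iteration(n, d, rotation_point, collision, pitch, roll):
    # n, d describe the current body plane (n . p + d = 0).
    L = abs(np.dot(n, collision) + d) / np.linalg.norm(n)  # distance to the plane
    D = np.linalg.norm(collision - rotation_point)         # distance in the future plane
    a = np.arcsin(L / D)                                   # Equation 5.46

    u = (collision - rotation_point)[:2]                   # xy direction to the collision
    u = u / np.linalg.norm(u)
    delta_roll, delta_pitch = u[0] * a, u[1] * a           # split onto the two body axes
    return pitch + delta_pitch, roll + delta_roll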

Avoiding Collisions on a Leg

Collisions on a supporting leg require movement of the body; this is often also the case for a free leg. For the movable leg, adding charges in the APF algorithm can sometimes solve the problem, while for a supporting leg, adding charges has no effect.

Thus charges are always added near the end point, in the same manner as for inter-leg collisions. The shortest distance from the collision point to a leg line segment is expressed as a vector (recall Equation 5.45). If a similar vector is applied to the robot body, the body will avoid that particular collision (assuming no rotations or other movements of the body have been applied). If the inverse kinematics allow for this change, it will eventually solve the collision-related problems.
Chapter 6
Epilogue

6.1 Conclusion

The goal of this project was to create a model and a controlling system for a hexapod robot. As the long-term goal was navigation in debris or rubble, it was chosen to create an artificial map for the robot to navigate within.

In Chapter 4 a kinematic and an inverse kinematic model of the robot were derived. The models were verified and considered sufficiently precise. They enabled us to monitor the position of the CoM online, which made it possible to sustain static stability, and to calculate and control the position of the robot and its legs in a global coordinate system.

Considerations regarding dynamic stability were made offline under worst-case assumptions and showed that, to ensure dynamic stability, the PoS needed to be narrowed. This was based on simulations which showed that the ZMP could move up to 8.8 mm.

Through Chapter 5 a system to control the robot was built. The system is able to generate a gait that works on flat ground and, in theory, also on the uneven terrain, but because of limitations in the kinematic system a successful gait on uneven terrain was not achieved. The triangle search algorithm can successfully find support triangles in the uneven terrain, and the APF algorithm can navigate the leg end points to the desired support points without colliding with the terrain. A simple control scheme was designed to control the robot body position and orientation according to the leg end point positions.

A system was designed to detect and avoid collisions should they happen, and through simulations this was shown to perform satisfactorily. Because a gait was not achieved on the uneven terrain, a proper real-world test was not performed.

The overall conclusion of the project is that a control system that can successfully control the robot was de-
signed, and the system generated a satisfactory gait on flat ground. The gait generating part of the robot can
successfully generate a gait to traverse the terrain. A system to detect and avoid collisions was designed and
implemented, but not fully tested.

6.2 Discussion

The main reason the overall goal of the project was not reached is the kinematic limitations of the robot. The robot probably does have the kinematic flexibility to achieve an acceptable gait on the uneven terrain, but either the gait generating algorithm or the kinematic singularity solver needs to be enhanced to achieve a successful gait on uneven terrain.

The APF method used for navigating the leg end points through the terrain worked satisfactorily. However, by reconsidering the assignment of repulsive charges to the terrain and the repulsive potential function, the number of repulsive charges could probably be reduced. Reducing the number of repulsive charges would also reduce the computational load, which allows a faster gait. The method of assigning extra repulsive charges for escaping local minima in the potential function is a solution that will always solve the problem, if a solution is at all possible. However, the time to escape a local minimum might be reduced by the use of other methods, such as temporary assignment of attractive charges to pull the leg end point away from the local minimum.

For final implementation of the collision detection it should be considered to represent the robot and the terrain
as connected spheres. Such a representation will reduce the calculation time, but also reduce the accuracy of
the collision detection.

If the terrain, the robot surface model, and the end point destinations, were represented in the Configuration
Space (CS) (see Appendix G) the APF could probably be applied in the CS to create a gait which would never
produce IK singularities or collisions. The result being that the error handling could be completely removed
from the system. However a representation of the CS, which is feasible to implement, was not developed in
this project.

The overall system structure is considered a valid solution for the problem. As the system is built by somewhat
generic subsystems, one block can be replaced without having to change the entire system. This feature allows
the use of alternate methods, when handling the motion control or the kinematic singularities.
Bibliography

[1] United Nations. World urbanization prospects - the 2007 revision. CD: literature/urbanization.pdf.

[2] Robin Roberson Murphy. Human-robot interaction in rescue robotics. CD: literature/humanrobotinteraction.pdf.

[3] Illah R. Nourbakhsh, Katia Sycara, Mary Koes, Mark Yong, Michael Lewis, and Steve Burion. Human-robot teaming for search and rescue. CD: literature/humanrobotteam.pdf.

[4] Shigeo Hirose. Three basic types of locomotion in mobile robots. CD: literature/hirose.pdf.

[5] Manuel F. Silva and J.A. Tenreiro Machado. Journal of Vibration and Control. CD: literature/vibrationcontrol.pdf.

[6] Lynx Motion. http://www.lynxmotion.com/.

[7] LungWen Tsai. Robot Analysis: The Mechanics of Serial and Parallel Manipulators. John Wiley & Sons,
1999.

[8] Hitec. HS645mg datasheet. CD: literature/hs645mg.pdf.

[9] George A. Bekey. Autonomous Robots: From Biological Inspiration to Implementation and Control. MIT
Press, 2005.

[10] Miomir Vukobratović and Branislav Borovac. Zero-moment point — thirty five years of its life. Interna-
tional Journal of Humanoid Robotics, 1:157–173, 2004. CD: literature/vukobratovic35.pdf.

[11] M. Vukobratović, B. Borovac, and V. Potkonjak. Towards a unified understanding of basic notions and
terms in humanoid robotics. Robotica, 25(1):87–101, 2007. CD: literature/notionsandterms.pdf.

[12] Reza N. Jazar. Theory of Applied Robotics, Kinematics, Dynamics and Control. Springer, 2007.

[13] Yariv Bachar. Developing controllers for biped humanoid locomotion. Master’s thesis, School of Infor-
matics, University of Edinburgh, 2004. CD: literature/bachar.pdf.

[14] K. Erbatur, A. Okazaki, K. Obiya, T. Takahashi, and A. Kawamura. A study on the zero moment point
measurement for biped walking robots. pages 431–436, 2002. CD: literature/erbatur.pdf.

[15] H. Choset, K. M. Lynch, S. Hutchinson, G. Kantor, W. Burgard, L. E. Kavraki, and S. Thrun. Principles of Robot Motion. Massachusetts Institute of Technology, 2005.

[16] Christer Ericson. Real Time Collision Detection. Morgan Kaufmann, 2005.
Chapter 7
Acronyms

APF Artificial Potential Function

CoM Center of Mass

CS Configuration Space

DoF Degrees of Freedom

IK Inverse Kinematics

MTL Motion Tracking Lab

PoS Polygon of Stability

PWM Pulse Width Modulated

SAR Search and Rescue

SSM Static Stability Margin

SW SolidWorks

ZMP Zero Moment Point


Appendices
Appendix A
Measurement and Simulation of Worst-Case
ZMP Movement

To determine how much the ZMP moves, an approximation of the maximum acceleration has been made. The test was done by placing the robot in the MTL and recording the position of the robot body. The robot moved the body 50 mm forward, waited for 5 s, and then moved the body back again. These two movements were done with the maximum speed allowed by the servos, which is considered to produce the highest acceleration possible. As shown in Figure A.1, the maximum measured acceleration is 1.375 m/s².


Figure A.1: The measured position of the robot body and the calculated velocity and acceleration. The maximum acceleration is found to be 1.375 m/s².

With the worst-case acceleration found, the ZMP equations from Section 4.5, repeated here as Equations A.1 and A.2, can be used to find the maximum movement of the ZMP.


x_ZMP = ( Σ_{i=1..n} m_i (x_i(z̈_i + g_z) − ẍ_i z_i) ) / ( Σ_{i=1..n} m_i (z̈_i + g_z) )   (A.1)

y_ZMP = ( Σ_{i=1..n} m_i (y_i(z̈_i + g_z) − ÿ_i z_i) ) / ( Σ_{i=1..n} m_i (z̈_i + g_z) )   (A.2)

As this is an approximation, it is considered sufficient to set ẍ = 1.375 m/s² and ÿ = z̈ = 0. The position of the CoM of each link is found by applying the forward kinematics from Section 4.4.
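With these simplifications, Equation A.1 can be evaluated as in the sketch below; the two links and their values are placeholders for illustration, not the report's measured data.

import numpy as np

G_Z = 9.82   # gravitational acceleration [m/s^2]

def zmp_x(m, x, z, xdd=1.375, zdd=0.0):
    # Equation A.1 with ydd = zdd = 0:
    # x_ZMP = sum(m_i*(x_i*(zdd + g_z) - xdd*z_i)) / sum(m_i*(zdd + g_z))
    m, x, z = map(np.asarray, (m, x, z))
    return np.sum(m * (x * (zdd + G_Z) - xdd * z)) / np.sum(m * (zdd + G_Z))

# Illustrative dummy links only (masses in kg, coordinates in m):
print(zmp_x(m=[1.0, 0.5], x=[0.0, 0.02], z=[0.06, 0.05]))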

A.0.1 Results

The maximum movement of the ZMP is found to be 8.8 mm. This is a rough approximation but is considered sufficient. The result is used to decrease the size of the actual PoS to ensure that the robot will not overturn due to dynamic instability.
Appendix B
Finding the Center of Mass of Each Link

As it is necessary to know the CoM of each link to be able to ensure static and dynamic stability, it has been determined. The CoM has been simulated in SolidWorks (SW). SW is a program used by mechanical engineers to create models and perform simulations. It is able to simulate the CoM of each link given the materials of each part of the body/leg. The SW models are found on Lynxmotion's homepage1.

Measuring weight

To ensure that the weights used in SW are correct, some selected parts have been measured.

Name Weight [g]


Frame with 5 legs connected 1906.2
Entire leg with servos 292.4
Servo (with short wire) 57.8
Servo (with long wire) 60.2
Tibia link (no servo) 44.7
Femur link (no servo) 23.4
Coxa link (no servo) 32.6

Table B.1: Measured values of selected parts of the robot

Simulation of CoM

The simulated values of the CoM are shown in table B.2.

1 http://www.lynxmotion.com/ViewPage.aspx?ContentCode=sesmodel&CategoryID=115

Part name   Measured weight [g]   SW weight [g]   SW CoM pᵢ = [x y z]ᵀ   Note

Body    444.2   292.71   [0 0 26.4]ᵀ          The change in mass is due to wiring and no PCB in SW
Coxa    148.2   118.5    [−14.5 −24.5 2.5]ᵀ   Change in mass is due to wiring and screws
Femur   23.4    N/A      [−27 0 0]ᵀ           Not simulated in SW due to simple construction
Tibia   102.5   64.60    [−130.2 −1 −1.8]ᵀ    Mass difference caused by incomplete material list in SW

Table B.2: Simulated CoM positions

Evaluation of results

As shown in Table B.2, the measured weight and the weight found in SW differ. This is considered to be caused by the materials used in SW and by the fact that no wires or screws were included in SW. The simulated results are nevertheless considered sufficiently good, as the differences in weight are relatively small.
Appendix C
Test of Inverse Kinematics

This test will verify the kinematic and inverse kinematic models. The test is conducted by locking the positions of the leg end points and then moving the body. To record the movements, the test is performed in the MTL.

Equipment

Equipment Make and Model AAU nr.


Power supply Danica EA-PS 7032 77077
Power supply MWS27 None
Hexapod LynxMotion AH3-R Walking Robot None
MTLAB Vicon UltraMX 75468
Vicon interface PC Fujitsu Siemens 75459

Measurement arrangement

As stated in Appendix E, the robot is fitted with 4 markers so the body movement can be measured. As a result, the tests are done with the end point positions of the legs locked. The end point positions are chosen so the robot is in its starting position, meaning that the coxa joint angles are 0◦, the femur joints are at 90◦, and the tibia joints are at 0◦, giving the following end point positions.

Leg Left,front Left,middle Left,back Right,back Right,middle Right,front


x 1230 1113 1231 1470 1586 1466
y 455 250 45 45 250 455
z 5 5 5 5 5 5

Table C.1: End point positions for body offset, pitch, yaw and roll test

The robot is placed at the center of the MTL, so the x and y elements of the body position are 0.

Test specification

A Simulink model is constructed to retrieve and save data from MTL.


Figure C.1: Simulink model for retrieving data from the inverse kinematics test.

The Simulink model in Figure C.1 retrieves the body coordinates from the MTL and stores them in a file. Each file holds a matrix containing time, x, y, z, roll, pitch and yaw.

A Simulink block is created to perform the inverse kinematics of the robot.


Figure C.2: Simulink model for the inverse kinematics test.

Figure C.2 shows the Simulink model used to control the inverse kinematics test of the robot system. As shown in the figure, there are 6 constants; by altering these constants one at a time, the inverse kinematics block is tested. The control model must be compiled with the Linux Realtime Target1 to run sufficiently fast. For all tests the stop time is set to 30 s and the time step is 0.2 s. A constant is altered by disconnecting it and connecting the ramp and saturation blocks. In Figure C.2 the model is configured to test movement along the x axis. Table C.2 shows ramp and saturation values for the 6 tests.

1 Created by Dan Bhanderi, http://www.mathworks.com/matlabcentral/fileexchange/5939



Test subject   Minimum value (ramp starting point)   Maximum value (saturation point)   Step size    Total movement

x       1350   1400   3 mm/step   50 mm
y       250    300    3 mm/step   50 mm
z       80     130    3 mm/step   50 mm
Roll    0      10     1 ◦/step    10 ◦
Pitch   0      10     1 ◦/step    10 ◦
Yaw     0      10     1 ◦/step    10 ◦

Table C.2: Test values for inverse kinematics test

Test procedure

All tests are similar. Only the procedure for testing the x-axis movement will be described. All nesessary files
are located in the folder code/matlab/simulink/invtest on the CD. The stepsizem, starting values
and stop values are listed in Table C.2.

• Place the robot in MTL

• Connect robot to PC and power supply

• Power up the Vicon system and interface PC

• Open kintestx.mdl. The model is illustrated in Figure C.2

• Compile kintestx.mdl into an executable, kintestx, by pressing "ctrl" + "b" in Simulink

• Open the model Vicon_socketx

• Start Vicon_socketx

• Run the executable with the command sudo ./kintestx

• Stop Vicon_socketx, when kintestx ends

Results

The data from the tests are stored in files and extracted using the Matlab script plotkintest.m. Notice that during the first three seconds of each test, the robot moves to the starting position.

Results for x-axis movement



Figure C.3: Graph showing the recorded movement in the x direction during test


(a) Movement (b) Rotation

Figure C.4: Graphs that show all the recorded movement during test

Results for y-axis movement


Figure C.5: Graph showing the recorded movement in the y direction during test


(a) Movement (b) Rotation

Figure C.6: Graphs that show all the recorded movement during test

Results for z-axis movement



Figure C.7: Graph showing the recorded movement in the z direction during test


(a) Movement (b) Rotation

Figure C.8: Graphs that show all the recorded movement during test

Results for Roll



Figure C.9: Graph showing the recorded roll during test


(a) Movement (b) Rotation

Figure C.10: Graphs that show all the recorded movement during test

Results for pitch


Figure C.11: Graph showing the recorded pitch during test


(a) Movement (b) Rotation

Figure C.12: Graphs that show all the recorded movement during test

Results for yaw


Figure C.13: Graph showing the recorded yaw during test


(a) Movement (b) Rotation

Figure C.14: Graphs that show all the recorded movement during test

Results

None of the tests showed a significant deviation from the expected results. Table C.3 summarizes the results. The largest deviation is measured when testing movement along the x axis.

Subject Initial value Stop value Total movement Expected movement Difference
x 1.26 -52.51 53.77 mm 50 mm 3.77 mm
y 1.66 52.34 50.68 mm 50 mm 0.68 mm
z 71.93 122.9 50.937 mm 50 mm 0.937 mm
Roll -1.65 9.28 10.93 ◦ 10 ◦ 0.93 ◦
Pitch -0.76 -11.21 10.45 ◦ 10 ◦ 0.45 ◦
Yaw -4.47 5.33 9.8 ◦ 10 ◦ 0.2 ◦

Table C.3: Results for the inverse kinematics test

This deviation is due to the way the robot is constructed. The joints are assembled using bolts and nuts, and the servos have a high clearance in gears and motor. Normal usage of the robot causes the nuts and bolts to loosen, and some of these are very hard to tighten, causing a lot of clearance in the joints. However, as the maximum measured deviation from the expected result is 3.77 mm and 0.93 ◦, and no significant undesirable movement or change in angles is detected, the deviation is considered insignificant.
Appendix D
User interface

It is decided that the robot is piloted by an operator. To do this, the robot system must be able to detect inputs from the user. For this purpose a Simulink block is created, which creates a UDP socket. The user interface is a Python script which connects to this socket. The robot system takes a direction vector as input.
 
dir = [x y z]ᵀ   (D.1)

The user has no influence on the movement along the z-axis, so the z-element is always zero, and thus the only necessary data communication between the sockets consists of the x- and y-components.

Figure D.1 shows a block diagram of the user interface.


Figure D.1: Block diagram for the user interface

The client consists of two threads: one that handles user input, and one that continuously sends the data string through the socket. Figure D.2 shows a flowchart of the client:

[Figure D.2 flowchart: after initialization, thread 1 handles the user input: keyboard events are evaluated, and on a key press the corresponding element of Data_string is set to the direction, while on key release it is set to 0. Thread 2 handles the socket communication and continuously sends Data_string.]

Figure D.2: Flowchart for user interface client

When running, the events from the keyboard are evaluated. When a valid key is pressed, the direction vector is given a value that corresponds to that key; when no keys are pressed, the direction vector is zero. The direction vector is stored in a 2-byte string. Table D.1 shows which keys correspond to changes in the direction vector.

Byte 1 2
Value x y
Positive Right Up
Negative Left Down

Table D.1: Data structure for user interface socket. (Data_string in figure D.2)
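A minimal sketch of the sending side of the client is shown below. The port matches the UDP socket on port 12000 mentioned in Appendix F; the host address and the exact byte packing are our assumptions.

import socket

SERVER = ("127.0.0.1", 12000)   # robot system socket (port from Appendix F)

def send_direction(sock, x, y):
    # Pack the signed x- and y-components into the 2-byte Data_string
    # of Table D.1 (one byte per component).
    sock.sendto(bytes([x & 0xFF, y & 0xFF]), SERVER)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_direction(sock, 1, 0)   # e.g. "right" pressed
send_direction(sock, 0, 0)   # key released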

Figure D.3 shows a flowchart of the Simulink part of the user input. As computation time is an issue for the controller, polling is implemented. When a socket is initialized, it is assigned a file descriptor, and by checking this for changes, the block only tries to receive data from the socket when data are present. If the block tried to receive from the socket every sample, there would be a timeout whenever no data were available, adding an unnecessary delay to the computation time.

[Figure D.3 flowchart: initialize socket, then check for data on the socket each sample. If no data are present, the (x, y, z) output is left unchanged; if data are present, they are received and decrypted, setting x = Data_string[1], y = Data_string[2] and z = 0, and the output is updated.]
 

Figure D.3: Flowchart for user interface Simulink block


Appendix E
Motion Tracking Lab

Introduction

As the robot has no sensors or feedback from the servos, another source of feedback is necessary. To track movements, the robot is fitted with four reflectors. These reflectors can be tracked using the MTL. The MTL is fitted with 8 cameras, and with the bundled software the position and orientation of an object can be read from a socket. Figure E.1 shows a sketch of the setup in the MTL.

Overview


Figure E.1: Overview of the MTLAB setup




Figure E.2: Position of the markers on the robot

Preparing the robot for MTL

To measure the position of the robot in the MTL, 4 markers have been added to the top of the robot, as shown in Figure E.2.

It has been chosen to measure only the body movements, since measuring leg movement would require markers on each leg. Having markers on the side of the robot is not optimal, as the MTL cameras need clear sight of the markers to track them; Figure E.2 shows how the robot could hide such markers from the cameras.

Interfacing with the robot system

The robot systems needs the position of the robot in global frame coordinates, see section*sec:globalframe,
hence two objects are created in the MTL software, robot and landscape. To read the position of these objects,
a custom Simulink block is available. The Simulink block can be added to the robot system and will output the


Figure E.3: Simulink block for the Vicon system

Using this block gives two issues which must be handled. The first issue is to calculate the position of the robot with respect to the landscape.

To get the robot position in global frame coordinates, the landscape coordinates are subtracted from the robot coordinates, which gives the resulting robot position:


   
Robotpos_res = [X_Robot − X_Landscape, Y_Robot − Y_Landscape, Z_Robot − Z_Landscape]ᵀ = [X_res, Y_res, Z_res]ᵀ


The Simulink block gives the position in meters and the Euler angles in radians, while the robot system requires millimeters and degrees; hence a gain is applied. Finally, an offset for the landscape coordinates is required. The landscape is constructed from 4 blocks, each holding 10x10 pieces of wood of different heights1. Around the landscape a flat area has been added, which has the width of 7 average wooden blocks of 77 mm, as shown in Section 4.2. The MTL gives the center of the landscape as (0, 0), but the robot system requires (0, 0) to be at the corner of the total landscape. Hence the difference between the measured (0, 0) and the required (0, 0), which is 17 · 77 mm, must be added to the resulting robot position. This gives the final robot position.
   
[X_global, Y_global, Z_global]ᵀ = [Y_res + 17 · 77, X_res + 17 · 77, Z_res]ᵀ   (E.1)
An illustration of the solution for calculating the needed position and orientation is shown in Figure E.4.


Figure E.4: Simulink diagram for the coordinate handling

As shown in Figure E.4, the x- and y-components are switched due to the way the robot system is implemented.
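The whole coordinate handling amounts to a few lines; a sketch, where the function name is ours:

import numpy as np

OFFSET_MM = 17 * 77   # corner offset: 17 wooden blocks of 77 mm

def robot_global_position(robot_m, landscape_m):
    # Subtract the landscape pose, convert m -> mm, then switch x and y
    # and add the corner offset, cf. Equation E.1 and Figure E.4.
    x_res, y_res, z_res = (np.asarray(robot_m) - np.asarray(landscape_m)) * 1000.0
    return np.array([y_res + OFFSET_MM, x_res + OFFSET_MM, z_res])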

The next issue is getting the data to the robot system. The robot system is created in Simulink, and to perform the calculations fast enough, the Simulink model is compiled using a realtime target2. The Vicon Simulink block shown in Figure E.3, however, cannot be compiled with this realtime target. As a result, the MTL output cannot be fed directly into the robot system; to handle this, a socket is implemented. The socket in the robot system requests an update of the position and angles by sending a request string, and the response contains the angles and position of the robot. This method is chosen to eliminate waiting on the socket. Figure E.5 shows a flowchart of the socket implementation. The implementation in Figure E.5 creates the socket, gives the robot system access to the MTL data, and can still be compiled using the realtime target. The connection in the other end tries to connect to this socket and, when connected, sends the current MTL data to the robot system whenever the request string is received.

A block diagram for the complete MTL setup is shown in Figure E.6
1 The wooden pieces are square and approximately 77 mm in width and length.
2 Created by Dan Bhanderi, http://www.mathworks.com/matlabcentral/fileexchange/5939

[Figure E.5 flowchart: initialize socket and wait for a connection; then loop: send request, receive data, decrypt it into position (x, y, z) and angles (roll, pitch, yaw), and update the constant output.]

Figure E.5: Flowchart for robot system socket communication


Figure E.6: Block diagram for the complete MTL setup


Appendix F
Robot limitations

The robot body can be moved without moving the legs; by altering the yaw, pitch and roll angles, the body can be tilted and turned. A test was conducted to get an idea of the configurations that are possible with the robot. For this test, the end points of the legs were locked, and the noted limits are reached when these end points can no longer be maintained. The measured limitations are not verified with the MTL; it is assumed that the kinematic models are sufficiently correct1.

Measurement arrangement and theory

The robot is connected to the PC via the RS232 link, and the power supply is set to 6 V. With the PC it is possible to alter the following six settings.

• x-position offset

• y-position offset

• z-position offset

• Roll

• Pitch

• Yaw

The communication between the Simulink model and the Python script is done by creating a UDP socket on port 12000. The Python code and Simulink model are explained in Appendix E. Table F.1 shows how to move the robot.

1 The kinematic models are verified in Appendix C



Key Action Body action


w Increase y-offset Forward
s Decrease y-offset Backward
a Increase x-offset Left
d Decrease x-offset Right
e Increase z-offset Up
q Decrease z-offset Down
u Increase roll
i Decrease roll
j Increase pitch
k Decrease pitch
n Increase yaw
m Decrease yaw

Table F.1: Assigned keys for testing robot limitations

Used equipment

Equipment Make and Model AAU nr.


Power supply Danica EA-PS 7032 77077
Power supply MWS27 None
Hexapod LynxMotion AH3-R Walking Robot None

Procedure

The procedure is generic for all the possible actions, so it is only described for moving the body forwards. All files needed for this test are located in code/matlab/simulink/limitations on the CD.

• Open robot.mdl and build the executable robot by pressing ctrl+b.

• Run ./robot in terminal

• Run python hexapod.py

• Press w until the body cannot move further without moving the legs

• Note the offset value output by robot

• Repeat for other actions

Results

Value Minimum Maximum


x-offset -86 86
y-offset -82 82
z-offset -44 38
Roll -23 23
Pitch -15 15
Yaw -45 45

Table F.2: Robot limitations

Discussion of results

Despite the high clearance in the servos, the measured limitation seems valid. The measured limitation shows
symmetry for all limits expect movement along the z-axis.
Appendix G
Configuration space

A configuration for a robot is a specification of all the points in the system [15]. Looking at the two-joint robot in Figure G.1(a), the configuration is the angles θ1 and θ2. The workspace for the robot is the area that the gripper can reach; each point of the workspace is a consequence of different angles in the two joints. To describe the space that is required for the robot to move freely, the CS is introduced: the sum of all the possible configurations gives the CS.


(a) Two joint robot (b) Configuration space for a two-joint robot

Figure G.1: Illustration of configuration space

To illustrate the CS, a two-joint robot is considered. It is assumed that both joints in Figure G.1(a) have no limitations. Each configuration, q, of the robot is denoted q = (θ1, θ2). Naming the robot in Figure G.1(a) R, the full space occupied by the robot is R(q). Looking at each joint individually, the CS is a circle; hence each angle θi of joint 1 or joint 2 is a point on the unit circle S¹. By combining the two CSs, the complete CS for the robot is given as S¹ × S¹ = T². Figure G.1(b) illustrates T². Having a CS provides a lookup table for all possible configurations of a given robot, which is useful for testing whether a movement is possible, as no calculation time is required.
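
As a sketch of such a lookup table, the CS of the two-joint robot can be discretized and queried in constant time. The 1 degree grid and the all-free table below are assumptions for illustration; in practice the table would be filled by a feasibility or collision check.

import numpy as np

# Illustrative sketch: a discretized lookup table over the torus
# T^2 = S^1 x S^1 for the two-joint robot, assuming a 1 degree grid.
STEP = 1
N = 360 // STEP
free = np.ones((N, N), dtype=bool)  # free[i, j]: is q = (i*STEP, j*STEP) possible?

def is_free(theta1, theta2):
    """Constant-time lookup instead of an online feasibility calculation."""
    return bool(free[int(theta1 // STEP) % N, int(theta2 // STEP) % N])

print(is_free(45, 270))  # True with the placeholder table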

Configuration space for hexapod robot

The following section describes the CS for the hexapod robot by determining the CSs and obstacles for the individual joints of the robot.

Each leg has 3 joints, which in section 4.3.3 on page 15 are denoted Coxa, Femur and Tibia. When setting up a CS for one leg, the limitations for each joint must be set up.

$$Q_{Coxa} = -90 \le \theta_c \le 90$$
$$Q_{Femur} = -45 \le \theta_f \le 90$$
$$Q_{Tibia} = 0 \le \theta_t \le 135$$

These angles specify the possible movement of the legs, and combined with the transformation matrices found in Section 4.4.2 on page 17, the space occupied by all possible positions of a leg can be found. In sections 4.4.2 and 4.4.3 the transformation from the robot frame to the leg position was found. From these matrices the CS of each leg can be found with respect to the robot frame. To determine the CS points for one leg, all end points of the joints must be known. Section 4.4.2 states the matrices for each joint individually¹. Generating the end points for each joint requires three matrices.

$$T_0^3 = T_0^1 T_1^2 T_2^3$$
$$T_0^2 = T_0^1 T_1^2$$
$$T_0^1$$

Where:

$$T_2^3 = \begin{bmatrix} \cos\theta_i & -\sin\theta_i & 0 & 143.5\cos\theta_i \\ \sin\theta_i & \cos\theta_i & 0 & 143.5\sin\theta_i \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$$T_1^2 = \begin{bmatrix} \cos\theta_i & \sin\theta_i & 0 & 56.5\cos\theta_i \\ \sin\theta_i & -\cos\theta_i & 0 & 56.5\sin\theta_i \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$$T_0^1 = \begin{bmatrix} \cos\theta_i & 0 & \sin\theta_i & 38.5\cos\theta_i \\ \sin\theta_i & 0 & -\cos\theta_i & 38.5\sin\theta_i \\ 0 & 1 & 0 & 45 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Combining these matrices with the limitations for each leg gives the CS. The algorithm in Figure G.2 is used to calculate the CS for one leg.

The stored points will be the CS for one leg with no obstacles blocking the movement of the leg. Figures G.3(a), G.3(b) and G.3(c) illustrate the CS for one leg.

1 Equations 4.2-4.5 on page 18



for −90 ≤ θ_c ≤ 90 do
    coxapoint = T_0^1(θ_i = θ_c) · [0 0 0 1]^T
    store coxapoint
    for −45 ≤ θ_f ≤ 90 do
        femurpoint = T_0^1(θ_i = θ_c) · T_1^2(θ_i = θ_f) · [0 0 0 1]^T
        store femurpoint
        for 0 ≤ θ_t ≤ 135 do
            tibiapoint = T_0^1(θ_i = θ_c) · T_1^2(θ_i = θ_f) · T_2^3(θ_i = θ_t) · [0 0 0 1]^T
            store tibiapoint
        end for
    end for
end for

Figure G.2: Algorithm to calculate the CS for one leg
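
A direct transcription of the sweep in Figure G.2 could look as follows, with the matrices taken from above (lengths in mm). The 10 degree step size and the flat point list are assumptions made for illustration.

import numpy as np

# Sketch of the Figure G.2 sweep: forward kinematics of one leg using the
# transformation matrices T_0^1, T_1^2 and T_2^3 from this appendix.

def T01(th):
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, 0, s, 38.5 * c],
                     [s, 0, -c, 38.5 * s],
                     [0, 1, 0, 45.0],
                     [0, 0, 0, 1.0]])

def T12(th):
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, s, 0, 56.5 * c],
                     [s, -c, 0, 56.5 * s],
                     [0, 0, -1, 0.0],
                     [0, 0, 0, 1.0]])

def T23(th):
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, 0, 143.5 * c],
                     [s, c, 0, 143.5 * s],
                     [0, 0, 1, 0.0],
                     [0, 0, 0, 1.0]])

origin = np.array([0.0, 0.0, 0.0, 1.0])
points = []  # stored (x, y, z, 1) end points for coxa, femur and tibia

for tc in np.deg2rad(np.arange(-90, 91, 10)):
    A = T01(tc)
    points.append(A @ origin)                    # coxa end point
    for tf in np.deg2rad(np.arange(-45, 91, 10)):
        B = A @ T12(tf)
        points.append(B @ origin)                # femur end point
        for tt in np.deg2rad(np.arange(0, 136, 10)):
            points.append(B @ T23(tt) @ origin)  # tibia end point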

To use the CS as a part of controlling the robot, it is necessary to define a CS with obstacles, which here are the surrounding legs. A CS can only be defined for stationary obstacles, so the surrounding legs must be considered stationary. This poses a problem, as the legs are not stationary. To generate the CS it is instead possible to consider every possible combination of angles for one leg and its two surrounding legs. Generating this CS requires an algorithm similar to the one in Figure G.2, but each configuration must be checked for collisions with the surrounding legs. For simplicity only one of the surrounding legs will be considered. The purpose of a CS is to know whether a configuration of a leg is possible. This is done by checking a given configuration of a leg and its neighbouring leg for collisions, and then storing whether the configuration is possible or not. The algorithm in Figure G.4 can be used to determine the CS when the neighbouring leg is used as an obstacle.

By running the algorithm in Figure G.4 an amount of data is produced which depends on the angle step size. The amount of data is calculated by considering the number of points for each joint at a given step size and assuming that 1 byte is stored for each configuration. For the Coxa joint the number of points will be (90 − (−90))/10 = 18, and similarly for the Femur and Tibia joints. For the algorithm in Figure G.4 with a step size of 10° the amount of data is:

$$\left(\frac{90-(-90)}{10} \cdot \frac{90-(-45)}{10} \cdot \frac{135-0}{10}\right)^2 = 10.8 \cdot 10^6 \text{ byte} = 10.3 \text{ Mbyte} \quad \text{(G.1)}$$

The required step size for the hexapod is 1°, and with this step size the amount of data will be:

$$\left(\frac{90-(-90)}{1} \cdot \frac{90-(-45)}{1} \cdot \frac{135-0}{1}\right)^2 = 10.8 \cdot 10^{13} \text{ byte} = 10.3 \cdot 10^7 \text{ Mbyte} = 9.8 \text{ Tbyte} \quad \text{(G.2)}$$
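
The estimates in (G.1) and (G.2) can be reproduced with a few lines of Python; the helper name is arbitrary.

# Reproduces the storage estimates (G.1) and (G.2): one byte is assumed per
# stored (leg, neighbouring leg) configuration pair.
def cs_bytes(step_deg):
    points = (180 / step_deg) * (135 / step_deg) * (135 / step_deg)
    return points ** 2

print(cs_bytes(10) / 2**20)  # ~10.3 (Mbyte) for a 10 degree grid
print(cs_bytes(1) / 2**40)   # ~9.8 (Tbyte) for a 1 degree grid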

[Figure: three 3D plots of the stored leg end points; axes X, Y and Z]

(a) Plot of position with coxa and femur locked
(b) Plot of position with coxa locked
(c) Plot of position with all joints moving freely

Figure G.3: CS for one leg. Red: Coxa, Green: Femur, Blue: Tibia

1: for −90 ≤ θ_c^s ≤ 90 do
2:    for −45 ≤ θ_f^s ≤ 90 do
3:       for 0 ≤ θ_t^s ≤ 135 do
4:          (The first three loops sweep the angles of the surrounding leg.)
5:          (While the inner loops move the leg, the surrounding leg is locked and can be considered stationary.)
6:          for −90 ≤ θ_c ≤ 90 do
7:             for −45 ≤ θ_f ≤ 90 do
8:                for 0 ≤ θ_t ≤ 135 do
9:                   Check the two legs for collisions
10:                  Save the configuration
11:               end for
12:            end for
13:         end for
14:      end for
15:   end for
16: end for

where θ_c^s, θ_f^s and θ_t^s are the angles of the surrounding leg

Figure G.4: Algorithm to calculate the full CS for the robot
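
At a coarse grid the sweep in Figure G.4 is straightforward to express. The sketch below uses an assumed 10 degree step and a stub collision test; in the project the check would be carried out with the 3D collision model.

import numpy as np
from itertools import product

# Sketch of the Figure G.4 sweep at a coarse 10 degree grid. The collision
# test is a stub standing in for the real 3D-model check.
def collides(q_leg, q_neighbour):
    return False  # placeholder

configs = list(product(range(-90, 91, 10),   # coxa
                       range(-45, 91, 10),   # femur
                       range(0, 136, 10)))   # tibia

# One byte per (leg, neighbouring leg) pair, as assumed in (G.1).
possible = np.zeros((len(configs), len(configs)), dtype=bool)

for i, q_neighbour in enumerate(configs):
    for j, q_leg in enumerate(configs):
        possible[i, j] = not collides(q_leg, q_neighbour)

print(possible.nbytes / 2**20, "Mbyte at a 10 degree grid")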

These estimates are for one leg, and only one of the two surrounding legs is considered. Hence the approach must be considered infeasible for real use; the amount of data is simply too large to store. Even if only 1 bit was stored for each configuration, the amount of data would still be too large: 9.8 Tbyte/8 = 1.2 Tbyte.

The conclusion is that precalculating a CS which can prevent collisions with the surrounding legs is not possible.
Appendix H
CD
