
International Journal of Mechanical Engineering (IJME)
Vol. 1, Issue 1, Aug 2012, 66-72
© IASET

MANIPULATOR MECHANISM WITH OBJECT DETECTION USING MACHINE VISION SYSTEM

A. MIHIR, A. PATEL, B. PAVAN, S. PATEL, C. ANANT & H. JAIN

ABSTRACT

A picture is worth more than ten thousand words. With the growing need for automation, machine vision is expected to be a pioneering technology of the future. As part of a fully autonomous system, machine vision gives the flexibility to identify and manipulate surrounding objects. Ensuring reliability and reducing per-unit cost are two fundamental objectives of process automation in the manufacturing industry. For pick and place applications, accurate positioning is essential to assure product quality, while fast and stable operating speed enables a high production rate to be achieved. Positioning accuracy and speed are often two conflicting requirements that are not easy to attain together.

This study combines machine vision, image processing and a Cartesian manipulator into a system that identifies basic shapes such as rectangles, circles, squares and triangles and manipulates them as required. The main goal of this paper is to design an autonomous pick and place system with machine vision support and a GUI, and to analyse the performance of the system.

KEYWORDS: Cartesian Manipulator, Machine Vision, Pick and Place Manipulator

INTRODUCTION

Some tasks require a robotic manipulator that only picks an object from a fixed position and transfers it to another fixed position. Such manipulators are called pick and place (P&P) robots. In general, these pick and place robots are fast and highly accurate, but they have no sense with which to differentiate between objects.

In this study, we have designed an autonomous pick and place manipulator and software called IAARC (Image Acquisition, Analysis and Robot Control) to control it. IAARC is designed and coded in the MATLAB GUI environment.

We have provided the pick and place manipulator with a sense of vision, so that it can identify a specific object and its location with the help of machine vision and image processing. A block diagram of the system is shown in Figure 1. The system contains three important phases:

1) Image acquisition and object identification

2) Interfacing (transferring control signals to the control system)

3) Robot movement

The first phase locates and identifies the object in Cartesian space. The second phase generates the control signals to move the manipulator towards the object. The third phase operates the mechanical assembly of the Cartesian manipulator.

Fig. 1: Block Diagram

IMAGE ACQUISITION AND OBJECT IDENTIFICATION

The first phase of the system acquires an image of the working space of the robot. A USB web camera (INTEX, 5.0 megapixels) at a resolution of 600 × 480 is used as the image acquisition device. Image acquisition is done with the MATLAB 7.0 Image Acquisition Toolbox: a video input stream is acquired, and one frame from this sequence is selected for further processing.
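As a minimal sketch of this acquisition step in MATLAB (the 'winvideo' adaptor name and device index are illustrative assumptions; the paper only states that the Image Acquisition Toolbox is used):

% Grab a single frame from the USB camera (adaptor name and device ID are assumptions)
vid = videoinput('winvideo', 1);   % create a video input object for the web cam
frame = getsnapshot(vid);          % acquire one RGB frame from the video stream
delete(vid);                       % release the camera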

After acquisition, the image needs enhancement [1] to improve its quality so that further operations can be carried out effectively. Image enhancement is implemented in three steps: 1) convert the colour image to a grayscale image, 2) apply a median filter to remove noise from the image, and 3) convert the grayscale image to a binary image.

The captured image is a 24-bit RGB colour image, so it is first converted to grayscale. During acquisition, the camera and system introduce noise into the image; analysis of the acquired images shows that it is salt-and-pepper noise. The median filter is the best choice for removing salt-and-pepper noise, because this noise drives pixels to either white (brighter) or black (darker), i.e. noisy pixels take extreme gray-level values. In median filtering, each pixel's gray-level value is replaced with the median of the gray-level values of its neighbouring pixels, which removes the noise. The grayscale image is then converted to a binary image using a simple thresholding technique, which separates the object from the background because the object is dark and the background is white.
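Continuing from the acquisition sketch above, the three enhancement steps could be written in MATLAB as follows (the 3-by-3 filter window and the use of Otsu's threshold via graythresh are our assumptions; the paper only specifies a median filter and simple thresholding):

gray = rgb2gray(frame);                      % 1) 24-bit RGB -> grayscale
filtered = medfilt2(gray, [3 3]);            % 2) median filter removes salt & pepper noise
bw = im2bw(filtered, graythresh(filtered));  % 3) threshold to binary: dark object -> 0, bright background -> 1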

This approach relies on the following assumptions:

1) The object must be dark in colour.

2) The background should be relatively brighter than the object.

3) Only one object can be located at a time.

The binary image is then processed to find the object. In the binary image, the object is black and the background is white. Since we need the centroid of the object, the binary image is first processed with an image-negative operation, so that object pixels take the value 1. Then, by summing the pixel coordinates contained within the object and dividing by the number of object pixels, we can calculate the centroid [1] of the object:

x̄ = (1/A) Σ xi · f(xi, yi),    ȳ = (1/A) Σ yi · f(xi, yi)

where A is the area contained by the object surface (the number of object pixels), xi and yi are the pixel x- and y-coordinates, and f(xi, yi) is the gray-level value of the image pixel (1 inside the object and 0 outside, for the binary image). After finding the centroid of the object, the signature of the object is generated. The signature is a one-dimensional plot of the distance from the centre to the boundary of the object at regular angular increments from 0° to 360°. So, after finding the centre of the object, a one-dimensional plot is generated by incrementing the angle and measuring the distance from each boundary pixel to the centre pixel.
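A sketch of the centroid and signature computation, continuing from the enhancement sketch above (the use of bwboundaries to trace the object boundary is our assumption; the paper only describes measuring centre-to-boundary distances over 0° to 360°):

obj = ~bw;                              % image negative: object pixels become 1
[rows, cols] = find(obj);               % coordinates of all object pixels
A = numel(rows);                        % area = number of object pixels
xc = sum(cols) / A;                     % centroid x-coordinate (pixels)
yc = sum(rows) / A;                     % centroid y-coordinate (pixels)

B = bwboundaries(obj);                  % trace the object boundary
boundary = B{1};                        % [row, col] boundary points of the object
theta = mod(atan2(boundary(:,1) - yc, boundary(:,2) - xc), 2*pi);  % angle of each boundary point
r = hypot(boundary(:,2) - xc, boundary(:,1) - yc);                 % distance from centre to boundary
[theta, order] = sort(theta);           % order the signature by angle
signature = r(order);
plot(theta * 180/pi, signature);        % 1-D plot of distance vs. angle (0 to 360 degrees)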

Using the signature method [1], the basic shape of the object in 2-D space (circle, rectangle, square or triangle) can be found. For a circular object, the signature is approximately a straight line at some constant value. For a rectangular object, the signature contains four peaks, and for a triangular object it contains three peaks. Different object shapes and their signatures are shown in Fig. 2.
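One way to realise this peak-counting rule is sketched below (findpeaks and the prominence and variance thresholds are our assumptions, not values taken from the paper):

s = signature / max(signature);                   % normalise the signature
pks = findpeaks(s, 'MinPeakProminence', 0.05);    % keep only dominant peaks
if std(s) < 0.02
    shape = 'circle';                             % nearly constant signature
elseif numel(pks) == 3
    shape = 'triangle';
elseif numel(pks) == 4
    shape = 'rectangle';                          % a square also gives four peaks
else
    shape = 'unknown';                            % irregular or noisy shape
end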

Fig. 2(a): Input Image. Fig. 2(b): Plot of Distance vs. Angle.

Fig. 2(c): Input Image. Fig. 2(d): Plot of Distance vs. Angle for Square object.

Fig. 2(e): Input Image. Fig. 2(f): Plot of Distance vs. Angle for Circular object.

Fig. 2(g): Input Image. Fig. 2(h): Plot of Distance vs. Angle for Rectangular object.

The signature generated by this method is invariant to translation, but it does depend on rotation and scaling. The main advantage of the method is its simplicity; its serious disadvantage is that noisy shapes introduce errors in the result.

Using the signature method, the basic shape of the object (triangle, rectangle or circle) is identified. Once the object and its location in Cartesian space have been identified, control signals are generated for the x-axis and y-axis to move the manipulator towards the object.

For an irregular shape, it is very difficult to find the number of peaks, and it is sometimes not possible to predict the plot. This can be seen in Fig. 3.

Fig. 3: Signature of irregular shape

TRANSFERRING CONTROL SIGNAL TO CONTROL SYSTEM

After the object and its location have been identified, control signals are generated and transferred to the control system to move the pick and place manipulator towards the object. Based on calibration, the centre of the object, found in terms of pixel coordinates, is converted into actual units of measurement such as centimetres, since the actual coordinates of the Cartesian manipulator are expressed in centimetres.
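A minimal sketch of this conversion, using the centroid from the sketch above (the calibration factor is a hypothetical placeholder, not the value used in the paper):

cm_per_pixel = 0.07;            % hypothetical calibration factor (cm per pixel)
x_cm = xc * cm_per_pixel;       % target x-coordinate in centimetres
y_cm = yc * cm_per_pixel;       % target y-coordinate in centimetres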

The control system is built around a programmable logic controller (PLC), which is flexible for industrial applications. There are two ways to interface a microcomputer with a PLC:

1) Using a port of the computer with an interfacing circuit.

2) A SCADA system (OPC and HMI).

Due to the high cost of the server required for the second option, the interfacing circuit is utilized.

The interfacing circuit contains an isolation circuit built with an optocoupler. Control signals are sent from the PC to the optocoupler via the parallel port. The optocoupler is necessary to protect the system from damage; it also supplies the minimum 6.6 V required to trigger an input of the PLC. The optocoupler sends the control signal to the input terminals of an Allen-Bradley MicroLogix 1200 PLC. In our case, the 4N35 optocoupler is used in the interfacing circuit.

After the distance is calculated, the number of pulses required for the manipulator to reach the location is computed; this depends on the distance travelled by the slider per pulse. The pulses are generated by the computer and fed to the PLC through the interfacing circuit.
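A sketch of the pulse generation on the PC side is given below. The 0.225 mm-per-pulse and 35 ms figures are taken from the Analysis section; the legacy digitalio parallel-port interface of the Data Acquisition Toolbox is an assumption about how MATLAB 7.0 could drive the LPT port, not necessarily the authors' exact implementation.

distance_mm = 10 * x_cm;                      % travel distance along one axis, in mm
mm_per_pulse = 0.225;                         % belt displacement per pulse (see Analysis)
n_pulses = round(distance_mm / mm_per_pulse);

dio = digitalio('parallel', 'LPT1');          % legacy Data Acquisition Toolbox LPT object
addline(dio, 0:7, 'out');                     % use the eight data lines as outputs
for k = 1:n_pulses
    putvalue(dio, 1);  pause(0.035);          % drive the line high for at least 35 ms
    putvalue(dio, 0);  pause(0.035);          % and low again before the next pulse
end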

CONTROL SYSTEM OPERATES MANIPULATOR

The mechanical structure is composed of a base, a stand, telescopic channels and three stepper motors with timing belts controlling the X, Y and Z axes. The work volume of the manipulator is 420 mm wide (X-direction) × 315 mm long (Y-direction) × 65 mm high (Z-direction). The mechanical structure is shown in Fig. 4. Initially, the end effector (a solenoid-controlled electromagnetic gripper) is located at the origin. After receiving control signals from the PC, the sliding rack of the x-axis is moved first; once the x-axis position is set, the sliding rack of the y-axis is moved, so only one axis moves at a time. Each sliding rack is driven by a stepper motor. The control signal is generated using an 8-bit control word that controls the stepper motor of each axis; the control word is shifted to generate further steps, as sketched below.
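A sketch of the shifted 8-bit control word idea, reusing the dio object from the previous sketch (the initial bit pattern and rotation direction are our assumptions):

word = uint8(17);                            % hypothetical control word 00010001
for step = 1:n_pulses
    putvalue(dio, double(word));             % present the control word to the PLC inputs
    pause(0.035);                            % respect the 35 ms minimum pulse spacing
    word = bitor(bitshift(word, 1), bitshift(word, -7));   % rotate left by one bit: next step pattern
end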

Fig. 4: Final mechanical assembly

Once the pick and place manipulator reaches the location of the object, the third sliding rack is activated to pick the object. In this study the object is made of iron, so an electromagnetic gripper is used to pick it up. The gripper then lifts upwards and places the object at location (0, 0) of the Cartesian space.

ANALYSIS

This paper aims to design and implement a manipulator mechanism with machine vision capability. Based on theoretical and experimental work, the following observations are made:

1) The image acquisition device has a resolution of 600 × 480. With this resolution, the minimum distance that can be measured is 2 mm.

2) The whole set-up requires proper illumination.

3) Theoretically, the belt displacement per pulse is 0.225 mm.

4) By trial and error, the minimum time required between two successive pulses is measured to be approximately 35 ms. This limitation is observed for the Allen-Bradley MicroLogix 1200 PLC.
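Taken together, items 3) and 4) give a rough upper bound on axis speed: one pulse advances the belt by 0.225 mm and successive pulses must be at least 35 ms apart, so the maximum feed rate is about 0.225 mm / 0.035 s ≈ 6.4 mm/s. Traversing the full 420 mm X-stroke therefore needs roughly 1,870 pulses and about 65 s.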

CONCLUSIONS

1) The robot moves with a precision of 0.225 mm, accounting for every step of the motor.

2) The vision system (IAARC) is based on MATLAB with a graphical user interface, so that even a non-expert user can operate the program.

3) The vision system consists of algorithms for the following:

I. Object Location

II. Shape Detection

III. Parallel (LPT) port control via MATLAB 7.0

4) We have used the parallel port as an input for the PLC (MicroLogix 1200).

5) The stepper motors are controlled with the help of the PLC, and the ladder diagram for this has been made using RSLogix 500.

6) A simple electromagnet is used as the gripping device for the pick and place operation.

7) There is an error of 2mm in the positional accuracy due to the limitation of the camera.

REFERENCES

1. Rafael C. Gonzalez and Richard E. Woods, "Digital Image Processing," 2nd Ed., Pearson Education, pp. 166, 191, 443-458, 488-490, Fifth Reprint 2000.

2. Stephen J. Chapman, "MATLAB Programming for Engineers," 3rd Ed., Thomson Learning, pp. 85-199, 439-505, First Reprint 2004.

3. Takashi Kenjo, "Stepping Motors and Their Microprocessor Controls," Oxford University Press, 1984. LC number: TK2785 .K4 1984.

4. Daniel E. Kandray, "Programmable Automation Technologies - An Introduction to CNC, Robotics and PLCs," Industrial Press, 2010.

5. Gary Dunning, "Introduction to Programmable Logic Controllers."

6. Motorola Semiconductor Data Manual, Motorola Semiconductor Products Inc., Phoenix, AZ, 1989.
