
A

MINI PROJECT REPORT


On
VIRTUALISATION OF COMPUTER PERIPHERALS
BACHELOR OF TECHNOLOGY
In
COMPUTER SCIENCE AND ENGINEERING
Submitted by
(MIP-C10)
BHURA RAVIKUMAR :197Y1A05F1
BICHALA GIRIDHAR :197Y1A05D5

Under the Guidance of

MR. CH. V. V. NARASIMHA RAJU
Assistant Professor

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


MARRI LAXMAN REDDY
INSTITUTE OF TECHNOLOGY AND MANAGEMENT
(AUTONOMOUS)

(Affiliated to JNTU-H, Approved by AICTE New Delhi and Accredited by NBA & NAAC With ‘A’ Grade)

OCTOBER 2022



CERTIFICATE

This is to certify that the project report titled "VIRTUALISATION OF COMPUTER PERIPHERALS" is being submitted by BHURA RAVIKUMAR (197Y1A05F1) and BICHALA GIRIDHAR (197Y1A05D5) in IV B.Tech I Semester, Computer Science & Engineering, and is a record of bonafide work carried out by them. The results embodied in this report have not been submitted to any other university for the award of any degree.

Internal Guide                                    HOD

Principal                                         External Examiner



DECLARATION

We hereby declare that the Mini Project Report entitled "VIRTUALISATION OF COMPUTER PERIPHERALS", submitted for the B.Tech degree, is entirely our own work, and all ideas and references have been duly acknowledged. It has not been submitted for the award of any other degree.

Date:

BHURA RAVIKUMAR BICHALA GIRIDHAR


(197Y1A05F1) (197Y1A05D5)



ACKNOWLEDGEMENT

I am happy to express my deep sense of gratitude to the Principal of the college, Dr. K. Venkateswara Reddy, Professor, Department of Computer Science and Engineering, Marri Laxman Reddy Institute of Technology & Management, for having provided me with adequate facilities to pursue my project.

I would like to thank Mr. Abdul Basith Khateeb, Assoc. Professor and Head, Department
of Computer Science and Engineering, Marri Laxman Reddy Institute of Technology &
Management, for having provided the freedom to use all the facilities available in the
department, especially the laboratories and the library.

I am very grateful to my project guide, Mr. CH. V. V. Narasimha Raju, Asst. Prof., Department of Computer Science and Engineering, Marri Laxman Reddy Institute of Technology & Management, for his extensive patience and guidance throughout my project work.

I sincerely thank my seniors and all the teaching and non-teaching staff of the Department
of Computer Science for their timely suggestions, healthy criticism and motivation during
the course of this work.

I would also like to thank my classmates for always being there whenever I needed help or
moral support. With great respect and obedience, I thank my parents and brother who were
the backbone behind my deeds.

Finally, I express my immense gratitude to all the other individuals who directly or indirectly contributed, at the right time, to the development and success of this work.



TABLE OF CONTENTS

S.NO    TITLE
1.      INTRODUCTION
        1.1 EXISTING SYSTEM
        1.2 PROPOSED SYSTEM
2.      LITERATURE SURVEY
3.      SYSTEM REQUIREMENTS
4.      SYSTEM ARCHITECTURE
        4.1 MOUSE AND PAINTING
        4.2 KEYBOARD
        4.3 UML DIAGRAMS
5.      IMPLEMENTATION
6.      RESULTS AND DISCUSSION
7.      ADVANTAGES
8.      APPLICATIONS
9.      TEST CASES
10.     CONCLUSION
11.     FUTURE ENHANCEMENT
12.     REFERENCES



ABSTRACT

Nowadays, computing is not limited to desktops and laptops; it has found its way into mobile devices. What has not changed over the years, however, is the input device. The virtual keyboard, mouse, and drawing system presented here uses computer vision and AI (Artificial Intelligence) to let users work without physical peripherals. With the help of a camera, a virtual keyboard is drawn on the screen and the typing is captured on camera. The virtual mouse takes fingertip coordinates as input and tracks the finger to move the cursor. The virtual drawing pen captures a color on camera and draws with that color. For the keyboard, we map touch points to keystrokes and recognize the character. For mouse tracking and finger detection, we track and count the number of raised fingers; the system implements the majority of mouse tasks, such as left click, right click, double click, and scrolling. However, it is difficult to get stable results because of the variety of lighting conditions and skin colors. The virtual mouse color-recognition program constantly acquires real-time images, which undergo a series of filtering and conversion steps. Once this is complete, the program applies image processing to obtain the coordinates of the targeted colors in the converted frames. It then compares the colors present in the frame against a list of color combinations, where each combination corresponds to a different mouse function. If the current color combination matches, the program executes the corresponding function, which is translated into an actual mouse action on the user's machine. Virtual painting is fully developed in Python, using both basic and advanced features of the language. Color tracking and detection are used to achieve the output: a color marker is used to produce a mask on the original color canvas.



1. INTRODUCTION
As computer technology continues to grow, the importance of human-computer interaction increases rapidly, and computing has found its way into mobile devices such as palmtops and even cell phones. What has not changed over the years, however, is the input device. Virtual keyboard technology is an application of virtual reality: it enables single or multiple users to move and react in a computer-simulated environment, using devices that allow them to sense and manipulate virtual objects.

Computers have undergone a rapid change from being a 'space saver' to 'as tiny as your palm'. Disks and components grew smaller in size, but one component has remained the same for decades: the keyboard. Many researchers in the fields of human-computer interaction and robotics have tried to control the mouse using video devices, but each used a different method to generate mouse-click events. In our project we provide a virtual keyboard, mouse, and drawing tool, capturing finger movement with the help of a camera. This makes human-computer interaction simpler through a handy, small, and easy-to-use application.

1.1. EXISTING SYSTEM

A computer mouse is a hand-held pointing device that detects two-dimensional motion relative to a surface. This motion is typically translated into the motion of a pointer on a display, which allows smooth control of the graphical user interface of a computer.

A computer keyboard is a peripheral input device modeled after the typewriter keyboard, which uses an arrangement of buttons or keys to act as mechanical levers or electronic switches. Keyboard keys (buttons) typically have a set of characters engraved or printed on them, and each press of a key typically corresponds to a single written symbol.

A graphics tablet (also known as a digitizer, drawing tablet, drawing pad, digital
drawing tablet, pen tablet, or digital art board) is a computer input device that enables
a user to hand-draw images, animations and graphics, with a special pen-like stylus,
similar to the way a person draws images with a pencil and paper. These tablets may
also be used to capture data or handwritten signatures. It can also be used to trace an
image from a piece of paper that is taped or otherwise secured to the tablet surface.
Capturing data in this way, by tracing or entering the corners of linear polylines or
shapes, is called digitizing.

Drawbacks of Existing System:

● A mouse needs a flat surface close to the computer.
● Older-style mice with roller balls can become clogged with grease and grime and lose accuracy until cleaned.
● If the battery wears out in a wireless mouse, it cannot be used until the battery is replaced.
● Keyboards suffer mechanical wear and tear.
● Excessive use can lead to health problems such as repetitive strain injury (RSI).
● A graphics tablet is not suitable for general selection work such as pointing and clicking on menu items.
● Graphics tablets are much more expensive than a mouse.

1.2. PROPOSED SYSTEM


The Virtual Keyboard, Mouse and Painting system uses a camera to interact with the computer. This method is easy to use and less expensive, and the resulting system is user-friendly. The system works in the following manner:

Mouse:

1. The mouse is represented by the user's finger, which is tracked to recognize cursor movements.
2. A camera captures a live feed of the finger's movement.
3. Through image processing, the finger's movement is detected in real time.
4. The detected coordinates are taken as the mouse input.

Keyboard:

1. The keyboard is drawn on the computer screen.
2. A camera captures a live feed of the fingers typing over the on-screen keys.
3. Through image processing, the typed keys are detected in real time.
4. The typed words are displayed on the desktop.

Painting:

1. Drawing is done by the movement of a finger, or of an object such as a pen or marker.
2. A camera captures a live feed of the finger or pen/marker movement.
3. Through image processing, that movement is detected in real time.
4. The detected movements are drawn on the screen.

In this design, moving the index finger moves the mouse pointer, moves across the keyboard, and moves over the drawing canvas; raising two fingers performs the action: clicking for the mouse, pressing keys on the keyboard, and selecting colors while painting. A minimal sketch of this mode selection follows.
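The sketch below isolates this mode selection as a small function. It assumes the 0/1 finger-up flags (thumb first) that cvzone's fingersUp() returns in the implementation chapter; the sample inputs are illustrative, not taken from the program.

# A minimal sketch of the two-mode design described above, assuming
# cvzone-style fingersUp() flags: [thumb, index, middle, ring, pinky].
def choose_mode(fingers):
    if fingers[1] == 1 and fingers[2] == 0:
        return "move"    # index finger only: move cursor / hover keys / draw
    if fingers[1] == 1 and fingers[2] == 1:
        return "action"  # index + middle finger: click / press key / select
    return "idle"

print(choose_mode([0, 1, 0, 0, 0]))   # move
print(choose_mode([0, 1, 1, 0, 0]))   # action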



2. LITERATURE SURVEY

Many researchers in computer science and human-computer interaction have developed technologies related to virtual keyboards and mice, each using different techniques. Among keyboard-related approaches, Eckert M. developed a system for persons with physical impairments, presenting a new middleware for mapping gestures obtained by a motion-sensing camera device. Another approach, by Zhang Yunzhou, introduced a method using an infrared laser module, a keyboard-pattern projector, an embedded system, and a single image sensor, where every keystroke can be determined accurately by image processing, including morphological operations and ellipse fitting.

Among mouse-related approaches, Erdem et al. controlled the motion of the mouse by fingertip tracking; a mouse-button click was implemented on the screen such that a click occurred when the user's hand passed over a region. Another approach was developed by Chu-Feng Lien, who controls the mouse cursor and clicking events using fingertip movement. His clicking method was based on image density and required the user to hold the cursor on the desired spot for a short period of time. Paul et al. used yet another method to click: the motion of the thumb from a 'thumbs-up' position to a fist marked a clicking event, and making a special hand sign moved the mouse pointer.

Jun Hu developed bare-finger touch interaction on regular planar surfaces (e.g. walls or tables) using only one standard camera and one projector; the touch information of the fingertips is recovered from the 2-D image captured by the camera. We use the same concepts of camera capture and image processing, but without a projector or laser light: a simple keyboard is drawn on the screen and the typing movement is captured by the camera, and likewise for the mouse and painting the finger movement is captured.



3. SYSTEM REQUIREMENTS

3.1. SOFTWARE REQUIREMENTS

1. Operating System: Windows 10 or higher


2. Language: Python (Version 3.5 and above)
3. Python Libraries:
● OpenCV
OpenCV is a library of programming functions mainly aimed at real-time
computer vision. Originally developed by Intel, it was later supported by
Willow Garage then Itseez. The library is cross-platform and free for use
under the open-source Apache 2 License.
● Mediapipe
MediaPipe offers cross-platform, customizable ML solutions for live and
streaming media.
● Numpy
NumPy is a library for the Python programming language, adding support for
large, multi-dimensional arrays and matrices, along with a large collection of
high-level mathematical functions to operate on these arrays.
● Time, autopy, pynput, etc.
4. Pycharm IDE
PyCharm is a dedicated Python Integrated Development Environment (IDE)
providing a wide range of essential tools for Python developers, tightly integrated to
create a convenient environment for productive Python, web, and data science
development.

3.2. HARDWARE REQUIREMENTS

1. Processor: Intel Core i3 (7th Gen) or higher


2. RAM: 4GB RAM or higher
3. Hard Disk: 20 GB or higher
4. Webcam: at least 30 frames/second, 640×480 resolution.



4. SYSTEM ARCHITECTURE

4.1 MOUSE AND PAINTING

4.2 KEYBOARD

In the above architectures, hand gestures are given as input to the camera as a live feed. These gestures are then processed using the OpenCV library, and finally the mouse-pointer movement or keyboard typing is detected. A minimal sketch of this capture-and-process loop follows.
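The loop below is a minimal sketch of this pipeline, assuming only the default webcam at index 0; the gesture-processing step is left as a placeholder comment, since the full versions follow in the implementation chapter.

import cv2

cap = cv2.VideoCapture(0)            # live feed from the camera
while True:
    success, frame = cap.read()
    if not success:
        break
    frame = cv2.flip(frame, 1)       # mirror the frame so movement feels natural
    # ... hand-gesture processing (OpenCV / MediaPipe) goes here ...
    cv2.imshow("Feed", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()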



4.3 UML DIAGRAMS

5. IMPLEMENTATION

1)VIRTUAL MOUSE
import cv2
import numpy as np
import time
import autopy
from cvzone.HandTrackingModule import HandDetector

wCam, hCam = 640, 480
frameR = 100            # frame reduction: active region margin inside the camera frame
smoothening = 5         # higher value = smoother but slower cursor
pTime = 0
plocX, plocY = 0, 0     # previous cursor location
clocX, clocY = 0, 0     # current cursor location

cap = cv2.VideoCapture(0)
cap.set(3, wCam)
cap.set(4, hCam)

detector = HandDetector(maxHands=1)
wScr, hScr = autopy.screen.size()

while True:
    success, img = cap.read()
    img = detector.findHands(img)
    lmList, bbox = detector.findPosition(img)
    if len(lmList) != 0:
        x1, y1 = lmList[8][0:]     # index fingertip
        x2, y2 = lmList[12][0:]    # middle fingertip
        # print(x1, y1, x2, y2)

        fingers = detector.fingersUp()
        # print(fingers)
        cv2.rectangle(img, (frameR, frameR), (wCam - frameR, hCam - frameR),
                      (255, 0, 255), 2)

        # Moving mode: index finger up, middle finger down
        if fingers[1] == 1 and fingers[2] == 0:
            # map camera coordinates to screen coordinates
            x3 = np.interp(x1, (frameR, wCam - frameR), (0, wScr))
            y3 = np.interp(y1, (frameR, hCam - frameR), (0, hScr))

            # smooth the cursor movement
            clocX = plocX + (x3 - plocX) / smoothening
            clocY = plocY + (y3 - plocY) / smoothening

            autopy.mouse.move(wScr - clocX, clocY)
            cv2.circle(img, (x1, y1), 15, (255, 0, 255), cv2.FILLED)
            plocX, plocY = clocX, clocY

        # Clicking mode: both index and middle fingers up
        if fingers[1] == 1 and fingers[2] == 1:
            length, img, lineInfo = detector.findDistance(8, 12, img)
            print(length)
            if length < 40:
                cv2.circle(img, (lineInfo[4], lineInfo[5]),
                           15, (0, 255, 0), cv2.FILLED)
                autopy.mouse.click()

    cTime = time.time()
    fps = 1 / (cTime - pTime)
    pTime = cTime
    cv2.putText(img, str(int(fps)), (20, 50), cv2.FONT_HERSHEY_PLAIN, 3,
                (255, 0, 0), 3)

    cv2.imshow("image", img)
    cv2.waitKey(1)



OUTPUT:
#Coordinates of Index Finger and Middle Finger.

178 256 164 401

209 226 195 370

223 211 212 370

239 195 227 357

254 182 245 344

262 172 255 332
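Two details of the mouse program above are worth isolating. np.interp linearly maps the fingertip coordinate from the reduced camera region onto the full screen, and the smoothening divisor applies an exponential moving average so the cursor does not jitter with small hand tremors. A minimal sketch of both steps in isolation (the sample values are illustrative, not taken from the program's output):

import numpy as np

# Map a fingertip x-coordinate from the active camera region
# (frameR .. wCam - frameR) onto the full screen width.
x1, frameR, wCam, wScr = 320, 100, 640, 1920
x3 = np.interp(x1, (frameR, wCam - frameR), (0, wScr))   # -> 960.0

# Exponential smoothing: each frame the cursor moves only a fraction
# (1/smoothening) of the way toward the new target position.
plocX, smoothening = 900.0, 5
clocX = plocX + (x3 - plocX) / smoothening               # -> 912.0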

2)VIRTUAL KEYBOARD

import cv2
import cvzone
from cvzone.HandTrackingModule import HandDetector
from time import sleep
from pynput.keyboard import Controller

cap = cv2.VideoCapture(0)
cap.set(3, 1280)
cap.set(4, 720)

detector = HandDetector(detectionCon=0.8)
keys = [["Q", "W", "E", "R", "T", "Y", "U", "I", "O", "P"],
        ["A", "S", "D", "F", "G", "H", "J", "K", "L", ";"],
        ["Z", "X", "C", "V", "B", "N", "M", ",", ".", "/"]]
finalText = ""

keyboard = Controller()

def drawAll(img, buttonList):
    # draw every key of the on-screen keyboard
    for button in buttonList:
        x, y = button.pos
        w, h = button.size
        cv2.rectangle(img, button.pos, (x + w, y + h), (255, 0, 255), cv2.FILLED)
        cv2.putText(img, button.text, (x + 20, y + 65),
                    cv2.FONT_HERSHEY_PLAIN, 4, (255, 255, 255), 4)
    return img

class Button():
    def __init__(self, pos, text, size=[85, 85]):
        self.pos = pos
        self.size = size
        self.text = text

buttonList = []
for i in range(len(keys)):
    for j, key in enumerate(keys[i]):
        buttonList.append(Button([100 * j + 50, 100 * i + 50], key))

while True:
    success, img = cap.read()
    img = cv2.flip(img, 1)
    img = detector.findHands(img)
    lmList, bboxInfo = detector.findPosition(img)
    img = drawAll(img, buttonList)

    if lmList:
        for button in buttonList:
            x, y = button.pos
            w, h = button.size

            # is the index fingertip (landmark 8) over this key?
            if x < lmList[8][0] < x + w and y < lmList[8][1] < y + h:
                cv2.rectangle(img, button.pos, (x + w, y + h), (175, 0, 175), cv2.FILLED)
                cv2.putText(img, button.text, (x + 20, y + 65),
                            cv2.FONT_HERSHEY_PLAIN, 4, (255, 255, 255), 4)
                l, _, _ = detector.findDistance(8, 12, img, draw=False)
                print(l)

                # a pinch of the index and middle finger registers a key press
                if l < 30:
                    keyboard.press(button.text)
                    cv2.rectangle(img, button.pos, (x + w, y + h), (0, 255, 0), cv2.FILLED)
                    cv2.putText(img, button.text, (x + 20, y + 65),
                                cv2.FONT_HERSHEY_PLAIN, 4, (255, 255, 255), 4)
                    finalText += button.text
                    sleep(0.15)

    # typed-text box
    cv2.rectangle(img, (50, 350), (700, 450), (175, 0, 175), cv2.FILLED)
    cv2.putText(img, finalText, (60, 430),
                cv2.FONT_HERSHEY_PLAIN, 5, (255, 255, 255), 5)

    cv2.imshow("Image", img)
    cv2.waitKey(1)

OUTPUT:
#Distance between index finger and middle finger

158.31613941730643

188.66372200293304

185.36720314014556

189.42280749687984

43.289721643826724

22.090722034374522 # If the distance between index finger and middle finger

104.12012293500234 # is less than 30 then key will be pressed.
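The press threshold shown in this output is a plain Euclidean distance between the index (landmark 8) and middle (landmark 12) fingertips, which cvzone's findDistance computes internally. A minimal sketch of the same computation, with illustrative coordinates:

import math

x1, y1 = 205, 310    # illustrative index fingertip position
x2, y2 = 220, 295    # illustrative middle fingertip position
l = math.hypot(x2 - x1, y2 - y1)
print(l, l < 30)     # ~21.2, True -> the hovered key is pressed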



3)VIRTUAL PAINTER

import cv2
import numpy as np
import os
from cvzone.HandTrackingModule import HandDetector

brushThickness = 15
eraserThickness = 50

# load the header (toolbar) images from the Header folder
folderPath = "Header"
myList = os.listdir(folderPath)
print(myList)
overlayList = []
for imPath in myList:
    image = cv2.imread(f'{folderPath}/{imPath}')
    overlayList.append(image)
print(len(overlayList))

header = overlayList[0]
drawColor = (255, 0, 255)

cap = cv2.VideoCapture(0)
cap.set(3, 1280)
cap.set(4, 720)

detector = HandDetector(detectionCon=0.85)
xp, yp = 0, 0
imgCanvas = np.zeros((720, 1280, 3), np.uint8)

while True:
    # 1. import image
    success, img = cap.read()
    img = cv2.flip(img, 1)

    # 2. find hand landmarks
    img = detector.findHands(img)
    lmList, bbox = detector.findPosition(img, draw=False)

    if len(lmList) != 0:
        # print(lmList)

        # tips of the index and middle fingers
        x1, y1 = lmList[8][0:]
        x2, y2 = lmList[12][0:]

        # 3. check which fingers are up
        fingers = detector.fingersUp()
        # print(fingers)

        # 4. selection mode - when two fingers are up
        if fingers[1] and fingers[2]:
            xp, yp = 0, 0
            print("Selection mode")
            # checking for a click on the header toolbar
            if y1 < 125:
                if 250 < x1 < 450:
                    header = overlayList[0]
                    drawColor = (255, 0, 255)
                elif 550 < x1 < 750:
                    header = overlayList[1]
                    drawColor = (255, 0, 0)
                elif 800 < x1 < 950:
                    header = overlayList[2]
                    drawColor = (0, 255, 0)
                elif 650 < x1 < 1200:
                    header = overlayList[3]
                    drawColor = (0, 0, 0)
            cv2.rectangle(img, (x1, y1 - 25), (x2, y2 + 25), drawColor, cv2.FILLED)

        # 5. drawing mode - when only the index finger is up
        if fingers[1] and fingers[2] == False:
            cv2.circle(img, (x1, y1), 15, drawColor, cv2.FILLED)
            print("Drawing mode")
            if xp == 0 and yp == 0:
                xp, yp = x1, y1

            if drawColor == (0, 0, 0):
                # black acts as the eraser, drawn with a thicker stroke
                cv2.line(img, (xp, yp), (x1, y1), drawColor, eraserThickness)
                cv2.line(imgCanvas, (xp, yp), (x1, y1), drawColor, eraserThickness)
            else:
                cv2.line(img, (xp, yp), (x1, y1), drawColor, brushThickness)
                cv2.line(imgCanvas, (xp, yp), (x1, y1), drawColor, brushThickness)

            xp, yp = x1, y1

    # merge the canvas with the live image
    imgGray = cv2.cvtColor(imgCanvas, cv2.COLOR_BGR2GRAY)
    _, imgInv = cv2.threshold(imgGray, 50, 255, cv2.THRESH_BINARY_INV)
    imgInv = cv2.cvtColor(imgInv, cv2.COLOR_GRAY2BGR)
    img = cv2.bitwise_and(img, imgInv)
    img = cv2.bitwise_or(img, imgCanvas)

    img[0:125, 0:1280] = header
    # img = cv2.addWeighted(img, 0.5, imgCanvas, 0.5, 0)

    cv2.imshow("image", img)
    # cv2.imshow("Canvas", imgCanvas)
    cv2.waitKey(1)

OUTPUT:
['1.jpg', '2.jpg', '3.jpg', '4.jpg']

Selection mode

Selection mode

Selection mode

Selection mode

Drawing mode

Drawing mode

Drawing mode

4)VIRTUAL PAINTER USING OBJECT DETECTION

A)OBJECT DETECTION
import cv2
import numpy as np

frameWidth = 640
frameHeight = 480
cap = cv2.VideoCapture(0)
cap.set(3, frameWidth)
cap.set(4, frameHeight)

def empty(a):
    # trackbar callback - nothing to do
    pass

# trackbars to calibrate the HSV color range interactively
cv2.namedWindow("HSV")
cv2.resizeWindow("HSV", 640, 240)
cv2.createTrackbar("HUE Min", "HSV", 0, 179, empty)
cv2.createTrackbar("HUE Max", "HSV", 179, 179, empty)
cv2.createTrackbar("SAT Min", "HSV", 0, 255, empty)
cv2.createTrackbar("SAT Max", "HSV", 255, 255, empty)
cv2.createTrackbar("VALUE Min", "HSV", 0, 255, empty)
cv2.createTrackbar("VALUE Max", "HSV", 255, 255, empty)

while True:
    success, img = cap.read()
    imgHsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    h_min = cv2.getTrackbarPos("HUE Min", "HSV")
    h_max = cv2.getTrackbarPos("HUE Max", "HSV")
    s_min = cv2.getTrackbarPos("SAT Min", "HSV")
    s_max = cv2.getTrackbarPos("SAT Max", "HSV")
    v_min = cv2.getTrackbarPos("VALUE Min", "HSV")
    v_max = cv2.getTrackbarPos("VALUE Max", "HSV")
    print(h_min)

    # keep only pixels inside the calibrated HSV range
    lower = np.array([h_min, s_min, v_min])
    upper = np.array([h_max, s_max, v_max])
    mask = cv2.inRange(imgHsv, lower, upper)
    result = cv2.bitwise_and(img, img, mask=mask)

    mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
    hStack = np.hstack([img, mask, result])
    cv2.imshow('Horizontal Stacking', hStack)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()



OUTPUT:

#h_min,h_max,s_min,s_max,v_min,v_max

0 179 0 255 0 255

3 179 0 255 0 255 # Hue Saturation Value (Minimum and Maximum Values)

7 179 0 255 0 255 # For object detection

10 179 0 255 0 255
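The six printed values are the lower and upper HSV bounds fed to cv2.inRange; once calibrated for a particular marker and lighting, they can be hard-coded, as in the painting script below. A minimal sketch of the masking step with illustrative bounds (real values come from the trackbar calibration above):

import cv2
import numpy as np

lower = np.array([84, 28, 0])             # illustrative h_min, s_min, v_min
upper = np.array([144, 255, 255])         # illustrative h_max, s_max, v_max

img = np.zeros((480, 640, 3), np.uint8)   # stand-in for a camera frame
imgHsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(imgHsv, lower, upper)  # white where the color matches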

B)PAINTING:

import cv2
import numpy as np

frameWidth = 1920   # 640
frameHeight = 1080  # 480
cap = cv2.VideoCapture(0)
cap.set(3, frameWidth)
cap.set(4, frameHeight)
cap.set(10, 150)    # brightness

# calibrated HSV ranges of the markers to track
myColors = [[84, 144, 28, 255, 0, 255],
            [48, 66, 42, 159, 156, 255],
            [57, 76, 0, 100, 255, 255]]
# BGR values used to draw each tracked marker
myColorValues = [[255, 0, 0],
                 [0, 255, 0],
                 [0, 0, 255]]
myPoints = []   # stored as [x, y, colorId]

def findColor(img, myColors, myColorValues):
    imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    count = 0
    newPoints = []
    for color in myColors:
        lower = np.array(color[0:3])
        upper = np.array(color[3:6])
        mask = cv2.inRange(imgHSV, lower, upper)
        x, y = getContours(mask)
        cv2.circle(imgResult, (x, y), 15, myColorValues[count], cv2.FILLED)
        if x != 0 and y != 0:
            newPoints.append([x, y, count])
        count += 1
        # cv2.imshow(str(color[0]), mask)
    return newPoints

def getContours(img):
    contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_NONE)
    x, y, w, h = 0, 0, 0, 0
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area > 500:
            # cv2.drawContours(imgResult, cnt, -1, (255, 0, 0), 3)
            peri = cv2.arcLength(cnt, True)
            approx = cv2.approxPolyDP(cnt, 0.02 * peri, True)
            x, y, w, h = cv2.boundingRect(approx)
    # return the tip of the detected marker (top centre of its bounding box)
    return x + w // 2, y

def drawOnCanvas(myPoints, myColorValues):
    for point in myPoints:
        cv2.circle(imgResult, (point[0], point[1]), 10, myColorValues[point[2]],
                   cv2.FILLED)

while True:
    success, img = cap.read()
    img = cv2.flip(img, 1)
    imgResult = img.copy()
    newPoints = findColor(img, myColors, myColorValues)
    if len(newPoints) != 0:
        for newP in newPoints:
            myPoints.append(newP)
    if len(myPoints) != 0:
        drawOnCanvas(myPoints, myColorValues)
    cv2.imshow("Result", imgResult)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break



6. RESULTS AND DISCUSSION

6.1 Virtual Mouse

6.2 Virtual Keyboard



6.3 Virtual Painter

6.4 Virtual Drawing Using Object Detection

6.4.1 Object Detection



6.4.2 Painting Using Object Detection



7. ADVANTAGES

• The main advantage of using hand gestures is interacting with the computer through a non-contact input modality.
• Hardware cost is reduced by eliminating the mouse and keyboard.
• Convenient for users not comfortable with a touchpad or keyboard.
• Useful in places such as operation theatres where low noise is essential.
• Typing does not require much force, easing the strain on wrists and hands.

8. APPLICATIONS

• The framework may be useful for controlling different types of games and other applications through user-defined gestures.
• Virtual painting can be used while teaching in online classes.
• The framework may be useful for security purposes, such as recognizing a hand pattern and granting system access only to the recognized hand.
• TV remote control.
• High-tech and industrial sectors.
• Hand gestures to control home appliances such as MP3 players and TVs.
• Virtual reality and immersive reality systems: computer-generated environments that replicate a scenario or situation, either inspired by reality or created out of imagination.



9. TEST CASES

Input             | Expected Output                           | Actual Output                          | Comments
------------------|-------------------------------------------|----------------------------------------|---------
Start the tool    | Starting of the camera fragment           | Camera fragment gets started           | Pass
Color calibration | The camera recognizes the colors and      | Colors blue, red and yellow are        | Pass
                  | they can be calibrated                    | calibrated                             |
Capture image     | Capture the image                         | Capture the image                      | Pass
Process image     | Processing of the image                   | Processing of the image                | Pass
Check center      | Verify the centers of the three colors    | Centers of all three colors detected   | Pass
Execute gesture   | Execute the respective gesture            | Execute the required gesture           | Pass



10. CONCLUSION

We developed a system that takes input from a keyboard drawn on the screen, controls the mouse cursor, and paints, all through a real-time camera. We implemented mouse tasks such as clicking, double clicking, and moving. However, it is difficult to get stable results because of the variety of lighting conditions and skin colors; most vision algorithms have illumination issues. From the results, we can expect that if the vision algorithms can be made to work in all environments, our system will work more efficiently. This system could be useful in presentations and for reducing work space.

Gesture recognition technology is a turning point in the world of VR/AR development. It allows seamless, non-touch control of computerized devices, creating a highly interactive yet fully immersive and flexible hybrid reality. The inclusion of this technology in applications across various sectors is further revolutionizing human-computer communication. That said, gesture recognition is no novice's game: it is a fully integrated, highly advanced technology that requires the specialized skills of experienced practitioners to guarantee favorable results.



11. FUTURE ENHANCEMENT
There are several features and improvements needed for the program to be more user-friendly, accurate, and flexible in various environments. The following describes the required improvements and features:

a) Smart Movement: Because the current recognition process is limited to a radius of about 25 cm, adaptive zoom-in/out functions are required to improve the covered distance, automatically adjusting the focus based on the distance between the user and the webcam.

b) Better Accuracy & Performance: The response time relies heavily on the hardware of the machine, including the processing speed of the processor, the size of the available RAM, and the capabilities of the webcam. The program may therefore perform better on a decent machine with a webcam that copes well with different types of lighting.

c) Mobile Application: In the future, this application could also run on Android devices, where the touchscreen concept is replaced by hand gestures.

In order to complete this project we had to study different subjects in computer vision:
- Using image processing algorithms correctly:
  o Edge detection
  o Hough transforms
  o Morphological operations & image filtering
  o Homographic transforms
  o Correlation
- Determining threshold values
- Skin detection

The main achievement of this project is the fingertip detection used. The method used to detect the fingertip (R-SoG) was implemented by us, with no reference; we could not find prior cases using this method for this purpose.



12. REFERENCES

1. Eckert, M., Lopez, M., Lazaro, C., Meneses, J., Martinez Ortega, J. F., 2015. Mokey - a motion based keyboard interpreter. Tech. Univ. of Madrid, Madrid, Spain.

2. Su, Xiaolin, Zhang, Yunzhou, Zhao, Qingyang, Gao, Liang, 2015. Virtual keyboard: a human-computer interaction device based on laser and image processing. College of Information Science and Engineering, Northeastern University, Shenyang, China.

3. Erdem, E., Yardimci, Y., Atalay, V., Cetin, A. E., 2002. Computer vision based mouse. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP).

4. Vision based Men-Machine Interaction.

5. Chu-Feng Lien. Portable Vision-Based HCI - a real-time hand mouse system on handheld devices.

6. Jun Hu, Guolin Li, Xiang Xie, Zhong Lv, and Zhihua Wang. Bare-fingers touch detection by the button's distortion in a projector-camera system. IEEE.

7. K. P. Vinay, "Cursor control using hand gestures," International Journal of Critical Accounting, vol. 0975-8887, 2016.

8. Google MediaPipe and OpenCV documentation.

