MR.CH.V.V.NARSHIMHA RAJU
Assistant Professor
(Affiliated to JNTU-H, Approved by AICTE New Delhi and Accredited by NBA & NAAC With ‘A’ Grade)
OCTOBER 2022
Engineering is a record of bonafide work carried out by him. The results embodied in this
report have not been submitted to any other University for the award of any degree.
and all ideas and references have been duly acknowledged. It does not contain any work
Date:
I would like to thank Mr. Abdul Basith Khateeb, Assoc. Professor and Head, Department
of Computer Science and Engineering, Marri Laxman Reddy Institute of Technology &
Management, for having provided the freedom to use all the facilities available in the
department, especially the laboratories and the library.
I sincerely thank my seniors and all the teaching and non-teaching staff of the Department
of Computer Science for their timely suggestions, healthy criticism and motivation during
the course of this work.
I would also like to thank my classmates for always being there whenever I needed help or
moral support. With great respect and obedience, I thank my parents and brother who were
the backbone behind my deeds.
Finally, I express my immense gratitude to the other individuals who have directly or
indirectly contributed to my needs at the right time for the development and success of
this work.
2. LITERATURE SURVEY
3. SYSTEM REQUIREMENTS
4. SYSTEM ARCHITECTURE
4.2 KEYBOARD
5. IMPLEMENTATION
7. ADVANTAGES
8. APPLICATIONS
9. TEST CASES
10. CONCLUSION
12. REFERENCES
Nowadays, computing is not limited to desktops and laptops; it has found its way
into mobile devices. What has not changed over the years, however, is the input
device. The virtual keyboard, mouse and drawing tool use computer vision and AI
(Artificial Intelligence) to let users work without physical peripherals. With the
help of a camera, a virtual keyboard is created on the screen and the typing is
captured by the camera. The virtual mouse takes finger coordinates as input and
tracks the finger to move the cursor. In virtual drawing, a pen color is captured
by the camera and strokes are drawn in that color. For the keyboard, we map each
touch point to a keystroke and recognize the character. For mouse tracking and
finger detection, we track the hand and count the number of raised fingers. The
system implements the majority of mouse tasks, such as left click, right click,
double click and scrolling. However, it is difficult to obtain stable results
because of the variety of lighting conditions and human skin colors. The virtual
mouse color-recognition program constantly acquires real-time images, which
undergo a series of filtration and conversion steps. Once this is complete, the
program applies image-processing techniques to obtain the coordinates of the
targeted colors from the converted frames. It then compares the colors present in
the frames against a list of color combinations, where each combination
corresponds to a different mouse function. If the current color combination
matches an entry, the program executes the corresponding mouse function, which is
translated into an actual mouse action on the user's machine. Virtual painting is
developed entirely in Python, exercising both basic and advanced levels of the
language. The color tracking and detection process is used to achieve the output:
the color marker is used to produce a mask on the original color canvas.
Computers have undergone a rapid change from being a 'space saver' to 'as tiny as your
palm'. Disks and components grew smaller in size, but one component has remained the
same for decades: the keyboard. Many researchers in the fields of human-computer
interaction and robotics have tried to control the mouse using video devices; however,
they all used different methods to generate mouse-clicking events. In our project we
use a virtual keyboard, mouse and drawing tool, all of which capture finger movement
with the help of a camera. The virtual keyboard, mouse and drawing tool make
human-computer interaction simpler, being a handy, small and easy-to-use application.
A graphics tablet (also known as a digitizer, drawing tablet, drawing pad, digital
drawing tablet, pen tablet, or digital art board) is a computer input device that enables
a user to hand-draw images, animations and graphics, with a special pen-like stylus,
similar to the way a person draws images with a pencil and paper. These tablets may
also be used to capture data or handwritten signatures. It can also be used to trace an
image from a piece of paper that is taped or otherwise secured to the tablet surface.
Capturing data in this way, by tracing or entering the corners of linear polylines or
shapes, is called digitizing.
Mouse:
1. The mouse will be represented by our finger, which is tracked to recognize the
cursor movements.
Keyboard:
Painting:
1. The drawing will be done by the movement of finger and also by an object
like pen/marker.
2. A camera will be there to capture live feed of our finger movement or
pen/marker on the screen.
3. Then, through image processing, the real-time movement of the finger or
pen/marker will be detected.
4. Those movements will be drawn on screen.
In this design, moving the index finger moves the mouse pointer, moves across the
keyboard and moves over the screen for drawing. By raising two fingers, we can
perform the clicking action for the mouse, press keys on the keyboard and draw on
the screen.
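This two-finger gesture can be sketched with the same cvzone helpers used in the
implementation section. The snippet below is a minimal illustration rather than the
project's exact code; the 40-pixel pinch threshold and the autopy.mouse.click() call
are our own assumptions:

import cv2
import autopy
from cvzone.HandTrackingModule import HandDetector

cap = cv2.VideoCapture(0)
detector = HandDetector(maxHands=1, detectionCon=0.8)

while True:
    success, img = cap.read()
    img = detector.findHands(img)
    lmList, bbox = detector.findPosition(img)
    if lmList:
        fingers = detector.fingersUp()   # e.g. [0, 1, 1, 0, 0] = index + middle up
        if fingers[1] and fingers[2]:    # the two-finger gesture
            # pixel distance between index (8) and middle (12) finger tips
            l, _, _ = detector.findDistance(8, 12, img)
            if l < 40:                   # tips pinched together (threshold assumed)
                autopy.mouse.click()     # fire a left click
    cv2.imshow("Gesture", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break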
One approach, by Erdem et al., controls the motion of the mouse by fingertip tracking.
A click of the mouse button was implemented by defining a screen region such that a
click occurred when the user's hand passed over it. Another approach was developed by
Chu-Feng Lien, who controls the mouse cursor and clicking events using only fingertip
movement. His clicking method was based on image density and required the user to hold
the cursor on the desired spot for a short period of time. Paul et al. used yet another
method to click: the motion of the thumb from a 'thumbs-up' position to a fist marked a
clicking event, while making a special hand sign moved the mouse pointer.
Jun Hu developed bare-finger touch interaction on regular planar surfaces (e.g. walls
or tables) with only one standard camera and one projector, where the touching
information of the fingertips is recovered from the 2-D image captured by the camera.
We use the same concepts of camera capture and image processing, but without the help
of a projector or laser light: a simple keyboard is drawn on the screen and the typing
movement is captured by the camera, and likewise for the mouse and painting the finger
movement is captured.
4.2 KEYBOARD
In the above architecture, the hand gestures are given as input to the camera as a
live feed. These hand gestures are then processed using the OpenCV image-processing
library. Finally, the mouse-pointer movement or keyboard typing is detected.
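Throughout the listings that follow, the raw numbers 8 and 12 index into the
MediaPipe hand-landmark list exposed by cvzone: landmark 8 is the tip of the index
finger and landmark 12 the tip of the middle finger. The named constants below are
our own shorthand for reference, not part of the project code:

# MediaPipe hand-landmark indices used by the listings below.
# These named constants are illustrative; the code uses the raw numbers.
INDEX_FINGER_TIP = 8    # lmList[8]  -> (x, y) of the index finger tip
MIDDLE_FINGER_TIP = 12  # lmList[12] -> (x, y) of the middle finger tip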
1)VIRTUAL MOUSE
import cv2
import numpy as np
import time
import autopy
from cvzone.HandTrackingModule import HandDetector

wCam, hCam = 640, 480
frameR = 100         # frame reduction: margin of the active region
smoothening = 5      # higher value = smoother but slower cursor
pTime = 0
plocX, plocY = 0, 0  # previous cursor location
clocX, clocY = 0, 0  # current cursor location

cap = cv2.VideoCapture(0)
cap.set(3, wCam)
cap.set(4, hCam)
detector = HandDetector(maxHands=1)
wScr, hScr = autopy.screen.size()

while True:
    success, img = cap.read()
    img = detector.findHands(img)
    lmList, bbox = detector.findPosition(img)
    if len(lmList) != 0:
        x1, y1 = lmList[8][0:2]   # index finger tip
        x2, y2 = lmList[12][0:2]  # middle finger tip (used for gestures)
        # print(x1, y1, x2, y2)
        fingers = detector.fingersUp()
        # map camera coordinates in the reduced frame to screen coordinates
        x3 = np.interp(x1, (frameR, wCam - frameR), (0, wScr))
        y3 = np.interp(y1, (frameR, hCam - frameR), (0, hScr))
        # smooth the cursor movement between frames
        clocX = plocX + (x3 - plocX) / smoothening
        clocY = plocY + (y3 - plocY) / smoothening
        autopy.mouse.move(wScr - clocX, clocY)  # x is mirrored for a natural feel
        cv2.circle(img, (x1, y1), 15, (255, 0, 255), cv2.FILLED)
        plocX, plocY = clocX, clocY
    cTime = time.time()
    fps = 1 / (cTime - pTime)
    pTime = cTime
    cv2.putText(img, str(int(fps)), (20, 50), cv2.FONT_HERSHEY_PLAIN, 3,
                (255, 0, 0), 3)
    cv2.imshow("image", img)
    cv2.waitKey(1)
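The np.interp calls in the listing above linearly map the active camera region
[frameR, wCam - frameR] onto the full screen width [0, wScr], so a small hand
movement inside the reduced frame sweeps the whole display. A quick worked check
with illustrative values:

import numpy as np

wCam, frameR, wScr = 640, 100, 1920
# the left edge of the active region maps to screen x = 0,
# the centre of the frame maps to the centre of the screen, and so on
print(np.interp(100, (frameR, wCam - frameR), (0, wScr)))  # 0.0
print(np.interp(320, (frameR, wCam - frameR), (0, wScr)))  # 960.0
print(np.interp(540, (frameR, wCam - frameR), (0, wScr)))  # 1920.0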
2)VIRTUAL KEYBOARD
import cv2
import cvzone
from cvzone.HandTrackingModule import HandDetector
from time import sleep
from pynput.keyboard import Controller

cap = cv2.VideoCapture(0)
cap.set(3, 1280)
cap.set(4, 720)
detector = HandDetector(detectionCon=0.8)
keys = [["Q", "W", "E", "R", "T", "Y", "U", "I", "O", "P"],
        ["A", "S", "D", "F", "G", "H", "J", "K", "L", ";"],
        ["Z", "X", "C", "V", "B", "N", "M", ",", ".", "/"]]
finalText = ""
keyboard = Controller()

class Button():
    def __init__(self, pos, text, size=[85, 85]):
        self.pos = pos
        self.size = size
        self.text = text

def drawAll(img, buttonList):
    # draw every key of the on-screen keyboard
    for button in buttonList:
        x, y = button.pos
        w, h = button.size
        cvzone.cornerRect(img, (x, y, w, h), 20, rt=0)
        cv2.rectangle(img, button.pos, (x + w, y + h), (255, 0, 255), cv2.FILLED)
        cv2.putText(img, button.text, (x + 20, y + 65),
                    cv2.FONT_HERSHEY_PLAIN, 4, (255, 255, 255), 4)
    return img

buttonList = []
for i in range(len(keys)):
    for j, key in enumerate(keys[i]):
        buttonList.append(Button([100 * j + 50, 100 * i + 50], key))

while True:
    success, img = cap.read()
    img = cv2.flip(img, 1)
    img = detector.findHands(img)
    lmList, bboxInfo = detector.findPosition(img)
    img = drawAll(img, buttonList)
    if lmList:
        for button in buttonList:
            x, y = button.pos
            w, h = button.size
            # is the index finger tip (landmark 8) hovering over this key?
            if x < lmList[8][0] < x + w and y < lmList[8][1] < y + h:
                cv2.rectangle(img, button.pos, (x + w, y + h),
                              (175, 0, 175), cv2.FILLED)
                cv2.putText(img, button.text, (x + 20, y + 65),
                            cv2.FONT_HERSHEY_PLAIN, 4, (255, 255, 255), 4)
                # distance between index and middle finger tips
                l, _, _ = detector.findDistance(8, 12, img, draw=False)
                print(l)
                if l < 30:  # fingers pinched -> register a key press
                    keyboard.press(button.text)
                    cv2.rectangle(img, button.pos, (x + w, y + h),
                                  (0, 255, 0), cv2.FILLED)
                    cv2.putText(img, button.text, (x + 20, y + 65),
                                cv2.FONT_HERSHEY_PLAIN, 4, (255, 255, 255), 4)
                    finalText += button.text
                    sleep(0.15)
    cv2.imshow("Image", img)
    cv2.waitKey(1)
OUTPUT:
#Distance between index finger and middle finger
158.31613941730643
188.66372200293304
185.36720314014556
189.42280749687984
43.289721643826724
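These printed values are the pixel distances between the index (landmark 8) and
middle (landmark 12) fingertips returned by findDistance while a key is hovered. In
the listing above a press fires only when the distance drops below 30, so of these
samples only the final reading (about 43) comes close to registering as a keystroke.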
3)VIRTUAL PAINTING
import cv2
import numpy as np
import os
from cvzone.HandTrackingModule import HandDetector

brushThickness = 15
eraserThickness = 50

folderPath = "Header"            # folder containing the toolbar images
myList = os.listdir(folderPath)
print(myList)
overlayList = []
for imPath in myList:
    image = cv2.imread(f'{folderPath}/{imPath}')
    overlayList.append(image)
print(len(overlayList))
header = overlayList[0]
drawColor = (255, 0, 255)

cap = cv2.VideoCapture(0)
cap.set(3, 1280)
cap.set(4, 720)
detector = HandDetector(detectionCon=0.85)
xp, yp = 0, 0                    # previous fingertip position
imgCanvas = np.zeros((720, 1280, 3), np.uint8)

while True:
    success, img = cap.read()
    img = cv2.flip(img, 1)
    img = detector.findHands(img)
    lmList, bbox = detector.findPosition(img, draw=False)
    if len(lmList) != 0:
        x1, y1 = lmList[8][0:2]  # index finger tip
        fingers = detector.fingersUp()
        # Selection mode: index + middle fingers up -> pick a color from the header
        # (the hit regions below are illustrative; they depend on the header images)
        if fingers[1] and fingers[2]:
            xp, yp = 0, 0        # break the current stroke
            print("Selection mode")
            if y1 < 125:
                if 250 < x1 < 450:
                    header, drawColor = overlayList[0], (255, 0, 255)
                elif 550 < x1 < 750:
                    header, drawColor = overlayList[1], (255, 0, 0)
                elif 800 < x1 < 950:
                    header, drawColor = overlayList[2], (0, 255, 0)
                elif 1050 < x1 < 1200:
                    header, drawColor = overlayList[3], (0, 0, 0)  # eraser
        # Drawing mode: only the index finger up
        if fingers[1] and not fingers[2]:
            cv2.circle(img, (x1, y1), 15, drawColor, cv2.FILLED)
            print("Drawing mode")
            if xp == 0 and yp == 0:
                xp, yp = x1, y1
            if drawColor == (0, 0, 0):
                cv2.line(img, (xp, yp), (x1, y1), drawColor, eraserThickness)
                cv2.line(imgCanvas, (xp, yp), (x1, y1), drawColor, eraserThickness)
            else:
                cv2.line(img, (xp, yp), (x1, y1), drawColor, brushThickness)
                cv2.line(imgCanvas, (xp, yp), (x1, y1), drawColor, brushThickness)
            xp, yp = x1, y1
    # merge the canvas onto the live frame
    imgGray = cv2.cvtColor(imgCanvas, cv2.COLOR_BGR2GRAY)
    _, imgInv = cv2.threshold(imgGray, 50, 255, cv2.THRESH_BINARY_INV)
    imgInv = cv2.cvtColor(imgInv, cv2.COLOR_GRAY2BGR)
    img = cv2.bitwise_and(img, imgInv)
    img = cv2.bitwise_or(img, imgCanvas)
    img[0:125, 0:1280] = header  # draw the toolbar strip
    # img = cv2.addWeighted(img, 0.5, imgCanvas, 0.5, 0)
    cv2.imshow("Image", img)
    cv2.waitKey(1)
OUTPUT:
['1.jpg', '2.jpg', '3.jpg', '4.jpg']
Selection mode
Selection mode
Selection mode
Selection mode
Drawing mode
Drawing mode
Drawing mode
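The merge step at the end of the painting loop is worth unpacking: thresholding the
grayscale canvas produces an inverse mask (drawn pixels become black), the bitwise
AND cuts stroke-shaped holes out of the live frame, and the bitwise OR fills those
holes with the colored strokes. A self-contained demonstration on dummy images
(illustrative only, not project code):

import cv2
import numpy as np

# stand-in for a live camera frame (flat gray) and a canvas with one stroke
img = np.full((100, 100, 3), 128, np.uint8)
imgCanvas = np.zeros((100, 100, 3), np.uint8)
cv2.line(imgCanvas, (10, 50), (90, 50), (255, 0, 255), 5)

# inverse mask: drawn pixels -> 0 (black), untouched pixels -> 255 (white)
imgGray = cv2.cvtColor(imgCanvas, cv2.COLOR_BGR2GRAY)
_, imgInv = cv2.threshold(imgGray, 50, 255, cv2.THRESH_BINARY_INV)
imgInv = cv2.cvtColor(imgInv, cv2.COLOR_GRAY2BGR)

img = cv2.bitwise_and(img, imgInv)    # cut stroke-shaped holes in the frame
img = cv2.bitwise_or(img, imgCanvas)  # fill the holes with the stroke color

print(img[50, 50])  # [255 0 255]: the stroke pixel survives at full color
print(img[10, 10])  # [128 128 128]: untouched pixels keep the frame value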
A)OBJECT DETECTION
import cv2
import numpy as np

frameWidth = 640
frameHeight = 480
cap = cv2.VideoCapture(0)
cap.set(3, frameWidth)
cap.set(4, frameHeight)

def empty(a):
    pass  # trackbar callback; the values are polled in the loop instead

# trackbars for tuning the HSV range of the marker color
cv2.namedWindow("HSV")
for name, init, maxv in [("h_min", 0, 179), ("h_max", 179, 179),
                         ("s_min", 0, 255), ("s_max", 255, 255),
                         ("v_min", 0, 255), ("v_max", 255, 255)]:
    cv2.createTrackbar(name, "HSV", init, maxv, empty)

while True:
    success, img = cap.read()
    imgHsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    v = [cv2.getTrackbarPos(n, "HSV")
         for n in ("h_min", "s_min", "v_min", "h_max", "s_max", "v_max")]
    mask = cv2.inRange(imgHsv, np.array(v[:3]), np.array(v[3:]))
    cv2.imshow("Mask", mask)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
OUTPUT:
# h_min h_max s_min s_max v_min v_max (tuned Hue, Saturation, Value range)
3 179 0 255 0 255
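Once tuned, the six values above can be plugged directly into cv2.inRange to isolate
the marker. A small usage sketch; the file name is a placeholder for any saved
camera frame:

import cv2
import numpy as np

img = cv2.imread("frame.jpg")  # any saved camera frame (placeholder path)
imgHsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# tuned trackbar values: h_min, h_max, s_min, s_max, v_min, v_max
h_min, h_max, s_min, s_max, v_min, v_max = 3, 179, 0, 255, 0, 255
lower = np.array([h_min, s_min, v_min])
upper = np.array([h_max, s_max, v_max])

mask = cv2.inRange(imgHsv, lower, upper)       # white where the color matches
result = cv2.bitwise_and(img, img, mask=mask)  # keep only the matched pixels
cv2.imwrite("mask.jpg", mask)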
B)PAINTING:
import cv2
import numpy as np

frameWidth = 1920   # 640
frameHeight = 1080  # 480
cap = cv2.VideoCapture(0)
cap.set(3, frameWidth)
cap.set(4, frameHeight)
cap.set(10, 150)    # brightness
# marker HSV ranges: h_min, h_max, s_min, s_max, v_min, v_max
myColors = [[84, 144, 28, 255, 0, 255],
            [48, 66, 42, 159, 156, 255],
            [57, 76, 0, 100, 255, 255]]
myColorValues = [[255, 0, 0],   # BGR color drawn for each marker
                 [0, 255, 0],
                 [0, 0, 255]]
myPoints = []  # stored strokes: [x, y, colorId]

def getContours(img):
    # return the top-centre of the largest marker blob (the tip of the pen)
    contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_NONE)
    x, y, w, h = 0, 0, 0, 0
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area > 500:  # ignore small noise contours
            # cv2.drawContours(imgResult, cnt, -1, (255, 0, 0), 3)
            peri = cv2.arcLength(cnt, True)
            approx = cv2.approxPolyDP(cnt, 0.02 * peri, True)
            x, y, w, h = cv2.boundingRect(approx)
    return x + w // 2, y

def findColor(img, myColors, myColorValues):
    # mask each marker color in HSV space and collect its tip position
    imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    newPoints = []
    for count, color in enumerate(myColors):
        lower = np.array([color[0], color[2], color[4]])
        upper = np.array([color[1], color[3], color[5]])
        mask = cv2.inRange(imgHSV, lower, upper)
        x, y = getContours(mask)
        if x != 0 and y != 0:
            newPoints.append([x, y, count])
    return newPoints

while True:
    success, img = cap.read()
    imgResult = img.copy()
    for point in findColor(img, myColors, myColorValues):
        myPoints.append(point)
    for x, y, colorId in myPoints:  # redraw all stored strokes
        cv2.circle(imgResult, (x, y), 10, myColorValues[colorId], cv2.FILLED)
    cv2.imshow("Result", imgResult)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
8. APPLICATIONS
• The framework may be useful for controlling different types of games and other
applications through user-defined gestures.
• Virtual painting can be used in online classes while teaching.
• The framework may be useful for security purposes, such as recognizing a hand
pattern and granting access to a system only for the recognized hand.
• TV remote control.
• High-tech and industrial sectors.
• Hand gestures to control home appliances such as MP3 players, TVs, etc.
• Virtual-reality and immersive-reality systems: computer-generated environments
that replicate a scenario or situation, either inspired by reality or created out
of imagination.
9. TEST CASES

Test case         | Description                             | Actual result                            | Status
Start the tool    | Starting off the camera                 | Camera fragment gets started             | Pass
Color calibration | The colors (e.g. yellow) are calibrated | Colors calibrated                        | Pass
Check center      | Verify the centers of the three colors  | Centers of all the three colors detected | Pass
a) Smart Movement: Because the current recognition process is limited to a radius of
about 25 cm, adaptive zoom-in/out functions are required to improve the covered
distance, automatically adjusting the focus based on the distance between the user
and the webcam.
b) Better Accuracy & Performance: The response time relies heavily on the hardware of
the machine, including the processing speed of the processor, the size of the
available RAM and the capabilities of the webcam. The program may therefore perform
better on a capable machine with a webcam that handles different types of lighting
well.
c) Mobile Application: In the future, the application could also be used on Android
devices, where the touchscreen concept is replaced by hand gestures.
The main achievement of this project is the fingertip detection used. The method used
to detect the fingertip (R-SoG) was implemented by us without reference; we could not
find prior cases using this method for this purpose.
12. REFERENCES

2. Xiaolin Su, Yunzhou Zhang, Qingyang Zhao and Liang Gao, "Virtual keyboard: A
human-computer interaction device based on laser and image processing," College of
Information Science and Engineering, Northeastern University, Shenyang, China, 2015.
6. Jun Hu, Guolin Li, Xiang Xie, Zhong Lv and Zhihua Wang (Senior Member, IEEE),
"Bare-fingers Touch Detection by the Button's Distortion in a Projector-Camera
System."
7. K. P. Vinay, "Cursor control using hand gestures," International Journal of
Critical Accounting, vol. 0975-8887, 2016.
8. Google MediaPipe and OpenCV documentation.