ORIGINAL ARTICLES
Received: May 7, 2018 Accepted: June 6, 2018 Online Published: June 27, 2018
DOI: 10.5430/ijrc.v1n1p6 URL: https://doi.org/10.5430/ijrc.v1n1p6
ABSTRACT
This project implements an Arduino robot to simulate a brainwave-controlled wheelchair for paralyzed patients with an improved controlling method. The robot should be able to move freely anywhere under the control of the user, with no need to predefine any map or path. An accurate and natural controlling method is provided, and the user can stop the robot immediately at any time to avoid risks or danger. The project uses a low-cost brainwave-reading headset with only a single-lead electrode (the Neurosky MindWave headset) to collect the EEG signal. The BCI is built by sending the EEG signal to an Arduino Mega, which controls the movement of the robot. Eye blinking is used as the robot controlling method, because an eye blink causes a significant pulse in the EEG signal. By using a neural network to classify blinking signals and noise, the user can send a command to control the robot by blinking twice within a short period of time. The robot is evaluated by driving it in different places to test whether it can follow an expected path, avoid obstacles, and stop at a specific position.
Key Words: Brain-computer interface, Electroencephalogram, Neural network, Neurosky sensor, Wheelchair
In terms of mobility, no map or path needs to be predefined, and the robot should be able to move freely anywhere under the control of the user. In terms of accuracy, a controlling method with at least 85% accuracy should be adopted. In terms of safety, an immediate command should be provided to stop the robot at once to avoid risks or danger. In terms of cost-effectiveness, the time cost of each controlling command selection should be less than 1 second. In terms of simplicity, the robot should provide a natural controlling method that does not require left blinking or right blinking. Autonomous obstacle avoidance should be included to avoid jerky movement.

The measurement of attention and meditation levels is based on human mental activity; tightening or relaxing the muscles may not produce an immediate change in the strength of attention and meditation.[11]
at all the time. However, it is impossible to find someone who can provide support at every moment. In the worst situation, these patients can only lie in bed when they are alone in a room. Although a wheelchair is a good tool when walking is difficult or impossible, paralyzed patients cannot control one easily. Fully paralyzed patients can go somewhere only with the help of another person. There is no doubt that a mind wave-controlled wheelchair can help in their daily life.

4. COMPARISON OF WHEELCHAIR CONTROLLING METHODS
Almost all existing wheelchair controlling methods rely on physical motion of the human. Even the electric wheelchair requires control of a joystick, and the muscles of the mouth are used for the speech-controlled wheelchair. These controlling methods are not suitable for paralyzed patients in the final stage of the disease.

As mentioned in the problem statement, in some forms of paralysis, such as ALS, the eye muscles remain functional even in the final stages of the disease.[12] Therefore, the preferable wheelchair controlling methods for these patients are eye blinks and non-physical motion, such as brain activity. Many techniques can be used for eye blink detection, but most of them have limitations. One new approach is to use a built-in webcam for face and eye detection based on image processing, but the first tests showed that some people cannot successfully control and communicate through this solution.[14]

Using image processing techniques to detect eye blinks also has some limitations. First, the face should always be held steady in a specified position; otherwise, the camera cannot take a clear frame for computation. The second limitation is that adequate light must be provided for the detection. Furthermore, it is hard to implement in a real-time system, as the time cost varies. Given these limitations, this design is not an effective way to obtain eye blinks for controlling the wheelchair. Another approach commonly adopted at present for eye blink detection is to use three small electrodes.[15] These electrodes are stuck to the human skin around the orbicularis oculi muscle to collect electromyography (EMG) data. An EMG system can detect eye blinks efficiently, but this approach is not very accurate.[10] The accuracy of an EMG system is easily affected by variability in skin conductance and changes in sight positioning. The raw EMG data of the muscles are hard to obtain, because the baseline easily drifts when the patient changes sight positioning. Moreover, skin conductance is not constant, as it can change dynamically, and the biggest challenge of an EMG system is electrical noise.[10] Therefore, the idea of an EMG system is possible, but it is not an effective way to detect eye blinks.

Given the most commonly used blink detection techniques mentioned above, it is obvious that the existing approaches require a lot of data and algorithms to handle the identification of eye blinks. A new way to detect eye blinks is to use the EEG signals. With this method, only one brain wave reading device is required for the whole system to extract the EEG signals. Compared to using a built-in webcam under various lighting conditions, the EEG approach offers higher mobility. This new technique can also overcome the deficiencies of the electromyography and image processing techniques for detecting blinks. For example, according to the Neurosky website, the accuracy of the brainwave data obtained is about 96%.[16] This method also involves fewer algorithms and less computation, which means it has a lower time cost and can command the wheelchair rapidly.[10] Furthermore, the headset is a lightweight device that requires fewer accessories.[10] Hence, compared to the existing blink detection approaches, analyzing the EEG signals is more efficient and effective, and it should be the best solution for controlling the wheelchair.

5. COMPARISON OF EEG WHEELCHAIR SYSTEMS
Brice Rebsamen and his team introduced an indoor wheelchair controlling method using thought.[17] This system was introduced in 2007. The idea of this system is to build a mind-controlled wheelchair able to navigate inside a hospital or a typical house autonomously. The control mechanism of this system is based on the P300 EEG BCI, which allows the patients to choose a destination listed on a menu. The destinations listed on the menu flash one by one; by identifying the positive potential peak of the EEG signal, the system can tell which destination the patient is focusing on. Although the system provides a simple controlling method, the usage of the wheelchair is limited to a specific environment with predefined paths. If the environment changes, a new map must be loaded into the system. Moreover, the wheelchair can only arrive at a specific point; the patient cannot adjust the position. The usage environment seems to be the biggest limitation of this system.

A new wheelchair controlling method through thought has been introduced by Vaibhav Gandhi and his team.[18] A monitor module is used in this system for patients to select the movement option. The selection pointer moves from column to column with a defined time interval. Patients can perform left-hand
8 ISSN 2577-7742 E-ISSN 2577-7769
http://ijrc.sciedupress.com International Journal of Robotics and Control 2018, Vol. 1, No. 1
sensor and infrared sensor to make the final decision of the car movement. The ultrasonic sensors detect obstacles and avoid them automatically, and the infrared sensors measure the distance between the robot body and the ground to prevent falls from a stair. If a command from eye blinking is received, the robot follows this command; otherwise, it maintains the motion of the last time step and runs obstacle avoidance. The LEDs display the status of the controlling command (e.g. the front LED turns on when the user sends a “forward” movement command).

The flow chart in Figure 3 shows how the system works.
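The decision rule in this flow can be sketched in a few lines. The following Python fragment is an illustrative simulation only, not the project's Arduino code; the function name, the command strings, and the 20 cm threshold are assumptions. A blink command always wins; otherwise the robot keeps its last motion unless the ultrasonic reading demands avoidance.

```python
def next_motion(blink_command, last_motion, front_distance_cm, safe_cm=20):
    """One control step: an eye-blink command, if present, is followed;
    otherwise the robot maintains the motion of the last time step,
    falling back to obstacle avoidance when something is too close."""
    if blink_command is not None:
        return blink_command          # user command has priority
    if front_distance_cm < safe_cm:
        return "turn_right"           # avoidance maneuver (assumed choice)
    return last_motion                # keep the last time step's motion
```

With no blink command and a clear path, the robot simply keeps its previous motion, which matches the "maintain the motion in the last time step" rule above.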
before, a safety mechanism is the most important part of designing the robot. To avoid danger caused by careless or incorrect control, the system is required to provide an immediate command for the user to stop the robot. In this prototype, the user can blink three times or more within a short period to stop the robot. The robot stops immediately, while the LEDs keep lighting up one by one. The user can blink three times or more within a short period to restart the robot. The movement of the robot depends on which LED is lit at that moment.

This first prototype works on the eye blinking commands. Although the accuracy is quite low, the robot can move in different directions and stop at any time. However, it is somewhat difficult for the robot to follow a specific path, because the 4 LEDs light up one by one with a short delay; even though each delay is short, cycling through all 4 LEDs costs a large amount of time, and the user must wait until the proper LED lights up before sending the eye blinking command.

8.2 Prototype using 2 LEDs
By default, the robot moves forward until the user sends an eye blinking command. At the same time, the LEDs installed on the robot blink one by one with a short delay. Two LEDs on the robot represent the directions left and right. The user can blink twice within a short period to change the direction of the robot; the resulting movement depends on which LED is lit at that moment. If the left LED is lit, the robot rotates left; if the right LED is lit, it rotates right. During the rotation, the user can blink twice at any time to end the rotation session, after which the robot goes forward by default. Again, although the accuracy during testing is not very high, this prototype works on the eye blinking commands. Compared to the first prototype, it has fewer LEDs, so it can turn in about one second, which is acceptable. Moreover, it can accurately follow a specific path such as a rectangle, and even some irregular paths. Overall, it is a good controlling method.
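The 2-LED scheme amounts to a timed two-state cycle: the LEDs alternate with a fixed delay, and a double blink latches whichever direction is lit at that instant. A minimal Python sketch follows; the 500 ms period is an assumed value, not taken from the paper.

```python
def lit_led(t_ms, period_ms=500):
    """Which direction LED is lit at time t_ms: the two LEDs
    alternate every period_ms milliseconds."""
    return "left" if (t_ms // period_ms) % 2 == 0 else "right"

def select_direction(blink_time_ms, period_ms=500):
    """A double blink at blink_time_ms latches the currently lit LED,
    which becomes the rotation direction."""
    return lit_led(blink_time_ms, period_ms)
```

Because only two states are cycled, the worst-case wait for the desired direction is one period, consistent with the roughly one-second turn selection reported for this prototype.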
irregular path. Overall, it is also a good controlling method.

However, a noise wave may also have a peak higher than the specific value, so the peak alone is not a good criterion for identifying whether a wave is a human blink or not. In order to achieve higher accuracy in the robot controlling process, an artificial neural network was adopted.

In this project, MATLAB is used to build the neural network and to send the controlling command after classifying a wave as noise or as human blinking. The wave pattern of human blinking is further mapped to different controlling commands; for example, blinking twice rapidly belongs to the turn-left command, while blinking twice slowly belongs to the turn-right command.

10. CLASSIFY THE BLINKING AND NOISE
A wave in the EEG signal is captured through two stages of filtering. Firstly, if the peak is lower than a specific value X, as labeled in red color below, it does not count as a blink (see Figure 7).

Figure 7. First-stage noise filtering

…wave pattern will be stable. If the peak is higher than the specific value X, the wave comes to the next stage. At this stage, several points are used to record the wave pattern, as in the example (see Figure 8).

When the strength reaches value X, the timer starts to record the time until the peak is reached. The time spent in this part is represented by T1.

The peak of a wave is represented by Strength1. When the peak is reached, the timer starts again until the strength of the wave drops back to value X. The time spent in this part is represented by T2.

When the strength drops below value X, the timer starts until the strength of the wave drops below value Y. The time spent in this part is represented by T3.

When the strength is below value Y, the timer starts until the trough is reached. The time spent in this part is represented by T4.

The trough of a wave is represented by Strength2. When the trough is reached, the timer starts until the strength of the wave reaches value Y again. The time spent in this part is represented by T5.

Below is the pseudocode showing the wave pattern capturing process.

    if (raw > X)
        tic
        if (raw > peak)
            peak = raw
            T1 = toc
    if (raw < X)
        T2 = toc
    if (raw < Y)
        T3 = toc
        if (raw < trough)
            trough = raw
            T4 = toc
    if (raw > Y)
        T5 = toc
    wavePattern = (peak, trough, T1, T2, T3, T4, T5)
    NetI = wavePattern
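The same capturing logic can be expressed offline over a buffered signal. The Python sketch below is an illustrative reimplementation of the stages described above, not the project's MATLAB code; the sample values, sampling step, and thresholds in the example are invented. It returns the seven-element wave pattern [peak, trough, T1..T5], or None when the first-stage filter rejects the wave.

```python
def extract_wave_pattern(samples, dt, X, Y):
    """Two-stage wave-pattern capture: reject waves whose peak never
    exceeds X; otherwise time the five segments T1..T5 between the
    X crossing, the peak, the Y crossing, and the trough."""
    n = len(samples)
    i_up = next((i for i, s in enumerate(samples) if s > X), None)
    if i_up is None:
        return None                          # first-stage filtering: not a blink
    i_down_x = next(i for i in range(i_up, n) if samples[i] < X)
    i_peak = max(range(i_up, i_down_x), key=lambda i: samples[i])
    i_down_y = next(i for i in range(i_down_x, n) if samples[i] < Y)
    i_up_y = next(i for i in range(i_down_y, n) if samples[i] > Y)
    i_trough = min(range(i_down_y, i_up_y), key=lambda i: samples[i])
    return [samples[i_peak],                 # Strength1
            samples[i_trough],               # Strength2
            (i_peak - i_up) * dt,            # T1: X reached -> peak
            (i_down_x - i_peak) * dt,        # T2: peak -> back below X
            (i_down_y - i_down_x) * dt,      # T3: below X -> below Y
            (i_trough - i_down_y) * dt,      # T4: below Y -> trough
            (i_up_y - i_trough) * dt]        # T5: trough -> back above Y
```

The resulting vector corresponds to the wavePattern fed to the neural network input in the pseudocode above.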
personal neural network, the universal neural network should be more accurate in terms of classifying human blinking and noise. The universal neural network can also serve as a reference for building a personal neural network, as it already provides a good classification of human blinking and noise.

13. AUTONOMOUS CONTROLLING FUNCTION
Because of the safety issue, 5 ultrasonic sensors and 3 infrared sensors are added to the robot. The ultrasonic sensors in the front part of the robot detect obstacles; each sensor emits an ultrasonic wave with a time delay to avoid the wave-conflict problem. The infrared sensors are installed at the bottom part of the robot and are used to measure the distance between the robot body and the ground to prevent falls from a stair (see Figure 9).

By including autonomous obstacle avoidance, the motion can be modified, and fewer controlling commands are required even when the road has many obstacles. If a command from eye blinking is received, the robot follows this command; otherwise, it maintains the motion of the last time step and runs obstacle avoidance. In terms of performance, the car becomes smoother in motion.

By adopting the modified method, users are able to get as close to an obstacle as they may want, and users can control the robot by changing the direction even when one or more sensors have an unexpected error. Therefore, both limitations are solved.

By including autonomous terrain detection, the motion can be modified to prevent the robot from falling from a stair. If a command from eye blinking is received, the robot follows this command; otherwise, it maintains the motion of the last time step and runs terrain detection.

The same method is used as mentioned above. By adopting the modified method, users are able to get as close to the stair as they may want, and users can control the robot by changing the direction even when one or more sensors have an unexpected error.

14. TESTING
Accuracy and safety are the most important parts of this project. In order to test the accuracy of the robot, the following methods are used, and each method is tested for three rounds.

During the tests, the number of correct commands means the motion of the robot matches the command sent by the user; the number of wrong commands means either: 1) the motion does not match the command, 2) the command is seen as noise, or 3) noise is seen as a command.
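Under this definition, per-round accuracy reduces to a simple tally. A small illustrative Python helper follows; the log format is invented for the example.

```python
def round_accuracy(log):
    """log is a list of (sent_command, observed_motion) pairs for one
    round; a pair counts as correct only when the motion matches the
    command exactly, so mismatches, commands read as noise, and noise
    read as commands all count as wrong."""
    correct = sum(1 for sent, observed in log if sent == observed)
    return correct / len(log)
```

For instance, 20 matching pairs out of 24 commands give the 83.3% reported later for round 1 of the static obstacle avoidance test.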
The first test focuses on autonomous terrain detection (see Figure 10). During the testing, the robot should detect the terrain and avoid falling from the stair. The testing criterion is to count the number of falls from the stair. In this test, a higher-level ground is used to simulate the stair.

During the test, the robot avoided falling from the stair successfully. When the sensors detect that the distance between the robot body and the ground is too large, the robot goes back and turns to avoid falling from the stair.
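The back-and-turn reaction can be sketched as a threshold check on the downward infrared reading. This Python fragment is illustrative only; the 8 cm drop threshold and the action names are assumptions, not values from the paper.

```python
def terrain_action(ground_distance_cm, drop_threshold_cm=8):
    """If the measured distance between the robot body and the ground
    exceeds the threshold, treat it as a stair edge: back up, then
    turn away. Otherwise it is safe to continue."""
    if ground_distance_cm > drop_threshold_cm:
        return ["backward", "turn"]   # retreat from the detected edge
    return ["forward"]
```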
The next test is based on a simple rectangular map (see Figure 11). During the testing, the robot should follow the rectangle drawn on the floor. The testing criterion is to count the number of incorrect commands received. In this test, 5 checkpoints are labeled on the ground. The robot should reach each checkpoint in the order red, orange, yellow, green, and finally stop at blue.

Figure 11. Test on a simple map

During the test, the robot passed through all the checkpoints successfully. Even when the robot occasionally did not recognize the user's command correctly, the user could send the command again to avoid the robot being derailed, or directly stop the robot by blinking three times or more rapidly. The stop command is the most sensitive and accurate command, so it can be used to prevent risks.

The next test is based on an irregular map (see Figure 12). During the testing, the robot should follow the irregular path on the floor. The testing criterion is to count the number of incorrect commands received. In this test, 5 checkpoints are labeled on the ground. The robot should reach each checkpoint in the order red, orange, yellow, green, and finally stop at blue.

Figure 12. Test on an irregular map

The testing results are listed in Table 3.

As in the previous test, the robot passed through all the checkpoints successfully. The accuracy is around 90%.

Figure 13. Static obstacle avoidance
The next test focuses on the static obstacle avoidance function (see Figure 13). During the testing, the robot should reach all checkpoints and avoid the obstacles automatically, even when the user is not sending any command. The testing criteria are to count the number of incorrect commands received and the number of times the robot hits an obstacle. In this test, 5 checkpoints are labeled on the ground. The robot should reach each checkpoint in the order red, orange, yellow, green, and finally stop at blue. Several static obstacles are placed on the path, which the robot should avoid automatically. The testing results are listed in Table 4.

Table 4. Testing results of static obstacle avoidance

Round   Number of commands sent   Correct   Wrong   Accuracy   Obstacle hit
1       24                        20        4       83.3%      0
2       25                        22        3       88%        0
3       26                        21        5       80.7%      0

Although the accuracy becomes lower in this test, the average accuracy stays at around 85%, which is acceptable. One possible reason for the accuracy drop is that we needed to find a path that would hit the obstacles, so that the obstacle avoidance function could be tested during the test; this abnormal motion may cause some confusion in terms of robot control.

The final test focuses on the dynamic obstacle avoidance function (see Figure 14). During the test, the robot should avoid the dynamic obstacle automatically, even when the user is not sending any command. The testing criterion is to count the number of times the robot hits the obstacle.

Figure 14. Dynamic obstacle avoidance

The testing results are listed in Table 5.

Table 5. Testing results of dynamic obstacle avoidance

Round   Number of times of placing the obstacles   Impacted   Avoided   Accuracy
1       22                                         1          16        94.1%

During the test, the robot avoided almost all dynamic obstacles successfully. The robot was impacted only when the speed of the dynamic obstacle was faster than the speed of the robot.

15. EVALUATION
In terms of mobility, this project provides a method in which no map or path needs to be predefined. Although some other systems provide a simple controlling method, the usage of the wheelchair is limited to a specific environment because those systems require predefined paths;[17] if the environment changes, a new map must be loaded into the system. In this project, the robot can move anywhere like a real wheelchair. The user can blink three times or more to stop or start the robot, and blink twice to start or stop turning. This is a simple controlling method that allows the robot to move in any direction without relying on any predefined path. However, it is important to ensure that the Bluetooth connection between the devices is established and that the signal strength is stable.

In terms of accuracy, this project provides a robot controlling method with around 85% accuracy on average, which is an acceptable performance. During the tests, we found that the accuracy is relatively high in levels 1 and 2, where it can reach 100%. However, starting from level 4, the accuracy becomes lower. One possible reason is that we needed to find a path that would hit the obstacles so that the obstacle avoidance function could be tested. This abnormal motion may cause some confusion for the robot controller, and the situation should not occur in real life, as users will not want to hit obstacles with the wheelchair. Although the accuracy drops from level 4 onward, the method still provides 85% accuracy on average.

In terms of safety, this project provides an immediate command to stop the robot to avoid risk. Obstacle avoidance and autonomous terrain detection are also included to enhance safety. Some other similar systems require users to select a stop command, and the command selection may take up to 7 seconds. Therefore, compared to other similar systems, this project performs better in terms of safety, as it provides an immediate command to stop the robot.[9, 17, 18]

In terms of cost-effectiveness, all the controlling commands of the robot are in real time. The user does not need to wait
Published by Sciedu Press 17

before sending any command. Also, all the commands are just simple blinks, which the user can send immediately. Some other similar systems require users to spend a long time selecting a controlling command, and the commands are difficult to perform, for example, performing motor imagery, keeping up a high attention level, or performing stressful blinking.[17–19] This project provides a set of simple and effective commands for the user to control the robot: blinking three times or more starts or stops the robot, and blinking twice starts or stops turning. These two simple commands can be performed rapidly, so the user can control the robot in real time. Therefore, compared to other similar systems, the method used in this project is more cost-effective, as it provides an immediate command to stop the robot.

In terms of simplicity, this project provides a natural controlling method that does not require unnatural blinking (e.g. left blinking, right blinking, strong blinking, long blinking). The user can blink twice to turn and blink three times or more to stop. Also, as autonomous obstacle avoidance and autonomous terrain detection are included, the user can send fewer commands and avoid jerky blinking. Some other similar systems require users to avoid obstacles manually. Considering the daily-life situation, there must be some static and dynamic obstacles in the street, and users may need to send many commands if they are required to avoid the obstacles manually. Therefore, the controlling method implemented in this project includes the autonomous obstacle avoidance and autonomous terrain detection functions to reduce the frequency of sending commands.

Overall, all the objectives of this project are met, which is a huge success in the development of the mind wave-controlled robot.

16. CONCLUSION
This project has provided a new method for paralyzed patients to control a wheelchair using a low-cost brainwave-reading headset. It has implemented an Arduino robot to simulate a brainwave-controlled wheelchair for paralyzed patients with an improved controlling method. The robot can move freely anywhere under the control of the user, and it is not required to predefine any map or path. An accurate and natural controlling method is provided, and the user can stop the robot immediately at any time to avoid risks or danger.

This project used eye blinking as the robot controlling method, as an eye blink causes a significant pulse in the EEG signal. By using the neural network to classify the blinking signal and the noise, the user can send a command to control the robot by blinking twice within a short period of time.

Autonomous obstacle avoidance and autonomous terrain detection are used to reduce the frequency of sending the controlling commands and to avoid risks and danger immediately. This project has been evaluated by driving the robot in different places to test whether it can follow the expected path, avoid the obstacles, and stop at a specific position. The accuracy is around 85%, which is acceptable, and the robot can arrive at all the checkpoints, avoid all the obstacles, and stop at a specific point accurately.

17. FUTURE WORK
Future work will be implementing this method on a real wheelchair and carrying out experiments with patients with paralysis such as ALS. It is important to ensure that these patients are able to use our mind wave-controlled wheelchair system satisfactorily. As mentioned in a survey,[27] the wheelchair functions needed by ALS patients are different from those of a normal motorized wheelchair. When implementing this method on a real wheelchair for ALS patients, collecting the user experience will be a very important step to make the wheelchair a success.

Moreover, in terms of scalability, the current system can be further modified to be more scalable and to add more modules. For example, it can be extended to a wheelchair with an arm[28] or combined with other BCI applications.[29]

To achieve higher mobility and lower cost, instead of a laptop, a powerful single-board computer like the Raspberry Pi can be used to implement and train the neural network.[30]
REFERENCES
[5] McFarland DJ, Wolpaw JR. Brain-computer interfaces for communication and control. Communications of the ACM. 2011; 54(5): 60-66. PMid:21984822. https://doi.org/10.1145/1941487.1941506
[6] He B. Neural Engineering. Boston (MA): Kluwer Academic/Plenum Publishers; 2005.
[7] Girase P, Deshmukh M. Mindwave Device Wheelchair Control. International Journal of Science and Research (IJSR). 2016; 5(6): 2172-2176. https://doi.org/10.21275/v5i6.NOV164722
[8] Stamps K, Hamam Y. Towards Inexpensive BCI Control for Wheelchair Navigation in the Enabled Environment – A Hardware Survey. Brain Informatics, Lecture Notes in Computer Science. 2010: 336-345.
[9] Arzak M, Sunarya U, Hadiyoso S. Design and Implementation of Wheelchair Controller Based Electroencephalogram Signal using Microcontroller. International Journal of Electrical and Computer Engineering (IJECE). 2016; 6(6): 2878. https://doi.org/10.11591/ijece.v6i6.11452
[10] Ghorpade S, Patil A. Mindwave-A New Way to Detect an Eye Blink. IJARCCE. 2015: 82-84.
[11] Yasui Y. A Brainwave Signal Measurement and Data Processing Technique for Daily Life Applications. Journal of Physiological Anthropology. 2009; 28(3): 145-150. PMid:19483376. https://doi.org/10.2114/jpa2.28.145
[12] Tjust A. Extraocular Muscles in Amyotrophic Lateral Sclerosis. PhD dissertation, Umeå; 2017.
[13] Udayashankar A, Kowshik A, Chandramouli S, et al. Assistance for the Paralyzed Using Eye Blink Detection. 2012 Fourth International Conference on Digital Home. 2012: 104-108.
[14] Ayudhya C. A Method for Real-Time Eye Blink Detection and Its Application. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR); 2017.
[15] Aramideh M, Eekhof J, Bour L, et al. Electromyography and recovery of the blink reflex in involuntary eyelid closure: a comparative study. Journal of Neurology, Neurosurgery & Psychiatry. 1995; 59(6): 662. https://doi.org/10.1136/jnnp.59.6.662-b
[16] NeuroSky. Exercise Equipment for Your Mind.
[17] Rebsamen B, Burdet E, Guan C, et al. Controlling a wheelchair using a BCI with low information transfer rate. 2007 IEEE 10th International Conference on Rehabilitation Robotics. 2007: 1003-1008.
[18] Gandhi V, Prasad G, Coyle D, et al. EEG-Based Mobile Robot Control Through an Adaptive Brain–Robot Interface. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2014; 44(9): 1278-1285. https://doi.org/10.1109/TSMC.2014.2313317
[19] Stephygraph LR, Arunkumar N, Venkatraman V. Wireless mobile robot control through human machine interface using brain signals. 2015 International Conference on Smart Technologies and Management for Computing, Communication, Controls, Energy and Materials (ICSTM); 2015.
[20] Bluetooth Module HC-05. 2017. [Internet]. Available from: https://wiki.eprolabs.com/index.php?title=Bluetooth_Module_HC-05 [Accessed: 19-Nov-2017].
[21] L298 Dual H-Bridge Motor Driver. 2017. [Internet]. Available from: http://www.robotshop.com/media/files/pdf/datasheet-mot103b1m.pdf [Accessed: 19-Nov-2017].
[22] Nguyen HT, Trung N, Toi V, et al. An autoregressive neural network for recognition of eye commands in an EEG-controlled wheelchair. 2013 International Conference on Advanced Technologies for Communications (ATC 2013), Ho Chi Minh City. 2013: 333-338.
[23] Ning B, Li MJ, Liu T, et al. Human Brain Control of Electric Wheelchair with Eye-Blink Electrooculogram Signal. Intelligent Robotics and Applications, Lecture Notes in Computer Science. 2012: 579-588.
[24] Levenberg-Marquardt (trainlm): Backpropagation (Neural Network Toolbox). [Internet]. Available from: http://matlab.izmiran.ru/help/toolbox/nnet/backpr12.html [Accessed: 05-Jan-2018].
[25] Yu H, Wilamowski BM. Levenberg-Marquardt Training. Industrial Electronics Handbook, vol. 5, Intelligent Systems, 2nd Edition, chapter 12, pp. 12-1 to 12-15. CRC Press; 2011.
[26] trainlm Documentation, MATLAB & Simulink. [Internet]. Available from: https://www.mathworks.com/help/nnet/ref/trainlm.html [Accessed: 10-Jan-2018].
[27] Trail M, Nelson N, Van JN, et al. Wheelchair use by patients with amyotrophic lateral sclerosis: A survey of user characteristics and selection preferences. Archives of Physical Medicine and Rehabilitation. 2001; 82(1): 98-102. PMid:11239293. https://doi.org/10.1053/apmr.2001.18062
[28] Bousseta R, Ouakouak IE, Gharbi M, et al. EEG Based Brain Computer Interface for Controlling a Robot Arm Movement Through Thought. Irbm. 2018; 39(2): 129-135. https://doi.org/10.1016/j.irbm.2018.02.001
[29] Sanjana M. Brain Computer Interface and its Applications – A Review. 2017 Jun; 8(5).
[30] Kucukyildiz G, Ocak H, Karakaya S, et al. Design and Implementation of a Multi Sensor Based Brain Computer Interface for a Robotic Wheelchair. Journal of Intelligent & Robotic Systems. 2017; 87(2): 247-263. https://doi.org/10.1007/s10846-017-0477-x