Article
Attention Measurement of an Autism Spectrum Disorder User
Using EEG Signals: A Case Study
José Jaime Esqueda-Elizondo 1,2, Reyes Juárez-Ramírez 2, Oscar Roberto López-Bonilla 1,
Enrique Efrén García-Guerrero 1, Gilberto Manuel Galindo-Aldana 3, Laura Jiménez-Beristáin 2,
Alejandra Serrano-Trujillo 2, Esteban Tlelo-Cuautle 4 and Everardo Inzunza-González 1,*
EEG records. There are also studies such as [32], where the artifact-free signal from two
electrodes was used to detect ASD. In [33], a hybrid lightweight feature extractor was applied
to spectrogram images generated from the EEG signals.
Recent studies focus on the relationship between human and machine behavior,
based on the premise that diverse social and psychological backgrounds correspond in
practice with different modalities of human–computer interaction [34]. In general, EEG
feature extraction techniques have offered strong clinical consistency since the beginning
of their use for assessing and diagnosing different cognitive and neurological domains
in ASD [35], learning difficulties [36], and attention [37]. It is widely accepted that AI
techniques are helpful for automatic diagnosis and rehabilitation procedures in ASD cases.
For example, [38] presents a review of DL methods focusing on neuroimaging-based approaches
for diagnosing ASD, together with the challenges in automated ASD detection and rehabilitation.
Nowadays, there are several DL applications for brain disease diagnosis, such as the one
presented in [39], a review of automated multiple sclerosis (MS) detection methods based
on MRI. The authors note that the most commonly used architectures for MS detection
are convolutional neural networks (CNNs), autoencoders (AEs), generative adversarial
networks (GANs), and CNN-RNN models. Schizophrenia (Sz) is another brain disease
detected with DL methods using EEG signal processing [40]. The authors compare their
results with the traditional AI methods, such as support vector machine (SVM), k-nearest
neighbors, decision tree, naïve Bayes, random forest, extremely randomized trees, and
bagging. The DL models used are long short-term memories (LSTMs), one-dimensional
convolutional networks (1D-CNNs), and 1D-CNN-LSTMs. The CNN and LSTM models perform
best under five-fold cross-validation. Moreover, epileptic seizures
are detectable by using EEG signal processing; for example, in [41], the authors present a
novel diagnostic procedure that uses fuzzy theory and DL techniques. They propose an
adaptive neuro-fuzzy inference system (ANFIS) combined with a breeding swarm (BS)
optimization method. This ANFIS-BS approach achieves an accuracy of 99.74% in a two-class
classification task. Appendix A summarizes the state of the art in Tables A1 and A2 and shows a
comparison with the proposed method, considering the dataset, data source, preprocessing,
methods/algorithm, main findings, and applications.
The research questions that motivate this paper are: (1) Which brain regions are activated,
on average, when attention increases, at what levels, and does this depend on the type of
activity performed? (2) Can the attention span of a person with Autism Spectrum Disorder
be quantified as a feature using time-frequency analysis methods? (3) Is there a
relationship between the increase in the power of electroencephalographic signals and
attention span in a child with Autism Spectrum Disorder?
In this paper, the hypothesis is that measuring and quantifying the brain’s electrical
activity (power spectrum density) makes it possible to assess the level of attention when
performing various cognitive activities and interacting with different software or systems.
Therefore, this article aims to detect when an ASD user has high attention levels while
performing learning activities, based on the EEG signals acquired by an Epoc+ Brain–Computer
Interface (BCI). The novelty of this paper is the use of ML algorithms to classify the “Attention”
and “No Attention” states of an ASD user. This research presents a new methodology
based on EEG signals and ML algorithms for classifying the attention of a 13-year-old
boy with ASD. This research formulates a method for processing electroencephalographic
signals to determine attention lapses in people with ASD, tested by performing various
learning activities and interacting with computer programs.
The rest of this paper is organized as follows. Section 2 presents the materials and the
proposed methodology. Section 3 shows the findings of this paper. Section 4 presents the
discussion. Finally, Section 5 summarizes our conclusions.
Figure 1. Electrode locations (left side) of the Epoc+ headset (right side) of Emotiv Inc.; the layout
shows positions AF3, AF4, FP1, FP2, F7, F8, F3, F4, FC5, FC6, T7, T8, CMS, DRL, P3, P4, P7, P8, O1,
and O2. Taken from the Emotiv website https://emotiv.gitbook.io/epoc-user-manual/, accessed on
29 December 2021.
The signal was sampled at 2048 Hz, filtered with a dual-notch filter at 50 Hz and 60 Hz
and a low-pass filter at 64 Hz, and then downsampled to 128 Hz for transmission. It was
necessary to multiply the raw signal by 0.51 µV to convert it to a voltage.
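As an illustration only, the following Python sketch (not the authors' exact pipeline) reproduces this preprocessing chain with NumPy and SciPy; the hypothetical raw_counts array of ADC samples at 2048 Hz and the filter orders are assumptions made for the example.

import numpy as np
from scipy import signal

FS_RAW = 2048   # acquisition sampling rate in Hz
FS_OUT = 128    # sampling rate after downsampling in Hz
LSB_UV = 0.51   # assumed scale factor: microvolts per raw count

def preprocess(raw_counts: np.ndarray) -> np.ndarray:
    """Convert raw counts to microvolts, notch-filter 50/60 Hz,
    low-pass at 64 Hz, and downsample from 2048 Hz to 128 Hz."""
    x = raw_counts * LSB_UV                          # counts -> microvolts
    for f0 in (50.0, 60.0):                          # dual notch for mains interference
        b, a = signal.iirnotch(f0, Q=30.0, fs=FS_RAW)
        x = signal.filtfilt(b, a, x)
    b, a = signal.butter(4, 64.0, btype="low", fs=FS_RAW)
    x = signal.filtfilt(b, a, x)                     # anti-alias low-pass at 64 Hz
    return signal.resample_poly(x, FS_OUT, FS_RAW)   # 2048 Hz -> 128 Hz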
The proposed data acquisition process is as follows:
Step 1. Place the headset with the electrodes hydrated on the test subject.
Step 2. Start the video recording and the EEG data acquisition.
Step 3. Give the worksheet to the test subject and the instructions.
Step 4. Let the test subject start the activity, and give him additional instructions if neces-
sary, as in a regular school session.
Step 5. When the activity is over, stop video recording and data acquisition.
Figure 2 shows the EEG acquisition process and how the boy worked with the activity
sheets using the Epoc+ headset.
Figure 2. Data acquisition process with the Emotiv Epoc+. The EEG recordings start once the headset
is correctly positioned and the signal quality and electrode contacts are verified (green level).
Pictures are from http://imagentv.uabc.mx/videos/electro-encefalograf%C3%ADas-y-autismo-uabc-no-se-detiene-imago,
accessed on 29 December 2021.
Figure 3. Example of reading, following instructions, and drawing activity sheet. This activity
requires the child to read and follow instructions. The activity sheets are from https://familiaycole.
com/, accessed on 29 December 2021.
Figure 4. Example of counting animals activity sheet. This activity requires the child to identify,
classify, count the animals, and write the number in the white square. The activity sheets are from
https://www.actividadesdeinfantilyprimaria.com/, accessed on 29 December 2021.
Figure 5. Block diagram of the proposed method. The first stage is signal preprocessing, followed by
band power separation and then the feature extraction stage. Next come the feature validation process,
the machine learning training stage, and finally the attention quantification result.
Figure 6. Band power separation example, Welch power spectral density estimate (illustrative figure).
The Emotiv software uses two-second windows to calculate the power spectrum
density in absolute values, with units of µV²/Hz, and then separates it into bands. The
two-second window involves 256 samples [47,48].
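A minimal sketch of this band-power computation is shown below, using SciPy's Welch estimator on a two-second (256-sample) window; the Theta (4–8 Hz), Alpha (8–12 Hz), and Beta (12–30 Hz) band edges are assumptions made for illustration rather than values taken from the Emotiv documentation.

import numpy as np
from scipy import signal

FS = 128                 # sampling rate in Hz after downsampling
WIN = 2 * FS             # two-second window = 256 samples
BANDS = {"theta": (4.0, 8.0), "alpha": (8.0, 12.0), "beta": (12.0, 30.0)}  # assumed edges in Hz

def band_powers(window_uv: np.ndarray) -> dict:
    """Absolute band power of one 2-s EEG window (from the uV^2/Hz PSD)."""
    freqs, psd = signal.welch(window_uv, fs=FS, nperseg=WIN)  # Welch PSD estimate
    df = freqs[1] - freqs[0]
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = float(psd[mask].sum() * df)   # rectangle-rule integration over the band
    powers["total"] = powers["theta"] + powers["alpha"] + powers["beta"]
    return powers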
Figure 7 shows an example of band power separation. For this paper, we use the
electrodes F3, P7, F4, and P8 because they show high coherence in attention tasks [44,45].
Figure 7. Band power separation example from F4 electrode. (a) Theta band power, (b) Alpha band
power, (c) Beta band power, (d) Total band power.
Table 1. Features based on band power, where T = θ + α + β is the total power [48].

Feature                        Equation
Theta Relative Power           TRP = θ/T
Alpha Relative Power           ARP = α/T
Beta Relative Power            BRP = β/T
Theta–Beta Ratio               TBR = θ/β
Theta–Alpha Ratio              TAR = θ/α
Theta/(Alpha + Beta) Ratio     TBAR = θ/(α + β)
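For illustration, the Table 1 features can be computed from the per-window band powers as in the following sketch, which is a direct transcription of the equations above rather than the authors' code.

def attention_features(theta: float, alpha: float, beta: float) -> dict:
    """Relative powers and ratios of Table 1 for one analysis window."""
    T = theta + alpha + beta                  # total power
    return {
        "TRP": theta / T,                     # Theta Relative Power
        "ARP": alpha / T,                     # Alpha Relative Power
        "BRP": beta / T,                      # Beta Relative Power
        "TBR": theta / beta,                  # Theta-Beta Ratio
        "TAR": theta / alpha,                 # Theta-Alpha Ratio
        "TBAR": theta / (alpha + beta),       # Theta/(Alpha + Beta) Ratio
    }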
Figure 8 depicts the Theta, Alpha, and Beta relative powers (R.P.) obtained for the
F4 electrode using the equations presented in Table 1. These R.P. values change over time
as a function of the activity performed. Figure 9 shows the Theta–Beta Ratio,
Theta–Alpha Ratio, and Theta/(Alpha + Beta) Ratio for the same F4 electrode.
Figure 8. Example of relative powers obtained from F4 electrode. (a) F4 Theta relative power, (b) F4
Alpha relative power, and (c) F4 Beta relative power.
Figure 9. Example of ratios obtained from F4 electrode. (a) Theta–Beta Ratio, (b) Theta–Alpha Ratio,
(c) Theta/(Alpha + Beta) Ratio.
3. Results
To evaluate the ML models, we rely on the metrics of the Scikit Learn library [49].
The metrics used to evaluate the scoring of the ML models are the confusion matrix (true
positives, true negatives, false positives, false negatives), accuracy, F1 score, precision,
sensitivity/recall, and specificity.
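As a sketch of how these scoring parameters can be obtained with the Scikit-Learn metrics module [49], the snippet below assumes hypothetical label vectors y_true and y_pred (1 = “Attention”, 0 = “No Attention”) for a held-out test set.

from sklearn.metrics import (confusion_matrix, accuracy_score, f1_score,
                             precision_score, recall_score)

def scoring_report(y_true, y_pred) -> dict:
    """Confusion-matrix entries and the scoring parameters reported in Table 2."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "true_positive": tp, "true_negative": tn,
        "false_positive": fp, "false_negative": fn,
        "accuracy": accuracy_score(y_true, y_pred),
        "f1_score": f1_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "sensitivity_recall": recall_score(y_true, y_pred),
        "specificity": tn / (tn + fp),
    }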
Table 2 shows the scoring parameters obtained for the ML models tested in this paper.
The first four parameters correspond to the entries of the confusion matrix. Naive Bayes
obtained an accuracy of 0.7628, SGD 0.8619, decision trees 0.8697, SVM-RBF 0.8940, KNN 0.8968,
MLP-NN 0.9298, random forest 0.9291, and extra trees 0.9270. Therefore, the MLP-NN model has
the best accuracy score, followed closely by random forest and extra trees.
Regarding the F1 score, naive Bayes, SGD, decision trees, and SVM-RBF obtained scores
lower than 0.90, whereas the KNN, MLP-NN, random forest, and extra trees models obtained
scores greater than 0.90, with random forest achieving the highest score (0.9314). Regarding
specificity/precision, the naive Bayes model was the lowest, while the MLP-NN and random
forest models were the highest, with 0.9155 and 0.8955, respectively. Regarding the
sensitivity/recall score, all the models obtained a result greater than 0.90, except decision
trees with 0.8720, and the extra trees model achieved the best result with 0.9738.
Table 3 shows the performance metrics obtained for each ML model. The metrics used
to evaluate the performance of the ML models were the Area Under the Curve (AUC),
the Cohen’s Kappa coefficient, Hamming loss, and the Matthews correlation coefficient.
Regarding the AUC metric, we notice that the naive Bayes, stochastic gradient descent, and
decision trees models are the lowest, with 0.7642, 0.8624, and 0.8697, while the support
vector machine (SVM)-RBF, KNN, extra trees, MLP-NN, and random forest (R.F.) models
are the ones that obtained the best AUC, with 0.8944, 0.8972, 0.9274, 0.9299, and 0.9294,
respectively, with the MLP-NN model obtaining the best AUC. Regarding Cohen’s Kappa
coefficient, this measure was originally intended to compare labelings by different human
annotators rather than a classifier against ground truth. The Kappa score is a number between
−1 and 1. Scores above 0.8 indicate good agreement; zero or lower means no agreement
(practically random labels). We observe that the naive Bayes, stochastic gradient descent,
decision trees, support vector machine (SVM)-RBF, and KNN models obtained a Kappa coefficient
less than 0.80 but greater than zero. In contrast, the extra trees, MLP-NN, and random forest
(R.F.) models obtained Kappa coefficients of 0.8542, 0.8597, and 0.8583, respectively, which are
greater than 0.80, meaning that these ML models show good agreement. The MLP-NN model
obtained the highest Cohen’s Kappa coefficient.
Table 2. Scoring parameters of the machine-learning algorithms evaluated in this study.

Scoring Parameter       Naive Bayes   SGD      Decision Trees   SVM-RBF   KNN      MLP-NN   Random Forest (RF)   Extra Trees
True positive           1984          2720     2967             2874      2892     3126     3039                 3013
True negative           3194          3131     2937             3195      3196     3186     3268                 3280
False positive          1436          700      453              546       528      294      381                  407
False negative          174           237      431              173       172      182      100                  88
Accuracy                0.7628        0.8619   0.8697           0.8940    0.8968   0.9298   0.9291               0.9270
F1 Score                0.7986        0.8698   0.8691           0.8988    0.9012   0.9304   0.9314               0.9278
Specificity/Precision   0.6898        0.8172   0.8663           0.8540    0.8582   0.9155   0.8955               0.8896
Sensitivity/Recall      0.9483        0.9296   0.8720           0.9486    0.9489   0.9459   0.9703               0.9738
Table 3. Performance metrics obtained for each ML model.

Machine Learning Algorithm           AUC      Cohen's Kappa Coefficient   Hamming Loss   Matthews Correlation Coefficient
Naive Bayes                          0.7642   0.5269                      0.2371         0.5674
Stochastic Gradient Descent          0.8624   0.7241                      0.1380         0.7310
Decision Trees                       0.8697   0.7395                      0.1302         0.7395
Support Vector Machine (SVM)-RBF     0.8944   0.7883                      0.1059         0.7931
KNN                                  0.8972   0.7939                      0.1031         0.7983
Extra Trees                          0.9274   0.8542                      0.0729         0.8580
MLP-NN                               0.9299   0.8597                      0.0701         0.8602
Random Forest (RF)                   0.9294   0.8583                      0.0708         0.8613
Regarding the Hamming loss, an ideal model would have a value of zero; that is, the closer
the value is to zero, the better the model. In this case, the extra trees, MLP-NN, and
random forest (R.F.) models have the lowest Hamming loss, with the MLP-NN model achieving
the lowest value of 0.0701. In machine learning, the Matthews correlation coefficient (MCC),
or phi coefficient, introduced by biochemist Brian W. Matthews [50], is used as a measure of
the quality of binary (two-class) classifications. In this case, the three best models are
extra trees, MLP-NN, and random forest (R.F.), with 0.8580, 0.8602, and 0.8613, respectively,
with random forest (R.F.) being the best.
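For reference, all four performance metrics of Table 3 are available in Scikit-Learn [49] and could be computed as in the sketch below, where y_score is a hypothetical vector of predicted probabilities for the “Attention” class and y_pred contains the hard labels.

from sklearn.metrics import (roc_auc_score, cohen_kappa_score,
                             hamming_loss, matthews_corrcoef)

def performance_report(y_true, y_pred, y_score) -> dict:
    """Performance metrics of Table 3 for a binary attention classifier."""
    return {
        "auc": roc_auc_score(y_true, y_score),             # area under the ROC curve
        "cohen_kappa": cohen_kappa_score(y_true, y_pred),  # chance-corrected agreement
        "hamming_loss": hamming_loss(y_true, y_pred),      # fraction of misclassified samples
        "mcc": matthews_corrcoef(y_true, y_pred),          # phi coefficient
    }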
Figure 11 depicts the ROC curve of the top five ML models trained for attention
classification using EEG data. The ROC curve shows the trade-off between sensitivity (TPR)
and specificity (1-FPR). Classifiers that give curves closer to the top-left corner indicate
better performance. The closer the curve comes to the 45-degree diagonal of the ROC space,
the less accurate the test is. The SVM-RBF and KNN models are closer to the 45-degree
diagonal, resulting in less accuracy. On the other hand, the random forest, extra trees,
and MLP-NN models are closest to the upper left. Therefore, they are the ones with the
best performance.
Figure 11. The receiver operating characteristic curve (ROC) of the top five ML models trained for
attention classification using EEG data.
Figure 12 depicts the training time of the eight ML models tested in this study. The
N.B., SGD, KNN, and D.T. models have the shortest training time. However, according to
the results shown in Tables 2 and 3, they have the lowest performance metrics. In contrast,
the SVM-RBF, R.F., and MLP-NN models have longer training times of 17.01, 21.14, and
73.10 s, respectively, with the MLP-NN model having the longest training time. However, this
model also has the best performance metrics, as shown in Tables 2 and 3. Therefore, the classifier
designer must conduct a cost–benefit analysis in terms of accuracy and processing time. In
most cases, programmers prefer better accuracy, sacrificing training time since this process
(training) is only done once and only uses the trained model. For this reason, in this study,
it would be more convenient to choose the MLP-NN model.
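A simple way to reproduce such a comparison is to time each call to fit(), as in the sketch below; the default hyperparameters are an assumption for illustration and not the settings used in this study.

import time
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def training_times(X_train, y_train) -> dict:
    """Wall-clock training time in seconds for a few candidate models."""
    models = {
        "RF": RandomForestClassifier(),
        "ET": ExtraTreesClassifier(),
        "MLP-NN": MLPClassifier(),
        "SVM-RBF": SVC(kernel="rbf"),
    }
    times = {}
    for name, model in models.items():
        start = time.perf_counter()
        model.fit(X_train, y_train)
        times[name] = time.perf_counter() - start
    return times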
Figure 12. Training time of the eight ML models evaluated in this study (MLP-NN 73.1001 s,
RF 21.1470 s, SVM-RBF 17.0145 s, ET 2.2181 s, and the DT, KNN, SGD, and NB models each
under 0.25 s).
4. Discussion
In this research, we observed that the power spectrum density (PSD) is helpful for
attention detection, as proposed in the hypothesis. The features based on band PSD, such
as Relative Theta Power (RTP), Relative Alpha Power (RAP), Relative Beta Power (RBP),
Theta–Beta Ratio (TBR), Theta–Alpha Ratio (TAR), and the TBAR are good features for
attention classification. With these features, the multi-layer perceptron neural network
model (MLP-NN) achieved the best performance, with an AUC of 0.9299, Cohen’s Kappa
coefficient of 0.8597, Matthews correlation coefficient of 0.8602, and Hamming loss of
0.0701. Nevertheless, MLP-NN requires a longer training time of up to 73.1 s. However,
the results presented in Tables 2 and 3 and Figures 11 and 12 show that the random forest
and extra trees models have good performance metrics and a training time of 21.14 and
2.21, respectively. Therefore, the classifier designer must perform a cost–benefit analysis
in terms of accuracy and processing time. In most cases, designers prefer better accuracy,
sacrificing training time since this process (training) is only performed once, and then only
the trained model is used. For this reason, in this study, it would be more convenient to
choose the MLP-NN model.
5. Conclusions
In this paper, a methodology for the classification of attention by EEG signals of an
ASD user was presented. The EEG data acquisition was performed while the ASD user
performed some didactic learning activities. In addition, our dataset was created for the
post-processing of the information and training of the ML algorithms. To create the dataset,
it was necessary to perform preprocessing, filtering, and feature extraction. The proposed
features can be used to train and evaluate several ML models to classify attention using
EEG signals.
On the other hand, with these findings, therapists, teachers, and psychologists can
develop better learning scenarios according to the cognitive needs of ASD users. In addi-
tion, diagnosis accuracy can be improved by acquiring individual EEG features, which
provide relevant information for differential clinical neurodevelopmental symptomatology
classification. Furthermore, with the proposed methodology, one can obtain quantifiable
information about the performance of ML models when an ASD user performs didactic/learning
activities, with the aim of reinforcing the perception of the
teacher or therapist.
The future work will involve implementing the proposed method on a real-time
embedded system—for example, a stand-alone version using an edge device, novel deep
learning methods, and internet of things (IoT). It is possible to explore the feasibility of a
mobile-based platform that links with a BCI, instead of a computer. Furthermore, future
replication of this methodology is needed to approach a broad spectrum of attention
processes and standard estimation.
Supplementary Materials: The following supporting information can be downloaded at: https:
//www.mdpi.com/article/10.3390/mca27020021/s1.
Author Contributions: Conceptualization, R.J.-R. and E.I.-G.; Data curation, L.J.-B. and A.S.-T.;
Formal analysis, E.E.G.-G., L.J.-B. and A.S.-T.; Funding acquisition, J.J.E.-E. and O.R.L.-B.; Investi-
gation, J.J.E.-E. and E.T.-C.; Methodology, J.J.E.-E., G.M.G.-A. and E.I.-G.; Project administration,
O.R.L.-B.; Resources, J.J.E.-E. and O.R.L.-B.; Software, J.J.E.-E.; Supervision, R.J.-R. and E.I.-G.; Val-
idation, G.M.G.-A.; Visualization, A.S.-T.; Writing—original draft, J.J.E.-E.; Writing—review and
editing, E.E.G.-G., E.T.-C. and E.I.-G. All authors have read and agreed to the published version of
the manuscript.
Funding: This research had funds provided by the Universidad Autónoma de Baja California (UABC)
through the grants number 679 and 300/2610.
Institutional Review Board Statement: The study was conducted according to the guidelines of the
Declaration of Helsinki, and approved by the Ethics Committee and Research for Pre-Graduates
and Post-Graduates of the Facultad de Ingeniería y Negocios Guadalupe Victoria of the Universidad
Autónoma de Baja California; it was approved on 8 October 2020, with the POSG/020-1-04 code.
Informed Consent Statement: Written informed consent has been obtained from the patient to
publish this paper.
Data Availability Statement: We share the dataset as Supplementary Material.
Acknowledgments: We want to thank Escudero Garrido Elena Patricia (Interdisciplinary Professional
Unit of Biotechnology of the National Polytechnic Institute, UPIBI-IPN), Gutiérrez Montiel Ixchel (In-
terdisciplinary Professional Unit of Biotechnology of the National Polytechnic Institute, UPIBI-IPN),
Macarena de Haro Iliana Elizabeth (Interdisciplinary Professional Unit of Biotechnology of the Na-
tional Polytechnic Institute, UPIBI-IPN), Martínez Amaya Laura Fernanda (Manizales Autonomous
University, Colombia), Martínez Oliva Gonzalo Guillermo (Interdisciplinary Professional Unit of
Biotechnology of the National Polytechnic Institute, UPIBI-IPN), López Rivas Andrea (Autonomous
University of Baja California, UABC) and Becerril Valenzuela Brando (Autonomous University of
Baja California, UABC) and Martinez Verdin Annette Sofia (Autonomous University of Baja Cali-
fornia, UABC), for their participation in this project via the Summer of Scientific and Technological
Research of the Pacific or Research Activities of UABC. We want to thank the Chemical Sciences
and Engineering Faculty of the Autonomous University of Baja California (UABC) for supporting
the project with grant number 300/2610. The authors would like to thank INAOE for accepting
researcher Everardo Inzunza-González to carry out his sabbatical stay. Thanks are given to PRODEP
(Professional Development Program for Professors) for supporting the academic groups to increase
their degree of consolidation. We also want to thank the Tijuana Special Education School Eduke and
the teachers Jessica Avelar, Letycia Gutiérrez, and Atziri Torres, for supplying the activity sheets and
guiding us in working with the ASD person.
Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design
of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or
in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript:
AI Artificial Intelligence
ANFIS Adaptive Neuro-Fuzzy Inference System
ASD Autism Spectrum Disorder
BS Breeding Swarm
CNN Convolutional Neural Networks
DL Deep Learning
fMRI Functional Magnetic Resonance Imaging
GAN Generative Adversarial Networks
LSTM Long Short-Term Memories
MBP Mindfulness-Based Program
ML Machine Learning
MRI Magnetic Resonance Imaging
PSD Power Spectrum Density
RAP Relative Alpha Power
RBP Relative Beta Power
RNN Recurrent Neural Network
RTP Relative Theta Power
TAR Theta–Alpha Ratio
TBR Theta–Beta Ratio
TD Typically Developed
Appendix A. Comparison of the Proposed Method with the State of the Art
References
1. Howe, T.R.; Trotter, J.S.; Davis, A.S.; Schofield, J.W.; Allen, L.; Millians, M.; Bolt, N. Attention Span; Springer: New York, NY, USA,
2011. [CrossRef]
2. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders: DSM-5, 5th ed.; American Psychiatric
Publishing: Washington, DC, USA, 2013; p. 947.
3. Goqvkqpcn, E.; Ogpvcn, E. Autism Spectrum Disorder. Nat. Rev. Dis. Prim. 2020, 6, 6. [CrossRef]
4. Ishizaki, Y.; Higuchi, T.; Yanagimoto, Y.; Kobayashi, H.; Noritake, A.; Nakamura, K.; Kaneko, K. Eye gaze differences in school
scenes between preschool children and adolescents with high-functioning autism spectrum disorder and those with typical
development. BioPsychoSoc. Med. 2021, 15, 2. [CrossRef] [PubMed]
5. Egger, H.L.; Dawson, G.; Hashemi, J.; Carpenter, K.L.H.; Espinosa, S.; Campbell, K.; Brotkin, S.; Schaich-Borg, J.; Qiu, Q.; Tepper,
M.; et al. Automatic emotion and attention analysis of young children at home: A ResearchKit autism feasibility study. NPJ Digit.
Med. 2018, 1, 20. [CrossRef] [PubMed]
6. Son, J.; Ai, L.; Lim, R.; Xu, T.; Colcombe, S.; Franco, A.R.; Cloud, J.; LaConte, S.; Lisinski, J.; Klein, A.; et al. Evaluating fMRI-Based
Estimation of Eye Gaze During Naturalistic Viewing. Cereb. Cortex 2020, 30, 1171–1184. [CrossRef]
7. Lawrence, S.J.D.; Formisano, E.; Muckli, L.; De Lange, F.P. Laminar fMRI: Applications for cognitive neuroscience. NeuroImage
2019, 197, 785–791. [CrossRef]
8. Ridderinkhof, A.; De Bruin, E.I.; Driesschen, S.v.d.; Bögels, S.M. Attention in Children with Autism Spectrum Disorder and the
Effects of a Mindfulness-Based Program. J. Atten. Disord. 2018, 24, 681–692. [CrossRef] [PubMed]
9. Ababkova, M.; Leontieva, V.; Trostinskaya, I.; Pokrovskaia, N. Biofeedback as a cognitive research technique for enhancing
learning process. IOP Conf. Ser. Mater. Sci. Eng. 2020, 940, 012127. [CrossRef]
10. Lau-Zhu, A.; Lau, M.; McLoughlin, G. Mobile EEG in research on neurodevelopmental disorders: Opportunities and challenges.
Dev. Cogn. Neurosci. 2019, 36, 100635. [CrossRef]
11. Mehmood, F.; Ayaz, Y.; Ali, S.; Amadeu, R.D.C.; Sadia, H. Dominance in Visual Space of ASD Children Using Multi-Robot Joint
Attention Integrated Distributed Imitation System. IEEE Access 2019, 7, 168815–168827. [CrossRef]
12. Wang, H.; Song, Q.; Ma, T.; Cao, H.; Sun, Y. Study on Brain-Computer Interface Based on Mental Tasks. In Proceedings of the 5th
Annual IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems, Shenyang, China,
8–12 June 2015; pp. 841–845. [CrossRef]
13. Ismail, L.E.; Karwowski, W. Applications of EEG indices for the quantification of human cognitive performance: A systematic
review and bibliometric analysis. PLoS ONE 2020, 15, e0242857. [CrossRef] [PubMed]
14. Niemarkt, H.J.; Jennekens, W.; Maartens, I.A.; Wassenberg, T.; van Aken, M.; Katgert, T.; Kramer, B.W.; Gavilanes, A.W.;
Zimmermann, L.J.; Bambang Oetomo, S.; et al. Multi-channel amplitude-integrated EEG characteristics in preterm infants with a
normal neurodevelopment at two years of corrected age. Early Hum. Dev. 2012, 88, 209–216. [CrossRef] [PubMed]
15. Micoulaud-Franchi, J.A.; Batail, J.M.; Fovet, T.; Philip, P.; Cermolacce, M.; Jaumard-Hakoun, A.; Vialatte, F. Towards a Pragmatic
Approach to a Psychophysiological Unit of Analysis for Mental and Brain Disorders: An EEG-Copeia for Neurofeedback. Appl.
Psychophysiol. Biofeedback 2019, 44, 151–172. [CrossRef] [PubMed]
16. Singh, M.I.; Singh, M. Development of low-cost event marker for EEG-based emotion recognition. Trans. Inst. Meas. Control 2017,
39, 642–652. [CrossRef]
17. Yang, L.; Wilke, C.; Brinkmann, B.; Worrell, G.A.; He, B. Dynamic imaging of ictal oscillations using non-invasive high-resolution
EEG. Neuroimage 2011, 56, 1908–1917. [CrossRef]
18. Ismail, L.I.; Shamsudin, S.; Yussof, H.; Hanapiah, F.A.; Zahari, N.I. Estimation of concentration by eye contact measurement in
Robot-based Intervention Program with autistic children. Procedia Eng. 2012, 41, 1548–1552. [CrossRef]
19. Zhang, S.; Chen, D.; Tang, Y.; Zhang, L. Children ASD Evaluation Through Joint Analysis of EEG and Eye-Tracking Recordings
with Graph Convolution Network. Front. Hum. Neurosci. 2021, 15, 651349. [CrossRef] [PubMed]
20. Alotaibi, N.; Maharatna, K. Classification of Autism Spectrum Disorder from EEG-Based Functional Brain Connectivity Analysis.
Neural Comput. 2021, 33, 1914–1941. [CrossRef] [PubMed]
21. Cerrada, M.; Trujillo, L.; Hernández, D.E.; Correa Zevallos, H.A.; Macancela, J.C.; Cabrera, D.; Vinicio Sánchez, R. AutoML for
Feature Selection and Model Tuning Applied to Fault Severity Diagnosis in Spur Gearboxes. Math. Comput. Appl. 2022, 27, 6.
[CrossRef]
22. Enríquez Zárate, J.; Gómez López, M.d.l.A.; Carmona Troyo, J.A.; Trujillo, L. Analysis and Detection of Erosion in Wind Turbine
Blades. Math. Comput. Appl. 2022, 27, 5. [CrossRef]
23. Janiesch, C.; Zschech, P.; Heinrich, K. Machine learning and deep learning. Electron. Mark. 2021, 31, 685–695. [CrossRef]
24. Fong-Mata, M.; García-Guerrero, E.; Mejia-Medina, D.; López-Bonilla, O.; Villarreal-Gomez, L.; Zamora-Arellano, F.; López-
Mancilla, D.; Inzunza-González, E. An Artificial Neural Network Approach and a Data Augmentation Algorithm to Systematize
the Diagnosis of Deep-Vein Thrombosis by Using Wells’ Criteria. Electronics 2020, 9, 1810. [CrossRef]
25. Choi, R.Y.; Coyner, A.S.; Kalpathy-Cramer, J.; Chiang, M.F.; Campbell, J.P. Introduction to Machine Learning, Neural Networks,
and Deep Learning. Transl. Vis. Sci. Technol. 2020, 9, 14. [CrossRef]
26. Navarro-Espinoza, A.; López-Bonilla, O.R.; García-Guerrero, E.E.; Tlelo-Cuautle, E.; López-Mancilla, D.; Hernández-Mejía, C.;
Inzunza-González, E. Traffic Flow Prediction for Smart Traffic Lights Using Machine Learning Algorithms. Technologies 2022, 10,
5. [CrossRef]
27. Kang, J.; Han, X.; Song, J.; Niu, Z.; Li, X. The identification of children with autism spectrum disorder by SVM approach on EEG
and eye-tracking data. Comput. Biol. Med. 2020, 120, 103722. [CrossRef] [PubMed]
28. Radhakrishnan, M.; Ramamurthy, K.; Choudhury, K.K.; Won, D.; Manoharan, T.A. Performance analysis of deep learning models
for detection of Autism Spectrum Disorder from EEG signals. Trait. Signal 2021, 38, 853–863. [CrossRef]
29. Thirumal, S.; Thangakumar, J. Investigation of Statistical Feature Selection Techniques for Autism Classification Using EEG
Signals. J. Adv. Res. Dyn. Control Syst. 2020, 12, 1254–1263. [CrossRef]
30. Tawhid, M.N.A.; Siuly, S.; Wang, H.; Whittaker, F.; Wang, K.; Zhang, Y. A spectrogram image based intelligent technique for automatic
detection of autism spectrum disorder from EEG. PLoS ONE 2021, 16, e0253094. [CrossRef]
31. Sundaresan, A.; Penchina, B.; Cheong, S.; Grace, V.; Valero-Cabré, A.; Martel, A. Evaluating deep learning EEG-based mental
stress classification in adolescents with autism for breathing entrainment BCI. Brain Inform. 2021, 8. [CrossRef]
32. Grossi, E.; Valbusa, G.; Buscema, M. Detection of an Autism EEG Signature From Only Two EEG Channels Through Features
Extraction and Advanced Machine Learning Analysis. Clin. EEG Neurosci. 2021, 52, 330–337. [CrossRef] [PubMed]
33. Baygin, M.; Dogan, S.; Tuncer, T.; Datta Barua, P.; Faust, O.; Arunkumar, N.; Abdulhay, E.W.; Emma Palmer, E.; Rajendra Acharya,
U. Automated ASD detection using hybrid deep lightweight features extracted from EEG signals. Comput. Biol. Med. 2021,
134, 104548. [CrossRef] [PubMed]
34. Hagendorff, T. Linking Human And Machine Behavior: A New Approach to Evaluate Training Data Quality for Beneficial
Machine Learning. Minds Mach. 2021, 31, 563–593. [CrossRef] [PubMed]
35. Swatzyna, R.J.; Boutros, N.N.; Genovese, A.C.; MacInerney, E.K.; Roark, A.J.; Kozlowski, G.P. Electroencephalogram (EEG) for
children with autism spectrum disorder: Evidential considerations for routine screening. Eur. Child Adolesc. Psychiatry 2019,
28, 615–624. [CrossRef] [PubMed]
36. Kurgansky, A.V.; Machinskaya, R. Bilateral frontal theta-waves in EEG of 7–8-year-old children with learning difficulties:
Qualitative and quantitative analysis. Hum. Physiol. 2012, 38, 255–263. [CrossRef]
37. Machinskaya, R.; Semenova, O.A.; Absatova, K.A.; Sugrobova, G.A. Neurophysiological factors associated with cognitive deficits
in children with ADHD symptoms: EEG and neuropsychological analysis. Psychol. Neurosci. 2014, 7, 461–473. [CrossRef]
38. Khodatars, M.; Shoeibi, A.; Sadeghi, D.; Ghaasemi, N.; Jafari, M.; Moridian, P.; Khadem, A.; Alizadehsani, R.; Zare, A.; Kong, Y.;
et al. Deep learning for neuroimaging-based diagnosis and rehabilitation of Autism Spectrum Disorder: A review. Comput. Biol.
Med. 2021, 139, 104949. [CrossRef]
39. Shoeibi, A.; Khodatars, M.; Jafari, M.; Moridian, P.; Rezaei, M.; Alizadehsani, R.; Khozeimeh, F.; Gorriz, J.M.; Heras, J.; Panahiazar,
M.; et al. Applications of deep learning techniques for automated multiple sclerosis detection using magnetic resonance imaging:
A review. Comput. Biol. Med. 2021, 136, 104697. [CrossRef] [PubMed]
40. Shoeibi, A.; Sadeghi, D.; Moridian, P.; Ghassemi, N.; Heras, J.; Alizadehsani, R.; Khadem, A.; Kong, Y.; Nahavandi, S.; Zhang, Y.D.;
et al. Automatic Diagnosis of Schizophrenia in EEG Signals Using CNN-LSTM Models. Front. Neuroinform. 2021, 15. [CrossRef]
41. Shoeibi, A.; Ghassemi, N.; Khodatars, M.; Moridian, P.; Alizadehsani, R.; Zare, A.; Khosravi, A.; Subasi, A.; Rajendra Acharya, U.;
Gorriz, J.M. Detection of epileptic seizures on EEG signals using ANFIS classifier, autoencoders and fuzzy entropies. Biomed.
Signal Process. Control 2022, 73, 103417. [CrossRef]
42. Khng, K.H.; Mane, R. Beyond BCI—Validating a wireless, consumer-grade EEG headset against a medical-grade system for
evaluating EEG effects of a test anxiety intervention in school. Adv. Eng. Inform. 2020, 45, 101106. [CrossRef]
43. Fouad, I. A robust and reliable online P300 based BCI system using Emotiv EPOC Headset. J. Med. Eng. Technol. 2021, 45, 94–114.
[CrossRef]
44. Dubrovinskaya, N.; Machinska, R.; Kulakovsky, Y. Brain Organization of Selective Tasks Preceding Attention: Ontogenetic
Aspects. In Complex Brain Functions: Conceptual Advances in Russian Neuroscience; CRC Press: Boca Raton, FL, USA, 2000; Volume 1,
pp. 169–180. [CrossRef]
45. Dubrovinskaya, N.V.; Machinskaya, R.I. Reactivity of Theta and Alpha EEG Frequency Bands in Voluntary Attention in Junior
Schoolchildren. Hum. Physiol. 2002, 28, 522–527. [CrossRef]
46. Emotiv, I. Data Sample Object. Cortex API. Available online: https://emotiv.gitbook.io/cortex-api/data-subscription/data-
sample-object (accessed on 29 December 2021).
47. Emotiv, I. Frequency Bands Emotiv PRO v3.0. Available online: https://emotiv.gitbook.io/emotivpro-v3/ (accessed on
29 December 2021).
48. Fahimi, F.; Guan, C.; Goh, W.B.; Ang, K.K.; Lim, C.G.; Lee, T.S. Personalized features for attention detection in children with
Attention Deficit Hyperactivity Disorder. In Proceedings of the Annual International Conference of the IEEE Engineering in
Medicine and Biology Society, EMBS, Jeju, Korea, 11–15 July 2017; pp. 414–417. [CrossRef]
49. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.;
et al. Scikit-Learn: Machine Learning in Python. 2011. Available online: https://scikit-learn.org (accessed on 29 December 2021).
50. Matthews, B.W. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim. Biophys. Acta
(BBA)—Protein Struct. 1975, 405, 442–451. [CrossRef]
51. Ein Shoka, A.A.; Alkinani, M.H.; El-Sherbeny, A.S.; El-Sayed, A.; Dessouky, M.M. Automated seizure diagnosis system based on
feature extraction and channel selection using EEG signals. Brain Inform. 2021, 8, 1. [CrossRef] [PubMed]
52. Misiunas, A.V.M.; Meskauskas, T.; Samaitiene, R. Machine Learning Based EEG Classification by Diagnosis: Approach to EEG
Morphological Feature Extraction. AIP Conf. Proc. 2019, 2164, 080005. [CrossRef]
53. Boroujeni, Y.K.; Rastegari, A.A.; Khodadadi, H. Diagnosis of attention deficit hyperactivity disorder using non-linear analysis of
the EEG signal. IET Syst. Biol. 2019, 13, 260–266. [CrossRef] [PubMed]