
Blockchain-Secured Recommender System for Special Need Patients Using Deep Learning

Abstract
As a result of the healthcare sector's sustained efforts, patients with complex medical conditions are increasingly receiving tailored, high-quality care. This highlights the importance of developing a blockchain-secured, deep learning-based recommender system. This study proposes a novel approach to building a trustworthy and accurate recommendation system for patients with special needs, combining deep learning algorithms and blockchain technology so as to exploit the strengths of both. The system uses blockchain technology, which is immutable and transparent, to safeguard data security and privacy, while deep learning models analyse and learn from large volumes of patient data to deliver tailored recommendations. These recommendations can assist medical professionals in making better decisions and developing treatment plans suited to the needs of each patient. By combining deep learning and blockchain technology, the solution improves security, trust, and accuracy, resulting in better outcomes and care for those with special needs.

1. Introduction
In an effort to address the obstacles to delivering standard nutritional treatment to patients who have highly specific dietary needs and suffer from atypical ailments, a dependable recommendation system was recently devised. This solution employs blockchain technology to protect patients with special needs, and the hospital's data management unit, from unauthorised access to their information while maintaining the data's integrity. In the pharmaceutical sector, artificial neural networks (ANNs) have been utilised for a variety of tasks, including medicine distribution, cancer categorisation, and pharmaceutical research, and prior work has thoroughly investigated these applications. Furthermore, a system for classifying and summarising documents collected via analytics has been developed, and the results of numerous trials have demonstrated its effectiveness. Personalised meals have been developed using deep learning algorithms applied to medical datasets, based on parameters such as a person's weight, gender, age, and disease type. The purpose of this study is to improve, secure, and assess the overall performance of a nutrition model introduced into the Internet of Medical Things (IoMT) using blockchain privacy mechanisms and deep learning techniques.

This research contributes to the IoMT by exploring and analysing deep and machine learning approaches such as Naive Bayes, logistic regression, recurrent neural networks (RNN), and long short-term memory (LSTM). Additional contributions include the design and implementation of a robust blockchain privacy system, as well as a system to safely assist patients with special nutritional demands. The main goal is to establish disease- and patient-specific models in order to develop tailored dietary recommendations for each individual. The study demonstrates the ability of computer science and deep learning to handle a wide range of patient conditions and individual requests by utilising customised recommendation evidence safeguarded by a blockchain privacy mechanism.

The intended study on a blockchain-secured, deep learning-based recommender system for patients with special needs is outlined below, along with its expected key contributions. First, efforts to develop a reliable recommendation system address the challenge of fitting patients' unique situations and dietary preferences. Second, because blockchain technology ensures the confidentiality and integrity of data, it increases trust and security. Third, deep learning algorithms enable the analysis of massive volumes of patient data, allowing tailored suggestions to be generated. Overall, the findings of the study reveal a novel approach to improving the quality of medical treatment for people with special needs by combining deep learning and blockchain technologies.

The proposed examination of a blockchain-secured recommender system for patients with special needs faces substantial challenges in developing a credible recommendation system that takes into consideration the specific circumstances and dietary preferences of individual patients. Using blockchain technology is critical for building user trust and maintaining a secure environment. Another barrier to developing personalised suggestions for each patient is the difficulty of accurately processing massive amounts of medical data using deep learning algorithms. This project is expected to considerably advance the field by developing a trustworthy recommendation system, boosting data privacy and integrity with blockchain technology, and applying deep learning to improve medical treatment for individuals with unique medical needs.
2. Structure & Background

The remainder of this article provides a thorough analysis of the research on the subject. The background section contains a comprehensive and up-to-date summary of several sources of critical information. A full examination of the methodology, system components, and artificial intelligence (AI) implementation may be found under "System Model." Experiments were conducted for this study, and their results are presented. The paper closes with "Conclusion and future work," in which the authors explain the study's findings and indicate where future research will go.

2.1. Comparative study of five related methods

In the comparative study of five related methods for a blockchain-secured recommender system for special need patients using deep learning, several approaches were evaluated and compared based on their effectiveness and applicability to the given context. The following methods were examined:

1. Secure recommendations with a blockchain privacy mechanism: This method focused on integrating blockchain technology to ensure data privacy and security in the recommender system. It aimed to protect the sensitive information of special need patients while providing personalized recommendations.

2. Artificial neural networks (ANNs) for pharmaceutical applications: This method explored the use of ANNs in the pharmaceutical industry and examined their potential in addressing various challenges. ANNs were considered for applications such as drug distribution, cancer categorization, and pharmaceutical research.

3. Document analytics categorization and summarization: This method proposed a technique to handle the challenges of categorizing, analyzing, and summarizing documents in the context of the recommender system. The algorithm's effectiveness was evaluated through experimental results, considering factors such as accuracy and processing time.

4. Deep learning on medical datasets for dietary recommendations: This method employed deep learning algorithms to analyze medical datasets and generate optimal dietary recommendations for special need patients. It utilized a combination of machine learning (ML) and deep learning (DL) algorithms to enhance the precision and effectiveness of the recommendations.

5. Chain confidentiality network and deep learning techniques for IoMT: This method aimed to improve the performance of a healthy nutrition theory integrated into the Internet of Medical Things (IoMT). It utilized a chain confidentiality network along with deep learning techniques to enhance the security and assessment of the nutrition recommendations.

The comparative study assessed the strengths, limitations, and practical implications of each method in the context of a blockchain-secured recommender system for special need patients. The findings aimed to identify the most suitable approach that ensures privacy, provides accurate recommendations, and addresses the specific requirements of patients with special needs.
2.2. Comparison with traditional methods

The following table compares ten traditional recommendation methods in the context of a blockchain-secured recommender system for special need patients using deep learning:

Rule-based systems: Traditional approach using predefined rules to make recommendations.

Collaborative filtering: Recommends items based on user behavior and preferences by analyzing similarities between users or items.

Content-based filtering: Recommends items based on the similarity of their content to the user's preferences.

Hybrid systems: Combination of collaborative filtering and content-based filtering to provide more accurate recommendations.

Demographic filtering: Recommends items based on demographic information such as age, gender, and location.

Knowledge-based systems: Utilizes expert knowledge and rules to make recommendations.

Utility-based systems: Recommends items based on utility values assigned to various features or attributes.

Clustering algorithms: Groups similar users or items together to make recommendations based on the patterns within the clusters.

Association rule mining: Extracts relationships between items and generates recommendations based on item associations found in the data.

Genetic algorithms: Utilizes evolutionary algorithms to optimize recommendation models and improve the quality of recommendations over time.

3. Related works

One of the numerous situations in which the new and difficult topic of blockchain technology
is being investigated is the development of medical recommendation systems. In this context,
reference (6) emphasises how blockchain technology has the potential to strengthen and
improve such systems by providing additional levels of security. The information in reference
(7) is intended to give people with cardiovascular difficulties tailored dietary advice based on
their medical history, dietary preferences, and vital signs. Similarly, (8) provides a framework for providing nutrition advice to children, with the goal of persuading them to adopt healthier food options based on their individual needs. (9) describes a system concept that uses
machine learning techniques to automatically arrange users' diets based on their health data.
The article (10), which looks at how internet health services have affected users in China and
Ukraine, discusses the elements that influence how these services are embraced and used.
(11) provides hypertension patients with proper dietary counselling that takes into account the
patient's age, food allergies, and dietary preferences. (12) examines how the Internet of
Things (IoT) and blockchain technology can be used to effectively monitor supply chains,
with a focus on the transparency and traceability that these technologies provide. (13)
proposes a system that combines user profiles, healthcare guidelines, and recommendations
to generate personalised AI-based replies to diabetic American Indians' nutritional demands.
(14) examines how deep learning can be used to create a complete system of healthcare
recommendations that opens the path for more personalised and cost-effective medical care.
The paper (15) investigates user-generated content and peer recommendations in health-
related internet forums. This study underlines the importance of genuine social bonds. (16)
investigates how machine learning techniques are applied to recommendation systems and
offers future study directions in software engineering. (17) provides cloud-based process
optimisation and data analysis technologies.

The primary goal of these systems is to identify the elements that influence various processes. (18) outlines the potential applications of big data analytics in healthcare organisations and looks at how data-driven tactics might improve patient care and decision-making. (19) discusses recommending relevant threads within health communities in order to increase access to knowledge exchange and assistance. (20) provides a way of improving the overall performance of recommender systems while still protecting user privacy via rigorous data analysis. (21) surveys the research on deep learning-based recommender systems, summarising the advances in this discipline and how they have influenced personalised recommendations. (22) introduces a deep learning system that analyses massive amounts of data to find abnormalities and make predictions about each person's performance statistics. Together, these studies suggest that merging blockchain technology with other technologies, such as deep learning, has the potential to improve the security, personalisation, and accuracy of recommender systems in the healthcare sector.

Researchers in (23), in an effort to help people suffering from diet-related disorders and to promote healthier living, developed a recommendation system that delivers personalised nutritional suggestions based on the user's health profile. To address the need for performance forecasting across healthcare networks, (24) proposed an approach for capturing and forecasting essential performance data. (25) created a system with user-configurable nutritional components to help people build healthy eating habits. (26) studied how dietary and nutritional product shortages fit into the broader scarcity literature. (27) considered the user's preferences and dietary restrictions when creating a framework for tailored meal suggestions. (28) developed a recommender system based on dietary clustering with diabetic patients in mind. (29) identified gaps in the diet management literature that need to be filled by additional research. The authors of (30) emphasised the importance of adherence to relevant standards and rules when developing an AI-based recommendation system for diabetes therapy that takes into consideration people's preferences and medical situations. Together, these findings support the formulation of personalised dietary recommendations that account for each person's unique needs and state of health.

The authors of (31) suggested a recommender system based on the user's pathology report; to give tailored menus and nutrition advice, this system employs an algorithm known as the anthill algorithm. Researchers in (32–34) found various causes of nutritional inaccuracy and offered options for future system development, one of which was the incorporation of nutritional claims in databases. The authors of (35) presented a monitoring system intended to stimulate the establishment of nutritionally suitable diets for children. When researchers (36) investigated the impact of economic growth on obesity in Malaysia, they observed a link between increased affluence and higher obesity rates in low-income areas. Individualised dietary recommendations, according to (37), should be developed while taking into account each person's needs, preferences, and expectations. In a study reported in (38), researchers established a rating system for the relevance of the nutrients to consider when creating meals for jaundice patients. (39) described how deep learning techniques and artificial neural networks (ANNs) have aided the advancement of medical technology. The authors of (40) investigated a variety of tactics for encouraging healthy eating, as well as the challenges of investigating various recommendation technologies. For the Internet of Things, (41) implemented a privacy-preserving secure framework (PPSF) with two-level privacy techniques and intrusion detection systems. Finally, Prabadevi et al. (42) emphasised the security and service benefits of the blockchain-enabled edge of things (BEoT), among them trust management, vulnerability screening, data privacy, and access authorization. The findings of these studies aid in providing personalised dietary recommendations, eliminating nutritional mistakes, and improving the privacy and security functions of recommendation systems.

4. Dataset

For our study, we used a dataset containing cloud and Internet of Things information on roughly 1,000 products and 50 different patients; the products were evaluated across a wide range of patient conditions. The dataset has over 17,000 individual records and 13 separate features. Table 1 lists the product attributes, Table 2 lists the patient characteristics, and Table 3 lists the accuracy of the retrained models in terms of the BPS.

Table-1

Table-2
Table-3

5. Data preservation and protection

Patients have access to the proposed blockchain privacy system (BPS) for secure data archiving and storage that recommendation engines can use. Such data may include, for example, diagnoses, treatment plans, demographic information, medications used, and outcomes. All data of this type is encrypted by the BPS, making it very difficult for unauthorised persons to access. Additional data, such as patient health histories and diets, which have historically been gathered by the hospital data system (HDS), provide the real performance results that are essential for the recommenders. The BPS is preferred over the patient database for transmitting and storing such sensitive user data, as shown in Figure 1; this shields private information from prying parties such as hackers. Patients who have the necessary authorisation can access their medical records through the BPS in a trustworthy and secure manner. A notification is delivered to the patient's registered smart device as soon as the BPS recognises a need for a suggestion, prescription, check-up, or emergency. Users (patients) are given sufficient information about how data is collected in hospitals.

Fig-1
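To make the data flow concrete, the following is a minimal sketch of how a BPS-style store could encrypt patient records and chain them with hashes. The paper does not specify an implementation, so Fernet symmetric encryption, SHA-256 chaining, and all names here (add_record, chain) are illustrative assumptions rather than the actual system.

```python
# Minimal sketch of a BPS-style encrypted, hash-chained record store.
# Fernet (symmetric encryption) and SHA-256 chaining stand in for the
# paper's unspecified blockchain layer; all names are illustrative.
import hashlib
import json
import time

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, managed by the BPS key service
cipher = Fernet(key)

chain = []                           # the toy "blockchain": a list of blocks

def add_record(record: dict) -> dict:
    """Encrypt a patient record and append it as a hash-chained block."""
    ciphertext = cipher.encrypt(json.dumps(record).encode())
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    header = f"{prev_hash}{time.time()}".encode() + ciphertext
    block = {
        "prev_hash": prev_hash,      # links this block to the previous one
        "data": ciphertext,          # only authorised holders of the key can read it
        "hash": hashlib.sha256(header).hexdigest(),
    }
    chain.append(block)
    return block

add_record({"patient_id": 17, "diagnosis": "hypertension", "diet": "low-sodium"})
```

Because each block's hash covers the previous block's hash, tampering with any stored record invalidates every later block, which is the immutability property the BPS relies on.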

6. Collaborative filtering

The BPS has a substantial advantage over other privacy technologies currently on the market, since it can efficiently execute information computations while preserving the privacy of all input data. When a user (a patient) requests advice involving sensitive medical data, the hospital is notified in the manner depicted in Figure 2. In order to handle a large number of users and patients, the hospital's recommender systems use a collaborative filtering algorithm to find and rank the most useful suggestions. The healthcare database manager has access to only the most basic health data, whereas the BPS is in charge of maintaining crucial data and computations. Finally, an authorised user is granted secure access to the recommended next steps and receives an alert signal.
Fig-2
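As a concrete illustration of the ranking step, the sketch below implements user-based collaborative filtering with cosine similarity. It is a toy version of the idea, not the hospital's actual algorithm; the interaction matrix R and the recommend function are assumed for illustration.

```python
# Minimal user-based collaborative filtering sketch: rank unseen items
# for a patient by the ratings of patients with similar histories.
import numpy as np

# rows = patients, columns = items (e.g., meal products); 0 = no interaction
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4]], dtype=float)

def recommend(user: int, k: int = 2) -> np.ndarray:
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    sims = (R @ R[user]) / (norms.flatten() * norms[user] + 1e-9)  # cosine similarity
    sims[user] = -1                                # exclude the patient themselves
    neighbours = np.argsort(sims)[-k:]             # k most similar patients
    scores = sims[neighbours] @ R[neighbours]      # similarity-weighted ratings
    scores[R[user] > 0] = -np.inf                  # hide items already seen
    return np.argsort(scores)[::-1]                # best item indices first

print(recommend(user=1))
```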

7. Data Processing
7.1. Data normalization

Following the selection of a dataset, data-cleaning operations are carried out to remove duplicate records and standardise other features of the data. Normalisation then seeks to fit the dataset onto a defined scale; this is necessary because the values in the dataset are not uniformly distributed. All numbers, whether of one, two, or three digits, are normalised to a single scale for the most efficient use of machine learning models. This motivated us to use min-max normalisation: the data were scaled so that they fit within the interval [0, 1]. The formula is presented below.

$$N_i = \frac{c_i - \min(c_i)}{\max(c_i) - \min(c_i)} \quad (1)$$

In Equation (1), the feature set is $y = (C_1, C_2, \ldots, C_n)$, $C_i$ is the feature that requires normalisation, and $N_i$ is its normalised value. This step produced a standardised set of features with consistent value ranges.
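The sketch below applies Equation (1) column by column; the sample matrix is illustrative (scikit-learn's MinMaxScaler would give the same result).

```python
# Sketch of the min-max normalisation in Equation (1), mapping each
# feature column c_i into [0, 1].
import numpy as np

def min_max_normalise(X: np.ndarray) -> np.ndarray:
    col_min = X.min(axis=0)
    col_max = X.max(axis=0)
    # small epsilon guards against division by zero on constant columns
    return (X - col_min) / (col_max - col_min + 1e-12)

X = np.array([[70.0, 1, 180], [95.0, 2, 165], [60.0, 1, 172]])
print(min_max_normalise(X))
```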

Data encoding

Prior to encoding, it was decided that duplicate and inconsistent values would be removed from the collection, and this was done over the course of the research. The nominal attributes were then assigned numerical values, the purpose being to ensure that the machine learning models operate on numerical values in their back-end procedures. In this study, non-numeric data were converted to numeric form before the data encoding step was performed; the machine learning (ML) algorithms then performed numerical comparisons on the data in the background before passing it to the proposed model.
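A minimal sketch of this encoding step is shown below; the column names are illustrative, not taken from the paper's dataset.

```python
# Sketch of encoding nominal attributes as integers before model training.
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({"gender": ["F", "M", "F"],
                   "disease": ["diabetes", "jaundice", "diabetes"]})
for col in df.columns:
    # each distinct category is mapped to an integer code
    df[col] = LabelEncoder().fit_transform(df[col])
print(df)
```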

7.2. Optimal feature visualisation

The feature analysis shows that the caloric content of a product accounts for more than half of its total importance weight. Fat calories contribute 12%, a large proportion, while protein and carbohydrate each contribute 8%. The product's salt level accounts for 6%, which is significant, and the user count contributes a notable 5%. The shares of fibre, user fat, and user protein are each 2%. Age, user size and calorie consumption, illness, product UPC, and user carbohydrate content are all significant factors in the dataset. The Random Forest approach is built around decision trees: combining all available decision trees produces a more accurate forecast. It can be used not only for regression and classification, but also to extract the most important components of the data. The target is predicted from the consensus of the Random Forest classifier's trees. Instead of using a fixed number at each node to evaluate regression, Random Forest uses a threshold defined by the average over all decision trees; this threshold is then used for splitting. The cutoff is determined by combining gain-index and entropy computations, a few of which are given here for convenience.

Entropy: the entropy of a set of classes can be calculated as

$$K(b_1, b_2, \ldots, b_s) = \sum_{i=1}^{s} b_i \log\!\left(\frac{1}{b_i}\right) \quad (2)$$

where $b_1, b_2, \ldots, b_s$ are the class probabilities.

Gain: the information gain of splitting a set $R$ into $S$ subsets $R_1, \ldots, R_S$ is

$$\mathrm{Gain}(R, S) = K(R) - \sum_{i=1}^{S} x(R_i)\, K(R_i) \quad (3)$$

where $x(R_i)$ is the fraction of samples falling into subset $R_i$.
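The sketch below computes the entropy of Equation (2) and fits a Random Forest to extract feature importances; the synthetic data stands in for the real features (age, user size, calories, disease, product UPC, and so on).

```python
# Sketch: Equation (2) entropy and Random Forest feature importances.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def entropy(probs):
    probs = np.asarray(probs, dtype=float)
    return float(np.sum(probs * np.log2(1.0 / probs)))  # Equation (2)

print(entropy([0.5, 0.5]))  # 1.0 bit for a balanced binary split

# synthetic stand-in for the 13-feature dataset
X, y = make_classification(n_samples=500, n_features=13, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(rf.feature_importances_)   # relative importance of each feature
```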

Deep learning is then used to improve the classification.

7.3. Multilayer perceptron

There are numerous types of neural networks today. All currently employed ANNs can be classified according to the transfer functions of their processing elements (PEs) and the training strategies they use. Processing units form the skeleton of artificial neural networks: each collects and weights signals from other processors, and this is the minimal format for transmission within an ANN. The first layer is the input layer, hidden layers form the core of the network, and the output units round out the structure. The input units supply information from the outside world. Collected data is sent to a hidden layer, where weights are applied to transform it; the resulting signals are then sent to the output layer. An ANN's hidden layers are critical to its ability to classify data, and synapses are the connections between successive layers. The data will have as many features as there are input attributes (b1, b2, etc.). Equation (4) below shows how the features are combined in an ANN and amplified using the weights (e1, e2, ..., ep).

$$E \cdot B = \sum_{i=1}^{p} e_i b_i = e_1 b_1 + e_2 b_2 + \cdots + e_p b_p \quad (4)$$

Here $p$ is the number of features in the dataset, $b_i$ are the inputs fed to the network, and $e_i$ are the weights; the sum of all inputs multiplied by their feature weights is also known as the dot product. The bias is then added to this inner product, giving Equation (5):

$$a = \sum_{i=1}^{p} e_i b_i + \mathrm{bias} \quad (5)$$

In Equation (5), $a$ is the value passed to the activation function $f(a)$. The output of the first hidden layer and its synapses can then be retrieved, and this process is repeated until no new inputs or weights are added to the system.
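A minimal sketch of a single neuron implementing Equations (4) and (5) follows; the input, weight, and bias values are arbitrary illustrations.

```python
# Sketch of one MLP neuron: a dot product of inputs b with weights e,
# plus a bias (Equations 4 and 5), followed by an activation f(a).
import numpy as np

def neuron(b: np.ndarray, e: np.ndarray, bias: float) -> float:
    a = float(np.dot(e, b)) + bias          # Equations (4) and (5)
    return 1.0 / (1.0 + np.exp(-a))         # sigmoid activation f(a)

b = np.array([0.2, 0.7, 0.1])               # input features b1..bp
e = np.array([0.5, -0.3, 0.8])              # weights e1..ep
print(neuron(b, e, bias=0.1))
```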

Recurrent neural networks

Recurrent neural networks (RNNs) exhibit dynamic temporal behaviour through a graph structure made up of connections between nodes, governed by the order of the input sequence. This type of ANN, which has recently gained popularity, employs deliberately looped memories.

7.4. Long short-term memory

This research investigates a design to increase RNN memory capacity. The architecture uses sigmoid activation and three 32-unit LSTM layers. The loss function is binary cross-entropy, and the Adam optimiser is used. These carefully researched choices of architecture, layer size, optimiser, and loss function are intended to improve the RNN model's performance on the task described above.
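A Keras sketch of the described architecture is shown below. The input shape (timesteps and feature count) is an assumption for illustration, since the paper does not state it.

```python
# Keras sketch of the stated design: three 32-unit LSTM layers, a sigmoid
# output, binary cross-entropy loss, and the Adam optimiser.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential([
    LSTM(32, return_sequences=True, input_shape=(10, 13)),  # assumed: 10 timesteps, 13 features
    LSTM(32, return_sequences=True),
    LSTM(32),                               # final LSTM layer returns a single vector
    Dense(1, activation="sigmoid"),         # binary "allowed" / "not allowed" output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```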

7.5. Gated recurrent units

The GRU (Gated Recurrent Unit) is a more recent discovery that provides better performance
in some areas than the LSTM (Long Short-Term Memory), which was developed quite some
time ago. It significantly reduces the time required for model training and provides a flexible
structure that is simple to alter. When it comes to addressing problems that require larger
memory storage, the LSTM surpasses the GRU. The qualities of the datasets employed have
a direct impact on the performance of both systems. Both LSTM and GRU are deep learning
frameworks; however, they differ dramatically in several crucial respects.

The LSTM has three gates in total, whereas the GRU has only two. Unlike LSTMs, GRUs have no separate internal cell state distinct from the hidden state. Also in contrast to LSTMs, GRUs do not apply a second nonlinearity when computing their output.
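For comparison with the LSTM sketch above, the same stack can be built with GRU layers, which have one fewer gate per unit and typically train faster; this mirror of the earlier sketch keeps the same assumed input shape.

```python
# GRU variant of the earlier LSTM sketch (same assumed input shape).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GRU, Dense

model = Sequential([
    GRU(32, return_sequences=True, input_shape=(10, 13)),
    GRU(32),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```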

8. Machine learning classifiers

8.1. Logistic regression

This is a common type of classification algorithm in machine learning whose final decision is essentially binary. Its purpose is to help determine the causes of an outcome so that appropriate action can be taken in response to an event. The strategy uses the logit (log-odds) function, as shown in Equation (6):

$$\log\!\left(\frac{r(b)}{1 - r(b)}\right) = o + J_1 b \quad (6)$$

The logit is defined mathematically as $\log\big(r(b)/(1 - r(b))\big)$, and the odds are $r(b)/(1 - r(b))$: the ratio of the probability of success to the probability of failure, or of the presence of a feature to its absence. This method is frequently used after the inputs have been mapped directly into log-odds, because the result can only be calculated from the log-odds representation of the inputs. Inverting the function above gives the following expression:

$$r(b) = \frac{e^{\,o + J_1 b}}{1 + e^{\,o + J_1 b}} \quad (7)$$

Equation (7) defines a sigmoid function, a mathematical expression that maps values into an "S"-shaped curve between 0 and 1. To improve our ability to inspect sample values, we chose logarithmic variables with probability values ranging from 0 to 1.
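The sketch below fits the logit model of Equations (6) and (7) with scikit-learn; the data is synthetic and stands in for the patient features.

```python
# Sketch of logistic regression (Equations 6 and 7) on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=13, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X[:3]))   # sigmoid outputs in (0, 1), as in Equation (7)
```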

8.2. Naïve Bayes

In this family of algorithms, no feature pairings are predefined: the features are treated as entirely independent of one another. This reflects the algorithm's two assumptions, namely that the features do not interact and that each contributes independently to the outcome.

If the attributes are presented in text form, the terms must first be transformed into numerical values. Bayes' theorem is given in Equation (8) below.

$$R(G \mid H) = \frac{R(H \mid G)\, R(G)}{R(H)} \quad (8)$$

Here G and H are events: $R(G \mid H)$ is the posterior probability, $R(G)$ is the prior probability of G, and $R(H \mid G)$ is the likelihood of H given G. Equation (9) illustrates the practical use of Bayes' theorem:

$$R(v \mid B) = \frac{R(B \mid v)\, R(v)}{R(B)} \quad (9)$$

where $B = (B_1, B_2, B_3, \ldots, B_n)$ is the n-dimensional feature vector and $v$ is the classification variable. To make it easier to categorise the supporting data, the naive independence assumption $R(G, H) = R(G)\,R(H)$ is applied to the features, which yields:

$$R(v \mid B_1, \ldots, B_n) = \frac{R(B_1 \mid v)\, R(B_2 \mid v) \cdots R(B_n \mid v)\, R(v)}{R(B_1)\, R(B_2) \cdots R(B_n)} \quad (10)$$

This relationship can be written compactly as Equation (11):

$$R(v \mid B_1, \ldots, B_n) = \frac{R(v) \prod_{i=1}^{n} R(B_i \mid v)}{R(B_1)\, R(B_2) \cdots R(B_n)} \quad (11)$$

Because the denominator is constant for a given input, the following proportionality holds:

$$R(v \mid B_1, \ldots, B_n) \propto R(v) \prod_{i=1}^{n} R(B_i \mid v) \quad (12)$$

To build the classifier, it is necessary to calculate the probability associated with each value of $v$ for the given inputs and select the most likely outcome, which leads to Equation (13):

$$\hat{v} = \arg\max_{v} R(v) \prod_{i=1}^{n} R(B_i \mid v) \quad (13)$$

All that remains to be estimated are the class prior $R(v)$ and the conditional probabilities $R(B_i \mid v)$.
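A minimal sketch of the decision rule of Equation (13), using scikit-learn's GaussianNB on synthetic data in place of the patient dataset:

```python
# Sketch of the Naive Bayes decision rule (Equation 13) via GaussianNB.
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=13, random_state=0)
nb = GaussianNB().fit(X, y)
print(nb.predict(X[:5]))          # argmax_v R(v) * prod_i R(B_i | v)
```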

Evaluation Criteria and Metrics

We have used several metrics to determine the viability of the approach, as follows. Accuracy relates the correctly classified units to all units:

$$\mathrm{Accuracy} = \frac{A_z + A_n}{A_z + A_n + C_z + C_n} \quad (14)$$

where $A_z$ denotes true positives, $A_n$ true negatives, $C_z$ false positives, and $C_n$ false negatives. Precision investigates the relationship between the true positive ($A_z$) and false positive ($C_z$) units:

$$\mathrm{Precision} = \frac{A_z}{A_z + C_z} \quad (15)$$

Recall evaluates the true positive ($A_z$) units against the incorrectly labelled false negative ($C_n$) units; its formula is given in Equation (16):

$$\mathrm{Recall} = \frac{A_z}{A_z + C_n} \quad (16)$$

Precision and recall are rarely both at their best at once: with mining algorithms, for example, seeking high precision usually costs recall, and vice versa, so the natural next step is to ask which algorithm best balances its goals. The F1-measure, the harmonic mean of recall and precision, is used to address this, and is determined by the following formula:

$$F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \quad (17)$$
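The sketch below computes Equations (14)–(17) with scikit-learn; the toy labels are illustrative.

```python
# Sketch computing Equations (14)-(17) from toy binary labels.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print("accuracy :", accuracy_score(y_true, y_pred))    # Eq. (14)
print("precision:", precision_score(y_true, y_pred))   # Eq. (15)
print("recall   :", recall_score(y_true, y_pred))      # Eq. (16)
print("F1       :", f1_score(y_true, y_pred))          # Eq. (17)
```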

8.3. Software analysis

Computational intelligence experiments were carried out in Google Colab on a workstation with a Core i7 processor. The study's equipment comprised 16 GB of RAM, four CPUs, a 1.7 GHz clock speed, and approximately 20 GB of Google Colab storage space. The dataset used in the study was divided into three parts: training, cross-validation, and testing, with K-Fold cross-validation employed on the training and test sets and the largest share of the data reserved for training.
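A sketch of this evaluation protocol follows; the fold count (5) and the choice of classifier are assumptions for illustration, as the paper does not state them.

```python
# Sketch of K-Fold cross-validation as described above.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=13, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean(), scores.std())   # mean accuracy across folds and its spread
```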

Table 4 shows the training accuracy of the various deep and machine learning classifiers used in this study. The LSTM classifier achieved 95.45% training accuracy using a three-layer LSTM model with 32 units per layer and a nonlinear activation function. The MLP classifier performed worst in terms of training accuracy, achieving 86.5%. During training, the accuracies of the GRU, RNN, LR, and Naive Bayes classifiers were 94.6%, 92.3%, 88.5%, and 87.2%, respectively.

Table - 4

9. Experiments and results

Figures 3, 4, and 5 illustrate the training and validation scores for the Naive Bayes, LR, and MLP classifiers, respectively. The blue line or curve in these graphs shows the training results, whereas the red curve represents the cross-validation results. According to the data in Figure 3, the Naive Bayes classifier reached an average score of 87.2% for both the training and cross-validation processes. Figure 4 shows the logistic regression training score initially increasing to 93.8%, where it remains stable for a while before beginning a slow falling trend; the cross-validation score, on the other hand, stays linear and consistent throughout the procedure. Figure 5 depicts the validation and training outcomes of the MLP classifier. The experiments show that the MLP classifier's training and testing scores climb initially but then fall.

Table 5 displays the test accuracies for the classifiers. The MLP classifier achieved the lowest test score, 90.3%, while the LSTM classifier obtained 99.5% accuracy. The GRU, multilayer perceptron, RNN, and LR classifiers achieved test accuracies of 98.8%, 95.67%, 95.55%, and 90.8%, respectively.

Table - 5

The combined testing precision and validation score for the binary classification and LR models was 94.28% (see Figures 6 and 7 for further information). Figure 8 shows similar results, with the MLP model's testing and validation scores converging at 93.81% and 92.85%, respectively.
The models' effectiveness was further assessed through training and testing accuracy and loss. Figures 9–11 depict the results of these analyses; the blue lines represent test results, while the red lines represent accuracy and loss during training. Figure 9A, for example, illustrates the test and training scores demonstrating the GRU classifier's accuracy. The figures show that training accuracy was 87.5% for the first 45 epochs, then varied between higher and lower levels until about epoch 96, when it reached 93.3%. The blue curve representing the test score began at 85.5% and remained steady throughout, eventually peaking at 90.2% after 100 epochs. Figure 9B depicts the loss performance of the GRU classifier for the training and testing scores: the training loss ranged between 0.45 and 0.14, and the test loss dropped from 0.38 to 0.102.

The accuracy of the LSTM on test and training scores is shown in Figure 10A, while the
model's loss efficiency is shown in Figure 10B. As indicated in Figure 10A, the range of
training success was between 86.8% and 94.3%. The accuracy test results showed a
comparable range of performance, with a minimum performance of 90.1% and a maximum
performance of 92.8%. Figure 10B depicts the loss performance of the LSTM classifier during training and testing. The training loss falls from 0.46 at the start to 0.14 after 100 epochs, and the test loss follows a similar pattern, decreasing from 0.46 to 0.1 over the same span.
Figure 11A shows the RNN model's training and testing accuracy, whereas Figure 11B shows
the model's loss performance. The experimental results show that after 100 epochs, the best
performance of 92.9% was achieved, while the training score begins at the lowest
performance of 89.5%. The accuracy of the test findings ranged from 91.3% at the start to
91.5% at the peak and 85.8% at the end. Figure 11B depicts the loss performance of the RNN classifier during training and testing. After 100 epochs, the training loss decreased from 0.435 to 0.185, and the test loss followed a similar pattern, dropping from 0.435 to 0.12.

The results of the deep and machine learning classifiers are shown in Figure 12. The LSTM classifier outperforms the other classifiers in terms of recall, F1-measure, and precision. There were two classes in the study: "allowed" and "not permitted." For the "allowed" class, the LSTM classifier achieves 99% recall, precision, and F1-measure. In the "not permitted" class, its performance drops to 90% recall, 80% precision, and a 45% F1-measure. The experimental models generally give outstanding results on the "allowed" class but weaker results on the "not permitted" class.
10. Conclusion and future work

According to this study, the analysis of automated medical data holds considerable promise for providing meaningful advice and enhanced treatment to hospitalised patients with special needs, especially when combined with advances in algorithmic technology and the discovery of new information. Although some patients and institutions recognise the benefit of such tools, widespread adoption of safe recommendation engines has been impeded by concerns that personal information may be misused. This study addresses these difficulties by proposing a secure, deep learning-based recommender system that provides personalised nutritional and medical advice while protecting the user's confidential medical data. Using demographic information, dietary habits, medical history, and other pertinent characteristics, the system generates credible recommendations suited to the user's specific situation. LSTM, MLP, GRU, RNN, Naive Bayes, and LR are among the deep and machine learning classifiers tested in this work. The results reveal that the LSTM and GRU classifiers perform extremely well, with good recall and precision and a high F1-measure. For the "allowed" class, the LSTM classifier achieves 99% recall, precision, and F1-measure; in the "not permitted" class, its performance drops to 90% recall, 80% precision, and a 45% F1-measure. Future iterations of our protected diet guidance systems will include multidimensionality, allowing us to ensure complete anonymity for users with special needs at every stage of the operation.
11. Data availability statement

As open and transparent advocates of knowledge sharing and collaborative research, the authors will make this study's raw data available. We hope that providing access to the original evidence will encourage further investigation, inspection, and replication, advancing science and our understanding of the topic.

12. Author Contribution

All authors reviewed and approved the manuscript for publication. Each made significant theoretical and practical contributions to the work, and their insights helped shape and execute the study.

13. Funding

This work was funded by Jiangsu Provincial Key Research and Innovation Strategy (Social Development) Programmes BE2016630 and BE2017628 and Wuxi Municipality's Science Development Project Z201603 in the Ministry of Family and Health Management.

Conflict of interest

The authors declare that they have no financial or business interests in the study.

14. Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily
represent those of their affiliated organizations, or those of the publisher, the editors and the
reviewers. Any product that may be evaluated in this article, or claim that may be made by its
manufacturer, is not guaranteed or endorsed by the publisher.

References

[1] Wasid, M., & Kant, V. (2015). A particle swarm approach to collaborative filtering based
recommender systems through fuzzy features. Procedia Computer Science, 54, 440–448.

[2] Nemati, Y., & Khademolhosseini, H. (2020). Devising a profit-aware recommender


system using multi-objective GA. Journal of Advances in Computer Research, 11(3), 109–
120.

[3] Yadav, S., & Nagpal, S. (2018). An improved collaborative filtering based recommender
system using bat algorithm. Procedia Computer Science, 132, 1795–1803.
[4] Vignesh, M. S., Banu, M. K. S., & Kumar, M. K. M. (2014). Efficient algorithms systolic
tree with ABC based pattern mining algorithm for high utility itemsets from transactional
databases. International Journal of Computer Science and Mobile Computing, 3(7), 350–357.

[5] Ragone, A., Tomeo, P., Magarelli, C., et al. (2017). Schema-summarization in linked-
data-based feature selection for recommender systems. In Proceedings of the Symposium on
Applied Computing (pp. 330–335). Morocco.

[6] Khambra, G., & Shukla, P. (2021). Novel machine learning applications on fly ash based
concrete: an overview. Materials Today Proceedings, 2214–7853.

[7] Shukla, P. K., Sandhu, J. K., Ahirwar, A., Ghai, D., Maheshwary, P., & Shukla, P. K.
(2021). Multiobjective genetic algorithm and convolutional neural network based COVID19
identification in chest X-ray images. Mathematical Problems in Engineering, 1, Article ID
7804540.

[8] Rabanal, P., Rodríguez, I., & Rubio, F. (2019). Towards applying river formation dynamics
in continuous optimization problems. In Proceedings of the International Work-Conference
on Artificial Neural Networks (pp. 823–832). Springer, Cham.

[9] Cai, X., Hu, Z., & Chen, J. (2020). A many-objective optimization recommendation
algorithm based on knowledge mining. Information Sciences, 537, 148–161.

[10] Alhijawi, B., & Kilani, Y. (2020). A collaborative filtering recommender system using
genetic algorithm. Information Processing & Management, 57(6), Article ID 102310.

[11] Alhijawi, B., Kilani, Y., & Alsarhan, A. (2020). Improving recommendation quality and
performance of genetic-based recommender system. International Journal of Advanced
Intelligence Paradigms, 15(1), 77–88.

[12] Sri, S. R., Ravi, L., Vijayakumar, V., Gao, X. Z., Subramaniyaswamy, V., &
Sivaramakrishnan, N. (2020). An effective user clustering-based collaborative filtering
recommender system with grey wolf optimisation. International Journal of Bio-Inspired
Computation, 16(1), 44–55.

[13] Tohidi, N., & Dadkhah, C. (2020). Improving the performance of video collaborative
filtering recommender systems using optimization algorithm. International Journal of
Nonlinear Analysis and Applications, 11(1), 283–295.

[14] El-Ashmawi, W. H., Ali, A. F., & Slowik, A. (2020). Hybrid crow search and uniform
crossover algorithm-based clustering for top-N recommendation system. Neural Computing
& Applications, 33(12), 7145–7164.

[15] Wang, H., Niu, B., & Tan, L. (2021). Bacterial colony algorithm with adaptive attribute
learning strategy for feature selection in classification of customers for personalized
recommendation. Neurocomputing, 452, 747–755.

[16] Pandit, S., Shukla, P. K., Tiwari, A., Shukla, P. K., Maheshwari, M., & Dubey, R.
(2020). Review of video compression techniques based on fractal transform function and
swarm intelligence. International Journal of Modern Physics B, 34(08), Article ID 2050061.
[17] Si, L., & Jin, R. (2003). Flexible mixture model for collaborative filtering. 20th
International Conference on Machine Learning, 2, 704–711.

[18] Su, X., Greiner, R., Khoshgoftaar, T. M., & Zhu, X. (2007). Hybrid Collaborative
Filtering Algorithms Using a Mixture of Experts. In Proceedings of the IEEE/WIC/ACM
International Conference on Web Intelligence (pp. 645–649). Fremont, CA, USA.

[19] Wang, J., de Vries, A. P., & Reinders, M. J. T. (2008). Unified relevance models for
rating prediction in collaborative filtering. ACM Transactions on Information Systems, 26(3),
1–42.

[20] Leung, C. W. K., Chan, S. C. F., & Chung, F. L. (2006). A collaborative filtering
framework based on fuzzy association rules and multiple-level similarity. Knowledge and
Information Systems, 10(3), 357–381.

[21] Pavlov, D. Y., & Pennock, D. M. (2002). A maximum entropy approach to


collaborative filtering in dynamic, sparse, high-dimensional domains. Neural Information
Processing Systems, 1441–1448.

[22] Malarvizhi, S. P., & Sathiyabhama, B. (2014). Enhanced reconfigurable weighted


association rule mining for frequent patterns of web logs. International Journal of Computing,
13(2), 97–105.

[23] Zhang, Y., Zhou, Y., & Yao, J. (2020). Feature extraction with TFIDF and game-
theoretic shadowed sets. In Proceedings of the International Conference on Information
Processing and Management of Uncertainty in Knowledge-Based Systems (pp. 722–733).
Springer, Cham.

[24] Redlarski, G., Dabkowski, M., & Palkowski, A. (2017). Generating optimal paths in
dynamic environments using River Formation Dynamics algorithm. Journal of Computational
Science, 20, 8–16.

[25] Roy, V., Shukla, P. K., Gupta, A. K., Goel, V., Shukla, P. K., & Shukla, S. (2021).
Taxonomy on EEG artifacts removal methods, issues, and healthcare applications. Journal of
Organizational and End User Computing, 33(1), 19–46.

[26] Nickabadi, A., Ebadzadeh, M. M., & Safabakhsh, R. (2011). A novel particle swarm
optimization algorithm with adaptive inertia weight. Applied Soft Computing, 11(4), 3658–
3670.

[27] Kalayci, C. B., & Gupta, S. M. (2013). River formation dynamics approach for
sequence-dependent disassembly line balancing problem. In Reverse supply chains: Issues
and Analysis (pp. 289–312).

[28] Wang, J. S., & Song, J. D. (2017). A hybrid algorithm based on gravitational search and
particle swarm optimization algorithm to solve function optimization problems. Engineering
Letters, 25(1).
[29] Rathore, N. K., Jain, N. K., Shukla, P. K., Rawat, U. S., & Dubey, R. (2021). Image
Forgery Detection Using Singular Value Decomposition with Some Attacks. National
Academy Science Letters, 44, 331–338.
