International Journal of Innovative Research in Engineering and Management (IJIREM)
ISSN (Online): 2350-0557, Volume-11, Issue-5, October 2024
https://doi.org/10.55524/ijirem.2024.11.5.5
Article ID IJIR3018, Pages 32-39
www.ijirem.org
Enhancing Cloud Scalability with AI-Driven Resource
Management
Amit Choudhury1, and Yuvaraj Madheswaran2
2
1
Department of Information Technology, Dronacharya College of Engineering, Gurgaon, Haryana, India
Lead Software Development Engineer/Lead Cloud Security Engineer, GM Financial Company, San Antonio, Texas, USA
Correspondence should be addressed to Amit Choudhury;
Received: 28 August 2024
Revised: 10 September 2024
Accepted: 23 September 2024
Copyright © 2024 Made Amit Choudhury et al. This is an open-access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
ABSTRACT- This research paper aims at analyzing the
factors that can help improve scalability of cloud by
incorporating different machine learning algorithms in
management of resources. Since controlling and managing
cloud resources is becoming more challenging with
compounded base requirements, the majority of
conventional resource management solutions may not prove
adequate. This research assesses the performance of five
state-of-art machine learning techniques namely
Reinforcement Learning, Long Short-Term Memory,
Gradient Boosting Machines, Autoencoders and Neural
Architecture Search in minimizing operational cost and
enhancing resource utilization and overall system efficiency
for improving business outcomes. The findings reveal that
the use of RL-based approaches to optimize operational cost
reduction and minimizing provisioning delay by 20% and
30% respectively and LSTM network to increase the
accuracy of demand forecasting by 12% and overall
efficiency of resource utilization by 22%. The use of GBM
models in forecasts results in 30% error reduction in costs
that drop by 20% while service improves by 25%. Using
autoencoders, the models achieve 97% accuracy in
detecting anomalies and infinityai1411@gmail.com in turn
increasing the efficiency of allocation by 15 percent. The
NAS-optimized models yield increased accuracy by a
percentage point of 18 % as well as a 25% faster
computational speed. Altogether, these theoretical
developments demonstrate the ability of AI-based
methodologies to enhance the cloud scalability promising
and provide practical recommendations for improving
resource management approaches in the cloud environment.
KEYWORDS- Cloud scalability, AI-driven Resource
Management,
Machine
Learning
Algorithms,
Reinforcement Learning, Deep Q-Learning, Long ShortTerm Memory Networks, LSTM Forecasting, Gradient
Boosting Machines, XGBoost
I. INTRODUCTION
In the ever dynamic cloud computing environment, the
issue of scalability continues to be an essential barrier to
organizations’ optimal performance. When organizations
are using the cloud infrastructure to manage their operations
it is very crucial to scale up resources even in random
manner. The protraction of conflicts between resource
scarcity and distribution puts traditional models to manage
Innovative Research Publication
such resources inadequate when addressing the complexity
and size of cloud architectures. This is where artificial
intelligence (AI) comes as an innovational tool, which
points to the new paradigms for cloud scalability
improvement by means of intelligent resource usage. The
incorporation of AI in the management of clouds resources
is a major milestone as it brings system accuracy in capacity
demand changes [1].
Cloud computing solutions have completely altered the way
companies implement and extend their Information
Technology assets. Still, even in the case of on-demand
elastic resources, the conventional resource getting methods
may be poorly adapted to a quickly rising and fluctuating
demand. The traditional approaches of using the manual or
a set of rules frequently provide the solutions that are suboptimal in terms of wastage of resource or poor
performance, as the result of over-allocate or under-allocate
of the resources. Because of these limitations, there is need
for more complex ways of managing and utilizing cloud
resources [2].
Data driven approach with an ability to learn from data and
make intelligent infer- ences makes artificial intelligence a
strategic solution for such difficulties. Typically cloud
systems predict future demands and make necessary
adjustments real time through the use of AI algorithms and
machine learning models. This transition from operational
fire fighting to a tactical style of management can therefore
help in reducing cost as well as enhancing performance. For
instance, the infrastructure resources, such as computing
power, storage and the network bandwidth are capable of
being managed by AI to ensure efficient usage or upsize or
downsize [3].
Some of the advantages of AI use in cloud scalability is that
it can go through big amounts of data to make conclusions
different from manual computations. The demand data can
be very accurately forecasted since machine learning
models can learn from past data. Another strength is the
predictive characteristic that it provides cloud services with
the ability to allocate resources ahead of time so as to avoid
a significant delay between demands and resource
provision. Further, AI systems can extend from a real-time
usage perspective to adapt further and optimize them in
terms of refocusing and providing better responsiveness and
predictive capability [4].
Furthermore, through AI, resource management can also
lead to a reduction in operational overhead since tasks
32
International Journal of Innovative Research In Engineering and Management (IJIREM)
which are repetitive and time-consuming can be handled by
the AI. As in any other environment, management of
resources implies not only the provisioning and the deprovisioning processes, but also monitoring and changing
resources in order to meet changing demand. These are
processes ideal for being performed by AI system in that
they simplify them and can save a lot of human resources
for higher pitch jobs while at the same time eliminating the
probability of human errors. This automation also leads to
the aspect of cost optimization in that it is able to detect
unused or less utilized resources which can be costly to any
organization [5].
The use of AI in the cloud scalability also brings into the
equation possibilities for more complex resource utilization
control and/or optimization. For instance, AI algorithms are
capable of drawing performance data and usage figures in
order to make recommendations of additional perfect
changes beyond the basic direction of scaling up or scaling
down a business. This include load distribution over distinct
server; the best arrangement of storage assets; and
networking resources in compliance with an application’s
needs. It can result in optimizing the usage of cloud
infrastructure, and improve the performance of the whole
system [6].
With the developments in AI technologies this field
continues to develop and the possible applications of AI in
cloud scalability continues to grow. Thus, the development
of new and innovative AI approaches like reinforcement
learning or even neural architecture search could provide
even greater levels of details in order to improve the
resource management strategies. There is, for example, the
reinforcement learning that trains models to make decisions
based on the outcomes of their activities and thus gradually
develop the best practices in the use of resources. In the
same way, NAS is valuable for designing particular models
that would best suit certain cloud conditions and then
applying them to optimize AI-driven resource orchestration.
However, adopting resource management using artificial
intelligence makes it operational on cloud environments
have several drawbacks. Recent years, accuracy and
reliability of AI models more and more depending on the
data quality and testing. Also, the adoption of AI solutions
requires technical and practical concerns especially when
implementing in cloud hosting architectures. There is a
need for organizations to weighted their current systems and
processes so as fit into the new technologies like AI for
optimization to be achieved. Nevertheless, the benefits that
can be derived from the application of AI for resource
management cannot be underestimated, therefore attracting
many cloud service providers and consumers towards this
field.
It may therefore be concluded that the incorporation of
artificial intelligence in cloud resource management is a
progressive step towards improving cloud scalability. In
employing the strengths of AI for predictive planning,
automation, and optimization, it is possible to have betterresponsive resource management. This approach solves
many of the problems encountered when using conventional
methods while providing potential for increased
effectiveness, efficiency and system optimization at a lower
cost. Accordingly, because the development of AI
technologies will advance in the future, the position of the
related technology in cloud scalability will also increase,
Innovative Research Publication
stimulating future innovations and improvement.
II. LITERATURE REVIEW
In recent years, not only has the field of cloud computing
developed but also such appearance of the use of artificial
intelligence basically in the claim of resources.
Contemporary literature enlarges a vast number of works
that are more oriented towards the application of artificial
intelligence and machine learning for enhancing the
scalabilities of cloud environments. Research works of
2022, 2023, and 2024 therefore presented a number of
possibilities and developments which, in one way or the
other have shown how the application of AI in this area
holds the key to a great transformative power.
Starting from the year 2022, studies started featuring
extensively on how RL can be applied in managing the
cloud resources. One of the most recent studies conducted
by authors proposed a new RL-based approach to dynamic
resource allocation [7], In which the presented framework
achieved much higher results than conventional methods,
both in terms of cost and performance. The following
framework employed RL to flexibly regulate resources in
response to predictions on demand, highlighting how
machine learning can help put some more dynamics in
clouds systems concerning the changing workload. This
work was helpful in paving way for other studies by
showing that RL is useful when it comes to task of
allocation of resources.
Based on this tegmental work, the research carried out in
2023 elaborated the application of AI techniques with
emphasize in predictive analytics in cloud systems. Authors
[8] gave a detailed plan of incorporating time series
forecasting with deep learning for precise estimations of the
future requirements of resources. As another matter, their
model used Long Short-Term Memory (LSTM) networks
that were able to learn the dependency of resource usage
over time and therefore it was a reliable method for
resource usage prediction and hence resource pre-booking.
This study reveals that, one of the key challenges of AIbased resource allocation approach is the ability of the
system to provide accurate forecast which can help to avoid
over-provisioning as well as under-provisioning of
resources.
At the same time, the subject of including AMDS into cloud
management was continued. In the study the authors
examined the application of heuristic search methods to
improve resource utilisation optimality [9]. Thus, they
established that through utilizing AI with the optimization
techniques, many of the resource management tasks would
not require the involvement of human beings and it was
also proven that the use of robots brought in a section of
efficiency that would otherwise be complicated and
tiresome. This entailed use of an automated system that
adaptively acquired and allocated resources based on realtime performance and usage data They stressed on the
importance of automating such a process, especially given
the cloud’s scale and complexity.
As for the year 2024 advancements in this area are expected
to be even further with more complex AI approaches and
their uses. Contribution No. 1 was made by researchers who
proposed an AI based framework for multi-cloud
architecture [10]. They pointed out that the primary concern
that had been discovered in their research was the issue of
33
International Journal of Innovative Research In Engineering and Management (IJIREM)
resource management on different clouds and then came up
with a solution on how resource utilization as well as load
distribution could be done on cloud through the use of
Artificial intelligence. This framework uses machine
learning algorithms to operate resource allocation in
response to cross-cloud data to effectively support
complicated cloud environments.
One of the key highlights in 2024 was the use of nice for
enhancing the cloud resources management based on the
NAS. Studies [11] highlighted how NAS can be leveraged
in constructing cloud environment-specific AI models.
Their analysis revealed that through NAS techniques and
architectures, the accuracy and efficiency of such resource
management models could be enhanced by automating the
synthesis process as well as the architectural design
outcomes for specific cloud application. This approach
made me realize that NAS can add value on increasing
flexibility and effectiveness of AI managed resources
systems.
Moreover, researchers have published a systematic review
article which looks at the ongoing trends and future
research directions on applying artificial intelligence to
cloud resources [12]. Reinforcement learning, deep learning
and optimization algorithms were the AI techniques that
they reviewed and they evaluated the effects of these on
cloud scalability. The review also highlighted some
limitations of data quality, combining machine learning
models and AI solutions’, scalability for implementation
purposes, providing the readers with a more skeptical view
of AI applications.
Combingly, all these studies depict the direction towards
enhanced and more intelligent cloud resource management
solutions employing AI. From the 2022 to 2024, based on
reinforcement learning, predictive analysis, automated
optimization, and even neural architecture search various
techniques and approaches have been presented.
Collectively, each work advances the extant literature on AI
and cloud computing to appreciate the future possibilities of
modern cloud structures that include extensive ideas on
how to increase, optimize, and improve future levels of
scalability, productivity, and operational effectiveness.
Since cloud computing is already advancing in the current
generation, more research is expected to be carried out in
the future in order to enhance the above-listed artificial
intelligence techniques and discover other uses of AI in
cloud computing [13]. From what has been discussed in
current literature, it can be seen that increasing attentiveness
is being shown to the possibilities that Artificial
Intelligence offers to enhance cloud resources’ utilization
resulting in continued development of the cloud-scaling
processes [14].
III. RESEARCH METHODOLOGY
Thus, the method for the improvement of the cloud
scalability using the artificial intelligence techniques in
resource management implies the systematic overall
assessment of a number of sophisticated machine learning
algorithms. The process starts with the collection of detailed
data of an infrastructure used by a cloud service provider.
Performance data of resource usage, activity volumes and
workload profile, and cost information of the historical
period are other aspects of this data set. The data is
preprocessed by performing certain operations such as Data
Innovative Research Publication
cleaning to remove missing and flawed values and Data
transformation to arrive at cannonical features. It is then
inspected and cleaned data is finally split into training,
validation, and test sets to have a proper and rigorous
assessment of the trained machine learning models [15].
The case study involves the use of five machine learning
algorithms in particular, that are anticipated to bring about
improvements in the handling of resources. The resourceproviding and shortages’ handling rely on RL with the
application of Deep Q-Learning (DQN). In this setup, an
RL agent is trained within simulated cloud environment that
is close to real environment as possible. The training
process also includes adjusting of hyperparameters like
learning rate, the discount factor, and exploration rate with
a goal of achieving the better performance of an agent; this
is done in a grid search manner. The reward function is
aimed at achieving the lowest possible operational costs of
the system and at the same time, it prevents the
deterioration of system performance, in such a way, the RL
agent is able to learn an efficient way how to manage
resources through numerous interactions with the
environment [16].
In turn, for the usage in the forecasting of the further
resource requirements, a Long Short-Terms Memory
(LSTM) network is employed. This model is set to work
with time series data and its structure consists of LSTM
layers that take into consideration temporal characteristics
of the resource usage dynamics. In the current network,
back propagation through time (BPTT) learning algorithm
is used with Mean Squared Error (MSE) as the learning
rate. The number of LSTM units, learning rate, and the size
of the batch are also other hyperparameters that are
optimized by choosing the right value from recommended
set of values. Forecasting performance and ability to
allocate resources in advance in response to the future needs
are two major ways in which LSTM is assessed.
Among the models, Gradient Boosting Machines, more
specifically XGBoost are used to predict the usage rates of
resources and to find the optimal distribution schemes. This
model is fitted with certain hyperparameters; the learning
rate, maximum depth, and the number of total estimators.
Features are also important in the training process in order
to determine factors that affect use of the resources. To
measure the enhancement in the efficiency of the resources
as well as in the overall performance of the system, the
results of the GBM model are compared with the results of
the traditional regression models in terms of forecast
accuracy and error minimization.
An autoencoder model is used in the process of identifying
distinct anomalies in patterns of resource use. This type of
learning model is developed under the unsupervised
learning category and it comprises of the encoder which
encodes the input data and a decoder which decodes the
data. Anomalies are evaluated according to the considerable
reconstruction errors that are present within them. It is a
model that is trained on the past resource utilization data
and the performance measure of the autoencoder can be
accuracy, precision, recall and F1 measure. Since the
anomalies are flagged early the autoencoder assists in
resource allocation and avoiding future performance
problems.
NAS approach is used to propose new NAS architectures
for resource management by designing a specific neural
network topology. The NAS process aims at searching
34
International Journal of Innovative Research In Engineering and Management (IJIREM)
through
various
networks
configurations
and
hyperparameters to achieve the right design of the phone.
The models that are created thus are assessed for gain in
prediction accuracy, computational resource usage and
model scalability against baseline architectures. The
effectiveness of NAS-optimized models is evaluated with
respect to resource consumption and system performance
characterization.
Every established machine learning model goes through
various datasets and tested against them as required. These
comparisons are made to assess the cost efficiency of the
algorithms and the superiority in the improvement of their
performance and anomaly detection. To check the validity,
statistical tests are conducted that have a probabilistic
approach to describe how much better are the particular
algorithms for cloud scalability. This paper provides the
major conclusions and the analysis of the AI-based
practices to optimize resource management at the company,
as well as future directions for the development of the
models and overcoming the primary difficulties of
implementing such models.
All in all, this methodology outlines a detailed framework
for differently evaluating the AI-based solutions for
managing the cloud resources that seeks to improve clouds’
scalability and effectiveness.
Figure 1: Proposed Research Methodology
IV. RESULTS AND DISCUSSION
The findings of the study for improving the scalability of
cloud by using artificial intelligence for efficient resource
management show remarkable enhancements in terms of
resource utilization, cost optimization and performance gain
of the five machine learning techniques explored. The
findings provide an elaborate understanding of the
mechanisms through which each algorithm participates in
the management of the cloud resources and perhaps the
strength and drawbacks that accompany each of them.
It was also significant in terms of the reward while
regarding the resource allocation efficiency, the proposed
Reinforcement Learning (RL) model, Deep Q-Learning
(DQN), has shown enhanced performances. The RL-based
approach resulted in cutting of operational costs with 20%
as compared with rule-based systems. The ability of the
DQN model in this study made it easier for it to allocate the
Innovative Research Publication
resources in real-time depending on the availability of the
workloads. The outcome measure further showed that the
use of the RL system impacted positively on the minimum
provisioning time by 30% as well as the overall throughput
of the entire system which also increased by 15%. They
indicate the flexibility of resource allocations in the model
while at the same time demonstrating how the cloud
infrastructure is capable of meeting the fluctuating demand
and continue to be efficient in its usage.
Long Short-Term Memory (LSTM) network had better
capability on anticipating the future needs of each resource,
with an accuracy of 92%. This was a little better than
conventional time-series models estimated with 12 percent
average accuracy increase. The long short-term memory
model’s functionality to analyze long-term dependencies in
the resource consumption data widens the possibility of
making apt predictions, which caused reduction in overprovisioning by 25 percent and under-provisioning by 18
35
International Journal of Innovative Research In Engineering and Management (IJIREM)
percent. From these improvements, it was possible to
achieve an efficiency of resource utilization by 22 percent.
These better demand forecasts were able to enable more
appropriate resource planning that helped reduce resource
waste and also that which could cause some kind of
performance hindrance.
The Gradient Boosting Machines (GBM) especially
XGBoost demonstrated the best performance when it comes
to the prediction of the resources needed. The model made
it possible to predict with an accuracy of 95%, which was
30% higher as compared with traditional regression
methods in error reduction. Such non-linear relationships
were well captured in the GBM model leading to a 20%
improvement in operational costs and a 25% improvement
in service quality indicators. This highlights the GBM
model ability to ensure that organizations have accurate
planning estimates of the resources needed thus improving
on the actual utilization of resources.
The Autoencoder model proved to be efficient in
identifying the outliers in the usage of resources data. The
model was able to detect the resource usage patterns with a
detection accuracy of 97 % and was able to detect 85% of
the unknown/ new spikes and drops in the resource usage.
Therefore, through accurate identification of anomalies,
autoencoder helped to increase operating efficiency by 15%
and reduce time for service interruptions by 10%. This
capability is important in maintaining normalized cloud
functions to avoid performances mishaps and this is
conceived via identification of abnormities early enough to
allow for corresponding corrective measures or policies on
resource allocation to be instituted.
Due to NAS, we discovered novel neural architectures for
resource management tasks different from those Li et al.
(IService, 2019) proposed. In the comparison of the
different models it was observed that the NAS-optimized
model was 18% more accurate at making predictions than
the baseline models. Furthermore, it reached up to 20%
increase in the resource allocation efficiency and up to 25%
in the speed of the process. These results show how NAS is
able to address resource management issues more
effectively by designing models which are far more specific
than the general solutions favoured by cloud providers, in
turn improving the scalability, efficiency and reliability of
cloud solutions.
Therefore, all the result shared from each machine learning
algorithm confirms the importance of the improvement of
cloud scalability. The RL model outperformed other models
specifically in dynamic resource management, LSTM
networks the demand forecasting correctly, the GBM
models gave the correct utilization, autoencoder helped in
the correct detection of anomalies, and NAS helped in
choosing the correct model architecture for better
performance. All the algorithms made significant difference
in optimizing resource use, curtailing expenses, and
increasing the effectiveness of the system. The main focus
of the discussion is on the choice and application of the
most suitable machine learning techniques depending on
certain resource management requirements and identifies
the directions of the future development of the cloud
scalability solutions further.
The figures in the research paper present a comprehensive
analysis of various machine learning models applied to the
same dataset. Figure 2 illustrates the performance of Deep
Q-Networks (DQN), highlighting its efficiency in decisionmaking tasks. Figure 3 focuses on Long Short-Term
Memory (LSTM) networks, showcasing their strength in
handling sequential data and capturing temporal
dependencies. Figure 4 provides insights into the
performance of Gradient Boosting Machines (GBM),
emphasizing their robustness and accuracy in predictive
tasks. A repeated analysis in Figure 5 further validates
GBM's capabilities across different scenarios. Finally,
Figure 6 introduces Neural Architecture Search (NAS),
demonstrating its potential in optimizing model
architectures for enhanced performance. Together, these
figures underscore the strengths and weaknesses of each
model, contributing valuable insights into their applicability
in various contexts.
Figure 2: Performance Analysis of DQN
Innovative Research Publication
36
International Journal of Innovative Research In Engineering and Management (IJIREM)
Figure 3: Performance Analysis of LSTM
Figure 4: Performance Analysis of GBM
Figure 5: Performance Analysis of GBM
Innovative Research Publication
37
International Journal of Innovative Research In Engineering and Management (IJIREM)
Figure 6: Performance Analysis of NAS
V. CONCLUSION
The information presented in the regard can be considered
as the solid evidence of the ability to significantly improve
scalability, efficiency, and overall performance of cloud
resources from the use of machine learning algorithms. The
analysis of the Reinforcement Learning, Long Short-Term
Memory networks, Gradient Boosting Machines,
Autoencoders, and Neural Architecture Search algorithms
show that each of them has its strengths in managing the
difficulties of dynamic and complex cloud infrastructures.
The potential to allocate resources more efficiently through
Reinforcement Learning reduces operational costs and
provisioning delays and thereby underlines the efficiency of
Reinforcement Learning in real-time decision making.
LSTM networks help in forecasting the demand more
accurately and as a result, ensures effective utilization of
resources thus reducing wastage. An important fact is that
GBM models provide a high level of accuracy and costsavings due to their capabilities to manage nonlinear
dependencies in the data; Autoencoders maintain the best
stability because of their ability to identify outliers. NAS
also allows to bring complex neural structures that improve
the resulting accuracy of the model and the speed of its
calculations.
The results presented herein speak of the possibility that AIbased techniques may bring into cloud resources
management. The algorithms do not only consider problems
emerging in the course of traditional methods but also
present new approaches to make the cloud infrastructure
more responsive and efficient. Such a break through has
brought into limelight the need to extend the best Machine
Learning techniques in the ever increasing challenge of
cloud scalability. Further work is possible to refine such
models, study their application toward the real world within
infrastructures based on Clouds, and evaluate the effect of
these models in the long term to Clouds. Altogether, this
study can be seen as a basis for further developments of
new AI techniques for the management of cloud systems,
and to contribute to the increase of intelligence and
scalability of the sector of cloud computing.
CONFLICTS OF INTEREST
The authors declare that they have no conflicts of interest.
REFERENCES
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
Innovative Research Publication
S. Kanungo, "AI-driven resource management strategies for
cloud computing systems, services, and applications," World
Journal of Advanced Engineering Technology and Sciences,
vol. 11, no. 2, pp. 559-566, 2024. Available From :
https://doi.org/10.30574/wjaets.2024.11.2.0137
S. Iqbal and A. Heng, "AI-driven resource management in
cloud computing: Leveraging machine learning, IoT devices,
and edge-to-cloud intelligence," 2023. Available From :
http://dx.doi.org/10.13140/RG.2.2.28383.27049
Q. Liang, W. A. Hanafy, A. Ali-Eldin, and P. Shenoy,
"Model-driven cluster resource management for AI
workloads in edge clouds," ACM Transactions on
Autonomous and Adaptive Systems, vol. 18, no. 1, pp. 1-26,
2023. Available From : https://doi.org/10.1145/3582080
M. J. Goswami, "Leveraging AI for cost efficiency and
optimized cloud resource management," International
Journal of New Media Studies: International Peer Reviewed
Scholarly Indexed Journal, vol. 7, no. 1, pp. 21-27, 2020.
Available
From:
https://www.researchgate.net/publication/381280852_Lever
aging_AI_for_Cost_Efficiency_and_Optimized_Cloud_Res
ource_Management
R. K. Navandar, "Enhancing cloud computing environments
with AI-driven resource allocation models," Advances in
Nonlinear Variational Inequalities, vol. 27, no. 3, pp. 541557,
2024.
Available
From:
https://internationalpubls.com/index.php/anvi/article/view/1
418
P. D. A. S. Rao, "Orchestrating efficiency: AI-driven cloud
resource optimization for enhanced performance and cost
reduction," International Journal of Research Publication
and
Reviews,
2023.
Available
From:
https://www.semanticscholar.org/paper/OrchestratingEfficiency%3A-AI-Driven-Cloud-ResourceRao/5780cec20018cdd99dc713febcd1f43938b9b9a3
A. Boudi, M. Bagaa, P. Pöyhönen, T. Taleb, and H. Flinck,
"AI-based resource management in beyond 5G cloud native
environment," IEEE Network, vol. 35, no. 2, pp. 128-135,
2021.
Available
From:
https://doi.org/10.1109/MNET.011.2000392
G. K. Walia, M. Kumar, and S. S. Gill, "AI-empowered
fog/edge resource management for IoT applications: A
38
International Journal of Innovative Research In Engineering and Management (IJIREM)
[9]
[10]
[11]
[12]
[13]
[14]
[15]
[16]
comprehensive review, research challenges and future
perspectives," IEEE Communications Surveys & Tutorials,
2023.
Available
From:
https://doi.org/10.1109/COMST.2023.3338015
C. Seo, D. Yoo, and Y. Lee, "Empowering sustainable
industrial and service systems through AI-enhanced cloud
resource optimization," Sustainability, vol. 16, no. 12, p.
5095,
2024.
Available
From:
https://doi.org/10.3390/su16125095
M. Abouelyazid and C. Xiang, "Architectures for AI
integration in next-generation cloud infrastructure,
development, security, and management," International
Journal of Information and Cybersecurity, vol. 3, no. 1, pp.
1-19,
2019.
Available
From
:
https://publications.dlpress.org/index.php/ijic/article/vi
I. Horrocks, "Transforming IoT security: Harnessing AI and
cloud systems for optimal resource management," 2023.
S. Priyadarshini, T. N. Sawant, G. B. Yadav, J. Premalatha,
and S. R. Pawar, "Enhancing security and scalability by
AI/ML workload optimization in the cloud," Cluster
Computing,
pp.
1-15,
2024.
Available
From:
https://doi.org/10.1007/s10586-024-04641-x
B. Kumar, "Challenges and solutions for integrating AI with
multi-cloud architectures," International Journal of
Multidisciplinary Innovation and Research Methodology,
vol. 1, no. 1, pp. 71-77, 2022. Available From:
https://ijmirm.com/index.php/ijmirm/article/view/76
U. M. R. Inkollu and J. K. R. Sastry, "AI-driven reinforced
optimal cloud resource allocation (ROCRA) for high-speed
satellite imagery data processing," Earth Science
Informatics, vol. 17, no. 2, pp. 1609-1624, 2024. Available
From : https://doi.org/10.1007/s12145-024-01242-5
K. Lin, Y. Li, Q. Zhang, and G. Fortino, "AI-driven
collaborative resource allocation for task execution in 6Genabled massive IoT," IEEE Internet of Things Journal, vol.
8, no. 7, pp. 5264-5273, 2021. Available From :
https://doi.org/10.1109/JIOT.2021.3051031
J. Sekar and L. L. C. Aquilanz, "Autonomous cloud
management using AI: Techniques for self-healing and selfoptimization," Journal of Emerging Technologies and
Innovative Research, vol. 11, pp. 571-580, 2023. Available
From
:
https://www.researchgate.net/publication/382205673_AUT
ONOMOUS_CLOUD_MANAGEMENT_USING_AI_TEC
HNIQUES_FOR_SELF-_HEALING_AND_SELFOPTIMIZATION
Innovative Research Publication
39