Abstract
In an edge-computing-enabled smart home, smart devices generate a large number of computing tasks, which users can either offload to edge servers or execute locally. Offloading a task to a server yields lower delay but incurs a corresponding offloading cost, so users must weigh the delay reduction against the extra expense. Moreover, different users trade latency against offloading cost differently, and a single user's preference can change over time. Fixing this trade-off as a static hyperparameter therefore degrades the user experience, yet under a dynamic trade-off a model may struggle to adapt and reach an optimal offloading decision. We jointly optimize task delay and offloading cost, formulating the problem as a long-term cost minimization problem under dynamic trade-offs (DT-LCMP). To solve it, we propose an offloading algorithm based on multi-agent meta deep reinforcement learning and load prediction (MAMRL-L). Incorporating the idea of meta-learning, we train the network with double deep Q-learning (DDQN); by training on data sampled from different environments, the agents adapt quickly to dynamic environments. To further improve performance, LSTNet predicts the server load level of the next time slot in real time. Simulation results show that our algorithm outperforms existing and benchmark algorithms.
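To make the dynamic trade-off concrete, the joint objective can be sketched as a weighted sum of delay and offloading cost whose weight varies per time slot. This is an illustrative sketch, not the paper's exact formulation: the function names and the linear weighting are assumptions introduced here for clarity.

```python
def slot_cost(delay, offload_price, w_t):
    """Weighted cost of one time slot (illustrative, assumed linear form).

    delay: task completion delay for this slot (local or server-side).
    offload_price: monetary cost paid for offloading (0 if executed locally).
    w_t: the user's trade-off weight in [0, 1] for this slot;
         a larger w_t means the user currently cares more about delay.
    """
    return w_t * delay + (1.0 - w_t) * offload_price


def long_term_cost(delays, prices, weights):
    """Long-term objective: average slot cost over the whole horizon."""
    costs = [slot_cost(d, p, w) for d, p, w in zip(delays, prices, weights)]
    return sum(costs) / len(costs)


# A user whose preference shifts toward low delay over three slots:
# w_t rises from 0.3 to 0.9, so the same offloading price matters less later.
print(long_term_cost([0.2, 0.5, 0.1], [1.0, 0.0, 2.0], [0.3, 0.6, 0.9]))
```

Because `w_t` changes between slots, a policy tuned for one fixed weight is no longer optimal for the next; this is exactly the adaptation gap that the meta-learning component is meant to close.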
Acknowledgments
This paper is supported by the National Natural Science Foundation of China under grant number T2350710232.
Copyright information
© 2024 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
About this paper
Cite this paper
Li, M., Li, S., Qi, W. (2024). Dynamic Offloading Based on Meta Deep Reinforcement Learning and Load Prediction in Smart Home Edge Computing. In: Gao, H., Wang, X., Voros, N. (eds) Collaborative Computing: Networking, Applications and Worksharing. CollaborateCom 2023. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 561. Springer, Cham. https://doi.org/10.1007/978-3-031-54521-4_23
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-54520-7
Online ISBN: 978-3-031-54521-4