
A Prediction Model of Cloud Server Performance Based on Temporal Convolutional Network

LIAO Enhong, SHU Na, LI Jiawei, PANG Xiongwen

Citation: LIAO Enhong, SHU Na, LI Jiawei, PANG Xiongwen. A Prediction Model of Cloud Server Performance Based on Temporal Convolutional Network[J]. Journal of South China Normal University (Natural Science Edition), 2020, 52(4): 107-113. doi: 10.6054/j.jscnun.2020068


doi: 10.6054/j.jscnun.2020068
Funding:

Science and Technology Program of Guangdong Province 2017B010126002

Science and Technology Program of Guangdong Province 2017A010101008

Science and Technology Program of Guangdong Province 2017A010101014

Science and Technology Development Center of the Ministry of Education 2018A02004

Details
    Corresponding author:

    PANG Xiongwen, Associate Professor, Email: augepang@163.com

  • CLC number: TP183

A Prediction Model of Cloud Server Performance Based on Temporal Convolutional Network

  • Abstract: Most existing deep-learning-based host performance prediction models lack generality, rely on experimental data of questionable impartiality, cannot accurately predict energy consumption or performance peaks, and incur high time overhead. To address these problems, this paper proposes a cloud server performance prediction model based on an improved temporal convolutional network (the ATCN model). The model takes CPU utilization as the criterion for host overload, uses multi-dimensional performance metrics to construct an (N+1)-dimensional energy-consumption vector, and establishes the relationship between the input vector and the prediction target; it adjusts the convolution kernel size in the TCN and progressively increases the dilation factor to achieve a long-term memory effect. Experiments on Alibaba's open-source cluster dataset show that the ATCN model is highly adaptive: under different hardware configurations and resource-usage conditions, its prediction accuracy and efficiency improve on the LSTM model by roughly 20%.
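The abstract's key mechanism, increasing the dilation factor layer by layer so the network covers a long past window without recurrence, can be sketched in plain Python/NumPy. This is a minimal illustration of dilated causal convolution, not the authors' implementation; the function names are ours.

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation):
    """Causal 1-D convolution: the output at time t depends only on
    x[t], x[t - dilation], x[t - 2*dilation], ... (no future leakage)."""
    k = len(w)
    pad = (k - 1) * dilation  # left-pad so output length equals input length
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

def receptive_field(kernel_size, num_layers):
    """Past window covered when the dilation doubles per layer (1, 2, 4, ...):
    1 + (k - 1) * (2**L - 1) time steps for kernel size k and L layers."""
    return 1 + (kernel_size - 1) * (2 ** num_layers - 1)
```

With kernel size 3 and five stacked layers the receptive field already spans 63 time steps, which is how a TCN attains the long-term memory effect the abstract describes.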
  • Figure 1. The network architecture of ATCN

    Figure 2. The dilated convolution diagram

    Figure 3. The convolution process

    Figure 4. The comparison of prediction experiments between the ATCN model and the LSTM model

    Table 1. Model prediction results with different convolution kernel sizes (K)

    Network structure | Epoch | RMSE (K=2) | RMSE (K=3) | RMSE (K=4)
    5*(5*5)*1         | 1     | 4.240738   | 4.250448   | 4.232623
    5*(5*5)*1         | 2     | 4.214804   | 4.224319   | 4.205785
    5*(5*5)*1         | 3     | 4.185173   | 4.194477   | 4.175114
    5*(5*5)*1         | 4     | 4.151128   | 4.160328   | 4.139865
    5*(5*5)*1         | 5     | 4.125518   | 4.128336   | 4.108604

    Table 2. Model prediction results with different learning rates (α)

    Network structure | Epoch | RMSE (α=0.1) | RMSE (α=0.01)
    5*(5*5*5)*1       | 1     | 4.755311     | 4.710108
    5*(5*5*5)*1       | 2     | 4.739319     | 3.820585
    5*(5*5*5)*1       | 3     | 4.728839     | 3.622646
    5*(5*5*5)*1       | 4     | 4.685500     | 3.566161
    5*(5*5*5)*1       | 5     | 4.665762     | 3.540056

    Table 3. Model prediction results with different numbers of hidden layers (h)

    Network structure | Epoch | RMSE (h=1) | RMSE (h=3) | RMSE (h=5)
    5*(5*5)*1         | 1     | 4.715307   | 3.943657   | 6.590475
    5*(5*5)*1         | 2     | 4.018473   | 3.884918   | 6.497486
    5*(5*5)*1         | 3     | 4.017610   | 3.809758   | 6.489330
    5*(5*5)*1         | 4     | 4.013072   | 3.713727   | 6.403135
    5*(5*5)*1         | 5     | 4.009924   | 3.591695   | 6.403134

    Table 4. The comparison test results

    Network structure | Epoch | RMSE (ATCN) | RMSE (LSTM)
    5*(5*5*5)*1       | 1     | 3.642454    | 3.556143
    5*(5*5*5)*1       | 2     | 3.210256    | 3.268746
    5*(5*5*5)*1       | 3     | 3.012058    | 3.219875
    5*(5*5*5)*1       | 4     | 3.002158    | 3.111235
    5*(5*5*5)*1       | 5     | 3.001924    | 3.110457
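All four tables report root-mean-square error (RMSE). For reference, the metric can be computed as follows; this is the standard definition, not the paper's own code.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error: square root of the mean squared residual."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```

A perfect prediction yields an RMSE of 0; lower values in the tables therefore indicate better predictions.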
Figures (4) / Tables (4)
Publication history
  • Received: 2020-05-08
  • Published: 2020-08-25
