An Enhanced Task Migration Technique Based on Convolutional Neural Network in Machine Learning Framework
Authors: Hamayun Khan, Muhammad Atif Imtiaz, Hira Siddique, Muhammad Tausif Afzal Rana, Arshad Ali, Muhammad Zeeshan Baig, Saif ur Rehman, Yazed Alsaawy. Computer Systems Science & Engineering, 2025, Issue 1, pp. 317-331 (15 pages).
Task migration aided by machine learning (ML) predictions in dynamic power management (DPM) is a system-level design technique used to reduce energy consumption by enhancing the overall performance of the processor. In this paper, we address the issue of high system-level task dissipation during the execution of parallel workloads with common deadlines by introducing a machine-learning-based framework that includes task migration using energy-efficient earliest deadline first scheduling (EA-EDF). ML-based EA-EDF enhances overall throughput and optimizes energy use to avoid delay and performance degradation in a multiprocessor system. The proposed system model allocates processors to the ready task set in such a way that their deadlines are guaranteed. A full task migration policy is also integrated to ensure proper task mapping and inter-process linkage among arrived tasks with the same deadlines. The execution of a task can halt on one CPU and be rescheduled on a different processor to avoid delay and meet the deadline. Our approach demonstrates the potential of machine-learning-based schedulability analysis: it enables a comparison between different ML models and shows a promising reduction in energy compared with other ML-aware task migration techniques for SoCs, such as multi-layer feed-forward neural networks (MLFNN) based on convolutional neural networks (CNN), random forest (RF), and deep learning (DL) algorithms. Simulations are conducted on the super-pipelined microarchitecture of the Advanced Micro Devices (AMD) XScale PXA270, with per-core 32 KB instruction and 32 KB data caches, at utilization factors (u_i) of 12%, 31%, and 50%. The proposed approach consumes 5.3% less energy when almost half of the CPU is utilized, and 1.04% less energy at lower workloads. Cumulatively, the proposed design gives significant improvements across three clock rates, reducing energy dissipation by 4.41% overall: by 5.4% at 624 MHz and by 5.9% for applications operating at the 416 and 312 MHz standard operating frequencies.
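The abstract does not describe the ML component or the energy model, but the underlying scheduling mechanism — global earliest-deadline-first with task migration, where a task preempted on one CPU may resume on another — can be sketched. This is a minimal illustrative simulation; the function name `global_edf`, the unit-time-step model, and the task-tuple format are assumptions for illustration, not the authors' EA-EDF implementation.

```python
def global_edf(tasks, num_cpus):
    """Simulate global EDF scheduling in unit time steps.

    tasks: list of (name, release, wcet, deadline) tuples with integer times.
    Returns a list of (time, cpu, task) execution slots. Because the
    highest-priority ready tasks are re-selected at every step, a preempted
    task may resume on a different CPU -- the migration behaviour the
    abstract describes.
    """
    remaining = {n: wcet for n, _, wcet, _ in tasks}
    release = {n: r for n, r, _, _ in tasks}
    deadline = {n: d for n, _, _, d in tasks}
    slots, t = [], 0
    while any(remaining.values()):
        # EDF order: ready, unfinished tasks sorted by absolute deadline
        ready = sorted((deadline[n], n) for n in remaining
                       if remaining[n] > 0 and release[n] <= t)
        # Run the num_cpus most urgent tasks for one time unit each
        for cpu, (_, name) in enumerate(ready[:num_cpus]):
            remaining[name] -= 1
            slots.append((t, cpu, name))
        t += 1
        # An unfinished task whose deadline has passed is a miss
        if any(remaining[n] > 0 and t >= deadline[n] for n in remaining):
            raise RuntimeError("deadline miss at t=%d" % t)
    return slots

# Three tasks on two CPUs; task C arrives at t=1 with the tightest
# deadline and preempts, yet every deadline is still met.
slots = global_edf([("A", 0, 2, 4), ("B", 0, 3, 3), ("C", 1, 1, 2)], 2)
```

The schedulability claim in the abstract (processors allocated "in such a way that their deadlines are guaranteed") corresponds to the deadline-miss check here: if the task set is infeasible at the given utilization, the simulation raises instead of silently dropping work.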
Keywords: convolutional neural network (CNN); energy conservation; dynamic thermal management; optimization methods; ANN; multiprocessor systems-on-chips; artificial neural networks; artificial intelligence; multi-layer feed-forward neural network (MLFNN); random forest (RF); deep learning (DL)