To improve the reliability and safety of the execution results of safety-related control commands in an automatic train supervision system, and to meet the safety requirements of a train autonomous operation system project based on train-to-train communication, a display method for safety-control execution results is designed based on dual-chain computation and display on the central processing unit (CPU) and the graphics processing unit (GPU). The execution result computed by the operator terminal's CPU is output as a character string to a designated display location, while the execution result computed by the terminal's GPU is output as graphics primitives to the title-bar location. At the same time, different encoding methods and dissimilar algorithms are adopted so that a single hardware device avoids common-mode failures in coding language, algorithm, and hardware platform.
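The dual-chain idea above can be sketched in a few lines: two diverse computations of the same command-execution result, one string-encoded and one numerically encoded, are produced independently and cross-checked before anything is displayed. All names and encodings below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of dual-chain diverse computation with a cross-check.
# Chain A (CPU role) encodes the result as a string; chain B (GPU role)
# encodes it as numeric primitives; a mismatch flags a channel failure.

COMMANDS = {"ROUTE_SET": 1, "ROUTE_CANCEL": 2}  # illustrative command codes

def cpu_chain(command: str, status_ok: bool) -> str:
    # Chain A: string-based encoding, shown at a designated location.
    return f"{command}:{'OK' if status_ok else 'FAIL'}"

def gpu_chain(command: str, status_ok: bool) -> tuple:
    # Chain B: dissimilar numeric encoding, shown in the title bar.
    # (In the real system this would be rendered on the GPU as graphics
    # primitives; an integer tuple stands in for the primitive here.)
    return (COMMANDS[command], 0 if status_ok else 1)

def cross_check(command: str, status_ok: bool) -> bool:
    # Decode both chains independently and compare them: a disagreement
    # indicates a single-channel or common-mode failure, so the result
    # must not be presented to the operator as trusted.
    a_cmd, a_status = cpu_chain(command, status_ok).split(":")
    b_cmd, b_status = gpu_chain(command, status_ok)
    return COMMANDS[a_cmd] == b_cmd and (a_status == "OK") == (b_status == 0)

print(cross_check("ROUTE_SET", True))  # → True when both chains agree
```

The point of the diverse encodings is that a fault corrupting one representation (for example, a string-formatting defect) is unlikely to corrupt the other representation identically, so the comparison exposes it.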
Efficient utilization of processor and memory resources is essential for sustaining performance and energy efficiency in modern computing infrastructures. While earlier research has emphasized CPU utilization forecasting, joint prediction of CPU and memory usage under real workload conditions remains underexplored. This study introduces a machine learning–based framework for real-time prediction of CPU and RAM utilization using the Google Cluster Trace 2019 v3 dataset. The framework combines Extreme Gradient Boosting (XGBoost) with a MultiOutputRegressor (MOR) to capture nonlinear interactions across multiple resource dimensions, supported by a leakage-safe imputation strategy that prevents bias from missing values. Nested cross-validation was employed to ensure rigorous evaluation and reproducibility. Experiments demonstrated that memory usage can be predicted with higher accuracy and stability than processor usage. Residual error analysis revealed balanced error distributions and very low outlier rates, while regime-based evaluations confirmed robustness across both low and high utilization scenarios. Feature ablation consistently highlighted the central role of page cache memory, which significantly affected predictive performance for both CPU and RAM. Comparisons with baseline models such as linear regression and random forest further underscored the superiority of the proposed approach. To assess adaptability, an online prequential learning pipeline was deployed to simulate continuous operation. The system preserved offline accuracy while dynamically adapting to workload changes. It achieved stable performance with extremely low update latencies, confirming feasibility for deployment in environments where responsiveness and scalability are critical. Overall, the findings demonstrate that simultaneous modeling of CPU and RAM utilization enhances forecasting accuracy and provides actionable insights for cache management, workload scheduling, and dynamic resource allocation. By bridging offline evaluation with online adaptability, the proposed framework offers a practical solution for intelligent and sustainable cloud resource management.
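The prequential ("test-then-train") protocol mentioned above can be illustrated with a minimal, stdlib-only sketch: each incoming sample is first predicted, its error is recorded, and only then is the sample used to update the model. A simple exponentially weighted mean stands in here for the XGBoost-based multi-output regressor, and the two-target stream is synthetic; both are assumptions for illustration only.

```python
# Minimal sketch of prequential (test-then-train) evaluation over a
# two-target stream (CPU and RAM utilization). The trivial EWMA model
# below is a stand-in for the paper's XGBoost + MultiOutputRegressor.

class EWMAForecaster:
    """Tracks one exponentially weighted running estimate per target."""
    def __init__(self, n_targets: int, alpha: float = 0.3):
        self.alpha = alpha
        self.est = [0.0] * n_targets

    def predict(self):
        return list(self.est)

    def update(self, y):
        self.est = [(1 - self.alpha) * e + self.alpha * v
                    for e, v in zip(self.est, y)]

def prequential_mae(stream, model):
    """Mean absolute error per target, evaluated test-then-train."""
    errs, n = [0.0, 0.0], 0
    for y in stream:
        pred = model.predict()                              # 1. test first
        errs = [e + abs(p - v) for e, p, v in zip(errs, pred, y)]
        model.update(y)                                     # 2. then train
        n += 1
    return [e / n for e in errs]

stream = [(0.2, 0.5), (0.25, 0.55), (0.3, 0.6), (0.28, 0.58)]  # (cpu, ram)
mae_cpu, mae_ram = prequential_mae(stream, EWMAForecaster(n_targets=2))
```

Because every sample is scored before the model has seen it, the accumulated error is an honest estimate of online performance, which is what makes the protocol suitable for simulating continuous operation.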
Data-driven modeling methods have changed the traditional modeling paradigm for generators, so conventional electromechanical transient time-domain simulation methods cannot be applied directly to power systems under the new paradigm. To address this, this paper proposes a data and physics driven time domain simulation (DPD-TDS) algorithm for electromechanical transients. In the algorithm, generator state variables and nodal injection currents are inferred by data-driven models, while node voltages are computed from the network equations; the two are solved alternately to complete the simulation. A preprocessing method for the network algebraic equations under the hybrid-driven paradigm is proposed to improve convergence, and a heterogeneous central processing unit-neural network processing unit (CPU-NPU) computing framework is designed to accelerate the simulation: the CPU solves the differential-algebraic equations of the physics-based models, while the NPU acts as a coprocessor performing forward inference of the data-driven models. Finally, the algorithm is validated on the IEEE-39 and Polish-2383 systems with some or all generators replaced by data-driven models. The simulation results show that the proposed algorithm converges well, computes quickly, and produces accurate results.
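The alternating solution scheme described above can be sketched on a toy system: a data-driven surrogate infers the generator injection current from the latest node voltage, the network equation Y·V = I is then solved for the voltage, and the two steps repeat until the voltage update converges. The scalar single-node system and the linear surrogate below are synthetic assumptions chosen so the fixed point is easy to verify; they are not the paper's models.

```python
# Illustrative sketch of the alternating inference/network-solve loop in
# DPD-TDS, on a single real-valued node. In the paper's framework, the
# surrogate inference runs on the NPU and the network solve on the CPU.

def surrogate_injection(v: float) -> float:
    # Stand-in for NPU forward inference: maps node voltage to the
    # generator's injected current (a real model uses the learned network).
    return 2.0 - 0.5 * v

def alternating_solve(y_admittance: float, v0: float = 0.5,
                      tol: float = 1e-10, max_iter: int = 100) -> float:
    """One time step: alternate surrogate inference and network solution."""
    v = v0
    for _ in range(max_iter):
        i_inj = surrogate_injection(v)   # data-driven step (NPU role)
        v_new = i_inj / y_admittance     # network equation step (CPU role)
        if abs(v_new - v) < tol:         # fixed point of the alternation
            return v_new
        v = v_new
    return v

v = alternating_solve(y_admittance=1.5)  # converges to v = 1.0 here
```

With these toy coefficients the iteration is a contraction (each pass shrinks the error by a factor of 1/3), which is the kind of behavior the paper's preconditioning of the network algebraic equations is designed to encourage in the full hybrid-driven system.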