Abstract: The rapid advance of artificial intelligence and big data has transformed the dynamic demands on computing resources for executing specific tasks in the cloud environment. Achieving autonomic resource management is a formidable task because of the cloud's vast, distributed, and heterogeneous environment. Moreover, the cloud network needs to provide autonomic resource management and deliver reliable services to clients by complying with Quality-of-Service (QoS) requirements without violating Service Level Agreements (SLAs). However, existing autonomic cloud resource management frameworks are not capable of handling cloud resources under such dynamic requirements. In this paper, a Coot Bird Behavior Model-based Workload Aware Autonomic Resource Management Scheme (CBBM-WARMS) is proposed for handling the dynamic requirements of cloud resources by estimating the workload that the cloud environment must serve. CBBM-WARMS first adopts an adaptive density peak clustering algorithm to cluster cloud workloads. It then applies fuzzy logic during workload scheduling to determine the availability of cloud resources. It further uses the CBBM for Virtual Machine (VM) deployment, which contributes to optimal resource provisioning. The scheme is designed to achieve optimal QoS with minimized time, energy consumption, SLA cost, and SLA violations. Experimental validation of the proposed CBBM-WARMS confirms a minimized SLA cost of 19.21% and a reduced SLA violation rate of 18.74%, better than the compared autonomic cloud resource management frameworks.
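The density peak clustering step named above can be sketched in its basic (non-adaptive) form: each workload point gets a local density and a distance to the nearest denser point, and the points maximizing the product of the two become cluster centres. A minimal sketch with made-up 2-D workload feature vectors, not the paper's data:

```python
import math

def density_peak_clusters(points, d_cut, n_clusters):
    n = len(points)
    dist = [[math.dist(a, b) for b in points] for a in points]
    # Local density: Gaussian-kernel count of nearby points.
    rho = [sum(math.exp(-(dist[i][j] / d_cut) ** 2) for j in range(n) if j != i)
           for i in range(n)]
    # delta: distance to the nearest point of higher density
    # (for the global density peak: distance to the farthest point).
    delta = []
    for i in range(n):
        higher = [dist[i][j] for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(dist[i]))
    # Centres maximise rho * delta; every other point joins its nearest
    # centre (a simplification of the original assignment rule).
    order = sorted(range(n), key=lambda i: rho[i] * delta[i], reverse=True)
    centers = order[:n_clusters]
    labels = [min(centers, key=lambda c: dist[i][c]) for i in range(n)]
    return centers, labels

# Two synthetic workload groups in a 2-D feature space.
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (0.05, 0.05),
       (1.0, 1.0), (1.1, 1.0), (1.0, 1.1), (1.05, 1.05)]
centers, labels = density_peak_clusters(pts, d_cut=0.2, n_clusters=2)
```

The two returned centres are the densest point of each group, and every workload inherits its nearest centre's label.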
Funding: Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) under contract No. SML2021SP310; the National Natural Science Foundation of China under contract Nos. 42227901 and 42475061; the Key R&D Program of Zhejiang Province under contract No. 2024C03257.
Abstract: In this study, we constructed multi-model ensemble (MME) predictions for the El Niño-Southern Oscillation (ENSO) using a neural network, based on hindcast data released from five coupled ocean-atmosphere models of varying complexity. This nonlinear approach proved markedly superior and effective in constructing the ENSO MME. We then employed leave-one-out cross-validation and the moving-base method to further validate the robustness of the neural network model in formulating the ENSO MME. In conclusion, the neural network algorithm outperforms the conventional approach of assigning a uniform weight to all models, as evidenced by higher correlation coefficients and lower prediction errors, which have the potential to provide a more accurate ENSO forecast.
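As a far simpler stand-in for the neural network, the core point, that learned non-uniform model weights beat a uniform average, can be illustrated with ordinary least squares. The "hindcasts" and "observations" below are synthetic toy values, not the study's data:

```python
def mme_weights(m1, m2, obs):
    # Minimise sum((w1*a + w2*b - o)^2) via the 2x2 normal equations.
    a11 = sum(a * a for a in m1)
    a22 = sum(b * b for b in m2)
    a12 = sum(a * b for a, b in zip(m1, m2))
    b1 = sum(a * o for a, o in zip(m1, obs))
    b2 = sum(b * o for b, o in zip(m2, obs))
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det

def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

obs = [0.5, 1.2, -0.8, 2.0, -1.5, 0.3]            # "observed" ENSO index
errs = [0.1, -0.1, 0.05, -0.05, 0.1, -0.1]
m1 = [o + e for o, e in zip(obs, errs)]           # skilful model
m2 = [-o for o in obs]                            # anti-correlated model
w1, w2 = mme_weights(m1, m2, obs)
learned = [w1 * a + w2 * b for a, b in zip(m1, m2)]
uniform = [(a + b) / 2 for a, b in zip(m1, m2)]
```

A uniform average is dragged down by the bad model, while the fitted weights exploit it (here with a negative weight), which is exactly the freedom a learned combination has and a fixed uniform weighting lacks.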
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 32200590 to K.L.; 81972358, 91959113, and 82372897 to Q.W.) and the Natural Science Foundation of Jiangsu Province (Grant No. BK20210530 to K.L.).
Abstract: Given the extremely high inter-patient heterogeneity of acute myeloid leukemia (AML), the identification of biomarkers for prognostic assessment and therapeutic guidance is critical. Cell surface markers (CSMs) have been shown to play an important role in AML leukemogenesis and progression. In the current study, we evaluated the prognostic potential of all human CSMs in 130 AML patients from The Cancer Genome Atlas (TCGA) based on differential gene expression analysis and univariable Cox proportional hazards regression analysis. Using multi-model analysis, including adaptive LASSO regression, LASSO regression, and Elastic Net, we constructed a 9-CSMs prognostic model for risk stratification of the AML patients. The predictive value of the 9-CSMs risk score was further validated at the transcriptome and proteome levels. Multivariable Cox regression analysis showed that the risk score was an independent prognostic factor for the AML patients. Patients with high 9-CSMs risk scores had shorter overall and event-free survival than those with low scores. Notably, single-cell RNA-sequencing analysis indicated that patients with high 9-CSMs risk scores exhibited chemotherapy resistance. Furthermore, PI3K inhibitors were identified as potential treatments for these high-risk patients. In conclusion, we constructed a 9-CSMs prognostic model that serves as an independent prognostic factor for the survival of AML patients and holds potential for guiding drug therapy.
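The way such a multi-gene risk score stratifies patients can be sketched as a weighted sum of expression values split at the median. The marker names, coefficients, and expression values below are hypothetical, not the paper's fitted 9-CSMs model:

```python
# Hypothetical regression coefficients for three surface markers.
coefs = {"CSM_A": 0.8, "CSM_B": -0.5, "CSM_C": 0.3}

def risk_score(expr):
    # Risk score: coefficient-weighted sum of marker expression levels.
    return sum(coefs[g] * expr[g] for g in coefs)

patients = {
    "P1": {"CSM_A": 2.0, "CSM_B": 0.5, "CSM_C": 1.0},
    "P2": {"CSM_A": 0.2, "CSM_B": 2.0, "CSM_C": 0.1},
    "P3": {"CSM_A": 1.5, "CSM_B": 1.0, "CSM_C": 0.5},
    "P4": {"CSM_A": 0.5, "CSM_B": 1.5, "CSM_C": 0.2},
}
scores = {p: risk_score(e) for p, e in patients.items()}
cutoff = sorted(scores.values())[len(scores) // 2 - 1]   # median-style split
groups = {p: ("high" if s > cutoff else "low") for p, s in scores.items()}
```

Survival analysis then compares the "high" and "low" groups, e.g. with Kaplan-Meier curves and a log-rank test.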
Funding: Supported by the National High Technology Research and Development Program of China (No. 2015AA015308), the National Key Research and Development Plan of China (Nos. 2016YFB1000600 and 2016YFB1000601), and the Major Program of the National Natural Science Foundation of China (No. 61432006).
Abstract: With its high computational capacity, e.g., many cores and wide floating-point SIMD units, the Intel Xeon Phi shows promise for accelerating high-performance computing (HPC) applications. But whether the Intel Xeon Phi suits data analytics workloads in the data center is still an open question. PhiBench 2.0 is built for the latest generation of Intel Xeon Phi (KNL, Knights Landing), based on the prior work PhiBench (also named BigDataBench-Phi), which was designed for the former generation (KNC, Knights Corner). The workloads of PhiBench 2.0 are carefully chosen based on BigDataBench 4.0 and PhiBench 1.0. Moreover, these workloads are well optimized on KNL and run on real-world datasets to evaluate their performance and scalability. Further, microarchitecture-level characteristics including CPI, cache behavior, vectorization intensity, and branch prediction efficiency are analyzed, and the impact of affinity and scheduling policy on performance is investigated. These observations should help other researchers working on Intel Xeon Phi and data analytics workloads.
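The microarchitecture metrics listed above reduce to ratios of hardware counter values. A sketch with made-up counter readings and assumed counter names (on a real KNL system they would come from a tool such as `perf stat` or VTune):

```python
def derived_metrics(c):
    return {
        # Cycles per instruction: lower is better.
        "CPI": c["cycles"] / c["instructions"],
        # Average vector elements processed per vector instruction.
        "vec_intensity": c["vpu_elements"] / c["vpu_instructions"],
        "l2_miss_ratio": c["l2_misses"] / c["l2_accesses"],
        "branch_mispredict_rate": c["branch_mispredicts"] / c["branches"],
    }

# Illustrative raw counter values, not measurements from PhiBench 2.0.
counters = {"cycles": 2_000_000, "instructions": 1_000_000,
            "vpu_elements": 6_400_000, "vpu_instructions": 800_000,
            "l2_misses": 50_000, "l2_accesses": 400_000,
            "branch_mispredicts": 4_000, "branches": 200_000}
metrics = derived_metrics(counters)
```

A vectorization intensity near the SIMD width (8 doubles per 512-bit vector on KNL) indicates the vector units are well utilized.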
Funding: Supported by the National High Technology Research and Development Program of China (No. 2015AA015308) and the State Key Development Program for Basic Research of China (No. 2014CB340402).
Abstract: Big data analytics is emerging as one of the most important classes of workloads in modern data centers. Hence, it is of great interest to determine how to achieve the best performance for big data analytics workloads running on state-of-the-art SMT (simultaneous multithreading) processors, which requires a comprehensive understanding of workload characteristics. This paper takes Spark workloads as representative big data analytics workloads and performs comprehensive measurements on the POWER8 platform, which supports a wide range of multithreading levels. The research finds that thread assignment policy and cache contention have significant impacts on application performance. To identify potential optimizations from the experimental results, this study performs microarchitecture-level characterization by means of hardware performance counters and gives implications accordingly.
Funding: Supported by the National Natural Science Foundation of China (No. 61972118) and the Key R&D Program of Zhejiang Province (No. 2023C01028).
Abstract: Cloud service providers generally co-locate online services and batch jobs on the same computer cluster, where resources can be pooled to maximize data center resource utilization. Due to resource competition between batch jobs and online services, co-location frequently impairs the performance of online services. This study presents a quality-of-service (QoS) prediction-based scheduling model (QPSM) for co-located workloads. The performance prediction in QPSM consists of two parts: predicting an online service's QoS anomalies with XGBoost and predicting the completion time of an offline batch job with random forest. Online-service QoS anomaly prediction is used to evaluate the influence of the batch-job mix on online service performance, and batch-job completion time prediction is used to reduce the total waiting time of batch jobs. When the same number of batch jobs is scheduled in experiments using typical test sets such as CloudSuite, the scheduling time required by QPSM is reduced by about 6 h on average compared with the first-come, first-served strategy and by about 11 h compared with the random scheduling strategy. Compared with the non-co-located situation, QPSM improves CPU resource utilization by 12.15% and memory resource utilization by 5.7% on average. The experiments show that the QPSM scheduling strategy can effectively guarantee the quality of online services while further improving cluster resource utilization.
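Ordering batch jobs by predicted completion time cuts total waiting time for the same reason shortest-job-first does. A sketch comparing first-come-first-served with completion-time ordering on illustrative durations (not the paper's workloads):

```python
def total_waiting_time(durations):
    # Each job waits for the sum of the durations scheduled before it.
    waited, clock = 0, 0
    for d in durations:
        waited += clock
        clock += d
    return waited

predicted = [5, 1, 3, 2, 4]                    # predicted completion times (h)
fcfs = total_waiting_time(predicted)           # arrival order
by_prediction = total_waiting_time(sorted(predicted))
```

Running short jobs first strictly reduces the cumulative waiting time whenever durations differ; the quality of the random-forest predictions determines how close the real schedule gets to this ideal.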
Abstract: In this research, we study the relationship between mental workload and the facial temperature of participants during a simulated takeoff. We conducted experiments to understand the correlation between workload and facial temperature within a flight simulator. The experiment involved 10 participants who played the role of pilots in a simulated A320 flight. Six flying scenarios were designed to simulate normal and emergency situations during takeoff that would induce different levels of mental workload. The measurements comprised workload assessment, facial temperatures, and heart rate monitoring. Throughout the experiments, we collected 120 takeoffs in total, together with over 10 hours of time-series data including heart rate, workload, and facial thermal images and temperatures. Comparative analysis of the EEG data and thermal image types revealed a notable inverse relationship between workload and facial muscle temperatures, as well as temperatures at facial landmark points. The results contribute to a deeper understanding of the physiological effects of workload and carry practical implications for aviation safety and performance.
Abstract: Noise is an environmental factor with mental and physical effects. Workload, in turn, comprises the multiple mental and physical demands of a task. This study therefore investigated the relationship between noise exposure and mood states at different workload levels. The study recruited 50 workers from the manufacturing sector (blue-collar workers) as the exposed group and 50 workers from the office sector (white-collar workers) as the control group. Their occupational noise exposure was measured by dosimetry. The Stress-Arousal Checklist (SACL) and the NASA Task Load Index (NASA-TLX) were used to measure mood and workload, respectively. The equivalent noise exposure levels of the exposed group at high and very high workload were 85 and 87 dBA, respectively. The mean mood score of the exposed group was 76 at very high workload. The correlation coefficient between noise exposure level and mood state ranged from 0.3 at medium workload to 0.57 at very high workload. Noise exposure at high workload levels can amplify noise's adverse effects, so controlling and optimizing the multiple demands of a task in the workplace can serve as a preventive measure against the adverse effects of noise.
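The reported coefficients are Pearson correlations between noise exposure level and mood score computed within each workload stratum. A sketch on illustrative values, not the study's measurements:

```python
def pearson_r(x, y):
    # Pearson correlation: covariance divided by the product of
    # the standard deviations (n cancels, so raw sums suffice).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

noise = [80, 82, 85, 87, 90]   # exposure level, dBA (toy values)
mood = [60, 64, 69, 74, 80]    # stress-arousal score (toy values)
r = pearson_r(noise, mood)
```

Computing `r` separately per workload level reproduces the kind of stratified coefficients the study reports (0.3 at medium workload up to 0.57 at very high workload).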
Abstract: Traditional global sensitivity analysis (GSA) neglects the epistemic uncertainties associated with the probabilistic characteristics (i.e., the type of distribution and its parameters) of input rock properties that arise from small datasets when mapping the relative importance of properties to the model response. This paper proposes an augmented Bayesian multi-model inference (BMMI) approach coupled with GSA (BMMI-GSA) to address this issue by estimating the imprecision in the moment-independent sensitivity indices of rock structures arising from the small size of input data. The methodology employs BMMI to quantify the epistemic uncertainties associated with the model type and parameters of input properties. The estimated uncertainties are propagated into the imprecision of moment-independent Borgonovo's indices by employing a reweighting approach on candidate probabilistic models. The methodology is showcased for a rock slope prone to stress-controlled failure in the Himalayan region of India. It proved superior to conventional GSA (which neglects all epistemic uncertainties) and Bayesian-coupled GSA (B-GSA) (which neglects model uncertainty) owing to its capability to incorporate the uncertainties in both the model type and the parameters of properties. The imprecise Borgonovo's indices estimated via the proposed methodology provide confidence intervals for the sensitivity indices instead of fixed-point estimates, which better informs data-collection efforts. Analyses performed with varying sample sizes suggested that the uncertainties in the sensitivity indices shrink significantly as sample size grows, and an accurate importance ranking of the properties was only possible with large samples. Further, the impact of prior knowledge, in terms of prior ranges and distributions, was significant; hence, any related assumption should be made carefully.
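The multi-model-inference idea of weighting candidate probabilistic models can be sketched with BIC-based weights for two candidate distributions fitted by maximum likelihood. The sample values are illustrative, and this is only the model-weighting step, not the full Bayesian machinery or the propagation into Borgonovo's indices:

```python
import math

def normal_loglik(xs):
    # Maximised Gaussian log-likelihood (MLE mean and variance).
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return -0.5 * n * (math.log(2 * math.pi * var) + 1)

def lognormal_loglik(xs):
    # Lognormal = Gaussian fit on log-values plus the Jacobian term.
    logs = [math.log(x) for x in xs]
    return normal_loglik(logs) - sum(logs)

def bic_weights(logliks, n, k=2):
    # BIC = k*ln(n) - 2*loglik; model weights proportional to exp(-BIC/2).
    bics = [k * math.log(n) - 2 * ll for ll in logliks]
    best = min(bics)
    raw = [math.exp(-(b - best) / 2) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

# Small right-skewed sample, e.g. measured strength values (illustrative).
sample = [1.1, 1.4, 1.6, 2.0, 2.3, 3.0, 4.1, 5.5, 7.9, 12.0]
w_normal, w_lognormal = bic_weights(
    [normal_loglik(sample), lognormal_loglik(sample)], len(sample))
```

With a skewed sample the lognormal candidate receives most of the weight, and a reweighting scheme like BMMI-GSA would carry both candidates forward in proportion to these weights instead of committing to one distribution.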
Funding: Linking Health, Place and Urban Planning through the Australian Urban Observatory, funded by the Ian Potter Foundation, Australia.
Abstract: In the continually evolving landscape of data-driven methodologies for analyzing car crash patterns, a holistic analysis remains critical to decoding the complex nuances of this phenomenon. This study bridges that knowledge gap with a robust examination of car crash occurrence dynamics and the influencing variables in the Greater Melbourne area, Australia. We employed a comprehensive multi-model machine learning and geospatial analytics approach, unveiling the complicated interactions intrinsic to vehicular incidents. By harnessing Random Forest with SHAP (Shapley Additive Explanations), GLR (Generalized Linear Regression), and GWR (Geographically Weighted Regression), our research not only highlighted pivotal contributing elements but also captured often-overlooked complexities. Using the Random Forest model, essential factors were identified, and with the aid of SHAP, we assessed the interactions among these factors. To complement our methodology, we incorporated hexagonalized geographic units, refining the granularity of crash density evaluations. In our multi-model study of car crash dynamics in Greater Melbourne, road geometry emerged as a key factor, with intersections showing a significant positive correlation with crashes. Average land surface temperature had variable significance across scales. Socio-economically, regions with a higher proportion of childless populations were identified as more accident-prone. Public transit usage displayed a strong positive association with crashes, especially in densely populated areas. The convergence of insights from Generalized Linear Regression and Random Forest's SHAP values offered a comprehensive understanding of the underlying patterns, pinpointing high-risk zones and influential determinants. These findings offer pivotal insights for targeted safety interventions in Greater Melbourne, Australia.
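Hexagonalized geographic units amount to assigning each crash location to a hexagonal cell and counting occurrences per cell. A sketch using axial hex coordinates with standard cube rounding, on toy planar coordinates rather than projected Melbourne data:

```python
import math
from collections import Counter

def hex_bin(x, y, size):
    """Axial coordinates of the pointy-top hexagon containing point (x, y)."""
    q = (math.sqrt(3) / 3 * x - y / 3) / size
    r = (2 / 3 * y) / size
    # Cube rounding picks the correct hexagon near cell boundaries.
    cx, cz = q, r
    cy = -cx - cz
    rx, ry, rz = round(cx), round(cy), round(cz)
    dx, dy, dz = abs(rx - cx), abs(ry - cy), abs(rz - cz)
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return rx, rz

# Toy crash coordinates: two near the origin, three in a second cluster.
crashes = [(0.1, 0.1), (0.2, -0.1), (1.7, 3.0), (1.8, 2.9), (1.6, 3.1)]
density = Counter(hex_bin(x, y, size=1.0) for x, y in crashes)
```

The per-cell counts in `density` are the crash-density surface on which the spatial models (GWR, hotspot detection) then operate; hexagons are preferred over squares because every neighbour is equidistant.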
Abstract: Order Review and Release (ORR) is a production control technique suited to make-to-order (MTO) manufacturers. After the production department receives a customer order, ORR determines when the required materials are released to the shop floor so that the order can be completed on time. ORR also caps the load on the shop and its machines to reduce shop-floor inventory and the time orders spend in processing. ORR therefore addresses a multi-objective optimization problem. Previous studies typically selected several evaluation metrics, analyzed each separately, and then made an overall qualitative judgment, which makes it difficult to compare different ORR policies quantitatively and objectively. To address this, this paper proposes a total-cost formula that integrates information from both classes of objectives, on-time completion and reduced shop-floor time, enabling a comprehensive quantitative analysis of different ORR policies. Based on a designed experiment, the total-cost metric is used to compare the practical performance of common ORR policies, and the conclusions offer guidance for ORR selection.
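The total-cost idea can be sketched as a weighted sum of tardiness cost and flow-time (work-in-process) cost per order. The cost rates and order data below are illustrative assumptions, not the paper's formula:

```python
def total_cost(orders, tardiness_rate=10.0, wip_rate=1.0):
    # Combine the two ORR objectives into one number:
    # on-time completion (tardiness) and shop-floor time (flow time).
    cost = 0.0
    for o in orders:
        tardiness = max(0.0, o["finish"] - o["due"])   # hours late, if any
        flow_time = o["finish"] - o["release"]         # hours on the shop floor
        cost += tardiness_rate * tardiness + wip_rate * flow_time
    return cost

orders = [
    {"release": 0, "finish": 8, "due": 10},    # completed on time
    {"release": 2, "finish": 15, "due": 12},   # 3 hours late
]
cost = total_cost(orders)
```

Because every policy's outcome collapses to a single cost, two ORR policies can be ranked directly instead of by juggling several metrics qualitatively.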
Abstract: Although heterogeneous computing systems can accelerate the processing of neural network parameters, their power consumption rises sharply. A good power prediction method is the basis for a heterogeneous system to optimize power and handle multiple workload types. Accordingly, by improving a multilayer perceptron (MLP)-attention model, this paper proposes a power prediction algorithm for multiple workload types on CPU/GPU heterogeneous computing systems. First, considering server power and system features, a feature-based workload power model is established. Second, because existing power prediction algorithms cannot capture the long-range dependence between system features and system power, an improved MLP-attention power prediction algorithm, Prophet, is proposed: the improved MLP extracts the system features at each time step, and an attention mechanism combines these features, effectively resolving the long-range dependence between system features and system power. Finally, experiments on a real system compared the proposed algorithm with power prediction algorithms such as MLSTM_PM (Power consumption Model based on Multi-layer Long Short-Term Memory) and ENN_PM (Power consumption Model based on Elman Neural Network). The results show that Prophet achieves high prediction accuracy, reducing the mean relative error (MRE) relative to MLSTM_PM by 1.22, 1.01, and 0.93 percentage points on the blk, memtest, and busspd workloads, respectively, with lower complexity, demonstrating the algorithm's effectiveness and feasibility.
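The attention step described above, combining per-timestep feature vectors with softmax weights so that informative moments dominate, reduces to the sketch below. The scoring vector and feature values are illustrative; in Prophet they are learned jointly with the MLP:

```python
import math

def attention_pool(features, score_w):
    # One scalar relevance score per timestep.
    scores = [sum(w * f for w, f in zip(score_w, feat)) for feat in features]
    # Numerically stable softmax turns scores into weights summing to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Context vector: weighted sum of the per-timestep features.
    dim = len(features[0])
    context = [sum(w * feat[d] for w, feat in zip(weights, features))
               for d in range(dim)]
    return weights, context

feats = [[0.2, 0.1], [0.9, 0.8], [0.3, 0.2]]   # per-timestep system features
weights, context = attention_pool(feats, score_w=[1.0, 1.0])
```

The context vector is what the final regression layer sees; because a high-scoring timestep can dominate it regardless of how far back it occurred, the mechanism sidesteps the long-range-dependence problem of purely recurrent models.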
Abstract: Objective To construct a framework of pediatric dental outpatient nursing items based on data from the hospital information system (HIS). Methods From August 2023 to May 2024, a literature review was conducted and relevant items, together with other routine work items, were extracted from the HIS to form a preliminary item framework. Using the Delphi method, 22 experts nationwide were consulted in two rounds, and the framework was revised according to their feedback to form the final version. Results The experts' response rate was 100.0% in both rounds; their authority coefficients were 0.935 and 0.940, and Kendall's W for the coordination of expert opinions was 0.200 (P < 0.01). The final framework comprises 61 items across three dimensions: assisting physicians' procedures, direct nursing work, and personal activities. Conclusion The constructed framework is reliable and scientifically sound, can serve as a tool for quantifying pediatric dental outpatient nursing workload, and provides data support and a decision basis for clinical nursing management.
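Kendall's W, reported above as 0.200, measures agreement among the experts' rankings of the items: W = 12S / (m²(n³ − n)) for m experts ranking n items without ties, where S is the sum of squared deviations of the item rank-sums from their mean. A sketch on toy rankings, not the study's consultation data:

```python
def kendalls_w(rankings):
    # rankings[e][i] = rank that expert e assigned to item i (no ties).
    m, n = len(rankings), len(rankings[0])
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean = sum(rank_sums) / n
    s = sum((x - mean) ** 2 for x in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

experts = [[1, 2, 3, 4], [1, 3, 2, 4], [2, 1, 3, 4]]   # 3 experts, 4 items
w = kendalls_w(experts)
```

W ranges from 0 (no agreement) to 1 (identical rankings); the chi-squared test on m(n − 1)W gives the P value reported alongside it.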