Objectives: This study aimed to explore the characteristics of outpatient blood collection center visit fluctuation and nursing workforce allocation based on a time series model, and the application effect was evaluated. Methods: To enhance the efficiency of phlebotomy at the hospital outpatient window and improve patient satisfaction, the First Affiliated Hospital of Nanjing Medical University implemented a time series analysis model in 2024 to optimize nursing staff allocation. The management team was led by a head nurse of the outpatient blood collection department with extensive experience. It included one director of the nursing department, six senior clinical nurses, one informatics expert, and one nursing master's degree holder. Retrospective time-series data from the hospital's smart blood collection system (including hourly blood collection volumes and waiting times) were extracted between January 2020 and December 2023. Time series analysis was used to identify annual, seasonal, monthly, and hourly variation patterns in blood collection volumes. Seasonal decomposition and the autoregressive integrated moving average (ARIMA) model were employed to forecast blood collection fluctuations for 2024 and facilitate dynamic scheduling. A comparison was conducted to evaluate differences in blood collection efficiency and patient satisfaction before (January-June 2023) and after (January-June 2024) implementing the dynamic scheduling model based on the time series analysis and forecasting. Results: Visit volumes showed periodicity and slow growth, peaking in the second and third quarters of each year and daily at 8:00-9:00 a.m. and 2:00-3:00 p.m. The ARIMA model demonstrated a good fit (R² = 0.692, mean absolute percentage error = 8.28%). After adjusting the nursing staff allocation based on the hourly fluctuation characteristics identified by the time series analysis model, at least three nurses, one mobile nurse, and two volunteers were added during peak periods at the blood collection windows. The number of phlebotomies per hour increased from 289.74 ± 54.55 to 327.53 ± 37.84 person-time (t = -10.041, P < 0.01), waiting time decreased from 5.79 ± 2.68 to 4.01 ± 0.46 min (t = 11.531, P < 0.01), and satisfaction rose from 92.7% to 97.3% (χ² = 6.877, P < 0.05). Conclusions: The time series analysis method helps nursing managers accurately allocate human resources and optimize the efficiency of outpatient service resources by mining the characteristic variation patterns of the outpatient blood collection window and predicting future fluctuation trends.
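A minimal sketch of the two forecasting techniques this abstract names, seasonal decomposition and ARIMA, applied to a synthetic hourly series; the series construction, column name, ARIMA order, and eight-hour working-day period are illustrative assumptions, not the hospital's actual pipeline.

```python
# Sketch only: seasonal decomposition plus ARIMA forecasting on synthetic hourly
# blood-collection volumes; the order (1,1,1) and period=8 are assumed values.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
hours = pd.date_range("2023-01-02 08:00", periods=8 * 120, freq="h")  # 120 working days x 8 h
volumes = pd.Series(
    250 + 60 * np.sin(2 * np.pi * np.arange(hours.size) / 8) + rng.normal(0, 15, hours.size),
    index=hours,
    name="hourly_volume",
)

# Separate the within-day (8-hour) cycle from the trend and residual.
decomposition = seasonal_decompose(volumes, model="additive", period=8)
print(decomposition.seasonal.head(8))

# Fit ARIMA on the history and forecast the next working day to guide staffing.
fit = ARIMA(volumes, order=(1, 1, 1)).fit()
print(fit.forecast(steps=8))
```

In a scheduling workflow, the forecast hourly profile would then be compared against baseline staffing to decide when the additional nurses and volunteers described above are assigned.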
Leveraging big data signal processing offers a pathway to the development of artificial intelligence-driven equipment. The analysis of fluid flow signals and the characterization of fluid flow behavior are of critical importance in two-phase flow studies. Significant research efforts have focused on discerning flow regimes using various signal analysis methods. In this review, recent advances in time series signal analysis algorithms for stirred tank reactors are summarized, and the detailed methodologies are categorized into frequency domain methods, time-frequency domain methods, and state space methods. The strengths, limitations, and notable findings of each algorithm are highlighted. Additionally, the interrelationships between these methodologies are discussed, as well as the progress achieved in various applications. Future research directions and challenges are also outlined to provide an overview of current research trends in data mining of time series for analyzing flow regimes and chaotic signals. This review offers a comprehensive summary for extracting and characterizing fluid flow behavior and serves as a theoretical reference for optimizing the characterization of chaotic signals in future research endeavors.
GNSS time series analysis provides an effective method for research on the Earth's surface deformation, and it can be divided into two parts: deterministic models and stochastic models. The former can be described by several parameters, such as polynomial terms, periodic terms, offsets, and post-seismic models. The latter contains stochastic noise, whose estimation can be affected by the detection of the former parameters. If not enough parameters are assumed, modeling errors will occur and adversely affect the analysis results. In this study, we propose a processing strategy in which the commonly used first-order polynomial term can be replaced with different orders to better fit the GNSS time series of Crustal Movement Observation Network of China (CMONOC) stations. Initially, we use the Bayesian Information Criterion (BIC) to identify the best order within the range of 1-4 during the fitting process using the white noise plus power-law noise (WN+PL) model. Then, we compare the first-order and optimal-order fits in terms of the deterministic models of the GNSS time series, including the velocity and its uncertainty, and the amplitudes and initial phases of the annual signals. The results indicate that the first-order polynomial is not always the appropriate choice for the GNSS time series. The root mean square (RMS) reduction rates of almost all station components are positive, which means that the new fit with the optimal-order polynomial helps to reduce the RMS of the residual series. Most stations maintain a velocity difference (VD) within ±1 mm/yr, with percentages of 85.6%, 81.9%, and 63.4% in the North, East, and Up components, respectively. As for the annual signals, the numbers of stations with an amplitude difference (AD) within ±0.2 mm are 242, 239, and 200 in the three components, accounting for 99.6%, 98.4%, and 82.3%, respectively. This finding reminds us that detection of the optimal-order polynomial is necessary when we aim to acquire an accurate understanding of crustal movement features.
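As a rough illustration of the order-selection step described above, the sketch below picks the polynomial order (1 to 4) that minimizes the BIC under a plain least-squares fit; the study's actual model additionally includes periodic terms, offsets, post-seismic terms, and a white-plus-power-law noise model, none of which are reproduced here.

```python
# Simplified BIC-based choice of polynomial order for a synthetic "station" series,
# assuming white noise only (the WN+PL noise model of the study is not implemented).
import numpy as np

def bic_for_order(t, y, order):
    """Least-squares polynomial fit of the given order; return its BIC."""
    coeffs = np.polyfit(t, y, order)
    residuals = y - np.polyval(coeffs, t)
    n, k = len(y), order + 1
    sigma2 = np.mean(residuals ** 2)
    log_likelihood = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return k * np.log(n) - 2 * log_likelihood

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)                              # years
y = 3.0 * t + 0.2 * t ** 2 + rng.normal(0, 1, t.size)    # position in mm, with a weak quadratic

best_order = min(range(1, 5), key=lambda p: bic_for_order(t, y, p))
print("Optimal polynomial order by BIC:", best_order)
```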
The natural visibility graph method has been widely used in physiological signal analysis, but it fails to accurately handle signals with data points below the baseline. Such signals are common across various physiological measurements, including electroencephalograph (EEG) and functional magnetic resonance imaging (fMRI), and are crucial for insights into physiological phenomena. This study introduces a novel method, the baseline perspective visibility graph (BPVG), which can analyze time series by accurately capturing connectivity across data points both above and below the baseline. We present the BPVG construction process and validate its performance using simulated signals. Results demonstrate that BPVG accurately translates periodic, random, and fractal signals into regular, random, and scale-free networks respectively, exhibiting diverse degree distribution traits. Furthermore, we apply BPVG to classify Alzheimer's disease (AD) patients from healthy controls using EEG data and identify non-demented adults at varying dementia risk using resting-state fMRI (rs-fMRI) data. Utilizing degree distribution entropy derived from BPVG networks, our results exceed the best accuracy benchmark (77.01%) in EEG analysis, especially at channels F4 (78.46%) and O1 (81.54%). Additionally, our rs-fMRI analysis achieves a statistically significant classification accuracy of 76.74%. These findings highlight the effectiveness of BPVG in distinguishing various time series types and its practical utility in EEG and rs-fMRI analysis for early AD detection and dementia risk assessment. In conclusion, BPVG's validation across both simulated and real data confirms its capability to capture comprehensive information from time series, irrespective of baseline constraints, providing a novel method for studying neural physiological signals.
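For context, the sketch below implements the classic natural visibility graph criterion that BPVG builds on, plus a degree-distribution entropy of the resulting network; the baseline-aware handling of below-baseline points that distinguishes BPVG is the paper's contribution and is not reproduced here.

```python
# Classic natural visibility graph criterion and a degree-distribution entropy feature;
# this is a generic sketch, not the BPVG construction itself.
import numpy as np
from itertools import combinations

def natural_visibility_edges(y):
    """Edge list of the natural visibility graph of a 1-D series (indices as time)."""
    edges = []
    for a, b in combinations(range(len(y)), 2):
        visible = all(
            y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
            for c in range(a + 1, b)
        )
        if visible:
            edges.append((a, b))
    return edges

def degree_entropy(edges, n):
    """Shannon entropy of the degree distribution of an n-node graph."""
    degrees = np.zeros(n, dtype=int)
    for a, b in edges:
        degrees[a] += 1
        degrees[b] += 1
    _, counts = np.unique(degrees, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

signal = np.sin(np.linspace(0, 8 * np.pi, 200))           # a periodic test signal
edges = natural_visibility_edges(signal)
print("edges:", len(edges), "degree entropy:", degree_entropy(edges, len(signal)))
```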
Complementary metal oxide semiconductor (CMOS) aging mechanisms, including bias temperature instability (BTI), pose growing concerns about circuit reliability. BTI results in threshold voltage increases on CMOS transistors, causing delay shifts and timing violations in logic circuits. The amount of degradation depends on the circuit workload, which increases the challenge of accurate BTI aging prediction at design time. In this paper, a BTI prediction method for logic circuits based on statistical static timing analysis (SSTA) is proposed, especially considering the correlation between circuit workload and BTI degradation. It consists of a training phase, to discover the relationship between circuit scale and the required workload samples, and a prediction phase, to present the degradations under different workloads as Gaussian probability distributions. This method can predict the distribution of degradations with negligible errors and identify 50% more BTI-critical paths in an affordable time, compared with conventional methods.
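A toy numerical illustration of the output format described above: once the BTI-induced delay shift of a path under sampled workloads is summarized as a Gaussian, the probability of exceeding a timing budget follows directly; the delay-shift samples and slack value are invented, and the paper's SSTA machinery is not reproduced.

```python
# Summarize hypothetical workload-dependent delay shifts as a Gaussian and read off
# the probability that the shift exceeds the remaining slack; all numbers are made up.
import numpy as np
from scipy.stats import norm

workload_shifts_ps = np.array([18.0, 22.5, 20.1, 25.3, 19.8, 23.7])  # sampled shifts (ps)
mu, sigma = workload_shifts_ps.mean(), workload_shifts_ps.std(ddof=1)

remaining_slack_ps = 24.0
violation_probability = 1.0 - norm.cdf(remaining_slack_ps, loc=mu, scale=sigma)
print(f"P(delay shift > {remaining_slack_ps} ps) = {violation_probability:.3f}")
```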
This paper presents an investigation of the seismic behavior of reinforced concrete (RC) structures in which shear walls are the main lateral load-resisting elements and the participation of flat slab floor systems is not considered in the seismic design procedure. In this regard, the behavior of six prototype structures (with different heights and plan layouts) is investigated through nonlinear static and time history analyses implemented in the OpenSees platform. The results of the analyses are presented in terms of the behavior of the slab-column connections and their mode of failure at different loading stages. Moreover, the global response of the buildings is discussed in terms of some parameters, such as lateral overstrength due to the gravity flat slab-column frames. According to the nonlinear static analyses, in structures in which the slab-column connections were designed only for gravity loads, the slab-column connections exhibited a punching mode of failure even in the early stages of loading. However, the punching failure was eliminated in structures in which a minimum transverse reinforcement recommended in ACI 318 (2019) was provided in the slabs at joint regions. Furthermore, despite neglecting the contribution of gravity flat slab-column frames in the lateral load resistance of the structures, a relatively significant overstrength was imposed on the structures by the gravity frames.
This paper presents a new technique for measuring the bunch length of a high-energy electron beam at a bunch-by-bunch rate in storage rings. This technique uses the time–frequency-domain joint analysis of the bunch signal to obtain bunch-by-bunch and turn-by-turn longitudinal parameters, such as bunch length and synchronous phase. The bunch signal is obtained using a button electrode with a bandwidth of several gigahertz. The data acquisition device was a high-speed digital oscilloscope with a sampling rate of more than 10 GS/s, and the single-shot sampling data buffer covered thousands of turns. The bunch-length and synchronous phase information were extracted via offline calculations using Python scripts. The calibration coefficient of the system was determined using a commercial streak camera. Moreover, this technique was tested on two different storage rings and successfully captured various longitudinal transient processes during the harmonic cavity debugging process at the Shanghai Synchrotron Radiation Facility (SSRF), and longitudinal instabilities were observed during the single-bunch accumulation process at Hefei Light Source (HLS). For Gaussian-distribution bunches, the uncertainty of the bunch phase obtained using this technique was better than 0.2 ps, and the bunch-length uncertainty was better than 1 ps. The dynamic range exceeded 10 ms. This technology is a powerful and versatile beam diagnostic tool that can be conveniently deployed in high-energy electron storage rings.
The significant impact of earthquakes on human lives and the built environment underscores the extensive human and economic losses caused by structural collapses. Over the years, researchers have focused on improving seismic design to mitigate earthquake-induced damages and enhance structural performance. In this study, a specific reinforced concrete (RC) frame structure at Kyungpook National University, designed for educational purposes, is analyzed as a representative case. Utilizing SAP 2000, the research conducts a nonlinear time history analysis to assess the structural performance under seismic conditions. The primary objective is to evaluate the influence of different column section designs, while maintaining identical column section areas, on structural behavior. The study employs two distinct seismic waves from Abeno (ABN) and Takatori (TKT) for the analysis, comparing the structural performance under varying seismic conditions. Key aspects examined include displacement, base shear force, base moment, joint radians, and layer displacement angle. This research is anticipated to serve as a valuable reference for seismic restraint reinforcement work on RC buildings, enriching the methods used for evaluating structures through nonlinear time history analysis based on the synthetic seismic wave approach.
Aiming at the problem of on-line damage diagnosis in structural health monitoring (SHM), an algorithm of feature extraction and damage alarming based on auto-regressive moving-average (ARMA) time series analysis is presented. The monitoring data were first modeled as ARMA models, and a principal-component matrix derived from the AR coefficients of these models was utilized to establish the Mahalanobis-distance criterion functions. Then, a new damage-sensitive feature index, DDSF, is proposed. A hypothesis test involving the t-test method is further applied to obtain a decision of damage alarming, as the mean value of DDSF changes significantly after damage. The numerical results of a three-span-girder model show that the defined index is sensitive to subtle structural damage, and the proposed algorithm can be applied to on-line damage alarming in SHM.
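A rough sketch of the feature pipeline this abstract describes: ARMA fits on monitoring segments, AR coefficients collected as features, and a Mahalanobis distance to a healthy-state reference; the ARMA order, segment lengths, and synthetic signals are assumptions, and the DDSF index and t-test decision rule are not reproduced.

```python
# ARMA-coefficient features plus Mahalanobis distance to a healthy reference set;
# a generic sketch with synthetic data, not the paper's DDSF index.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def ar_features(segment, order=(4, 0, 2)):
    """Fit an ARMA model and return its AR coefficients as a feature vector."""
    return ARIMA(segment, order=order).fit().arparams

rng = np.random.default_rng(0)
healthy = np.vstack([ar_features(rng.normal(0, 1, 500)) for _ in range(20)])
mean = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def mahalanobis(feature):
    d = feature - mean
    return float(np.sqrt(d @ cov_inv @ d))

test_segment = rng.normal(0, 1.3, 500)        # stands in for a possibly damaged state
print("Mahalanobis distance:", mahalanobis(ar_features(test_segment)))
```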
This study predicted changes in atmospheric pollution during July 9-22, 2014 with SPSS, based on O3 data monitored over 13 successive weeks at 6 sites in Baoding City. The Ljung-Box Q-test and R² demonstrated that the prediction performance of the ARIMA model is good, and the model can be used to predict future changes in atmospheric pollutants.
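The model-adequacy checks named above can be illustrated in a few lines: fit an ARIMA model and apply the Ljung-Box Q-test to its residuals; the synthetic hourly series below stands in for the Baoding O3 data, which are not available here.

```python
# ARIMA fit plus Ljung-Box Q-test on the residuals; the series is synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(42)
o3 = pd.Series(80 + np.cumsum(rng.normal(0, 2, size=13 * 7 * 24)))  # 13 weeks of hourly values

fit = ARIMA(o3, order=(1, 1, 1)).fit()
residuals = fit.resid.iloc[1:]                   # drop the initialization residual
print(acorr_ljungbox(residuals, lags=[10, 20]))  # large p-values suggest white-noise residuals
```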
To improve the path slack of Field Programmable Gate Arrays (FPGAs), this paper proposes a timing slack optimization approach which utilizes the hybrid routing strategy of rip-up-retry and pathfinder. Firstly, the effect of process variations on path slack is analyzed, and by constructing a collocation table of the delay model that takes into account the multi-corner process, the complex statistical static timing analysis is successfully translated into a simple classical static timing analysis. Then, based on the hybrid routing strategy of rip-up-retry and pathfinder, by adjusting the critical path which detours a long distance, the critical path delay is reduced and the path slack is optimized. Experimental results show that, using the hybrid routing strategy, the number of paths with negative slack can be optimized (reduced) by 85.8% on average compared with the Versatile Place and Route (VPR) timing-driven routing algorithm, while the run-time is only increased by 15.02% on average.
Nonlinear variations in the coordinate time series of global navigation satellite system (GNSS) reference stations are strongly correlated with surface displacements caused by environmental loading effects, including atmospheric, hydrological, and nontidal ocean loading. Continuous improvements in the accuracy of surface mass loading products, performance of Earth models, and precise data-processing technologies have significantly advanced research on the effects of environmental loading on nonlinear variations in GNSS coordinate time series. However, owing to theoretical limitations, the lack of high spatiotemporal resolution surface mass observations, and the coupling of GNSS technology-related systematic errors, environmental loading and nonlinear GNSS reference station displacements remain inconsistent. The applicability and capability of these loading products across different regions also require further evaluation. This paper outlines methods for modeling environmental loading, surface mass loading products, and service organizations. In addition, it summarizes recent advances in applying environmental loading to address nonlinear variations in global and regional GNSS coordinate time series. Moreover, the scientific questions of existing studies are summarized, and insights into future research directions are provided. The complex nonlinear motion of reference stations is a major factor limiting the accuracy of the current terrestrial reference frame. Further refining the environmental load modeling method, establishing a surface mass distribution model with high spatiotemporal resolution and reliability, exploring other environmental load factors such as ice sheet and artificial mass-change effects, and developing an optimal data-processing model and strategy for reprocessing global reference station data consistently could contribute to the development of a millimeter-level nonlinear motion model for GNSS reference stations with actual physical significance and provide theoretical support for establishing a terrestrial reference frame with 1 mm accuracy by 2050.
In this paper, we propose a cross-reference method for nonlinear time series analysis in the semi-blind case, that is, where the dynamic equations modeling the time series are known but the corresponding parameters are not. The tasks of noise reduction and parameter estimation, which were previously fulfilled separately, are combined iteratively. With the positive interaction between the two processing modules, the method is somewhat superior. Some prior work can be viewed as special cases of this general framework. The simulations for noise reduction and parameter estimation of contaminated chaotic time series show improved performance of our method compared with previous work.
Recent advancements in smart-meter technology are transforming traditional power systems into intelligent smart grids. This offers substantial benefits across social, environmental, and economic dimensions. To effectively realize these advantages, fine-grained collection and analysis of smart meter data is essential. However, the high dimensionality and volume of such time series present significant challenges, including increased computational load, data transmission overhead, latency, and complexity in real-time analysis. This study proposes a novel, computationally efficient framework for feature extraction and selection tailored to smart meter time-series data. The approach begins with an extensive offline analysis, where features are derived from the time, frequency, and statistical domains to capture diverse signal characteristics. Various feature sets are fused and evaluated using robust machine learning classifiers to identify the most informative combinations for automated appliance categorization. The best-performing fused feature set undergoes further refinement using Analysis of Variance (ANOVA) to identify the most discriminative features. The mathematical models used to compute the selected features are optimized so that they can be extracted efficiently during online processing. Moreover, a notable dimension reduction is achieved, which facilitates data storage, transmission, and post-processing. Subsequently, a specifically designed LogitBoost (LB)-based ensemble of Random Forest base learners is used for automated classification. The proposed solution demonstrates high classification accuracy (97.93%) for the nine-class problem and dimension reduction (17.33-fold) with minimal front-end computational requirements, making it well-suited for real-world applications in smart grid environments.
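The ANOVA-based refinement step can be sketched with scikit-learn's F-test selector; the array shapes, the nine placeholder classes, and the choice of keeping 3 of 52 features (roughly the 17-fold reduction reported) are illustrative assumptions, not the paper's actual feature set.

```python
# One-way ANOVA F-test feature selection for a multi-class appliance problem;
# the data are random placeholders, used only to show the API flow.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(900, 52))        # 900 fused feature vectors, 52 candidate features
y = rng.integers(0, 9, size=900)      # nine appliance categories

selector = SelectKBest(score_func=f_classif, k=3).fit(X, y)
print("F-scores of kept features:", selector.scores_[selector.get_support()])
X_reduced = selector.transform(X)     # reduced representation passed on to the classifier
print("reduced shape:", X_reduced.shape)
```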
This study aims to investigate the minimum required seismic gap distance based on the avoidance of shear failure for reinforced concrete (RC) buildings with potential floor-to-column pounding. Twenty different adjacent models reflecting low- and mid-rise buildings were created. Dynamic analyses were performed by selecting 11 earthquake record pairs compatible with the Turkish Building Earthquake Code (TBEC-2018). Two different cases were considered to determine the minimum required seismic gap distance. In the first case (named Case-1), the gap distances between neighboring buildings were determined to avoid collisions during each acceleration record. The required distances calculated from the analyses were compared with the minimum seismic gap requirements of the TBEC-2018. The outcomes indicate that the α coefficient recommended in TBEC-2018 for adjacent buildings with potential floor-to-column pounding is sufficient for adjacent buildings with a period ratio of 1 to 1.5. The gap distances in the first case were then reduced by an iterative process to determine the distance at which the shear demand equals the shear strength (named Case-2). The calculated gap distances to prevent shear failure (Case-2) are approximately 6% to 19% lower than the distances determined for avoidance of pounding (Case-1).
SUMMARIES OF TOP NEWS STORIES CHINA Safeguarding Gaokao Fairness This year's gaokao, China's national college entrance exam, began on 7 June, with a record 13.35 million students taking part. As one of the country's most important annual events, the 2025 gaokao has seen extensive nation-wide efforts to ensure fairness, safety, and support for all candidates. Authorities have implemented a range of security and logistical measures to safeguard exam integrity. AI-powered monitoring systems have been rolled out in provinces like Jiangxi, Hubei, and Guangdong, enabling real-time behaviour analysis and early warnings without human intervention. These tools reduce pressure on staff and strengthen fairness.
Deep learning plays a vital role in real-life applications, for example object identification, human face recognition, speech recognition, biometrics identification, and short- and long-term forecasting of data. The main objective of our work is to predict the market performance of the Dhaka Stock Exchange (DSE) based on the daily closing price using different deep learning techniques. In this study, we have used the LSTM (Long Short-Term Memory) network to forecast the DSE data for the convenience of shareholders. We trained LSTM networks on the data and forecast the future time series, which was then compared with the test data. We computed the Root Mean Square Error (RMSE) value to scrutinize the error between the forecasted values and the test data, and reduced this error by updating the LSTM networks. As a consequence of this network refinement, the LSTM network provides strong performance that outperforms existing works in predicting stock market prices.
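A compact sketch of LSTM-based closing-price forecasting with an RMSE check on held-out data; the window length, layer sizes, training epochs, and synthetic price series are illustrative choices, not the authors' network or the DSE data.

```python
# Sliding-window LSTM forecaster with an RMSE check; synthetic prices stand in for DSE data.
import numpy as np
from tensorflow import keras

def make_windows(series, window=20):
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    return X[..., None], series[window:]          # (samples, timesteps, 1), targets

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 1200))  # stand-in for daily closing prices
split = 1000
X_train, y_train = make_windows(prices[:split])
X_test, y_test = make_windows(prices[split - 20:])

model = keras.Sequential([
    keras.Input(shape=(20, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=5, batch_size=32, verbose=0)

pred = model.predict(X_test, verbose=0).ravel()
print("Test RMSE:", float(np.sqrt(np.mean((pred - y_test) ** 2))))
```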
In high-risk industrial environments like nuclear power plants, precise defect identification and localization are essential for maintaining production stability and safety. However, the complexity of such a harsh environment leads to significant variations in the shape and size of the defects. To address this challenge, we propose the multivariate time series segmentation network (MSSN), which adopts a multiscale convolutional network with multi-stage and depth-separable convolutions for efficient feature extraction through variable-length templates. To tackle the classification difficulty caused by structural signal variance, MSSN employs logarithmic normalization to adjust instance distributions. Furthermore, it integrates classification with smoothing loss functions to accurately identify defect segments amid similar structural and defect signal subsequences. Evaluated on both the Mackey-Glass dataset and an industrial dataset, our algorithm achieves over 95% localization and demonstrates its capture capability on the synthetic dataset. In a nuclear plant's heat transfer tube dataset, it captures 90% of defect instances with a 75% middle localization F1 score.
Air pollution, specifically fine particulate matter (PM2.5), represents a critical environmental and public health concern due to its adverse effects on respiratory and cardiovascular systems. Accurate forecasting of PM2.5 concentrations is essential for mitigating health risks; however, the inherent nonlinearity and dynamic variability of air quality data present significant challenges. This study conducts a systematic evaluation of deep learning algorithms including Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and the hybrid CNN-LSTM, as well as statistical models, AutoRegressive Integrated Moving Average (ARIMA) and Maximum Likelihood Estimation (MLE), for hourly PM2.5 forecasting. Model performance is quantified using Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and the Coefficient of Determination (R²) metrics. The comparative analysis identifies optimal predictive approaches for air quality modeling, emphasizing computational efficiency and accuracy. Additionally, CNN classification performance is evaluated using a confusion matrix, accuracy, precision, and F1-score. The results demonstrate that the hybrid CNN-LSTM model outperforms standalone models, exhibiting lower error rates and higher R² values, thereby highlighting the efficacy of deep learning-based hybrid architectures in achieving robust and precise PM2.5 forecasting. This study underscores the potential of advanced computational techniques in enhancing air quality prediction systems for environmental and public health applications.
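A short worked example of the four regression metrics used in the comparison above; the two small arrays are placeholder PM2.5 values, not the study's data.

```python
# RMSE, MAE, MAPE, and R^2 computed from scratch for a handful of placeholder values.
import numpy as np

def forecast_metrics(y_true, y_pred):
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / y_true))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"RMSE": rmse, "MAE": mae, "MAPE_%": mape, "R2": r2}

y_true = np.array([35.0, 42.0, 55.0, 61.0, 48.0, 40.0])   # observed PM2.5 (ug/m3)
y_pred = np.array([33.0, 45.0, 52.0, 64.0, 47.0, 41.0])   # hypothetical model output
print(forecast_metrics(y_true, y_pred))
```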
This paper presents a novel approach to identify and correct gross errors in the microelectromechanical system (MEMS) gyroscope used in ground vehicles by means of time series analysis. According to the characteristics of the autocorrelation function (ACF) and partial autocorrelation function (PACF), an autoregressive integrated moving average (ARIMA) model is roughly constructed. The rough model is optimized by combining it with Akaike's information criterion (AIC), and the parameters are estimated based on the least squares algorithm. After validation testing, the model is utilized to forecast the next output on the basis of the previous measurement. When the difference between the measurement and its prediction exceeds the defined threshold, the measurement is identified as a gross error and remedied by its prediction. A case study on the yaw rate is performed to illustrate the developed algorithm. Experimental results demonstrate that the proposed approach can effectively distinguish gross errors and make reasonable remedies.
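A simplified sketch of the detect-and-remedy logic described above: forecast the next sample from an ARIMA model fitted to the recent yaw-rate history, flag the measurement as a gross error when it deviates from the prediction by more than a threshold, and substitute the prediction; the ARIMA order, threshold, and synthetic data are assumptions (the paper selects its model via ACF/PACF and AIC rather than fixing the order).

```python
# One-step ARIMA prediction used to screen a new gyroscope sample for gross errors;
# order, threshold, and the synthetic yaw-rate history are illustrative assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def screen_sample(history, measurement, order=(2, 1, 1), threshold=0.5):
    """Return (accepted_value, is_gross_error) for one incoming yaw-rate sample."""
    prediction = float(ARIMA(history, order=order).fit().forecast(steps=1)[0])
    if abs(measurement - prediction) > threshold:
        return prediction, True                   # remedy the gross error with the prediction
    return measurement, False

rng = np.random.default_rng(3)
yaw_rate = np.cumsum(rng.normal(0, 0.01, 200))          # smooth synthetic history (rad/s)
print(screen_sample(yaw_rate, yaw_rate[-1] + 2.0))      # an obviously corrupted reading
print(screen_sample(yaw_rate, yaw_rate[-1] + 0.005))    # a plausible reading
```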