The exponential expansion of the Internet of Things (IoT), Industrial Internet of Things (IIoT), and Transportation Management of Things (TMoT) produces vast amounts of real-time streaming data. Ensuring system dependability, operational efficiency, and security depends on the identification of anomalies in these dynamic and resource-constrained systems. Due to their high computational requirements and inability to efficiently process continuous data streams, traditional anomaly detection techniques often fail in IoT systems. This work presents a resource-efficient adaptive anomaly detection model for real-time streaming data in IoT systems. Extensive experiments were carried out on multiple real-world datasets, achieving an average accuracy of 96.06% with an execution time close to 7.5 milliseconds per streaming data point, demonstrating the model's potential for real-time, resource-constrained applications. The model uses Principal Component Analysis (PCA) for dimensionality reduction and a Z-score technique for anomaly detection. It maintains a low computational footprint with a sliding-window mechanism, enabling incremental data processing and identification of both transient and sustained anomalies without storing historical data. The system uses a Multivariate Linear Regression (MLR)-based imputation technique that estimates missing or corrupted sensor values, preserving data integrity prior to anomaly detection. The proposed solution is appropriate for many uses in smart cities, industrial automation, environmental monitoring, IoT security, and intelligent transportation systems, and is particularly well suited for resource-constrained edge devices.
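The detection core described above, an incremental Z-score computed over a bounded sliding window, can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: the window size, warm-up length, and 3-sigma threshold are assumed values, and the PCA and MLR-imputation stages are omitted.

```python
from collections import deque
import math

class SlidingZScoreDetector:
    """Flags a point as anomalous when its Z-score against a sliding
    window of recent values exceeds a threshold (3.0 assumed here)."""

    def __init__(self, window_size=50, threshold=3.0):
        self.window = deque(maxlen=window_size)  # bounded: no historical storage
        self.threshold = threshold

    def update(self, x):
        """Process one streaming value; return True if it is an anomaly."""
        is_anomaly = False
        if len(self.window) >= 10:  # warm-up before scoring (assumed length)
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > self.threshold:
                is_anomaly = True
        self.window.append(x)  # constant memory per point
        return is_anomaly

det = SlidingZScoreDetector()
# Noisy baseline around 10.0 with one injected spike (55.0).
stream = [10.0, 10.2, 9.8, 10.1, 9.9] * 8 + [55.0, 10.0]
flags = [det.update(x) for x in stream]
```

Because the window is a fixed-size deque, each point costs O(window) time and memory stays constant, matching the low-footprint, incremental-processing claim.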
With the widespread application of Internet of Things (IoT) technology, the processing of massive real-time streaming data poses significant challenges to the computational and data-processing capabilities of systems. Although distributed streaming data processing frameworks such as Apache Flink and Apache Spark Streaming provide solutions, meeting stringent response time requirements while ensuring high throughput and resource utilization remains an urgent problem. To address this, the study proposes a formal modeling approach based on Performance Evaluation Process Algebra (PEPA), which abstracts the core components and interactions of cloud-based distributed streaming data processing systems. Additionally, a generic service flow generation algorithm is introduced, enabling the automatic extraction of service flows from the PEPA model and the computation of key performance metrics, including response time, throughput, and resource utilization. The novelty of this work lies in the integration of PEPA-based formal modeling with the service flow generation algorithm, bridging the gap between formal modeling and practical performance evaluation for IoT systems. Simulation experiments demonstrate that optimizing the execution efficiency of components can significantly improve system performance. For instance, increasing the task execution rate from 10 to 100 improves system performance by 9.53%, while further increasing it to 200 results in a 21.58% improvement. However, diminishing returns are observed when the execution rate reaches 500, with only a 0.42% gain. Similarly, increasing the number of TaskManagers from 10 to 20 improves response time by 18.49%, but the improvement slows to 6.06% when increasing from 20 to 50, highlighting the importance of co-optimizing component efficiency and resource management to achieve substantial performance gains. This study provides a systematic framework for analyzing and optimizing the performance of IoT systems for large-scale real-time streaming data processing. The proposed approach not only identifies performance bottlenecks but also offers insights into improving system efficiency under different configurations and workloads.
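The diminishing-returns behavior reported above can be illustrated with a toy queueing sketch. This is not the paper's PEPA model: it treats the pipeline as a chain of independent M/M/1 stages, where mean response time is the sum of 1/(mu - lambda) over stages, with invented arrival and service rates, purely to show why raising one component's execution rate far beyond the arrival rate stops paying off.

```python
def response_time(arrival_rate, service_rates):
    """End-to-end mean response time of a tandem of M/M/1 stages:
    sum of 1/(mu - lambda); requires every mu > lambda for stability."""
    assert all(mu > arrival_rate for mu in service_rates)
    return sum(1.0 / (mu - arrival_rate) for mu in service_rates)

lam = 8.0                    # jobs/second entering the pipeline (invented)
other_stages = [20.0, 25.0]  # fixed service rates of the other stages (invented)
for task_rate in (10, 100, 200, 500):
    rt = response_time(lam, [task_rate] + other_stages)
    print(f"task execution rate {task_rate:>3}: response time {rt:.4f}s")
```

The first jump (10 to 100) removes the dominant 1/(10-8) term; by 500 the stage contributes almost nothing, so further speedups are invisible, mirroring the 0.42% gain observed in the paper's simulations.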
In the era of Big Data, a typical architecture for distributed real-time stream processing systems is the combination of Flume, Kafka, and Storm. As a distributed message system, Kafka offers horizontal scalability and high throughput, and is mainly deployed to address the speed mismatch between message producers and consumers. When using Kafka, we need to receive data sent by producers quickly and deliver data to consumers quickly; the performance of Kafka is therefore critical to the performance of the whole stream processing system. In this paper, we propose an improved design of real-time stream processing systems, focusing on improving Kafka's data loading process. We use kafkacat to transfer data from the source directly to a Kafka topic, which reduces network transmission. We also utilize a memory file system to accelerate data loading, which addresses the bottleneck and performance problems caused by disk I/O. Extensive experiments are conducted to evaluate the performance, showing the superiority of our improved design.
Three-dimensional (3D) single molecule localization microscopy (SMLM) plays an important role in biomedical applications, but its data processing is very complicated, and deep learning is a potential tool to solve this problem. The recently reported FD-DeepLoc algorithm, the state-of-the-art deep-learning-based 3D super-resolution localization algorithm, still falls short of the goal of online image processing, even though it has greatly improved data processing throughput. In this paper, a new algorithm, Lite-FD-DeepLoc, is developed on the basis of FD-DeepLoc to meet the online image processing requirements of 3D SMLM. The new algorithm uses feature compression to reduce the number of model parameters and combines this with pipelined programming to accelerate the inference process of the deep learning model. Results on simulated data show that Lite-FD-DeepLoc processes images about twice as fast as FD-DeepLoc with a slight decrease in localization accuracy, enabling real-time processing of 256×256-pixel images. Results on biological experimental data imply that Lite-FD-DeepLoc can successfully analyze data based on astigmatism and saddle-point engineering, with a global resolution of the reconstructed image equivalent to or even better than that of FD-DeepLoc.
In this work we discuss SDSPbMM, an integrated Strategy for Data Stream Processing based on Measurement Metadata, applied to an outpatient monitoring scenario. The measures associated with the attributes of the patient (entity) under monitoring come from heterogeneous data sources as data streams, together with metadata associated with the formal definition of a measurement and evaluation project. Such metadata supports patient analysis and monitoring in a more consistent way, facilitating, for instance: i) the early detection of problems typical of data, such as missing values and outliers; and ii) risk anticipation by means of online classification models adapted to the patient. We also performed a simulation using a prototype developed for outpatient monitoring in order to empirically analyze processing times and variable scalability, which sheds light on the feasibility of applying the prototype to real situations. In addition, we statistically analyze the results of the simulation in order to detect the components that introduce the most variability into the system.
This paper focuses on time efficiency for machine vision and intelligent photogrammetry, especially high-accuracy on-board real-time cloud detection. With the development of technology, data acquisition ability is growing continuously and the volume of raw data is increasing explosively; meanwhile, higher data accuracy requirements make the computation load heavier. This situation makes time efficiency extremely important. Moreover, the cloud cover rate of optical satellite imagery is up to approximately 50%, which seriously restricts the applications of on-board intelligent photogrammetry services. To meet on-board cloud detection requirements and offer valid input data to subsequent processing, this paper presents a stream-computing solution for high-accuracy on-board real-time cloud detection that follows the "bottom-up" understanding strategy of machine vision and uses multiple embedded GPUs with significant potential to be applied on-board. Without external memory, the data-parallel pipeline system based on the multiple processing modules of this solution can afford "stream-in, processing, stream-out" real-time stream computing. In experiments, images from the GF-2 satellite are used to validate the accuracy and performance of this approach, and the experimental results show that this solution not only improves cloud detection accuracy but also meets on-board real-time processing requirements.
With the advent of the IoT era, the amount of real-time data processed in data centers has increased explosively. As a result, stream mining, extracting useful knowledge from a huge amount of data in real time, is attracting more and more attention. It is said, however, that real-time stream processing will become more difficult in the near future, because the performance of processing applications continues to increase at a rate of 10%-15% each year, while the amount of data to be processed is increasing exponentially. In this study, we focused on a promising stream mining algorithm, specifically a Frequent Itemset Mining (FIsM) algorithm, and improved its performance using an FPGA. FIsM algorithms are important, basic data-mining techniques used to discover association rules from transactional databases. We improved a recently proposed approximate FIsM algorithm so that it would fit efficiently onto a hardware architecture, and then ran experiments on an FPGA. As a result, we achieved a speed 400% faster than the original algorithm implemented on a CPU. Moreover, our FPGA prototype showed a 20-fold speed improvement over the CPU version.
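For readers unfamiliar with FIsM, a plain-CPU support-counting baseline looks like the following. This is a generic sketch of frequent itemset mining, not the approximate algorithm or the FPGA architecture of the paper; the basket data and the size limit are invented for illustration.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support, max_size=2):
    """Count itemsets of size 1..max_size across all transactions and
    keep those whose support (fraction of transactions containing the
    itemset) meets min_support."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))  # canonical order so identical sets merge
        for size in range(1, max_size + 1):
            for combo in combinations(items, size):
                counts[combo] += 1
    return {s: c / n for s, c in counts.items() if c / n >= min_support}

baskets = [["bread", "milk"], ["bread", "butter"],
           ["bread", "milk", "butter"], ["milk"]]
freq = frequent_itemsets(baskets, min_support=0.5)
```

The inner counting loop is exactly the kind of massively parallel, memory-regular work that maps well onto FPGA pipelines, which is where the paper's 20-fold speedup comes from.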
A novel data stream partitioning method is proposed to resolve problems of range-aggregation continuous queries over parallel streams for the power industry. The first step of this method is to sample the data in parallel, implemented as an extended reservoir-sampling algorithm; a skip factor based on the change ratio of data values is introduced to describe the distribution characteristics of the data values adaptively. The second step is to partition the flux of the data streams evenly, implemented with two alternative equal-depth histogram generation algorithms that fit different cases: one for incremental maintenance based on heuristics and the other for periodical updates to generate an approximate partition vector. Experimental results on actual data prove that the method is efficient, practical, and suitable for processing time-varying data streams.
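The first step builds on reservoir sampling, which can be sketched as follows. Note that this is the classic algorithm only; the paper's extension and its adaptive skip factor are not reproduced here.

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Classic reservoir sampling: maintain a uniform random sample of
    k items from a stream of unknown length, in one pass and O(k) memory."""
    rng = random.Random(seed)
    reservoir = []
    for i, x in enumerate(stream):
        if i < k:
            reservoir.append(x)           # fill the reservoir first
        else:
            j = rng.randint(0, i)         # item i survives with prob k/(i+1)
            if j < k:
                reservoir[j] = x
    return reservoir

sample = reservoir_sample(range(10_000), k=100, seed=42)
```

Every stream element ends up in the final sample with equal probability k/n, which is what makes the downstream equal-depth histogram an unbiased picture of the stream's value distribution.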
Radio frequency identification (RFID)-enabled retail store management needs workflow optimization to facilitate real-time decision making. In this paper, complex event processing (CEP)-based RFID-enabled retail store management is studied, particularly focusing on automated shelf replenishment decisions. We define different types of event queries to describe retail store workflow actions over the RFID data streams on multiple tagging levels (e.g., item level and container level). Non-deterministic finite automaton (NFA)-based evaluation models are used to detect event patterns. To manage pattern match results in the process of event detection, an optimization algorithm is applied in the event model to share event detection results. A simulated RFID-enabled retail store is used to verify the effectiveness of the method; experimental results show that the algorithm is effective and can optimize the retail store management workflow.
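A minimal flavor of NFA-style pattern evaluation over an RFID event stream might look like this. The event type names and the single-run, non-overlapping matching policy are simplifying assumptions for illustration; real CEP engines track many concurrent partial matches and share them, which is what the paper's optimization addresses.

```python
def detect_sequence(events, pattern):
    """Run a linear NFA that advances one state per matching event type
    and reports each complete match of `pattern` (a SEQ of event types).
    Simplified: one active run at a time, contiguity not required."""
    state = 0
    matches = []
    start = None
    for i, (etype, payload) in enumerate(events):
        if etype == pattern[state]:
            if state == 0:
                start = i
            state += 1
            if state == len(pattern):     # accepting state reached
                matches.append((start, i, payload))
                state = 0                 # reset for the next match
    return matches

# Hypothetical RFID readings: item leaves the shelf, later exits at checkout.
events = [("SHELF_OUT", "tag42"), ("AISLE_READ", "tag42"),
          ("CHECKOUT", "tag42"), ("SHELF_OUT", "tag7")]
hits = detect_sequence(events, ("SHELF_OUT", "CHECKOUT"))
```

A SHELF_OUT with no matching CHECKOUT (like tag7 above) is the kind of partial match a replenishment query would keep alive while deciding whether the shelf needs restocking.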
A rapidly deployable dense seismic monitoring system capable of transmitting acquired data in real time and analyzing data automatically is crucial for seismic hazard mitigation after a major earthquake. However, it is rather difficult for current seismic nodal stations to transmit data in real time for an extended period, and it usually takes a great amount of time to process the acquired data manually. To monitor earthquakes in real time flexibly, we developed a mobile integrated seismic monitoring system consisting of newly developed nodal units with 4G telemetry and a real-time AI-assisted automatic data processing workflow. The integrated system is convenient to deploy and has been successfully applied in monitoring the aftershocks of the Yangbi M_S 6.4 earthquake that occurred on May 21, 2021 in Yangbi County, Dali, Yunnan, southwest China. The acquired seismic data are transmitted almost in real time through the 4G cellular network, and then processed automatically for event detection, positioning, magnitude calculation, and source mechanism inversion. Within tens of seconds to a couple of minutes at most, the final seismic attributes can be presented remotely to end users through the integrated system. From May 27 to June 17, the real-time system detected and located 7905 aftershocks in the Yangbi area before its internal batteries were exhausted, far more than the catalog provided by the China Earthquake Networks Center using the regional permanent stations. The initial application of this integrated real-time monitoring system is promising, and we anticipate the advent of a new era of Real-time Intelligent Array Seismology (RIAS) for better monitoring and understanding of the subsurface dynamic processes caused by Earth's internal forces as well as anthropogenic activities.
Purpose - The purpose of this paper is to propose a data prediction framework for scenarios that require forecasting demand from large-scale data sources, e.g., sensor networks, securities exchanges, and electric power secondary systems. Concretely, the proposed framework should handle several difficult requirements, including the management of gigantic data sources, the need for a fast self-adaptive algorithm, relatively accurate prediction of multiple time series, and real-time demand. Design/methodology/approach - First, the autoregressive integrated moving average (ARIMA)-based prediction algorithm is introduced. Second, the processing framework is designed, which includes a time-series data storage model based on HBase and a real-time distributed prediction platform based on Storm; the working principle of this platform is then described. Finally, a proof-of-concept testbed is illustrated to verify the proposed framework. Findings - Several tests based on Power Grid monitoring data are provided for the proposed framework. The experimental results indicate that the prediction data are basically consistent with the actual data, processing efficiency is relatively high, and resource consumption is reasonable. Originality/value - This paper provides a distributed real-time data prediction framework for large-scale time-series data that meets the requirements of effective management, prediction efficiency, accuracy, and high concurrency for massive data sources.
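As a rough illustration of the prediction step, the sketch below fits an AR(1) model by least squares and forecasts a few steps ahead. AR(1) is a minimal stand-in for the ARIMA model the framework actually uses, and the sample series is invented.

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = c + phi * x[t-1], the simplest
    autoregressive model (a toy stand-in for full ARIMA)."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    phi = cov / var
    c = my - phi * mx
    return c, phi

def forecast(series, steps, c, phi):
    """Iterate the fitted recurrence forward from the last observation."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

history = [50.0, 52.0, 51.0, 53.0, 52.5, 54.0, 53.5, 55.0]  # invented readings
c, phi = fit_ar1(history)
preds = forecast(history, 3, c, phi)
```

In the paper's architecture this fit-and-forecast loop would run inside a Storm bolt per monitored series, with the history pages read from the HBase storage model.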
This paper focuses on the parallel aggregation processing of data streams based on the shared-nothing architecture. A novel granularity-aware parallel aggregating model is proposed. It employs parallel sampling and linear regression to describe the characteristics of the data quantity in the query window in order to determine the partition granularity of tuples, and utilizes an equal-depth histogram to implement partitioning. This method can avoid data skew and reduce communication cost. Experimental results on both synthetic and actual data prove that the proposed method is efficient, practical, and suitable for processing time-varying data streams.
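The equal-depth histogram partitioning used here can be sketched as follows. This is a static, in-memory illustration; the incremental and streaming variants described in these papers are not shown.

```python
def equal_depth_partition(values, n_parts):
    """Build an equal-depth (equi-height) partition vector: boundary
    values chosen so each partition receives roughly the same number
    of tuples, which is what avoids data skew across workers."""
    s = sorted(values)
    n = len(s)
    return [s[(i * n) // n_parts] for i in range(1, n_parts)]

def assign(value, boundaries):
    """Route a tuple to the partition whose range contains its value."""
    for i, b in enumerate(boundaries):
        if value < b:
            return i
    return len(boundaries)

data = [5, 1, 9, 3, 7, 2, 8, 4, 6, 0]   # e.g. sampled values from a query window
bounds = equal_depth_partition(data, 2)
counts = [sum(assign(v, bounds) == p for v in data) for p in (0, 1)]
```

Unlike equal-width bucketing, the boundaries here follow the data's quantiles, so a heavily skewed value distribution still yields balanced per-partition tuple counts.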
Big data applications in healthcare have provided a variety of solutions to reduce costs, errors, and waste. This work aims to develop a real-time system based on big medical data processing in the cloud for the prediction of health issues. In the proposed scalable system, medical parameters are sent to Apache Spark to extract attributes from the data and apply the proposed machine learning algorithm. In this way, healthcare risks can be predicted and sent as alerts and recommendations to users and healthcare providers. The proposed work also aims to provide an effective recommendation system by using streaming medical data, historical data from a user's profile, and a knowledge database to make the most appropriate real-time recommendations and alerts based on the sensor's measurements. The proposed scalable system works by tweeting the health status attributes of users. Their cloud profile receives the streaming healthcare data in real time, and the health attributes are extracted and passed to a machine learning prediction algorithm that predicts the users' health status. Subsequently, their status can be sent on demand to healthcare providers. Machine learning algorithms can thus be applied to streaming healthcare data from wearables to provide users with insights into their health status. These algorithms can help healthcare providers and individuals focus on health risks and health status changes and consequently improve quality of life.
A high-performance, distributed, complex-event processing engine with improved scalability is proposed. In this new engine, stateless processing nodes are combined with distributed storage so that scale and performance can be expanded linearly. This design prevents single-node failure and makes the system highly reliable.
In this paper, we introduce a system architecture for a patient-centered mobile health monitoring (PCMHM) system that deploys different sensors to determine patients' activities, medical conditions, and the cause of an emergency event. The system combines and analyzes sensor data to produce detailed patient health information in real time. A central computational node with data-analyzing capability is used for sensor data integration and analysis. In addition to medical sensors, surrounding environmental sensors are also utilized to enhance the interpretation of the data and to improve medical diagnosis. The PCMHM system can provide on-demand patient health information via the Internet and track real-time daily activities and patients' health conditions; it also includes capabilities for assessing patients' posture and detecting falls.
With the advent of the big data era, real-time data analysis and decision-support systems have been recognized as essential tools for enhancing enterprise competitiveness and optimizing the decision-making process. This study explores development strategies for real-time data analysis and decision-support systems and analyzes their application status and future development trends in various industries. The article first reviews the basic concepts and importance of real-time data analysis and decision-support systems, and then discusses in detail key technical aspects such as system architecture, data collection and processing, analysis methods, and visualization techniques.
Substantial advancements have been achieved in Tunnel Boring Machine (TBM) technology and monitoring systems, yet the presence of missing data impedes accurate analysis and interpretation of TBM monitoring results. This study investigates the issue of missing data in extensive TBM datasets. Through a comprehensive literature review, we analyze the mechanisms behind missing TBM data and compare different imputation methods, including statistical analysis and machine learning algorithms. We also examine the impact of various missing patterns and missing rates on the efficacy of these methods. Finally, we propose a dynamic interpolation strategy tailored for TBM engineering sites. The results show that the K-Nearest Neighbors (KNN) and Random Forest (RF) algorithms achieve good imputation results; that the effectiveness of all methods decreases as the missing rate increases; and that block missing yields the poorest results, followed by mixed missing, with sporadic missing imputed best. On-site application results validate the proposed strategy's capability to achieve robust missing-value imputation, applicable in machine learning scenarios such as parameter optimization, attitude warning, and pressure prediction. These findings contribute to enhancing the efficiency of TBM missing-data processing, offering more effective support for large-scale TBM monitoring datasets.
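KNN imputation, one of the better-performing methods in the comparison, can be sketched from scratch as below. This is an illustrative toy (in practice one would reach for a library implementation such as scikit-learn's KNNImputer); the sensor readings, k, and the column layout are invented.

```python
import math

def knn_impute(rows, k=2):
    """Fill None entries with the mean of that column over the k nearest
    complete rows, measuring distance only on the columns the incomplete
    row actually has. A from-scratch sketch of KNN imputation."""
    complete = [r for r in rows if None not in r]
    out = []
    for r in rows:
        if None not in r:
            out.append(list(r))
            continue
        obs = [j for j, v in enumerate(r) if v is not None]
        nearest = sorted(
            complete,
            key=lambda c: math.dist([r[j] for j in obs], [c[j] for j in obs]),
        )[:k]
        out.append([v if v is not None
                    else sum(c[j] for c in nearest) / k
                    for j, v in enumerate(r)])
    return out

# Hypothetical TBM log rows: [cutterhead speed, thrust]; one thrust is missing.
readings = [[1.0, 10.0], [1.1, 11.0], [5.0, 50.0], [1.05, None]]
filled = knn_impute(readings, k=2)
```

Because the neighbors are picked from similar operating states (the two rows near speed 1.0, not the outlier at 5.0), the filled value respects the local structure of the monitoring data, which is why KNN handles sporadic missingness well and block missingness poorly.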
This study presents an innovative development of the exponentially weighted moving average (EWMA) control chart, explicitly adapted for the examination of time series exhibiting seasonal autoregressive moving average behavior, SARMA(1,1)_L, under exponential white noise. Unlike previous works that rely on simplified models such as AR(1) or assume independence, this research derives, for the first time, an exact two-sided Average Run Length (ARL) formula for the Modified EWMA chart under SARMA(1,1)_L conditions, using a mathematically rigorous Fredholm integral approach. The derived formulas are validated against numerical integral equation (NIE) solutions, showing strong agreement and a significantly reduced computational burden. Additionally, a performance comparison index (PCI) is introduced to assess the chart's detection capability. Results demonstrate that the proposed method exhibits superior sensitivity to mean shifts in autocorrelated environments, outperforming existing approaches. The findings offer a new, efficient framework for real-time quality control in complex seasonal processes, with potential applications in environmental monitoring and intelligent manufacturing systems.
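For orientation, a textbook EWMA chart can be sketched as follows. This is not the paper's Modified EWMA under SARMA(1,1)_L: it uses the standard i.i.d.-style asymptotic control limits rather than the derived ARL formula, and all parameter values are invented.

```python
import math

def ewma_chart(series, lam=0.2, target=0.0, sigma=1.0, L=3.0):
    """Compute the EWMA statistic z[t] = lam*x[t] + (1-lam)*z[t-1] and
    flag points outside the asymptotic control limits
    target +/- L*sigma*sqrt(lam/(2-lam))."""
    half_width = L * sigma * math.sqrt(lam / (2.0 - lam))
    z, stats, flags = target, [], []
    for x in series:
        z = lam * x + (1.0 - lam) * z   # exponentially weighted memory
        stats.append(z)
        flags.append(abs(z - target) > half_width)
    return stats, flags

# In-control noise followed by a sustained mean shift of about +2 sigma.
data = [0.1, -0.2, 0.05, -0.1, 0.15, 2.0, 2.1, 1.9, 2.05, 2.2]
stats, flags = ewma_chart(data)
```

The small smoothing constant lam makes the chart accumulate evidence across points, so it catches the sustained shift a few samples after onset while ignoring the in-control noise; the paper's contribution is characterizing exactly how long that delay (the ARL) is when the observations are seasonally autocorrelated.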
Funding (IoT anomaly detection study): funded by the Ongoing Research Funding Program (ORF-2025-890), King Saud University, Riyadh, Saudi Arabia, and supported by the Competitive Research Fund of the University of Aizu, Japan.
Funding (PEPA performance modeling study): funded by the Joint Project of Industry-University-Research of Jiangsu Province (Grant: BY20231146).
Funding (Kafka stream processing study): supported by the Research Fund of the National Key Laboratory of Computer Architecture under Grant No. CARCH201501, and the Open Project Program of the State Key Laboratory of Mathematical Engineering and Advanced Computing under Grant No. 2016A09.
Funding (Lite-FD-DeepLoc study): supported by the Start-up Fund from Hainan University (No. KYQD(ZR)-20077).
Abstract: In this work we discuss SDSPbMM, an integrated Strategy for Data Stream Processing based on Measurement Metadata, applied to an outpatient monitoring scenario. The measures associated with the attributes of the patient (entity) under monitoring come from heterogeneous data sources as data streams, together with metadata associated with the formal definition of a measurement and evaluation project. Such metadata supports patient analysis and monitoring in a more consistent way, facilitating, for instance: i) the early detection of problems typical of data, such as missing values and outliers; and ii) risk anticipation by means of online classification models adapted to the patient. We also performed a simulation using a prototype developed for outpatient monitoring, in order to analyze processing times and variable scalability empirically, which sheds light on the feasibility of applying the prototype to real situations. In addition, we statistically analyze the results of the simulation in order to detect the components that introduce the most variability into the system.
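Early detection of outliers in a stream, one of the data problems the measurement metadata is said to surface, can be sketched with a sliding-window z-score check. This is an illustrative stand-in, not SDSPbMM's actual detector; the window size and threshold below are arbitrary:

```python
from collections import deque
from statistics import mean, stdev

def zscore_outliers(stream, window=20, threshold=3.0):
    """Flag values whose z-score against a sliding window exceeds the threshold."""
    win = deque(maxlen=window)   # bounded history: no full-stream storage
    flags = []
    for x in stream:
        if len(win) >= 2:
            mu, sigma = mean(win), stdev(win)
            flags.append(sigma > 0 and abs(x - mu) / sigma > threshold)
        else:
            flags.append(False)  # not enough history yet
        win.append(x)
    return flags

# Hypothetical sensor readings: steady around 10 with one obvious spike.
data = [10.0] * 30 + [10.2, 9.9, 55.0, 10.1]
flags = zscore_outliers(data)
print(flags.index(True))  # → 32 (the position of the 55.0 spike)
```

The deque keeps memory constant per stream, which is what makes this kind of check viable inside an online monitoring pipeline.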
Funding: The National Natural Science Foundation of China (91438203, 91638301, 91438111, 41601476).
Abstract: This paper focuses on time efficiency for machine vision and intelligent photogrammetry, especially a high-accuracy on-board real-time cloud detection method. With the development of technology, data acquisition capability is growing continuously and the volume of raw data is increasing explosively. Meanwhile, because of higher data accuracy requirements, the computational load is also becoming heavier. This situation makes time efficiency extremely important. Moreover, the cloud cover rate of optical satellite imagery is up to approximately 50%, which seriously restricts the applications of on-board intelligent photogrammetry services. To meet on-board cloud detection requirements and provide valid input data to subsequent processing, this paper presents a stream-computing solution for high-accuracy on-board real-time cloud detection that follows the "bottom-up" understanding strategy of machine vision and uses multiple embedded GPUs with significant potential for on-board deployment. Without external memory, the data-parallel pipeline system built on the solution's multiple processing modules can sustain "stream-in, processing, stream-out" real-time stream computing. In experiments, images from the GF-2 satellite are used to validate the accuracy and performance of this approach, and the experimental results show that the solution not only improves cloud detection accuracy but also meets on-board real-time processing requirements.
Abstract: With the advent of the IoT era, the amount of real-time data processed in data centers has increased explosively. As a result, stream mining, i.e., extracting useful knowledge from a huge amount of data in real time, is attracting more and more attention. It is said, however, that real-time stream processing will become more difficult in the near future, because the performance of processing applications continues to increase at a rate of 10%-15% each year, while the amount of data to be processed is increasing exponentially. In this study, we focused on a promising stream mining algorithm, specifically a Frequent Itemset Mining (FIsM) algorithm, and improved its performance using an FPGA. FIsM algorithms are important, basic data-mining techniques used to discover association rules from transactional databases. We improved on a recently proposed approximate FIsM algorithm so that it would fit onto a hardware architecture efficiently, and then ran experiments on an FPGA. As a result, we achieved a speed 400% faster than the original algorithm implemented on a CPU. Moreover, our FPGA prototype showed a 20-fold speed improvement over the CPU version.
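For readers unfamiliar with FIsM, a minimal exact counter for frequent 1- and 2-itemsets conveys the underlying task. The paper's accelerated algorithm is approximate and hardware-oriented; this sketch only shows the basic support counting with Apriori-style pruning, over hypothetical transaction data:

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Exact frequent 1- and 2-itemsets with support >= min_support."""
    n = len(transactions)
    c1 = Counter(item for t in transactions for item in set(t))
    f1 = {frozenset([i]) for i, c in c1.items() if c / n >= min_support}
    # Apriori pruning: only pair items that are individually frequent.
    cand = Counter()
    for t in transactions:
        items = sorted(i for i in set(t) if frozenset([i]) in f1)
        for pair in combinations(items, 2):
            cand[frozenset(pair)] += 1
    f2 = {p for p, c in cand.items() if c / n >= min_support}
    return f1 | f2

txns = [{"bread", "milk"}, {"bread", "butter"},
        {"milk", "bread", "butter"}, {"milk"}]
result = frequent_itemsets(txns, min_support=0.5)
print(frozenset({"bread", "milk"}) in result)  # → True
```

Hardware acceleration targets exactly the counting loops above, which dominate runtime as transaction volume grows.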
Funding: The High Technology Research Plan of Jiangsu Province (No. BG2004034) and the Foundation of the Graduate Creative Program of Jiangsu Province (No. xm04-36).
Abstract: A novel data stream partitioning method is proposed to resolve problems of range-aggregation continuous queries over parallel streams for the power industry. The first step of this method is to sample the data in parallel, implemented as an extended reservoir-sampling algorithm. A skip factor based on the change ratio of data values is introduced to describe the distribution characteristics of the data values adaptively. The second step is to partition the flux of the data streams evenly, implemented with two alternative equal-depth histogram generation algorithms that fit different cases: one for incremental maintenance based on heuristics and the other for periodic updates to generate an approximate partition vector. The experimental results on actual data prove that the method is efficient, practical and suitable for time-varying data stream processing.
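The extended reservoir sampling in the first step builds on classic reservoir sampling (Algorithm R). A minimal sketch of the base algorithm, without the paper's skip factor, is:

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Keep a uniform random sample of k items from a stream of unknown length."""
    rng = rng or random.Random()
    reservoir = []
    for i, x in enumerate(stream):
        if i < k:
            reservoir.append(x)          # fill the reservoir first
        else:
            j = rng.randrange(i + 1)     # item i+1 survives with probability k/(i+1)
            if j < k:
                reservoir[j] = x
    return reservoir

sample = reservoir_sample(range(10_000), k=100, rng=random.Random(42))
print(len(sample))  # → 100
```

The skip factor described in the abstract would adapt how aggressively items are skipped based on how fast the data values are changing; that extension is not shown here.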
Funding: Supported by the National Social Science Fund (No. 16CTQ013), the Application Fundamental Research Foundation of Sichuan Province, China (No. 2017JY0011), and the Key Project of the Sichuan Provincial Department of Education, China (No. 2017GZ0333).
Abstract: Radio frequency identification (RFID) enabled retail store management needs workflow optimization to facilitate real-time decision making. In this paper, complex event processing (CEP) based RFID-enabled retail store management is studied, particularly focusing on automated shelf replenishment decisions. We define different types of event queries to describe retail store workflow actions over RFID data streams at multiple tagging levels (e.g., item level and container level). Non-deterministic finite automata (NFA) based evaluation models are used to detect event patterns. To manage pattern match results in the process of event detection, an optimization algorithm is applied in the event model to share event detection results. A simulated RFID-enabled retail store is used to verify the effectiveness of the method; experimental results show that the algorithm is effective and can optimize the retail store management workflow.
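The NFA-based evaluation idea can be sketched as a tiny sequence matcher over timestamped events. The event names and window below are hypothetical, and real CEP engines add selection strategies and the result sharing the paper describes, which this toy omits:

```python
def match_sequence(events, pattern, window):
    """Minimal NFA-style matcher: find occurrences of `pattern` event types,
    in order but not necessarily contiguous, within `window` time units."""
    partial = []        # active runs: (next index into pattern, start time)
    matches = []
    for etype, ts in events:
        partial = [(i, t0) for i, t0 in partial if ts - t0 <= window]  # expire
        advanced = []
        for i, t0 in partial:
            if etype == pattern[i]:
                if i + 1 == len(pattern):
                    matches.append((t0, ts))       # run reached accepting state
                else:
                    advanced.append((i + 1, t0))
            else:
                advanced.append((i, t0))           # skip-till-next-match semantics
        partial = advanced
        if etype == pattern[0] and len(pattern) > 1:
            partial.append((1, ts))                # every match of the first
    return matches                                 # symbol starts a new run

# Hypothetical shelf events: a "pick" followed by "shelf_low" within 5 time units.
evts = [("pick", 1), ("pick", 2), ("shelf_low", 3), ("pick", 9), ("shelf_low", 20)]
print(match_sequence(evts, ["pick", "shelf_low"], window=5))  # → [(1, 3), (2, 3)]
```

The last event pair is not reported because the 11-unit gap exceeds the window, which is how time-bounded replenishment rules suppress stale partial matches.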
Funding: Supported by the National Natural Science Foundation of China (under Grants 41874048, 41790464, 41790462).
Abstract: A rapidly deployable dense seismic monitoring system that can transmit acquired data in real time and analyze data automatically is crucial for seismic hazard mitigation after a major earthquake. However, it is rather difficult for current seismic nodal stations to transmit data in real time for an extended period of time, and it usually takes a great amount of time to process the acquired data manually. To monitor earthquakes in real time flexibly, we developed a mobile integrated seismic monitoring system consisting of newly developed nodal units with 4G telemetry and a real-time AI-assisted automatic data processing workflow. The integrated system is convenient to deploy and was successfully applied in monitoring the aftershocks of the Yangbi M_(S) 6.4 earthquake that occurred on May 21, 2021 in Yangbi County, Dali, Yunnan, southwest China. The acquired seismic data are transmitted almost in real time through the 4G cellular network, and then processed automatically for event detection, location, magnitude calculation and source mechanism inversion. Within tens of seconds to a couple of minutes at most, the final seismic attributes can be presented remotely to end users through the integrated system. From May 27 to June 17, the real-time system detected and located 7905 aftershocks in the Yangbi area before the internal batteries were exhausted, far more than the catalog provided by the China Earthquake Networks Center using the regional permanent stations. The initial application of this integrated real-time monitoring system is promising, and we anticipate the advent of a new era of Real-time Intelligent Array Seismology (RIAS), for better monitoring and understanding of the subsurface dynamic processes caused by Earth's internal forces as well as anthropogenic activities.
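The paper's AI-assisted detection workflow is not spelled out in this abstract; as background, the classical STA/LTA trigger against which automatic pickers are commonly benchmarked can be sketched as follows (synthetic trace, arbitrary parameters, not the authors' method):

```python
def sta_lta(trace, sta_len, lta_len, threshold):
    """Classic short-term / long-term average ratio trigger on |amplitude|."""
    env = [abs(x) for x in trace]           # simple amplitude envelope
    triggers = []
    for i in range(lta_len, len(env)):
        sta = sum(env[i - sta_len:i]) / sta_len   # short window: recent energy
        lta = sum(env[i - lta_len:i]) / lta_len   # long window: background level
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

# Quiet background with an amplitude burst starting at sample 60.
trace = [0.1] * 60 + [2.0] * 10 + [0.1] * 30
picks = sta_lta(trace, sta_len=5, lta_len=30, threshold=4.0)
print(picks[0])  # first sample index where the trigger fires, shortly after 60
```

A deep-learning picker replaces this fixed-ratio rule with a learned detector, but the stream-oriented windowing structure is similar.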
Funding: Supported by the Fundamental Research Funds for the Central Universities (2015XS72).
Abstract: Purpose - The purpose of this paper is to propose a data prediction framework for scenarios that require forecasting over large-scale data sources, e.g., sensor networks, securities exchanges, and electric power secondary systems. Concretely, the proposed framework should handle several difficult requirements, including the management of gigantic data sources, the need for a fast self-adaptive algorithm, relatively accurate prediction of multiple time series, and real-time demand. Design/methodology/approach - First, the autoregressive integrated moving average (ARIMA) based prediction algorithm is introduced. Second, the processing framework is designed, which includes a time-series data storage model based on HBase and a real-time distributed prediction platform based on Storm. Then, the working principle of this platform is described. Finally, a proof-of-concept testbed is illustrated to verify the proposed framework. Findings - Several tests based on power grid monitoring data are provided for the proposed framework. The experimental results indicate that prediction data are basically consistent with actual data, processing efficiency is relatively high, and resource consumption is reasonable. Originality/value - This paper provides a distributed real-time data prediction framework for large-scale time-series data, which can meet the requirements of effective management, prediction efficiency, accuracy, and high concurrency for massive data sources.
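As a toy illustration of the ARIMA-style forecasting the framework distributes (not the authors' implementation), an AR(1) model with intercept can be fit by ordinary least squares in a few lines:

```python
def fit_ar1(series):
    """Least-squares fit of x_t = a*x_{t-1} + b, a minimal stand-in for ARIMA."""
    xs, ys = series[:-1], series[1:]          # lagged pairs (x_{t-1}, x_t)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

def forecast(series, steps, a, b):
    """Iterate the fitted recurrence forward from the last observation."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = a * last + b
        out.append(last)
    return out

history = [float(2 * t) for t in range(1, 11)]  # 2.0, 4.0, ..., 20.0: exact trend
a, b = fit_ar1(history)
print(forecast(history, 1, a, b)[0])  # → 22.0
```

In the described platform, many such per-series models would run concurrently inside Storm bolts, with HBase holding the historical windows each fit reads from.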
Funding: Supported by the Foundation of the High Technology Project of Jiangsu (BG2004034) and the Foundation of the Graduate Creative Program of Jiangsu (xm04-36).
Abstract: This paper focuses on parallel aggregation processing of data streams based on a shared-nothing architecture. A novel granularity-aware parallel aggregating model is proposed. It employs parallel sampling and linear regression to describe the characteristics of the data quantity in the query window in order to determine the partition granularity of tuples, and utilizes an equal-depth histogram to implement partitioning. This method can avoid data skew and reduce communication cost. The experimental results on both synthetic data and actual data prove that the proposed method is efficient, practical and suitable for time-varying data stream processing.
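Equal-depth (equi-height) histogram partitioning, the mechanism this model uses to spread tuples evenly across nodes, can be sketched as follows; the boundary convention below is one of several reasonable choices, not necessarily the paper's:

```python
def equal_depth_boundaries(values, parts):
    """Split values into `parts` buckets of (near-)equal tuple count;
    returns the cut points, usable as a partition vector."""
    s = sorted(values)
    n = len(s)
    # boundary i is the value at rank i*n/parts; ties can degrade balance
    return [s[(i * n) // parts] for i in range(1, parts)]

vals = list(range(100))
print(equal_depth_boundaries(vals, 4))  # → [25, 50, 75]
```

Because each bucket holds roughly the same number of tuples regardless of the value distribution, routing by these boundaries avoids the data skew that equal-width ranges would suffer on non-uniform streams.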
Funding: This study was financially supported by the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), the Ministry of Health and Welfare (HI18C1216), and the Soonchunhyang University Research Fund.
Abstract: Big data applications in healthcare have provided a variety of solutions to reduce costs, errors, and waste. This work aims to develop a real-time system based on big medical data processing in the cloud for the prediction of health issues. In the proposed scalable system, medical parameters are sent to Apache Spark to extract attributes from the data and apply the proposed machine learning algorithm. In this way, healthcare risks can be predicted and sent as alerts and recommendations to users and healthcare providers. The proposed work also aims to provide an effective recommendation system by using streaming medical data, historical data from a user's profile, and a knowledge database to make the most appropriate real-time recommendations and alerts based on the sensors' measurements. The proposed scalable system works by tweeting the health status attributes of users. Their cloud profile receives the streaming healthcare data in real time, and a machine learning prediction algorithm extracts the health attributes to predict the users' health status. Subsequently, their status can be sent on demand to healthcare providers. Therefore, machine learning algorithms can be applied to streaming health care data from wearables and provide users with insights into their health status. These algorithms can help healthcare providers and individuals focus on health risks and health status changes, and consequently improve quality of life.
Abstract: A high-performance, distributed, complex-event processing engine with improved scalability is proposed. In this new engine, stateless processing nodes are combined with distributed storage so that scale and performance can be expanded linearly. This design prevents single-node failure and makes the system highly reliable.
Abstract: In this paper, we introduce a system architecture for a patient-centered mobile health monitoring (PCMHM) system that deploys different sensors to determine patients' activities, medical conditions, and the cause of an emergency event. The system combines and analyzes sensor data to produce patients' detailed health information in real time. A central computational node with data-analyzing capability is used for sensor data integration and analysis. In addition to medical sensors, surrounding environmental sensors are also utilized to enhance the interpretation of the data and to improve medical diagnosis. The PCMHM system has the ability to provide on-demand health information of patients via the Internet, and to track daily activities and patients' health condition in real time. The system also includes the capability to assess patients' posture and detect falls.
Abstract: With the advent of the big data era, real-time data analysis and decision-support systems have been recognized as essential tools for enhancing enterprise competitiveness and optimizing the decision-making process. This study aims to explore development strategies for real-time data analysis and decision-support systems, and to analyze their application status and future development trends in various industries. The article first reviews the basic concepts and importance of real-time data analysis and decision-support systems, and then discusses in detail key technical aspects such as system architecture, data collection and processing, analysis methods, and visualization techniques.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52409151), the Programme of the Shenzhen Key Laboratory of Green, Efficient and Intelligent Construction of Underground Metro Station (Programme No. ZDSYS20200923105200001), and the Science and Technology Major Project of Xizang Autonomous Region of China (XZ202201ZD0003G).
Abstract: Substantial advancements have been achieved in Tunnel Boring Machine (TBM) technology and monitoring systems, yet the presence of missing data impedes accurate analysis and interpretation of TBM monitoring results. This study investigates the issue of missing data in extensive TBM datasets. Through a comprehensive literature review, we analyze the mechanisms behind missing TBM data and compare different imputation methods, including statistical analysis and machine learning algorithms. We also examine the impact of various missing patterns and missing rates on the efficacy of these methods. Finally, we propose a dynamic interpolation strategy tailored for TBM engineering sites. The results show that the K-Nearest Neighbors (KNN) and Random Forest (RF) algorithms achieve good interpolation results; as the missing rate increases, the interpolation effect of all methods decreases; and block missing is the hardest pattern to interpolate, followed by mixed missing, while sporadic missing is interpolated best. On-site application results validate the proposed strategy's ability to achieve robust missing-value interpolation, applicable in ML scenarios such as parameter optimization, attitude warning, and pressure prediction. These findings help improve the efficiency of TBM missing-data processing, offering more effective support for large-scale TBM monitoring datasets.
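The KNN imputation the study finds effective can be shown in miniature. This toy version (hypothetical data, k=2, Euclidean distance over jointly observed columns) is illustrative only, not the study's implementation:

```python
import math

def knn_impute(rows, k=2):
    """Fill None entries with the column mean of the k nearest complete rows,
    where distance is computed over the columns both rows observe."""
    complete = [r for r in rows if None not in r]
    out = []
    for r in rows:
        if None not in r:
            out.append(list(r))
            continue
        def dist(c):
            obs = [(a, b) for a, b in zip(r, c) if a is not None]
            return math.sqrt(sum((a - b) ** 2 for a, b in obs))
        nearest = sorted(complete, key=dist)[:k]
        filled = [v if v is not None else sum(c[j] for c in nearest) / k
                  for j, v in enumerate(r)]
        out.append(filled)
    return out

# Hypothetical monitoring rows; the last row is missing its middle value.
rows = [[1.0, 2.0, 3.0], [1.1, 2.1, 3.1], [9.0, 9.0, 9.0], [1.0, None, 3.0]]
print(knn_impute(rows)[3][1])  # imputed from the two nearby rows, ≈ 2.05
```

The distant row [9.0, 9.0, 9.0] is excluded by the neighbor search, which is why KNN degrades gracefully under sporadic missing but struggles when whole blocks of neighbors are missing, matching the pattern ranking reported above.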
Funding: Financially supported by the National Research Council of Thailand (NRCT) under Contract No. N42A670894.
Abstract: This study presents an innovative development of the exponentially weighted moving average (EWMA) control chart, explicitly adapted to time series data exhibiting seasonal autoregressive moving average behavior, SARMA(1,1)_L, under exponential white noise. Unlike previous works that rely on simplified models such as AR(1) or assume independence, this research derives, for the first time, an exact two-sided Average Run Length (ARL) formula for the Modified EWMA chart under SARMA(1,1)_L conditions, using a mathematically rigorous Fredholm integral approach. The derived formulas are validated against numerical integral equation (NIE) solutions, showing strong agreement at a significantly reduced computational burden. Additionally, a performance comparison index (PCI) is introduced to assess the chart's detection capability. Results demonstrate that the proposed method exhibits superior sensitivity to mean shifts in autocorrelated environments, outperforming existing approaches. The findings offer a new, efficient framework for real-time quality control in complex seasonal processes, with potential applications in environmental monitoring and intelligent manufacturing systems.
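The plain EWMA chart that underlies the paper's SARMA-specific ARL derivation can be sketched with its exact time-varying control limits; the parameters below are common textbook defaults (lambda = 0.2, L = 3), not the paper's settings:

```python
def ewma_chart(data, lam=0.2, L=3.0, mu0=0.0, sigma=1.0):
    """EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1} with +/- L-sigma limits;
    returns the indices (1-based) of out-of-control points."""
    z = mu0
    alarms = []
    for t, x in enumerate(data, start=1):
        z = lam * x + (1 - lam) * z
        # exact (time-varying) variance of the EWMA statistic at time t
        var = (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * t)) * sigma ** 2
        limit = L * var ** 0.5
        if abs(z - mu0) > limit:
            alarms.append(t)
    return alarms

# In-control observations followed by a sustained 2-sigma mean shift at t = 11.
data = [0.0] * 10 + [2.0] * 10
print(ewma_chart(data)[0])  # → 14, i.e., the shift is flagged 4 samples in
```

The gap between the shift at t = 11 and the alarm at t = 14 is exactly the run length the paper's ARL formulas characterize analytically, there under SARMA-correlated rather than independent observations.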