The integration of the Internet of Things (IoT) into healthcare systems improves patient care, boosts operational efficiency, and contributes to cost-effective healthcare delivery. However, overcoming several associated challenges, such as data security, interoperability, and ethical concerns, is crucial to realizing the full potential of IoT in healthcare. Real-time anomaly detection plays a key role in protecting patient data and maintaining device integrity amidst the additional security risks posed by interconnected systems. In this context, this paper presents a novel method for healthcare data privacy analysis. The technique is based on the identification of anomalies in cloud-based IoT networks, and it is optimized using explainable artificial intelligence. For anomaly detection, the Radial Boltzmann Gaussian Temporal Fuzzy Network (RBGTFN) is used to perform information privacy analysis for healthcare data. Remora Colony Swarm Optimization is then used to carry out the optimization of the network. The performance of the model in identifying anomalies across a variety of healthcare data is evaluated in an experimental study that measures the accuracy, precision, latency, Quality of Service (QoS), and scalability of the model. A remarkable 95% precision, 93% latency, 89% quality of service, 98% detection accuracy, and 96% scalability were obtained by the suggested model, as shown by the findings.
The integration of IoT and Deep Learning (DL) has significantly advanced real-time health monitoring and predictive maintenance in prognostic and health management (PHM). Electrocardiograms (ECGs) are widely used for cardiovascular disease (CVD) diagnosis, but fluctuating signal patterns make classification challenging. Computer-assisted automated diagnostic tools that enhance ECG signal categorization using sophisticated algorithms and machine learning are helping healthcare practitioners manage greater patient populations. With this motivation, the study proposes a DL framework leveraging the PTB-XL ECG dataset to improve CVD diagnosis. Deep Transfer Learning (DTL) techniques extract features, followed by feature fusion to eliminate redundancy and retain the most informative features. Utilizing the African Vulture Optimization Algorithm (AVOA) for feature selection is more effective than the standard methods, as it offers an ideal balance between exploration and exploitation that results in an optimal set of features, improving classification performance while reducing redundancy. Various machine learning classifiers, including Support Vector Machine (SVM), eXtreme Gradient Boosting (XGBoost), Adaptive Boosting (AdaBoost), and Extreme Learning Machine (ELM), are used for further classification. Additionally, an ensemble model is developed to further improve accuracy. Experimental results demonstrate that the proposed model achieves the highest accuracy of 96.31%, highlighting its effectiveness in enhancing CVD diagnosis.
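As a rough illustration of the ensemble step described above (not the authors' exact configuration), the sketch below combines SVM, XGBoost, and AdaBoost in a soft-voting ensemble with scikit-learn; ELM is omitted because it has no standard scikit-learn implementation, and the feature matrix is a random placeholder standing in for the fused, AVOA-selected ECG features.

```python
# Minimal sketch of a soft-voting ensemble over SVM, XGBoost, and AdaBoost.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from xgboost import XGBClassifier

def build_ensemble():
    svm = SVC(kernel="rbf", probability=True)      # probability=True enables soft voting
    xgb = XGBClassifier(n_estimators=200, max_depth=4)
    ada = AdaBoostClassifier(n_estimators=100)
    return VotingClassifier(
        estimators=[("svm", svm), ("xgb", xgb), ("ada", ada)],
        voting="soft",                              # average predicted class probabilities
    )

# Placeholder data standing in for selected ECG features and binary labels.
X, y = np.random.rand(500, 32), np.random.randint(0, 2, 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ensemble = build_ensemble().fit(X_tr, y_tr)
print("hold-out accuracy:", ensemble.score(X_te, y_te))
```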
Floods and storm surges pose significant threats to coastal regions worldwide, demanding timely and accurate early warning systems (EWS) for disaster preparedness. Traditional numerical and statistical methods often fall short in capturing complex, nonlinear, and real-time environmental dynamics. In recent years, machine learning (ML) and deep learning (DL) techniques have emerged as promising alternatives for enhancing the accuracy, speed, and scalability of EWS. This review critically evaluates the evolution of ML models—such as Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM)—in coastal flood prediction, highlighting their architectures, data requirements, performance metrics, and implementation challenges. A unique contribution of this work is the synthesis of real-time deployment challenges including latency, edge-cloud tradeoffs, and policy-level integration, areas often overlooked in prior literature. Furthermore, the review presents a comparative framework of model performance across different geographic and hydrologic settings, offering actionable insights for researchers and practitioners. Limitations of current AI-driven models, such as interpretability, data scarcity, and generalization across regions, are discussed in detail. Finally, the paper outlines future research directions including hybrid modelling, transfer learning, explainable AI, and policy-aware alert systems. By bridging technical performance and operational feasibility, this review aims to guide the development of next-generation intelligent EWS for resilient and adaptive coastal management.
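For readers unfamiliar with the LSTM architectures this review surveys, the sketch below shows a minimal Keras LSTM regressor for surge-level prediction; the 24-step window, the five input features, and the synthetic training data are assumptions for illustration only.

```python
# Minimal sketch of an LSTM regressor of the kind surveyed for coastal flood /
# surge-level prediction; input features (e.g., tide, wind speed, pressure,
# rainfall, river stage) and shapes are hypothetical.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(24, 5)),   # 24 past time steps, 5 features
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),               # predicted surge level at t+1
])
model.compile(optimizer="adam", loss="mse")

# Synthetic placeholder data just to show the training call.
X = np.random.rand(1000, 24, 5).astype("float32")
y = np.random.rand(1000, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```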
With the rapid development of the Internet, people pay more and more attention to the protection of privacy. The second-generation onion routing system Tor is the most commonly used anonymous communication system and can be used to protect user privacy effectively. In recent years, Tor's congestion problem has become the focus of attention, as it can affect Tor's performance and even the user experience. Firstly, we investigate the causes of Tor network congestion and summarize several link scheduling algorithms proposed in recent years. Then we propose the link scheduling algorithm SWRR based on WRR (Weighted Round Robin). In this process, we design multiple weight functions and compare their performance under different congestion conditions, and the appropriate weight function is selected for use in our algorithm based on the experimental results. Finally, we compare the performance of SWRR with other link scheduling algorithms under different congestion conditions through experiments and verify the effectiveness of SWRR.
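For context on the baseline that SWRR extends, the sketch below implements plain Weighted Round Robin over per-circuit cell queues; the circuits, weights, and round count are illustrative, and SWRR's actual weight functions are not reproduced here.

```python
# Minimal sketch of classic Weighted Round Robin (WRR) link scheduling: each
# circuit gets a weight, and per round a circuit may send up to `weight` cells.
from collections import deque

def wrr_schedule(queues, weights, rounds=3):
    """queues: dict circuit_id -> deque of cells; weights: dict circuit_id -> int."""
    sent = []
    for _ in range(rounds):
        for cid, q in queues.items():
            for _ in range(weights.get(cid, 1)):
                if q:
                    sent.append((cid, q.popleft()))
    return sent

queues = {"c1": deque(["a1", "a2", "a3"]), "c2": deque(["b1", "b2", "b3", "b4"])}
weights = {"c1": 1, "c2": 2}   # c2 is favored, e.g., a lighter interactive circuit
print(wrr_schedule(queues, weights))
```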
In recent times, sixth generation (6G) communication technologies have become a hot research topic because of the maximum throughput and low delay services they offer to mobile users. 6G encompasses several heterogeneous resources and communication standards to ensure uninterrupted availability of service. At the same time, the development of 6G enables Unmanned Aerial Vehicles (UAVs) to offer cost- and time-efficient solutions to several applications like healthcare, surveillance, disaster management, etc. In UAV networks, energy efficiency and data collection are considered the major processes for high-quality network communication. But these procedures are challenging because of high mobility, unstable links, dynamic topology, and energy-restricted UAVs. These issues are addressed by the use of artificial intelligence (AI) and energy-efficient clustering techniques for UAVs in the 6G environment. With this inspiration, this work designs an artificial intelligence enabled cooperative cluster-based data collection technique for unmanned aerial vehicles (AECCDC-UAV) in the 6G environment. The proposed AECCDC-UAV technique aims to divide the UAV network into different clusters and allocate a cluster head (CH) to each cluster in such a way that the energy consumption (ECM) is minimized. The presented AECCDC-UAV technique involves a quasi-oppositional shuffled shepherd optimization (QOSSO) algorithm for selecting the CHs and constructing clusters. The QOSSO algorithm derives a fitness function involving three input parameters: residual energy of UAVs, distance to neighboring UAVs, and degree of UAVs. The performance of the AECCDC-UAV technique is validated in many aspects, and the obtained experimental values demonstrate promising results over recent state-of-the-art methods.
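As a hedged illustration of the kind of fitness function described (the paper's exact formulation is not reproduced), the sketch below scores a candidate CH from its residual energy, average distance to neighbouring UAVs, and node degree; the weights alpha, beta, gamma and the normalising maxima are assumptions.

```python
# Illustrative weighted fitness function for cluster-head (CH) selection.
def ch_fitness(residual_energy, avg_neighbor_distance, degree,
               e_max, d_max, deg_max, alpha=0.5, beta=0.3, gamma=0.2):
    energy_term = residual_energy / e_max                 # more residual energy is better
    distance_term = 1.0 - avg_neighbor_distance / d_max   # shorter distances are better
    degree_term = degree / deg_max                        # better-connected UAVs are better
    return alpha * energy_term + beta * distance_term + gamma * degree_term

# Example: UAV with 80% energy, 120 m average neighbour distance, 6 neighbours.
print(ch_fitness(0.8, 120.0, 6, e_max=1.0, d_max=500.0, deg_max=10))
```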
Microgrid (MG) technology has emerged to resolve the growing demand for energy. But because of its inconsistent output, it can result in various power quality (PQ) issues. PQ is a problem that is becoming more and more important for the reliability of power systems that use renewable energy sources. Similarly, the employment of nonlinear loads will introduce harmonics into the system and, as a result, cause distortions in the current and voltage waveforms as well as low power quality in the supply system. Thus, this research focuses on power quality enhancement in the MG using hybrid shunt filters. However, the performance of the filter mainly depends upon the design and stability of the controller. The efficiency of the proposed filter is enhanced by incorporating an enhanced adaptive fuzzy neural network (AFNN) controller. The performance of the proposed topology is examined in a MATLAB/Simulink environment, and experimental findings are provided to validate the effectiveness of this approach. Further, the results of the proposed controller are compared with Adaptive Fuzzy Back-Stepping (AFBS) and Adaptive Fuzzy Sliding (AFS) controllers to prove its superiority for power quality improvement in the MG. From the analysis, it can be observed that the proposed system reduces the total harmonic distortion to about 1.8%, which is below the acceptable limit standard.
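For reference, the 1.8% figure refers to total harmonic distortion, conventionally defined as follows (standard definition, not specific to this paper), where V_1 is the RMS fundamental component and V_n the n-th harmonic of the voltage or current waveform:

```latex
% Standard definition of total harmonic distortion (THD).
\mathrm{THD} = \frac{\sqrt{\sum_{n=2}^{\infty} V_n^{2}}}{V_1} \times 100\%
```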
Information extraction plays a vital role in natural language processing for extracting named entities and events from unstructured data. Due to the exponential data growth in the agricultural sector, extracting significant information has become a challenging task. Though existing deep learning-based techniques have been applied in smart agriculture for crop cultivation, crop disease detection, weed removal, and yield production, it is still difficult to find the semantics between extracted information due to the unswerving effects of weather, soil, pest, and fertilizer data. This paper consists of two parts. An initial phase proposes a data preprocessing technique for removal of ambiguity in input corpora, and the second phase proposes a novel deep learning-based long short-term memory with rectification in the Adam optimizer and a multilayer perceptron to find agricultural named entities, events, and the relations between them. The proposed algorithm has been trained and tested on four input corpora, i.e., agriculture, weather, soil, and pest & fertilizers. The experimental results have been compared with existing techniques, and it was observed that the proposed algorithm outperforms Weighted-SOM, LSTM+RAO, PLR-DBN, KNN, and Naïve Bayes on standard parameters like accuracy, sensitivity, and specificity.
The pupil recognition method is helpful in many real-time systems, including ophthalmology testing devices, wheelchair assistance, and so on. Pupil detection is a very difficult process across a wide range of datasets due to problems caused by varying pupil size and occlusion by eyelids and eyelashes. Deep Convolutional Neural Networks (DCNN) are being used in pupil recognition systems and have shown promising results in terms of accuracy. To improve accuracy and cope with larger datasets, this research work proposes BOC (BAT Optimized CNN)-IrisNet, which optimizes the input weights and hidden layers of a DCNN using the evolutionary BAT algorithm to efficiently find the human eye pupil region. The proposed method is based on a very deep architecture and many tricks from recently developed popular CNNs. Experimental results show that BOC-IrisNet can efficiently model iris microstructures and provides a stable, discriminating iris representation that is lightweight, easy to implement, and of cutting-edge accuracy. Finally, a region-based black box method for determining pupil center coordinates was introduced. The proposed architecture was tested using various iris databases, including the CASIA (Chinese Academy of Sciences Institute of Automation) Iris V4 dataset, on which it achieves 99.5% sensitivity and 99.75% accuracy; the IIT (Indian Institute of Technology) Delhi dataset, on which it achieves 99.35% specificity; and the MMU (Multimedia University) dataset, on which it achieves 99.45% accuracy, which is higher than the existing architectures.
Irretrievable loss of vision is the predominant result of glaucoma in the retina. Recently, multiple approaches have paid attention to the automatic detection of glaucoma on fundus images. Due to the interlacing of blood vessels and the herculean task involved in glaucoma detection, identifying the exactly affected site of the optic disc, whether the cup is small or big in size, is deemed challenging. A Spatially Based Ellipse Fitting Curve Model (SBEFCM) classification is suggested based on the ensemble for a reliable diagnosis of glaucoma in the Optic Cup (OC) and Optic Disc (OD) boundary, respectively. This research deploys Ensemble Convolutional Neural Network (CNN) classification for classifying Glaucoma or Diabetic Retinopathy (DR). The detection of the boundary between the OC and the OD is performed by the SBEFCM, which is the latest weighted ellipse fitting model. The SBEFCM, which enhances and widens the multi-ellipse fitting technique, is proposed here. There is preprocessing of the input fundus image besides segmentation of blood vessels to avoid interlacing of surrounding tissues and blood vessels. The determination of the OC and OD boundary, which characterizes many output factors for glaucoma detection, has been developed through Ensemble CNN classification, which accurately measures sensitivity, specificity, precision, and Area Under the receiver operating characteristic Curve (AUC) values via the innovative SBEFCM. In comparison, the proposed Ensemble CNN significantly outperformed the current methods.
A wireless sensor network (WSN) spatially distributes independent sensors to monitor physical and environmental characteristics such as temperature, sound, and pressure, and also supports different applications such as battlefield inspection and biological detection. The Constrained Motion and Sensor (CMS) model represents the features and explains k-step reachability testing to describe the states. The description and calculation based on the CMS model do not solve the problem for mobile robots. The ADD framework based on monitoring radio measurements creates a threshold, but these methods are not effective for dynamic coverage of complex environments. In this paper, a Localized Coverage based on Shape and Area Detection (LCSAD) framework is developed to increase the dynamic coverage using mobile robots. To facilitate the measurement in mobile robots, two algorithms are designed to identify whether an area is a coverage hole or not: Localized Geometric Voronoi Hexagon (LGVH) and Acquaintance Area Hexagon (AAH). LGVH senses all the shapes, and it is simple for it to show all the boundary area nodes. The AAH-based algorithm simply takes directional information by locating the local and global convex points of the coverage area. Both algorithms are applied to WSNs of random topologies. The simulation results show that the proposed LCSAD framework attains minimal energy utilization and lower waiting time, and also achieves higher scalability, throughput, delivery rate, and 8% higher maximal coverage connectivity in the sensor network compared to state-of-the-art works.
The objective of this research is to examine the use of feature selection and classification methods for distinguishing different types of brain tumors. A brain tumor is characterized by an anomalous proliferation of brain cells that can be either benign or malignant. Most tumors are misdiagnosed due to the variability and complexity of lesions, which reduces the survival rate of patients. Diagnosis of brain tumors via computer vision algorithms is a challenging task. Segmentation and classification of brain tumors are currently among the most essential surgical and pharmaceutical procedures. Traditional brain tumor identification techniques require manual segmentation or handcrafted feature extraction that is error-prone and time-consuming. Hence the proposed research work is mainly focused on medical image processing, which takes Magnetic Resonance Imaging (MRI) images as input and performs preprocessing, segmentation, feature extraction, feature selection, similarity measurement, and classification steps for identifying brain tumors. Initially, a median filter is applied to the input image to reduce the noise. The graph-cut segmentation technique is used to segment the tumor region. The texture feature is extracted from the output of the segmented image. The extracted features are selected using the Ant Colony Optimization (ACO) algorithm to improve the performance of the classifier. This probabilistic approach is used to solve computing issues. The Euclidean distance is used to calculate the degree of similarity for each extracted feature. The selected feature values are given to the Relevance Vector Machine (RVM), which is a multi-class classification technique. Finally, the tumor is classified as abnormal or normal. The experimental results reveal that the proposed RVM technique gives a better accuracy of 98.87% when compared to the traditional Support Vector Machine (SVM) technique.
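Two of the pipeline steps named above, median filtering and Euclidean-distance similarity, are simple enough to sketch directly; the code below is an illustration with NumPy/SciPy, and the segmentation, ACO selection, and RVM stages are not reproduced.

```python
# Minimal sketch of median-filter preprocessing and Euclidean-distance
# similarity between texture feature vectors.
import numpy as np
from scipy.ndimage import median_filter

def preprocess(slice_2d, size=3):
    """Apply a median filter to suppress noise in a 2-D MRI slice."""
    return median_filter(slice_2d, size=size)

def euclidean_similarity(feat_a, feat_b):
    """Degree of similarity between two feature vectors (smaller distance = more similar)."""
    return float(np.linalg.norm(np.asarray(feat_a) - np.asarray(feat_b)))

noisy = np.random.rand(64, 64)       # placeholder slice
denoised = preprocess(noisy)
print(euclidean_similarity([0.2, 0.5, 0.1], [0.25, 0.45, 0.2]))
```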
A new method of adaptable rendering for interaction in a Virtual Environment (VE) through different visual acuity equations is proposed. An acuity factor equation for luminance vision is first given. Secondly, five equations that calculate visual acuity through visual acuity factors are presented, and an adaptive rendering strategy based on the different visual acuity equations is given. The VE system may select one of them on the basis of the host's load and thereby select a level of detail (LOD) for each model to be rendered. A coarser LOD is selected where the visual acuity is lower, and a better LOD is used where it is higher. This method is tested through experiments, and the experimental results show that it is effective.
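A minimal sketch of the selection rule described above, assuming illustrative acuity thresholds rather than the paper's calibrated equations:

```python
# Map lower computed visual acuity to coarser LOD levels (illustrative thresholds).
def select_lod(acuity: float, num_lods: int = 4) -> int:
    """Return 0 for the finest LOD and num_lods-1 for the coarsest."""
    thresholds = [0.75, 0.5, 0.25]            # acuity cut-offs, finest to coarsest
    for level, t in enumerate(thresholds[: num_lods - 1]):
        if acuity >= t:
            return level
    return num_lods - 1

for a in (0.9, 0.6, 0.3, 0.1):
    print(f"acuity={a:.1f} -> LOD {select_lod(a)}")
```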
Liver segmentation is one of the challenging tasks in detecting and classifying liver tumors from Computed Tomography (CT) images. The segmentation of the hepatic organ is a more intricate task, owing to the fact that it possesses a sizeable quantum of vascularization. This paper proposes an algorithm for automatic seed point selection using an energy feature for use in a level set algorithm for segmentation of the liver region in CT scans. The effectiveness of the method can be determined when it is used in a model to classify liver CT images as tumorous or not. This involves segmentation of the region of interest (ROI) from the segmented liver, extraction of shape and texture features from the segmented ROI, and classification of the ROIs as tumorous or not by using a classifier based on the extracted features. In this work, the proposed seed point selection technique has been used in a level set algorithm for segmentation of the liver region in CT scans, and the ROIs have been extracted using Fuzzy C-Means (FCM) clustering, one of the algorithms used to segment images. The dataset used in this method has been collected from various repositories and scan centers. The proposed segmentation model reduces the area overlap error, offering the intended accuracy and consistency. It gives better results when compared with other existing algorithms. Fast execution in a short span of time is another advantage of this method, which in turn helps the radiologist to ascertain abnormalities instantly.
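Since Fuzzy C-Means is the ROI-extraction step named above, a compact NumPy sketch of the standard FCM iteration is given below; the cluster count, fuzzifier, and one-dimensional sample data are illustrative assumptions.

```python
# Minimal NumPy sketch of standard Fuzzy C-Means (FCM) clustering.
import numpy as np

def fcm(X, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """X: (n_samples, n_features). Returns (cluster centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                      # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]     # fuzzy-weighted means
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2 / (m - 1)))
        U_new /= U_new.sum(axis=1, keepdims=True)          # standard membership update
        if np.linalg.norm(U_new - U) < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Two well-separated 1-D intensity groups as placeholder data.
X = np.vstack([np.random.rand(50, 1), np.random.rand(50, 1) + 2.0])
centers, U = fcm(X, c=2)
print(centers.ravel())
```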
Cloud computing is an emerging paradigm with many applications that are integrated with IT organizations having the freedom to migrate services between different physical servers. The Analytic Hierarchy Process (AHP) with a pairwise comparison matrix technique has been used for serving resources to applications. AHP is a mathematical technique for multi-criteria decision-making used in cloud computing. The growth of cloud computing for resource allocation is sudden and raises complex quality-of-service issues when selecting applications. Finally, based on the selected criteria, applications are ranked using the pairwise comparison matrix of AHP to determine the most effective scheme. The presented AHP technique represents a well-balanced multi-criteria synthesis of the priorities of the various factors affecting applications that must be taken into consideration when making complex decisions of this nature. Keeping in view the wide range of applications of cloud computing, an attempt has been made to develop a multiple-criteria decision-making model.
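The pairwise-comparison step of AHP can be sketched concretely; the code below derives a priority vector with the common geometric-mean approximation and checks Saaty's consistency ratio, using a hypothetical 3x3 criteria matrix rather than anything from this paper.

```python
# Minimal sketch of AHP priority derivation from a pairwise comparison matrix.
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_priorities(A):
    A = np.asarray(A, dtype=float)
    w = np.prod(A, axis=1) ** (1.0 / A.shape[0])       # row geometric means
    w /= w.sum()                                        # normalised priority vector
    lam_max = float(np.mean((A @ w) / w))               # principal eigenvalue estimate
    n = A.shape[0]
    ci = (lam_max - n) / (n - 1)                        # consistency index
    cr = ci / RI[n] if RI.get(n, 0) > 0 else 0.0        # consistency ratio (< 0.1 acceptable)
    return w, cr

# Hypothetical criteria: cost vs. performance vs. availability.
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
weights, cr = ahp_priorities(A)
print("priorities:", weights.round(3), "CR:", round(cr, 3))
```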
Cognitive radio technology makes efficient use of the valuable radio frequency spectrum in a non-interfering manner to solve the problem of spectrum scarcity. This paper aims to design a scheme for the concurrent use of licensed frequencies by Underlay Cognitive Users (UCUs). We develop a new receiver-initiated Medium Access Control (MAC) protocol to facilitate the selection of alternative reliable carrier frequencies. A circuit is designed to establish reliable carrier selection based on the Received Signal Strength Indicator (RSSI) at the receiving end. Based on both packet-level simulations and various performance parameters, a comparison is carried out among conventional techniques, including the Multiple Access with Collision Avoidance (MACA) and MACA by invitation (MACA-BI) techniques, and our scheme. The simulated results demonstrate that when conventional techniques are used, the system overhead time increases from 0.5 s on the first attempt to 16.5 s on the sixth attempt. In the proposed scheme under the same failure condition, the overhead time varies from 0.5 s to 2 s. This improvement is due to the complete elimination of the exponential waiting time that occurs during failed transmissions. An average efficiency of 60% is achieved with our scheme, while only 43% and 34% average efficiencies are achieved with the MACA and MACA-BI techniques, respectively. The throughput performance of our scheme on the fourth attempt is 7 Mbps, whereas for the MACA and MACA-BI protocols, it is 1.9 Mbps and 2.2 Mbps, respectively.
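A minimal sketch of receiver-side carrier selection from RSSI readings, with a placeholder threshold and dBm values rather than the paper's circuit parameters:

```python
# Keep candidate carriers whose measured RSSI exceeds a reliability threshold
# and pick the strongest one (illustrative rule only).
def select_carrier(rssi_dbm: dict, threshold_dbm: float = -85.0):
    """rssi_dbm: carrier id -> measured RSSI at the receiver, in dBm."""
    reliable = {c: r for c, r in rssi_dbm.items() if r >= threshold_dbm}
    if not reliable:
        return None                      # no reliable alternative; defer transmission
    return max(reliable, key=reliable.get)

readings = {"f1": -92.0, "f2": -78.5, "f3": -81.0}
print(select_carrier(readings))          # -> 'f2'
```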
A recommender system is an approach used by e-commerce to provide a smoother user experience. Sequential pattern mining is a data mining technique used to identify co-occurrence relationships by taking into account the order of transactions. This work presents the implementation of sequential pattern mining for recommender systems within the domain of e-commerce. It executes the systolic tree algorithm for mining the frequent patterns to yield feasible rules for the recommender system. The feature selection's objective is to pick a feature subset having the least feature similarity as well as the highest relevancy with the target class. This mitigates the feature vector's dimensionality by eliminating redundant, irrelevant, or noisy data. This work presents a new hybrid recommender system based on optimized feature selection and a systolic tree. The features were extracted using Term Frequency-Inverse Document Frequency (TF-IDF), and feature selection was performed with the utilization of River Formation Dynamics (RFD) and the Particle Swarm Optimization (PSO) algorithm. The systolic tree is used for pattern mining, and based on this, the recommendations are given. The proposed methods were evaluated using the MovieLens dataset, and the experimental outcomes confirmed the efficiency of the techniques. It was observed that with RFD feature selection, systolic tree frequent pattern mining, and collaborative filtering, a precision of 0.89 was achieved.
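The TF-IDF stage is standard enough to sketch; the snippet below extracts TF-IDF vectors with scikit-learn and ranks items by cosine similarity, while the RFD/PSO selection and systolic-tree mining steps are not reproduced, and the item descriptions are placeholders for MovieLens metadata.

```python
# Minimal sketch of TF-IDF feature extraction plus a similarity-based recommendation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = [
    "space adventure science fiction",
    "romantic comedy in paris",
    "science fiction thriller about space travel",
]
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(items)                    # sparse TF-IDF feature matrix

# Recommend items similar to item 0 by cosine similarity of TF-IDF vectors.
sims = cosine_similarity(X[0], X).ravel()
print(sorted(range(len(items)), key=lambda i: -sims[i])[1:])   # most similar first, excluding itself
```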
The high performance of state-of-the-art deep neural networks (DNNs) is acquired at the cost of huge consumption of computing resources. Quantization of networks has recently been recognized as a promising solution to this problem that can significantly reduce resource usage. However, previous quantization works have mostly focused on DNN inference, and very few works have addressed the challenges of DNN training. In this paper, we leverage a dynamic fixed-point (DFP) quantization algorithm and a stochastic rounding (SR) strategy to develop fully quantized 8-bit neural networks targeting low-bitwidth training. The experiments show that, in comparison to the full-precision networks, the accuracy drop of our quantized convolutional neural networks (CNNs) can be less than 2%, even when applied to deep models evaluated on the ImageNet dataset. Additionally, our 8-bit GNMT translation network can achieve almost identical BLEU to the full-precision network. We further implement a prototype on FPGA, and the synthesis shows that the low-bitwidth training scheme can reduce resource usage significantly.
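A minimal NumPy sketch of the two ingredients named above, dynamic fixed-point scaling and stochastic rounding, is given below; the per-tensor shared-exponent rule is a common choice and an assumption here, not necessarily the paper's.

```python
# 8-bit dynamic fixed-point quantization with stochastic rounding: a shared
# power-of-two scale is chosen from the tensor's dynamic range, and rounding
# up/down is randomised in proportion to the fractional part so the rounding
# error is unbiased in expectation.
import numpy as np

def dfp8_quantize(x, bits=8, rng=np.random.default_rng(0)):
    qmax = 2 ** (bits - 1) - 1                                    # 127 for 8 bits
    exp = int(np.ceil(np.log2(np.max(np.abs(x)) + 1e-12))) - (bits - 1)
    scale = 2.0 ** exp                                            # shared power-of-two scale
    scaled = x / scale
    floor = np.floor(scaled)
    frac = scaled - floor
    q = floor + (rng.random(x.shape) < frac)                      # stochastic rounding
    q = np.clip(q, -qmax - 1, qmax)
    return q.astype(np.int8), scale

x = np.random.randn(4, 4).astype(np.float32)
q, scale = dfp8_quantize(x)
print("max abs reconstruction error:", np.max(np.abs(q * scale - x)))
```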
A severe problem in modern information systems is digital media tampering along with fake information. Even though there have been enhancements in image development, image forgery, either by the photographer or via image manipulations, is also carried out in parallel. Numerous studies have concentrated on how to identify such manipulated media or information manually as well as automatically, thus conquering complicated forgery methodologies performed with effortlessly obtainable, technologically enhanced instruments. However, high complexity affects the developed methods. Presently, it is complicated to resolve the issue of the speed-accuracy trade-off. For tackling these challenges, this article puts forward a quick and effective Copy-Move Forgery Detection (CMFD) system utilizing a novel Quad-sort Moth Flame (QMF) Light Gradient Boosting Machine (QMF-LightGBM). Utilizing a Borel Transform (BT)-based Wiener Filter (BWF) and resizing, the input images are initially pre-processed by eliminating noise. After that, the pre-processed images, partitioned into a number of grids, are segmented utilizing Orientation Preserving Simple Linear Iterative Clustering (OPSLIC). Next, from the segmented images, the significant features are extracted, and the feature distances are calculated and matched with the input images. Then, utilizing the Union Topological Measure of Pattern Diversity (UTMOPD) method, the false positive matches that occur throughout the matching process are eliminated. After that, utilizing QMF-LightGBM, the visualization of forged along with non-forged images is performed. Extensive experiments revealed that, concerning detection accuracy, the proposed system can be extremely precise when contrasted with some top-notch approaches.
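As a hedged sketch of only the final classification stage (the BWF filtering, OPSLIC segmentation, UTMOPD filtering, and QMF tuning are not reproduced), the snippet below trains a stock LightGBM classifier on placeholder feature vectors:

```python
# Minimal LightGBM classification of forged vs. non-forged feature vectors.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split

X = np.random.rand(400, 24)                # placeholder block-feature vectors
y = np.random.randint(0, 2, 400)           # 1 = forged, 0 = non-forged
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```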
The sensitive data stored in the public cloud by privileged users, such as corporate companies and government agencies, are highly vulnerable in the hands of cloud providers and hackers. The proposed Virtual Cloud Storage Architecture (VCSA) is primarily concerned with data integrity and confidentiality, as well as availability. To provide confidentiality and availability, the file to be stored in cloud storage is encrypted using an auto-generated key and then encoded into distinct chunks. Hashing the encoded chunks ensures the file integrity, and a newly proposed Circular Shift Chunk Allocation technique is used to determine the order of chunk storage. The file can be retrieved by performing the operations in reverse. Using the regenerating code, the model can regenerate missing and corrupted chunks from the cloud. The proposed architecture adds an extra layer of security while maintaining a reasonable response time and storage capacity. Experimental analysis shows that the proposed model has been tested for storage space and response time for storage and retrieval. The VCSA model consumes 1.5x (150%) storage space. It was found that the total storage required for the VCSA model is very low when compared with 2x replication, and it completely satisfies the CIA model. The response time of the VCSA model was tested with files of different sizes ranging from 2 to 16 MB. The response times for storing and retrieving a 2 MB file are 4.96 and 3.77 s, respectively, and for a 16 MB file, the response times are 11.06 s for storage and 5.6 s for retrieval.
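The storage path lends itself to a short sketch: chunk, hash, and allocate. The code below illustrates chunking, SHA-256 hashing for integrity, and a circular-shift placement rule; the chunk size, node list, and shift rule are assumptions, and the encryption and regenerating-code steps are omitted.

```python
# Minimal sketch of the chunk/hash/allocate portion of the storage path.
import hashlib

def chunk_and_hash(data: bytes, chunk_size: int = 4):
    """Split data into fixed-size chunks and hash each chunk for integrity checks."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    digests = [hashlib.sha256(c).hexdigest() for c in chunks]
    return chunks, digests

def circular_shift_allocation(num_chunks: int, nodes, shift: int = 1):
    """Place chunk i on the node obtained by circularly shifting the node list by i*shift."""
    placement = {n: [] for n in nodes}
    for i in range(num_chunks):
        placement[nodes[(i * shift) % len(nodes)]].append(i)
    return placement

data = b"patient-record-demo-bytes"
chunks, digests = chunk_and_hash(data)
print(digests[0][:16], circular_shift_allocation(len(chunks), ["n1", "n2", "n3"]))
```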
In this work, we design a multisensory IoT-based online vitals monitor (hereinafter referred to as the VITALS) to sense four bedside physiological parameters including pulse (heart) rate, body temperature, blood pressure, and peripheral oxygen saturation. Then, the proposed system constantly transfers these signals to the analytics system, which aids in enhancing diagnostics at an earlier stage as well as monitoring after recovery. The core hardware of the VITALS includes commercial off-the-shelf sensing devices/medical equipment, a powerful microcontroller, a reliable wireless communication module, and a big data analytics system. It extracts human vital signs at a pre-programmed interval of 30 min and sends them to the big data analytics system through the WiFi module for further analysis. We use Apache Kafka (to gather live data streams from connected sensors), Apache Spark (to categorize the patient vitals and notify the medical professionals while identifying abnormalities in physiological parameters), the Hadoop Distributed File System (HDFS) (to archive data streams for further analysis and long-term storage), and Spark SQL, Hive, and Matplotlib (to support caregivers in accessing/visualizing appropriate information from the collected data streams and exploring/understanding the health status of individuals). In addition, we develop a mobile application to send statistical graphs to doctors and patients to enable them to monitor health conditions remotely. Our proposed system is implemented on three patients for 7 days to check the effectiveness of the sensing, data processing, and data transmission mechanisms. To validate the system accuracy, we compare the data values collected from the established sensors with the measured readouts using a commercial healthcare monitor, the Welch Allyn® Spot Check. Our proposed system provides improved care solutions, especially for those whose access to care services is limited.
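A minimal sketch of the abnormality rule a stream-processing stage could apply per reading; the normal ranges are common adult reference values used for illustration, not the system's configured thresholds:

```python
# Flag vitals that fall outside illustrative normal ranges.
NORMAL_RANGES = {
    "pulse_bpm": (60, 100),
    "temperature_c": (36.1, 37.5),
    "systolic_mmhg": (90, 120),
    "spo2_percent": (95, 100),
}

def flag_abnormal(reading: dict) -> dict:
    """Return the subset of vitals in `reading` that fall outside their normal range."""
    alerts = {}
    for key, (lo, hi) in NORMAL_RANGES.items():
        value = reading.get(key)
        if value is not None and not (lo <= value <= hi):
            alerts[key] = value
    return alerts

sample = {"pulse_bpm": 112, "temperature_c": 38.2, "systolic_mmhg": 118, "spo2_percent": 97}
print(flag_abnormal(sample))   # -> {'pulse_bpm': 112, 'temperature_c': 38.2}
```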