Traffic forecasting with high precision aids Intelligent Transport Systems (ITS) in formulating and optimizing traffic management strategies. The algorithms used for tuning the hyperparameters of deep learning models often achieve accurate results at the expense of high computational complexity. To address this problem, this paper uses the Tree-structured Parzen Estimator (TPE) to tune the hyperparameters of a Long Short-Term Memory (LSTM) deep learning framework. TPE uses a probabilistic approach with an adaptive searching mechanism that classifies objective function values into good and bad samples. This ensures fast convergence when tuning the hyperparameter values of the deep learning model while still maintaining a high degree of prediction accuracy. It also overcomes the problem of converging to local optima, avoids time-consuming random search, and therefore achieves accurate prediction without high computational complexity. The proposed scheme first performs data smoothing and normalization on the input data, which is then fed to the TPE for hyperparameter tuning. The traffic data is then input to the LSTM model with the tuned parameters to perform traffic prediction. Three optimizers, Adaptive Moment Estimation (Adam), Root Mean Square Propagation (RMSProp), and Stochastic Gradient Descent with Momentum (SGDM), are also evaluated for prediction accuracy, and the best optimizer is then chosen for the final traffic prediction in the TPE-LSTM model. Simulation results verify the effectiveness of the proposed model in terms of prediction accuracy over the benchmark schemes.
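To make the TPE tuning step concrete, here is a minimal sketch, not the authors' code: it assumes Optuna's TPESampler and a small Keras LSTM, and load_traffic_series() is a hypothetical placeholder for the smoothed, normalized data described above.

```python
# Minimal sketch: TPE hyperparameter search for an LSTM forecaster.
import optuna
from tensorflow import keras

def build_model(units, lr, optimizer_name, n_steps):
    opt = {"adam": keras.optimizers.Adam,
           "rmsprop": keras.optimizers.RMSprop,
           "sgdm": lambda learning_rate: keras.optimizers.SGD(
               learning_rate=learning_rate, momentum=0.9)}[optimizer_name]
    model = keras.Sequential([
        keras.layers.Input(shape=(n_steps, 1)),
        keras.layers.LSTM(units),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer=opt(learning_rate=lr), loss="mse")
    return model

def objective(trial):
    units = trial.suggest_int("units", 16, 128)
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)
    opt_name = trial.suggest_categorical("optimizer", ["adam", "rmsprop", "sgdm"])
    # Hypothetical loader: arrays shaped (N, n_steps, 1) and (N, 1).
    X_train, y_train, X_val, y_val = load_traffic_series()
    model = build_model(units, lr, opt_name, n_steps=X_train.shape[1])
    model.fit(X_train, y_train, epochs=10, verbose=0)
    return model.evaluate(X_val, y_val, verbose=0)  # validation MSE to minimize

study = optuna.create_study(direction="minimize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=50)
print(study.best_params)
```

TPE's good/bad split of past trials is what the sampler performs internally each time `suggest_*` is called, which is why it converges faster than a uniform random search over the same space.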
Combining microelectronic devices and associated technologies onto a single silicon chip poses a substantial challenge. However, in recent years, the area of silicon photonics has experienced remarkable advancements and notable leaps in performance. The performance of silicon-on-insulator (SOI)-based photonic devices, such as fast silicon optical modulators, photonic transceivers, and optical filters, is discussed. This would be a step forward in creating standalone silicon photonic devices, strengthening the possibility of single on-chip nanophotonic integrated circuits. Suppose an integrated silicon photonic chip is designed and fabricated. In that case, it might drastically modify the cost, power consumption, and size of these combined photonic components, bringing substantial, perhaps revolutionary, changes to the next-generation communications sector. Yet the monolithic integration of photonic and electrical circuitry is a significant technological difficulty. A complicated set of factors must be carefully considered to determine which application will have the best chance of success employing silicon-based integrated product solutions. The processing limitations connected to the current process flow, the process generation (sometimes referred to as the lithography node generation), and packaging requirements are a few of these factors. This review highlights recent developments in integrated silicon photonic devices and their proven applications, including but not limited to photonic waveguides, photonic amplifiers and filters, on-chip photonic transceivers, and the state of the art of silicon photonics in multidimensional quantum systems. The investigated devices aim to expedite the transfer of silicon photonics from academia to industry by opening the next phase in on-chip silicon photonics and enabling the application of silicon photonic-based devices in various optical systems.
Diabetic retinopathy (DR) is a disease with an increasing prevalence and the major reason for blindness among the working-age population. The possibility of severe vision loss can be extensively reduced by timely diagnosis and treatment. Automated screening for DR has been identified as an effective method for early DR detection, which can decrease the workload associated with manual grading as well as save diagnosis costs and time. Several studies have been carried out to develop automated detection and classification models for DR. This paper presents a new IoT- and cloud-based deep learning model for healthcare diagnosis of DR. The proposed model incorporates different processes, namely data collection, preprocessing, segmentation, feature extraction, and classification. At first, the IoT-based data collection process takes place, where the patient wears a head-mounted camera to capture the retinal fundus image and send it to a cloud server. Then, the contrast level of the input DR image is increased in the preprocessing stage using the Contrast Limited Adaptive Histogram Equalization (CLAHE) model. Next, the preprocessed image is segmented using the Adaptive Spatial Kernel distance measure-based Fuzzy C-Means clustering (ASKFCM) model. Afterwards, a deep Convolutional Neural Network (CNN)-based Inception v4 model is applied as a feature extractor, and the resulting feature vectors undergo classification with the Gaussian Naive Bayes (GNB) model. The proposed model was tested using the benchmark MESSIDOR DR image dataset, and the obtained results showcased the superior performance of the proposed model over the other models compared in the study. (Funding: RUSA Phase 2.0 grant sanctioned vide Letter No. F.24-51/2014-U, Policy (TN Multi-Gen), Dept. of Edn., Govt. of India, dt. 09.10.2018.)
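As an illustration of two named stages, CLAHE preprocessing and GNB classification, here is a hedged sketch using OpenCV and scikit-learn; the feature vectors are synthetic stand-ins for the Inception v4 features, and none of this is the paper's actual code.

```python
# Sketch of two pipeline stages: CLAHE contrast enhancement and
# Gaussian Naive Bayes classification of extracted feature vectors.
import cv2
import numpy as np
from sklearn.naive_bayes import GaussianNB

def preprocess(gray_fundus):
    """Apply CLAHE to an 8-bit grayscale fundus image."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray_fundus)

# Classify synthetic feature vectors (illustrative stand-ins).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))    # 100 images x 16-dim features
y = rng.integers(0, 2, size=100)  # DR / no-DR labels
clf = GaussianNB().fit(X, y)
print(clf.predict(X[:5]))
```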
The proliferation of IoT devices requires innovative approaches to gaining insights while preserving privacy and resources amid unprecedented data generation. However, Federated Learning (FL) development for IoT is still in its infancy and needs to be explored in various areas to understand the key challenges for deployment in real-world scenarios. This paper systematically reviewed the available literature using the PRISMA guiding principle. The study aims to provide a detailed overview of the increasing use of FL in IoT networks, including the architecture and challenges. A systematic review approach is used to collect, categorize, and analyze FL-IoT-based articles. A search was performed in the IEEE, Elsevier, Arxiv, ACM, and WOS databases, and 92 articles were finally examined. The inclusion criteria were publication in English and use of the keywords "FL" and "IoT". The methodology begins with an overview of recent advances in FL and the IoT, followed by a discussion of how these two technologies can be integrated. To be more specific, we examine and evaluate the capabilities of FL by discussing communication protocols, frameworks, and architecture. We then present a comprehensive analysis of the use of FL in a number of key IoT applications, including smart healthcare, smart transportation, smart cities, smart industry, smart finance, and smart agriculture. The key findings from this analysis of FL IoT services and applications are also presented. Finally, we performed a comparative analysis of FL on IID (independent and identically distributed) and non-IID data against traditional centralized deep learning (DL) approaches. We concluded that FL has better performance, especially in terms of privacy protection and resource utilization. FL is excellent for preserving privacy because model training takes place on individual devices or edge nodes, eliminating the need for centralized data aggregation, which poses significant privacy risks. To facilitate development in this rapidly evolving field, the insights presented are intended to help practitioners and researchers navigate the complex terrain of FL and IoT.
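For readers new to FL, the aggregation step that lets training stay on devices can be sketched as federated averaging; this is a generic illustration of the technique, not a scheme from any reviewed article.

```python
# Minimal FedAvg sketch: the server averages client model weights,
# weighted by local dataset size. Plain NumPy; clients are simulated.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of per-client parameter lists."""
    total = sum(client_sizes)
    avg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            avg[i] += (n / total) * w
    return avg

# Two simulated clients with a single weight matrix each.
clients = [[np.ones((2, 2))], [3 * np.ones((2, 2))]]
sizes = [100, 300]  # client 2 has 3x the data, so it dominates the average
print(fedavg(clients, sizes)[0])  # -> 2.5 everywhere
```

Only the weight arrays leave each device; the raw local data never does, which is the privacy property the review highlights.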
Ubiquitous data monitoring and processing with minimal latency is one of the crucial challenges in real-time and scalable applications. The Internet of Things (IoT), fog computing, edge computing, cloud computing, and the edge of things are the spine of all real-time and scalable applications. Conspicuously, this study proposes a novel framework for real-time and scalable applications that change dynamically with time. In this study, IoT deployment is recommended for data acquisition. Pre-processing of data with local edge and fog nodes is implemented. A threshold-oriented data classification method is deployed to improve the performance of the intrusion detection mechanism. Machine learning-empowered intelligent algorithms are employed in a distributed manner to enhance the overall response rate of the layered framework. The placement of respondent nodes near the framework's IoT layer minimizes the network's latency. For economic evaluation of the proposed framework with minimal effort, the EdgeCloudSim and FogNetSim++ simulation environments are deployed in this study. The experimental results confirm the robustness of the proposed system through its improvised threshold-oriented data classification and intrusion detection approach, improved response rate, and prediction mechanism. Moreover, the proposed layered framework provides a robust solution for real-time and scalable applications that change dynamically with time.
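A threshold-oriented classification step of the kind described can be sketched as follows; the z-score feature and the threshold value are illustrative assumptions, not the framework's actual rule.

```python
# Sketch of a threshold-oriented classifier at an edge node: flag a
# window of readings as suspicious when a score crosses a threshold.
import numpy as np

def classify_window(readings, threshold=2.0):
    """Return 'anomalous' if the window's peak z-score exceeds threshold."""
    mu, sigma = readings.mean(), readings.std() + 1e-9
    z = np.abs((readings - mu) / sigma)
    return "anomalous" if z.max() > threshold else "normal"

window = np.array([20.1, 20.3, 19.8, 20.0, 35.7, 20.2])  # one spike
print(classify_window(window))  # -> anomalous
```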
Predicting depression intensity from microblogs and social media posts has numerous benefits and applications, including predicting early psychological disorders and stress in individuals or the general public. A major challenge in predicting depression using social media posts is that existing studies do not focus on predicting the intensity of depression in social media texts but rather only perform binary classification of depression; moreover, noisy data makes it difficult to predict true depression in social media text. This study begins by collecting relevant tweets and generating a corpus of 210,000 public tweets using Twitter public application programming interfaces (APIs). A strategy is devised to filter out only depression-related tweets by creating a list of relevant hashtags to reduce noise in the corpus. Furthermore, an algorithm is developed to annotate the data into three depression classes, 'Mild', 'Moderate', and 'Severe', based on the International Classification of Diseases-10 (ICD-10) depression diagnostic criteria. Different baseline classifiers are applied to the annotated dataset to get a preliminary idea of classification performance on the corpus. A FastText-based model is then applied and fine-tuned with different preprocessing techniques and hyperparameter tuning to produce the tuned model, which significantly increases the depression classification performance to an 84% F1 score and 90% accuracy compared to the baselines. Finally, a FastText-based weighted soft voting ensemble (WSVE) is proposed to boost the model's performance by combining several other classifiers and assigning weights to individual models according to their individual performances. The proposed WSVE outperformed all baselines as well as FastText alone, with an F1 of 89%, 5% higher than FastText alone, and an accuracy of 93%, 3% higher than FastText alone. The proposed model better captures the contextual features of the relatively small sample class and aids in the early detection of depression intensity from tweets with impactful performance.
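The WSVE combination rule can be sketched in a few lines: each model's class-probability vector is weighted by its validation performance, and the weighted probabilities are summed. The weights and probabilities below are made-up numbers for illustration.

```python
# Weighted soft voting: combine per-model class-probability vectors
# with weights proportional to each model's validation score.
import numpy as np

def weighted_soft_vote(probas, scores):
    """probas: (n_models, n_classes); scores: per-model validation metric."""
    w = np.asarray(scores, dtype=float)
    w /= w.sum()  # normalize the weights to sum to 1
    return (w[:, None] * np.asarray(probas)).sum(axis=0)

# Three classifiers scoring one tweet over (Mild, Moderate, Severe).
probas = [[0.6, 0.3, 0.1], [0.5, 0.4, 0.1], [0.2, 0.5, 0.3]]
scores = [0.84, 0.80, 0.70]  # e.g., each model's validation F1
combined = weighted_soft_vote(probas, scores)
print(combined, "->", ["Mild", "Moderate", "Severe"][combined.argmax()])
```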
The goal of the research is to develop a methodology to minimize the public's exposure to harmful emissions from coal power plants while maintaining minimal operational costs related to electric distribution losses and coal logistics. The objective is achieved by combining EPA Screen3, ISC3, and Japanese METI-LIS model equations with a minimum spanning tree (MST) algorithm. Prim's MST algorithm is used to simulate an electric distribution system and coal transportation pathways. The model can detect emission interaction with another source and estimate the ground-level concentrations of emissions up to distances of 25 kilometers. During a grid search, the algorithm helps determine a candidate location for a new coal power plant that would minimize the operational cost while ensuring emission exposure is below the EPA/NIOSH thresholds. The proposed methodology has been coded in the form of a location analysis simulation. An exhaustive search strategy delivers a final candidate location for a new coal power plant that ensures minimum operational costs as compared to random or greedy search strategies. The simulation provides a tool for industrial zone planners, environmental engineers, and stakeholders in coal-based power generation. Using operational and emissions perspectives, the tool helps ascertain a list of compromise locations for a new coal power plant facility.
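Prim's algorithm, the MST routine used here to model distribution lines and coal transport paths, is sketched below on a toy four-site graph; the distances are illustrative.

```python
# Prim's MST: grow a tree from a start site, always adding the
# cheapest edge that reaches a new site (heap-based implementation).
import heapq

def prim_mst(adj, start=0):
    """adj: {node: [(weight, neighbor), ...]}. Returns MST edges (u, v, w)."""
    visited = {start}
    heap = [(w, start, v) for w, v in adj[start]]
    heapq.heapify(heap)
    mst = []
    while heap and len(visited) < len(adj):
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue  # edge would form a cycle
        visited.add(v)
        mst.append((u, v, w))
        for w2, x in adj[v]:
            if x not in visited:
                heapq.heappush(heap, (w2, v, x))
    return mst

# Four candidate sites; weights are pairwise distances in km.
graph = {0: [(4, 1), (3, 2)], 1: [(4, 0), (2, 2), (5, 3)],
         2: [(3, 0), (2, 1), (6, 3)], 3: [(5, 1), (6, 2)]}
print(prim_mst(graph))  # -> [(0, 2, 3), (2, 1, 2), (1, 3, 5)]
```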
Data fusion is a multidisciplinary research area that involves different domains. It is used to attain minimum detection error probability and maximum reliability with the help of data retrieved from multiple healthcare sources. The generation of huge quantities of data from medical devices resulted in the formation of big data, for which data fusion techniques become essential. Securing medical data is a crucial issue in an exponentially-pacing computing world and can be achieved by Intrusion Detection Systems (IDS). In this regard, since a singular modality is not adequate to attain a high detection rate, there exists a need to merge diverse techniques using a decision-based multimodal fusion process. In this view, this research article presents a new multimodal fusion-based IDS to secure healthcare data using Spark. The proposed model involves a decision-based fusion model comprising different processes, namely initialization, pre-processing, Feature Selection (FS), and multimodal classification, for effective detection of intrusions. In the FS process, a chaotic Butterfly Optimization (BO) algorithm called CBOA is introduced. Though the classic BO algorithm offers effective exploration, it fails to achieve fast convergence. To overcome this, i.e., to improve the convergence rate, this research work modifies the required parameters of the BO algorithm using chaos theory. Finally, to detect intrusions, a multimodal classifier is applied by incorporating three Deep Learning (DL)-based classification models. Besides, concepts like Hadoop MapReduce and Spark were also utilized in this study to achieve faster computation of big data on a parallel computation platform. To validate the outcome of the presented model, a series of experiments was performed using the benchmark NSL-KDDCup99 dataset repository. The proposed model demonstrated effective results on the applied dataset by offering a maximum accuracy of 99.21%, precision of 98.93%, and detection rate of 99.59%. The results assured the betterment of the proposed model.
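The abstract does not specify how chaos theory modifies the BO parameters, so the sketch below shows only the common pattern: driving a control parameter with a logistic map instead of a fixed or random value. The parameter name and range are assumptions, not the paper's exact CBOA update rule.

```python
# Illustrative only: vary a Butterfly Optimization control parameter
# (e.g., the sensory modality c) with a logistic chaotic map.
def logistic_map(x, r=4.0):
    """Logistic map; r = 4 gives fully chaotic behavior on (0, 1)."""
    return r * x * (1.0 - x)

x = 0.7  # chaotic seed in (0, 1); avoid fixed points such as 0.75
for t in range(5):
    x = logistic_map(x)
    c = 0.01 + 0.09 * x  # assumed range [0.01, 0.1] for the parameter
    print(f"iteration {t}: c = {c:.4f}")
```

The appeal of chaotic sequences is that they are deterministic yet non-repeating, which helps an optimizer escape local optima faster than a fixed parameter schedule.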
The rapid expansion of Internet of Things (IoT) devices deploys various sensors in different applications like homes, cities, and offices. IoT applications depend upon the accuracy of sensor data, so it is necessary to predict faults in a sensor and isolate their cause. A novel primitive technique named the fall curve is presented in this paper, which characterizes sensor faults. This technique identifies the faulty sensor and determines whether the sensor is working correctly. Different sources of sensor faults are explained in detail, and various faults that occur in the sensor nodes of IoT devices are also presented in tabular form. Fault prediction in digital and analog sensors, along with methods of sensor fault prediction, is described. There are several advantages and disadvantages of sensor fault prediction methods and the fall curve technique, so some solutions are provided to overcome the limitations of the fall curve technique. In this paper, a bibliometric analysis is carried out to visually analyze 63 papers fetched from the Scopus database for the past five years. Its novelty is to predict a fault before its occurrence by looking at the fall curve. Sensing the current flow in devices is important to prevent major losses, so the fall curves of ACS712 current sensors configured on different devices are drawn for predicting faulty or non-faulty devices. The analysis result proved that if any of the current sensors gets faulty, the fall curve will differ and the value will immediately drop to zero. Various evaluation metrics for fault prediction are also described in this paper. Finally, this paper addresses some open research issues that are important for dealing with false IoT sensor data. (Funding: supported by Taif University Researchers Supporting Project number TURSP-2020/347, Taif University, Taif, Saudi Arabia.)
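A minimal fall-curve check can be sketched by comparing an observed post-power-off trace against a healthy reference; the traces and tolerance below are illustrative, with the faulty trace dropping straight to zero as the abstract describes.

```python
# Sketch: compare a sensor's observed fall curve with a healthy
# reference trace; flag a fault if the deviation exceeds a tolerance.
import numpy as np

def is_faulty(observed, reference, tol=0.15):
    """Fault if mean absolute deviation from the reference exceeds tol."""
    observed, reference = np.asarray(observed), np.asarray(reference)
    return np.mean(np.abs(observed - reference)) > tol

reference = [2.5, 1.8, 1.2, 0.7, 0.3, 0.1, 0.0]  # healthy decay after power-off
faulty    = [2.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # immediate drop to zero
print(is_faulty(faulty, reference))     # -> True
print(is_faulty(reference, reference))  # -> False
```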
Diabetic Retinopathy (DR) is a significant blinding disease that rapidly poses a serious threat to human vision. Classification and severity grading of DR are difficult processes to accomplish. Traditionally, grading depends on ophthalmoscopically-visible symptoms of growing severity, which are ranked on a stepwise scale from no retinopathy to various levels of DR severity. This paper presents an ensemble of Orthogonal Learning Particle Swarm Optimization (OPSO) algorithm-based Convolutional Neural Network models (EOPSO-CNN) in order to perform DR detection and grading. The proposed EOPSO-CNN model involves three main processes: preprocessing, feature extraction, and classification. The model initially involves a preprocessing stage, which removes the noise present in the input image. Then, the watershed algorithm is applied to segment the preprocessed images. Feature extraction then takes place by leveraging the EOPSO-CNN model. Finally, the extracted feature vectors are provided to a Decision Tree (DT) classifier to classify the DR images. The experiments were carried out using the Messidor DR dataset, and the results showed considerably better performance by the proposed method over the compared methods. The simulation outcome offered maximum classification accuracy, sensitivity, and specificity values of 98.47%, 96.43%, and 99.02%, respectively.
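The watershed segmentation stage can be sketched with scikit-image's standard distance-transform recipe; the Otsu thresholding and marker choices are assumptions for illustration, not the paper's exact settings.

```python
# Sketch of watershed segmentation on a preprocessed grayscale image:
# threshold, distance transform, peak markers, then watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment(gray):
    """gray: 2-D float array. Returns an integer label image."""
    mask = gray > threshold_otsu(gray)           # foreground mask
    distance = ndi.distance_transform_edt(mask)  # distance to background
    coords = peak_local_max(distance, labels=mask, min_distance=5)
    markers = np.zeros_like(gray, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, markers, mask=mask)
```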
The probability of medical staff getting infected with COVID-19 is much higher due to their working environment, which is more exposed to infectious diseases. So, as a preventive measure, monitoring the body temperature of medical staff at regular intervals is highly recommended. Infrared temperature-sensing guns have proved their effectiveness, and therefore such devices are used to monitor body temperature. These devices are used either on the hands or the forehead. As a result, there are many issues in monitoring the temperature of frontline healthcare professionals. Firstly, these healthcare professionals keep wearing PPE (Personal Protective Equipment) kits during working hours, making it very difficult to monitor their body temperature. Secondly, these healthcare professionals also wear face shields, and in such cases monitoring temperature at the forehead requires removing the face shield. Doing so at regular intervals is surely uncomfortable for healthcare professionals. To avoid such issues, this paper discloses a technologically advanced face shield equipped with sensors capable of monitoring body temperature instantly without the hassle of removing the face shield. This face shield is integrated with a built-in infrared temperature sensor. A total of 10 such face shields were printed and assembled in the university lab and then handed over to a group of ten members, including faculty and students of the nursing and health science department. This sequence was repeated four times, so 40 healthcare workers participated in the study. Thereafter, feedback analysis was conducted on the questionnaire data and found a significant overall mean score of 4.59 out of 5, which indicates that the product is effective and worthy in every facet. Stress analysis was also performed in a simulated environment and found that the device can easily withstand the typically applied forces. The limitations of this product are the difficulty of cleaning it and its comparatively high cost due to the electronic components. (Funding: supported by Taif University Researchers Supporting Project number TURSP-2020/347, Taif University, Taif, Saudi Arabia.)
The implementation of energy economics principles (EEPs) in sustainable construction and environmental mitigation is widely acknowledged. However, limited research has focused on the hindrances faced in implementing these principles in the context of developing countries. To address this research gap, this study examines these hindrances from the perspective of professionals in the Nigerian construction industry. Existing hindrances were extracted from extant studies using a systematic literature review with predefined inclusion/exclusion criteria, which helped formulate the questionnaire. Through the application of exploratory factor analysis, five clusters of hindrance factors were identified, encompassing financial constraints, inadequate policies and regulations, insufficient technological infrastructure, lack of awareness and education, and stakeholder-related challenges. Furthermore, multinomial regression analysis confirmed that the hindrances related to financial constraints, inadequate policies and regulations, and insufficient technological infrastructure are the most significant barriers. This study advances scientific knowledge on the hindrances to the adoption of EEPs in Nigerian building projects, providing a comprehensive understanding of the challenges faced in the Nigerian construction industry. The findings will inform policymakers, industry professionals, and other stakeholders about the key challenges that require attention and intervention, facilitating the development of targeted strategies and initiatives to overcome these barriers effectively.
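The two analysis steps named here, exploratory factor analysis followed by multinomial regression, can be sketched with scikit-learn on synthetic questionnaire data; the dimensions and variable names are illustrative only, not the study's instrument.

```python
# Sketch: factor-analyze Likert items into hindrance clusters, then
# relate the factor scores to a categorical outcome. Synthetic data.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))    # 120 respondents x 20 questionnaire items
y = rng.integers(0, 3, size=120)  # e.g., three adoption-level groups

fa = FactorAnalysis(n_components=5, random_state=0)  # five hindrance clusters
scores = fa.fit_transform(X)

# Multinomial logistic regression on the factor scores (lbfgs solver
# fits a multinomial model for multiclass targets by default).
clf = LogisticRegression(max_iter=1000).fit(scores, y)
print(clf.coef_.shape)  # (3 classes, 5 factors): which factors matter per group
```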
This work presents analyses of earthquake magnitude scales and seismicity parameters across the Nubian-Eurasian plate boundary region. We developed magnitude conversion models using three regression techniques (R² ≈ 0.68) and implemented a tapered Gutenberg-Richter model with bootstrap uncertainty quantification. Our analysis yielded Mc = 4.35, b-value = 0.93 (95% CI: 0.74-1.09), a-value = 6.19 (95% CI: 5.35-6.90), corner magnitude = 8.69 (95% CI: 5.77-8.69), and maximum magnitude (Mmax) = 7.24 (95% CI: 6.50-7.24). The tapered model provides superior fitting at higher magnitudes compared to the standard Gutenberg-Richter relationship, addressing a key limitation in seismic hazard characterization. The b-value below 1.0 indicates elevated potential for higher-magnitude events, while the substantial a-value suggests significant seismic productivity across the boundary. The relatively high Mc value points to limitations in detecting smaller earthquakes, particularly in less-instrumented areas of the boundary zone. The estimated Mmax and corner magnitude constrain the upper bound of potential earthquake magnitudes, which is critical for hazard assessments and engineering applications. While treating the region as a single seismotectonic unit was necessary given current data constraints, we acknowledge this approach's limitations given the boundary's diverse tectonic regimes. Future research should develop zone-specific parameters that account for distinct regional characteristics. Nevertheless, these region-wide parameters establish a valuable baseline framework for seismic hazard assessment, particularly useful where zone-specific data remain insufficient.
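As a worked illustration of b-value estimation with bootstrap confidence intervals, here is a sketch using Aki's maximum-likelihood estimator on a synthetic catalog; the tapered-model fitting itself needs more machinery and is omitted.

```python
# Sketch: Aki (1965) maximum-likelihood b-value with a bootstrap CI.
# The catalog is synthetic, drawn with a true b of 0.93 above Mc = 4.35.
import numpy as np

def b_value(mags, mc):
    """Aki's ML estimator for magnitudes >= mc: b = log10(e)/(mean - mc)."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - mc)

rng = np.random.default_rng(1)
mc0 = 4.35
# Gutenberg-Richter magnitudes above Mc are exponential with rate b*ln(10).
cat = mc0 + rng.exponential(scale=np.log10(np.e) / 0.93, size=2000)
boots = [b_value(rng.choice(cat, size=cat.size), mc0) for _ in range(1000)]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"b = {b_value(cat, mc0):.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```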
The June 22nd, 1939 Accra earthquake (Mw = 6.2) of Ghana is one of the most devastating intra-plate earthquakes in the sub-Saharan West African region. Waveform inversion carried out earlier suggested that the earthquake was composed of two events. The smaller event (Mw 6.1) occurred 9.5 s before the onset of the larger event (Mw 6.4). The smaller event has a focal mechanism that suggests it occurred immediately north of the intersection of the Akwapim fault and the Coastal Boundary fault. This study resolved the static Coulomb Failure Stress (CFS) change onto the USGS finite fault models of the Mw 6.4 and Mw 6.1 earthquakes and examined its effect on the associated receiver faults. Aftershocks were poorly spatially correlated with the enhanced CFS condition after the Mw 6.4 main shock and were instead explained as correlating with the release of seismic energy from the secondarily stressed prominent strike-slip Akwapim fault and the strike-slip Coastal Boundary fault. The abrupt termination of the northeastward propagation of the Mw 6.1 rupture surface was due to interaction with the strike-slip Coastal Boundary faults. The existing intersection between the Akwapim and Coastal Boundary faults favored the enhanced CFS that generated the next major event of Mw 6.4, due to the deflection of motion transmitted from the seismically active fractured zones in the Mid-Atlantic Ridge (the boundary between the African plate and the South American plate).
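The Coulomb stress analysis rests on the standard static stress-change criterion; for reference, the common textbook form (the study's exact formulation may differ) is:

```latex
% Static Coulomb failure stress change on a receiver fault:
%   \Delta\tau      shear stress change resolved in the slip direction
%   \Delta\sigma_n  normal stress change (positive = unclamping)
%   \mu'            effective coefficient of friction
\Delta \mathrm{CFS} = \Delta\tau + \mu' \, \Delta\sigma_n
```

A positive ΔCFS moves a receiver fault closer to failure, which is why aftershock locations are compared against regions of enhanced CFS in studies of this kind.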
Brain tumors pose significant diagnostic challenges due to their diverse types and complex anatomical locations. With the rise of precision image-based diagnostic tools driven by advancements in artificial intelligence (AI) and deep learning, there is potential to improve diagnostic accuracy, especially with Magnetic Resonance Imaging (MRI). However, traditional state-of-the-art models lack the sensitivity essential for reliable tumor identification and segmentation. Our research therefore aims to enhance brain tumor diagnosis in MRI by proposing an advanced model. The proposed model incorporates dilated convolutions to optimize brain tumor segmentation and classification. It is first trained and later evaluated using the BraTS 2020 dataset. Preprocessing consists of normalization, noise reduction, and data augmentation to improve model robustness. An attention mechanism and dilated convolutions were introduced to increase the model's focus on critical regions and to capture finer spatial details without compromising image resolution. We performed experiments to measure efficiency using various metrics, including accuracy, sensitivity, specificity, and the area under the ROC curve (AUC-ROC). The proposed model achieved a high accuracy of 94%, a sensitivity of 93%, a specificity of 92%, and an AUC-ROC of 0.98, outperforming traditional diagnostic models in brain tumor detection. The proposed model accurately identifies tumor regions, while the dilated convolutions enhance segmentation accuracy, especially for complex tumor structures. The proposed model demonstrates significant potential for clinical application, providing reliable and precise brain tumor detection in MRI.
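A dilated-convolution block of the kind described, which enlarges the receptive field without downsampling, can be sketched in PyTorch; the channel counts are illustrative, not the paper's architecture.

```python
# Sketch: a 3x3 convolution with dilation=2 covers a 5x5 receptive
# field while keeping the spatial resolution of the input slice.
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    def __init__(self, in_ch=1, out_ch=32):
        super().__init__()
        # effective kernel = 5, so padding=2 preserves spatial size
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3,
                              padding=2, dilation=2)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.conv(x))

x = torch.randn(1, 1, 240, 240)  # one 240x240 MRI slice, as in BraTS 2020
print(DilatedBlock()(x).shape)   # -> torch.Size([1, 32, 240, 240])
```

Stacking such blocks with growing dilation rates widens the context each voxel sees, which is what helps with the complex tumor boundaries the abstract mentions, without the resolution loss that pooling would cause.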