AIM: To further improve the endoscopic detection of intestinal mucosa alterations due to celiac disease (CD). METHODS: We assessed a hybrid approach based on the integration of expert knowledge into the computer-based classification pipeline. A total of 2835 endoscopic images from the duodenum were recorded in 290 children using the modified immersion technique (MIT). These children underwent routine upper endoscopy for suspected CD or non-celiac upper abdominal symptoms between August 2008 and December 2014. Blinded to the clinical data and biopsy results, three medical experts visually classified each image as normal mucosa (Marsh-0) or villous atrophy (Marsh-3). The experts' decisions were further integrated into state-of-the-art texture recognition systems. Using the biopsy results as the reference standard, the classification accuracies of this hybrid approach were compared to the experts' diagnoses in 27 different settings. RESULTS: Compared to the experts' diagnoses, in 24 of 27 classification settings (consisting of three imaging modalities, three endoscopists and three classification approaches), the best overall classification accuracies were obtained with the new hybrid approach. In 17 of 24 classification settings, the improvements achieved with the hybrid approach were statistically significant (P < 0.05). Using the hybrid approach, classification accuracies between 94% and 100% were obtained. Whereas the improvements are only moderate in the case of the most experienced expert, the results of the less experienced expert could be improved significantly in 17 out of 18 classification settings. Furthermore, the lowest classification accuracy, based on the combination of one database and one specific expert, could be improved from 80% to 95% (P < 0.001). CONCLUSION: The overall classification performance of medical experts, especially less experienced experts, can be boosted significantly by integrating expert knowledge into computer-aided diagnosis systems. Funding: Supported by the Austrian Science Fund (FWF), No. KLI 429-B13 to Vécsei A.
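The integration step can be pictured as simple feature-level fusion. Below is a minimal sketch, assuming scikit-learn, in which the expert's binary Marsh decision is appended to the texture feature vector before classification; the data, feature dimensions, and fusion scheme are illustrative placeholders, not the paper's exact pipeline.

```python
# Minimal sketch: inject the expert's visual decision as an extra feature.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
texture = rng.random((300, 16))            # placeholder texture features per image
expert  = rng.integers(0, 2, (300, 1))     # expert decision: 0 = Marsh-0, 1 = Marsh-3
y       = rng.integers(0, 2, 300)          # biopsy-based reference standard (placeholder)

hybrid = np.hstack([texture, expert])      # expert knowledge fused into the feature vector
acc = cross_val_score(SVC(), hybrid, y, cv=5).mean()
print(f"hybrid cross-validated accuracy: {acc:.3f}")
```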
BACKGROUND: It was shown in previous studies that high-definition endoscopy, high-magnification endoscopy and image enhancement technologies, such as chromoendoscopy and digital chromoendoscopy [narrow-band imaging (NBI), i-Scan], facilitate the detection and classification of colonic polyps during endoscopic sessions. However, there are no comprehensive studies so far that analyze which endoscopic imaging modalities facilitate the automated classification of colonic polyps. In this work, we investigate the impact of endoscopic imaging modalities on the results of computer-assisted diagnosis systems for colonic polyp staging. AIM: To assess which endoscopic imaging modalities are best suited for the computer-assisted staging of colonic polyps. METHODS: In our experiments, we apply twelve state-of-the-art feature extraction methods for the classification of colonic polyps to five endoscopic image databases of colonic lesions. For this purpose, we employ a specifically designed experimental setup to avoid biases in the outcomes caused by differing numbers of images per image database. The image databases were obtained using different imaging modalities. Two databases were obtained by high-definition endoscopy in combination with i-Scan technology (one with chromoendoscopy and one without chromoendoscopy). Three databases were obtained by high-magnification endoscopy (two databases using narrow-band imaging and one using chromoendoscopy). The lesions are categorized into non-neoplastic and neoplastic according to the histological diagnosis. RESULTS: Generally, it is feature-dependent which imaging modalities achieve high results and which do not. For the high-definition image databases, we achieved overall classification rates of up to 79.2% with chromoendoscopy and 88.9% without chromoendoscopy. In the case of the database obtained by high-magnification chromoendoscopy, the classification rates were up to 81.4%. For the combination of high-magnification endoscopy with NBI, results of up to 97.4% for one database and up to 84% for the other were achieved. Non-neoplastic lesions were generally classified more accurately than neoplastic lesions. It was shown that the image recording conditions highly affect the performance of automated diagnosis systems and can have a stronger effect on the staging results than the imaging modality used. CONCLUSION: Chromoendoscopy has a negative impact on the results of the methods. NBI is better suited than chromoendoscopy. High-definition and high-magnification endoscopy are equally suited.
Early detection of lung cancer can help improve the survival rate of patients. Biomedical imaging tools such as computed tomography (CT) images are utilized for the proper identification and positioning of lung cancer. Recently developed deep learning (DL) models can be employed for the effective identification and classification of diseases. This article introduces a novel deep-learning-enabled CAD technique for lung cancer using biomedical CT images, named the DLCADLC-BCT technique. The proposed DLCADLC-BCT technique is intended to detect and classify lung cancer using CT images. It initially uses a gray-level co-occurrence matrix (GLCM) model for feature extraction. A long short-term memory (LSTM) model is then applied to classify the existence of lung cancer in the CT images. Moreover, the moth swarm optimization (MSO) algorithm is employed to optimally choose the hyperparameters of the LSTM model, such as learning rate, batch size, and epoch count. To demonstrate the improved classifier results of the DLCADLC-BCT approach, a set of simulations was executed on a benchmark dataset, and the outcomes exhibited the superiority of the DLCADLC-BCT technique over recent approaches. Funding: The authors thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: 22UQU4310373DSR03.
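The GLCM feature-extraction stage can be illustrated with scikit-image. This is a minimal sketch under assumed parameters; the quantization level, pixel-pair offsets, and Haralick property list are not specified by the abstract and are chosen here for illustration only.

```python
# Minimal GLCM texture-feature sketch, assuming scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image, levels=64):
    """Extract Haralick-style texture features from an 8-bit grayscale CT slice."""
    # Quantize to fewer gray levels to keep the co-occurrence matrix small.
    img = (gray_image.astype(np.float64) / 256 * levels).astype(np.uint8)
    glcm = graycomatrix(img,
                        distances=[1, 2],                      # pixel-pair offsets (assumed)
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    # Average each property over distances and angles: one value per property.
    return np.array([graycoprops(glcm, p).mean() for p in props])

# Toy usage on a random 8-bit image standing in for a CT slice.
print(glcm_features(np.random.default_rng(0).integers(0, 256, (128, 128)).astype(np.uint8)))
```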
Limbal Stem Cell Deficiency (LSCD) is an eye disease that can cause corneal opacity and vascularization. In its advanced stage it can lead to a degree of visual impairment. It involves a change in the semispherical shape of the cornea to a downward-drooping shape. LSCD is hard to diagnose at early stages. The color and texture of the cornea surface can provide significant information about a cornea affected by LSCD, and parameters such as shape and texture are crucial for differentiating a normal cornea from an LSCD cornea. Although several medical approaches exist, most of them require complicated procedures and medical devices. Therefore, in this paper, we pursued the development of an LSCD detection technique (LDT) utilizing image processing methods, since early diagnosis of LSCD is crucial for physicians to arrange effective treatment. In the proposed technique, we developed a method for LSCD detection utilizing frontal eye images. A dataset of 280 frontal and lateral eye images of LSCD and normal patients was used in this research. First, the cornea region of both frontal and lateral images is segmented, and the geometric features are extracted through an automated active contour model and a spline curve, while the texture features are extracted using a feature selection algorithm. The experimental results showed that the combined geometric and texture features achieve an accuracy of 95.95%, a sensitivity of 97.91%, and a specificity of 94.05% with a random forest classifier of n = 40. As a result, this research developed an LSCD detection system based on feature fusion, using image processing techniques on frontal and lateral digital images of the eyes. Funding: Funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Fast-track Research Funding Program.
This study offers a framework for a breast cancer computer-aided treatment prediction (CATP) system. The rising death rate among women due to breast cancer is a worldwide health concern that can only be addressed by early diagnosis and frequent screening. Mammography has been the most utilized breast imaging technique to date. Radiologists have begun to use computer-aided detection and diagnosis (CAD) systems to improve the accuracy of breast cancer diagnosis by minimizing human errors. Despite the progress of artificial intelligence (AI) in the medical field, this study indicates that systems that can anticipate a treatment plan once a patient has been diagnosed with cancer are few and not widely used. Having such a system will assist clinicians in determining the optimal treatment plan and avoid exposing a patient to unnecessary hazardous treatment that wastes a significant amount of money. To develop the prediction model, data from 336,525 patients from the SEER dataset were split into training (80%) and testing (20%) sets. Decision Trees, Random Forest, XGBoost, and CatBoost are utilized with feature importance to build the treatment prediction model. The best overall Area Under the Curve (AUC) achieved was 0.91, using Random Forest on the SEER dataset. Funding: N.I.R.R. and K.I.M. have received a grant from the Malaysian Ministry of Higher Education, Grant number: 203/PKOMP/6712025, http://portal.mygrants.gov.my/main.php.
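The training protocol described above (80/20 split, Random Forest, AUC) can be sketched as follows, assuming scikit-learn; the synthetic features stand in for the SEER extract, and a binary treatment label is used for brevity.

```python
# Minimal sketch of the treatment-prediction protocol; data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((5000, 12))                                        # placeholder clinical features
y = (X[:, 0] + 0.3 * X[:, 3] + rng.normal(0, 0.2, 5000) > 0.8).astype(int)

# 80/20 split, mirroring the paper's protocol.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# Feature importances, used in the paper to guide model building.
print(model.feature_importances_)
```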
Proactive Semantic Interference (PSI) and failure to recover from PSI (frPSI) are novel constructs assessed by the LASSI-L. These measures are sensitive to cognitive changes in early Mild Cognitive Impairment (MCI) and in preclinical AD determined by Aβ load using PET. The goal of this study was to compare a new computerized version of the LASSI-L (LASSI-Brief Computerized) to the standard paper-and-pencil version of the test. In this study, we examined 110 cognitively unimpaired (CU) older adults and 79 with amnestic MCI (aMCI) who were administered the paper-and-pencil form of the LASSI-L. Their performance was compared with 62 CU older adults and 52 aMCI participants examined using the LASSI-BC. After adjustment for covariates (degree of initial learning, sex, education, and language of evaluation), both the standard and computerized versions distinguished between aMCI and CU participants. The performance of the CU and aMCI groups using either form was relatively commensurate. Importantly, an optimal combination of Cued B2 recall and Cued B1 intrusions on the LASSI-BC yielded an area under the ROC curve of 0.927, a sensitivity of 92.3%, and a specificity of 88.1%, relative to an area under the ROC curve of 0.815, a sensitivity of 72.5%, and a specificity of 79.1% obtained for the paper-and-pencil LASSI-L. Overall, the LASSI-BC was comparable, and in some ways superior, to the paper-and-pencil LASSI-L. Advantages of the LASSI-BC include a more standardized administration, suitability for remote assessment, and an automated scoring mechanism that can be verified by a built-in audio recording of responses.
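How AUC, sensitivity, and specificity follow from a combined score can be sketched with scikit-learn; here `score` stands in for the optimal combination of Cued B2 recall and Cued B1 intrusions, the data are toy placeholders, and Youden's J is one common (assumed) way to pick the operating point.

```python
# Minimal ROC sketch: AUC plus one operating point (sensitivity/specificity).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                 # 1 = aMCI, 0 = cognitively unimpaired (toy)
score = y + rng.normal(0, 0.7, 200)         # toy combined LASSI-BC score

print("AUC:", roc_auc_score(y, score))
fpr, tpr, thr = roc_curve(y, score)
j = np.argmax(tpr - fpr)                    # Youden's J picks an operating threshold
print("sensitivity:", tpr[j], "specificity:", 1 - fpr[j])
```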
Deep learning-based approaches are applied successfully in many fields, such as deepfake identification, big data analysis, voice recognition, and image recognition. Deepfake is the application of deep learning to fake creation, i.e., creating a fake image or video with the help of artificial intelligence for political abuse, spreading false information, or pornography. Artificial intelligence techniques are in wide demand, which increases the problems related to privacy, security, and ethics. This paper analyzes the computer vision features of digital content to determine its integrity. The method checks the computer vision features of the image frames using a fuzzy clustering feature extraction method. With the proposed deep belief network with loss handling, the manipulation of a video or image is detected by means of a pairwise learning approach. The proposed approach improves the detection accuracy, reaching 98% on various datasets.
Customer segmentation according to load-shape profiles using smart meter data is an increasingly important application, vital to the planning and operation of energy systems and to enabling citizens' participation in the energy transition. This study proposes an innovative multi-step clustering procedure to segment customers based on load-shape patterns at the daily and intra-daily time horizons. Smart meter data is split between daily and hourly normalized time series to assess monthly, weekly, daily, and hourly seasonality patterns separately. The dimensionality reduction implicit in the splitting allows a direct approach to clustering raw daily energy time series data. The intraday clustering procedure sequentially identifies representative hourly day-unit profiles for each customer and the entire population. For the first time, a step function approach is applied to reduce time series dimensionality. Customer attributes embedded in surveys are employed to build external clustering validation metrics using Cramér's V correlation factors and to identify statistically significant determinants of load shape in energy usage. In addition, a time series feature engineering approach is used to extract 16 relevant demand flexibility indicators that characterize customers and corresponding clusters along four different axes: available Energy (E), Temporal patterns (T), Consistency (C), and Variability (V). The methodology is implemented on a real-world electricity consumption dataset of 325 Small and Medium-sized Enterprise (SME) customers, identifying 4 daily and 6 hourly easy-to-interpret, well-defined clusters. The application of the methodology includes selecting key parameters via grid search and a thorough comparison of clustering distances and methods to ensure the robustness of the results. Further research can test the scalability of the methodology to larger datasets from various customer segments (households and large commercial) and locations with different weather and socioeconomic conditions. Funding: Supported by the Spanish Ministry of Science and Innovation under Projects PID2022-137680OB-C32 and PID2022-139187OB-I00.
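The external validation metric mentioned above, Cramér's V between cluster labels and a categorical survey attribute, can be sketched with SciPy and pandas; the column contents below are placeholders.

```python
# Minimal Cramér's V sketch for external clustering validation.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(cluster, attribute):
    """Association between cluster labels and a categorical customer attribute."""
    table = pd.crosstab(cluster, attribute)          # contingency table
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    r, c = table.shape
    return np.sqrt(chi2 / (n * (min(r, c) - 1)))     # V in [0, 1]

# Toy usage: 325 SME customers in 4 daily clusters vs. a 3-level survey attribute.
rng = np.random.default_rng(0)
print(cramers_v(rng.integers(0, 4, 325), rng.integers(0, 3, 325)))
```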
Metaheuristics are commonly used in various fields, including real-life problem-solving and engineering applications. The present work introduces a novel metaheuristic algorithm named the Artificial Circulatory System Algorithm (ACSA). It is inspired by the control of the circulatory system and mimics the behavior of the hormonal and neural regulators involved in this process. The work initially evaluates the effectiveness of the suggested approach on 16 two-dimensional test functions, identified as classical benchmark functions. The method was subsequently examined by application to 12 CEC 2022 benchmark problems of different complexities. Furthermore, the paper evaluates ACSA in comparison to 64 metaheuristic methods derived from different approaches, including evolutionary, human-based, physics-based, and swarm-based. Subsequently, a sequence of statistical tests was undertaken to examine the superiority of the suggested algorithm in comparison to the 7 most widely used algorithms in the existing literature. The results show that the ACSA strategy can quickly reach the global optimum, avoid getting trapped in local optima, and effectively maintain a balance between exploration and exploitation. According to post-hoc tests, ACSA statistically outperformed 42 algorithms; it also outperformed 9 algorithms quantitatively. The study concludes that ACSA offers competitive solutions in comparison to popular methods.
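A sketch of the kind of pairwise post-hoc comparison used in such studies, assuming SciPy; the Wilcoxon signed-rank test is a common (assumed) choice for paired per-problem results, and the fitness values below are toy data, not the paper's.

```python
# Minimal pairwise statistical comparison of two metaheuristics across problems.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
acsa_best  = rng.random(12)                       # best fitness per CEC 2022 problem (toy)
rival_best = acsa_best + rng.normal(0.05, 0.02, 12)

stat, p = wilcoxon(acsa_best, rival_best)         # paired, non-parametric test
print(f"Wilcoxon statistic={stat:.3f}, p={p:.4f}") # p < 0.05 -> significant difference
```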
In this paper, we study the existence of least energy solutions for the following nonlinear fractional Schrödinger–Poisson system

$$\begin{cases} (-\Delta)^{s}u + V(x)u + \phi u = f(u) & \text{in } \mathbb{R}^{3},\\ (-\Delta)^{t}\phi = u^{2} & \text{in } \mathbb{R}^{3}, \end{cases}$$

where $s \in (3/4, 1)$ and $t \in (0, 1)$. Under some assumptions on $V(x)$ and $f$, using a Nehari–Pohozaev identity and the arguments of Brezis–Nirenberg, the monotonicity trick, and a global compactness lemma, we prove the existence of a nontrivial least energy solution. Funding: Supported by NSFC (No. 12561023) and partly by the Provincial Natural Science Foundation of Jiangxi, China (Nos. 20232BAB201001, 20202BAB211004).
Fatigue crack growth is a critical phenomenon in engineering structures, accounting for a significant percentage of structural failures across various industries. Accurate prediction of crack initiation, propagation paths, and fatigue life is essential for ensuring structural integrity and optimizing maintenance schedules. This paper presents a comprehensive finite element approach for simulating two-dimensional fatigue crack growth under linear elastic conditions with adaptive mesh generation. The source code for the program was developed in Fortran 95 and compiled with Visual Fortran. To achieve high-fidelity simulations, the methodology integrates several key features: it employs an automatic, adaptive meshing technique that selectively refines the element density near the crack front and in areas of significant stress concentration. Specialized singular elements are used at the crack tip to ensure precise stress field representation. The direction of crack advancement is predicted using the maximum tangential stress criterion, while stress intensity factors are determined through either the displacement extrapolation technique or the J-integral method. The simulation models crack growth as a series of linear increments, with solution stability maintained by a consistent transfer algorithm and a crack relaxation method. The framework's effectiveness is demonstrated across various geometries and loading scenarios. Through rigorous validation against both experimental data and established numerical benchmarks, the approach is proven to accurately forecast crack trajectories and fatigue life. Furthermore, the detailed description of the program's architecture offers a foundational blueprint, serving as a valuable guide for researchers aiming to develop their own specialized software for fracture mechanics analysis. Funding: Funded by the Deanship of Graduate Studies and Scientific Research, Jazan University, Saudi Arabia, through Project number: JU-20250230-DGSSR-RP-2025.
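For intuition, the maximum tangential stress criterion mentioned above admits a standard textbook closed form (Erdogan–Sih); this is a sketch of that relation, and the program's own sign conventions may differ.

```latex
% Maximum tangential stress (MTS) criterion: the kink angle \theta_0 maximizing
% the tangential stress satisfies (K_I, K_II = mode-I and mode-II SIFs)
K_I \sin\theta_0 + K_{II}\,(3\cos\theta_0 - 1) = 0,
\qquad
\theta_0 = 2\arctan\!\left(\frac{K_I - \sqrt{K_I^{2} + 8K_{II}^{2}}}{4K_{II}}\right).
```

With this root the crack always kinks opposite to the sign of $K_{II}$, and $\theta_0 \to 0$ in the pure mode-I limit $K_{II} \to 0$.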
The increasing adoption of Industrial Internet of Things (IIoT) systems in smart manufacturing is driving up cyberattack numbers and pressing the requirement for intrusion detection systems (IDS) to be effective. However, existing datasets for IDS training often lack relevance to modern IIoT environments, limiting their applicability for research and development. To address this gap, this paper introduces the HiTar-2024 dataset, specifically designed for IIoT systems, which can be used by an IDS to detect imminent threats. HiTar-2024 was generated using the AREZZO simulator, which replicates realistic smart manufacturing scenarios. The generated dataset includes five distinct classes: Normal, Probing, Remote to Local (R2L), User to Root (U2R), and Denial of Service (DoS). Furthermore, comprehensive experiments with popular Machine Learning (ML) models using various classifiers, including BayesNet, Logistic, IBK, Multiclass, PART, and J48, demonstrate high accuracy, precision, recall, and F1-scores, exceeding 0.99 across all ML metrics. This result is reached thanks to a rigorously applied process, including data pre-processing, feature extraction, fixing the class imbalance problem, and using a test option for model robustness. This comprehensive approach emphasizes meticulous dataset construction through a complete dataset generation process, a careful labelling algorithm, and a sophisticated evaluation method, providing valuable insights to reinforce IIoT system security. Finally, the HiTar-2024 dataset is compared with other similar datasets in the literature, considering several factors such as data format, feature extraction tools, number of features, attack categories, number of instances, and ML metrics.
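The evaluation over the five classes can be sketched with a scikit-learn analogue of J48 (a C4.5-style decision tree); the synthetic features below stand in for the real dataset, which is not reproduced here.

```python
# Minimal multi-class evaluation sketch over the five HiTar-2024 classes.
import numpy as np
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

CLASSES = ["Normal", "Probing", "R2L", "U2R", "DoS"]
rng = np.random.default_rng(0)
X = rng.random((2000, 20))                       # placeholder flow/telemetry features
y = rng.integers(0, len(CLASSES), 2000)          # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
# Per-class precision, recall, and F1: the metrics reported in the paper.
print(classification_report(y_te, clf.predict(X_te), target_names=CLASSES))
```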
Acoustic-resolution photoacoustic microscopy (AR-PAM) suffers from degraded lateral resolution due to acoustic diffraction. Here, a resolution enhancement strategy for AR-PAM via a mean-reverting diffusion model is proposed to achieve the transition from acoustic resolution to optical resolution. By modeling the degradation process from a high-resolution image to a low-resolution AR-PAM image with stable Gaussian noise (i.e., the mean state), a mean-reverting diffusion model is trained to learn prior information about the data distribution. The learned prior is then employed to generate a high-resolution image from the AR-PAM image by iteratively sampling the noisy state. The performance of the proposed method was validated using simulated and in vivo experimental data under varying lateral resolutions and noise levels. The results show that an over 3.6-fold enhancement in lateral resolution was achieved. The image quality can be effectively improved, with a notable enhancement of ~66% in PSNR and ~480% in SSIM for in vivo data. Funding: Supported by the National Natural Science Foundation of China (62265011 and 62122033), the Jiangxi Provincial Natural Science Foundation (20224BAB212006 and 20232BAB202038), and the National Key Research and Development Program of China (2023YFF1204302).
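The reported quality metrics can be computed with scikit-image; this is a minimal sketch in which `reference` and `restored` are placeholder arrays standing in for the ground-truth and diffusion-model outputs.

```python
# Minimal PSNR/SSIM sketch, assuming scikit-image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((256, 256)).astype(np.float64)   # placeholder ground truth
restored  = reference + rng.normal(0, 0.05, (256, 256)) # placeholder model output

rng_val = reference.max() - reference.min()
psnr = peak_signal_noise_ratio(reference, restored, data_range=rng_val)
ssim = structural_similarity(reference, restored, data_range=rng_val)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```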
To guarantee safe and efficient tunneling of a tunnel boring machine (TBM), rapid and accurate judgment of the rock mass condition is essential. Based on fuzzy C-means clustering, this paper proposes a grouped machine learning method for predicting rock mass parameters. An elaborate data set on field rock mass is collected, which also matches field TBM tunneling. Target stratum samples are divided into several clusters by fuzzy C-means clustering, and multiple submodels are trained by samples in different clusters, with pretreated TBM tunneling data as input and rock mass parameter data as output. Each testing sample or newly encountered tunneling condition can then be predicted by the multiple submodels, weighted by the membership degree of the sample to each cluster. The proposed method has been realized with 100 training samples and verified with 30 testing samples collected from the C1 part of the Pearl Delta water resources allocation project. The average percentage error of uniaxial compressive strength and joint frequency (Jf) of the 30 testing samples predicted by the pure back-propagation (BP) neural network is 13.62% and 12.38%, while that predicted by the BP neural network combined with fuzzy C-means is 7.66% and 6.40%, respectively. In addition, by combining fuzzy C-means clustering, the prediction accuracies of support vector regression and random forest are also improved to different degrees, which demonstrates that fuzzy C-means clustering is helpful for improving the prediction accuracy of machine learning and thus has good applicability. Accordingly, the proposed method is valuable for predicting rock mass parameters during TBM tunneling. Funding: Natural Science Foundation of Shandong Province, Grant/Award Number: ZR202103010903; Doctoral Fund of Shandong Jianzhu University, Grant/Award Number: X21101Z.
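The membership-weighted prediction step can be sketched as follows, assuming the cluster centres and per-cluster submodels have already been trained; the fuzzifier m = 2 and all names here are illustrative, not the paper's exact setup.

```python
# Minimal sketch of fuzzy C-means membership-weighted ensemble prediction.
import numpy as np
from sklearn.linear_model import LinearRegression

def fcm_memberships(x, centers, m=2.0):
    """Standard FCM membership of sample x to each cluster centre."""
    d = np.linalg.norm(centers - x, axis=1) + 1e-12   # distances to centres
    inv = d ** (-2.0 / (m - 1.0))                     # u_k ~ d_k^{-2/(m-1)}
    return inv / inv.sum()

def grouped_predict(x, centers, submodels):
    """Weight each cluster's submodel prediction by the membership degree."""
    u = fcm_memberships(x, centers)
    preds = np.array([mdl.predict(x.reshape(1, -1))[0] for mdl in submodels])
    return float(u @ preds)   # e.g., predicted UCS or joint frequency

# Toy usage: two clusters, two linear submodels.
rng = np.random.default_rng(0)
X1, X2 = rng.random((50, 3)), rng.random((50, 3)) + 1.0
centers = np.stack([X1.mean(axis=0), X2.mean(axis=0)])
submodels = [LinearRegression().fit(Xc, Xc @ [1.0, 2.0, 3.0]) for Xc in (X1, X2)]
print(grouped_predict(rng.random(3), centers, submodels))
```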
It is possible for malicious operators to seize hold of electrical control systems, for instance the engine control unit of driverless vehicles, from various vectors, e.g., the autonomic control system, remote vehicle access, or human drivers. To mitigate potential risks, this paper provides an inaugural study proposing a theoretical framework in the physical, human and cyber triad. Its goal is to detect adversary control behaviors at each time point and protect control systems against malicious operations by integrating a variety of methods. This paper only proposes a theoretical framework that indicates possible threats; with the support of the framework, a security system can modestly reduce the risk. The development and implementation of the system are out of scope.
Global security threats have motivated organizations to adopt robust and reliable security systems to ensure the safety of individuals and assets. Biometric authentication systems offer a strong solution. However, choosing the best security system requires a structured decision-making framework, especially in complex scenarios involving multiple criteria. To address this problem, we develop a novel quantum spherical fuzzy technique for order preference by similarity to ideal solution (QSF-TOPSIS) methodology, integrating quantum mechanics principles and fuzzy theory. The proposed approach enhances decision-making accuracy, handles uncertainty, and incorporates criteria relationships. Criteria weights are determined using spherical fuzzy sets, and alternatives are ranked through the QSF-TOPSIS framework. This comprehensive multi-criteria decision-making (MCDM) approach is applied to identify the optimal gate security system for an organization, considering critical factors such as accuracy, cost, and reliability. Additionally, the study compares the proposed approach with other established MCDM methods. The results confirm the alignment of rankings across these methods, demonstrating the robustness and reliability of the QSF-TOPSIS framework. The study identifies the infrared recognition and identification system (IRIS), with a score value of 0.5280, as the optimal security system among the evaluated alternatives. This research contributes to the growing literature on quantum-enhanced decision-making models and offers a practical framework for solving complex, real-world problems involving uncertainty and ambiguity.
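For intuition, classical crisp TOPSIS, which QSF-TOPSIS extends with quantum spherical fuzzy sets, can be sketched in a few lines; the decision matrix, weights, and criteria senses below are toy values, and the quantum/fuzzy extension is not reproduced here.

```python
# Minimal classical TOPSIS sketch (crisp, not quantum spherical fuzzy).
import numpy as np

X = np.array([[0.90, 5.0, 0.85],         # rows: alternatives (e.g., security systems)
              [0.95, 8.0, 0.90],         # cols: criteria (accuracy, cost, reliability)
              [0.80, 3.0, 0.70]])
w = np.array([0.5, 0.2, 0.3])            # criteria weights (assumed)
benefit = np.array([True, False, True])  # cost is a "smaller is better" criterion

R = X / np.linalg.norm(X, axis=0)        # vector normalization
V = R * w                                # weighted normalized matrix
ideal      = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti_ideal, axis=1)
closeness = d_neg / (d_pos + d_neg)      # rank alternatives by descending closeness
print(np.argsort(-closeness))
```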
Classroom behavior recognition is a hot research topic that plays a vital role in assessing and improving the quality of classroom teaching. However, existing classroom behavior recognition methods struggle to reach high recognition accuracy on datasets with problems such as blurred scenes and inconsistent objects. To address this challenge, we propose an effective, lightweight object detector called the RFNet model (YOLO-FR). Specifically, for efficient multi-scale feature extraction, an effective feature pyramid shared convolution (FPSC) module was designed to improve feature extraction performance by leveraging convolutional layers with varying dilation rates applied to the input image in the backbone. Secondly, to address the problem of multi-scale variability in the scene, we design the RepGhost fusion Cross Stage Partial and Efficient Layer Aggregation Network (RGCSPELAN) to further improve network performance while reducing the amount of computation and the number of parameters. We conducted experimental evaluation on the SCB dataset3 and the STBD-08 dataset. The results indicate that, compared to the baseline model, the RFNet model increases mean average precision (mAP@50) from 69.6% to 71.0% on the SCB dataset3 and from 91.8% to 93.1% on the STBD-08 dataset. The RFNet approach achieves a precision of 68.6%, surpassing the baseline method (YOLOv11) by 3.3%, and achieves the minimal model size (4.9 M) on the SCB dataset3. Finally, comparisons with other algorithms confirm that RFNet accurately detects student behavior in complex classroom environments and is well suited for real-time, efficient recognition of classroom behaviors. Funding: Supported by the Fundamental Research Grant Scheme (FRGS) of Universiti Sains Malaysia, Research Number: FRGS/1/2024/ICT02/USM/02/1.
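The core idea behind FPSC, a shared convolution reused at several dilation rates, can be sketched in PyTorch; the layer sizes, fusion step, and every name below are illustrative guesses, not the paper's architecture.

```python
# Minimal PyTorch sketch: one shared 3x3 kernel applied at multiple dilation rates.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiDilationBlock(nn.Module):
    def __init__(self, channels, dilations=(1, 2, 3)):
        super().__init__()
        self.dilations = dilations
        # A single shared 3x3 convolution, reused at every dilation rate.
        self.shared = nn.Conv2d(channels, channels, kernel_size=3, bias=False)
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        outs = []
        for d in self.dilations:
            # Same weights, different dilation; padding=d keeps spatial size.
            outs.append(F.conv2d(x, self.shared.weight, padding=d, dilation=d))
        return self.fuse(torch.cat(outs, dim=1))

x = torch.randn(1, 64, 80, 80)
print(MultiDilationBlock(64)(x).shape)   # torch.Size([1, 64, 80, 80])
```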
Accurately modeling real network dynamics is a grand challenge in network science. Network dynamics arise from node interactions, which are shaped by network topology. Real networks tend to exhibit compact or highly optimized topologies, but key problems arise: how to compress a network to best enhance its compactness, and what the compression limit of a network reflects. We abstract the topological compression of complex networks as a dynamic process of making them more compact and propose the local compression modulus, which plays a key role in the effective compression evolution of networks. Subsequently, we identify topological compressibility, a general property of complex networks that characterizes the extent to which a network can be compressed, and provide its approximate quantification. We anticipate that our findings and the established theory will provide valuable insights into both the dynamics and various applications of complex networks.
The rapid advancements in distributed generation technologies, the widespread adoption of distributed energy resources, and the integration of 5G technology have spurred sharing-economy businesses within the electricity sector. Revolutionary technologies such as blockchain, 5G connectivity, and Internet of Things (IoT) devices have facilitated peer-to-peer distribution and real-time response to fluctuations in supply and demand. Nevertheless, sharing electricity within a smart community presents numerous challenges, including intricate design considerations, equitable allocation, and accurate forecasting due to the lack of well-organized temporal parameters. To address these challenges, the proposed system focuses on sharing surplus electricity within the smart community. The proposed system comprises five main phases. In phase 1, we develop a model to forecast the energy consumption of appliances using Long Short-Term Memory (LSTM) integrated with an attention module. In phase 2, based on the predicted energy consumption, we design a smart scheduler with an attention-induced Genetic Algorithm (GA) to schedule the appliances to reduce energy consumption. In phase 3, a dynamic Feed-in Tariff (dFIT) algorithm makes real-time tariff adjustments using LSTM for demand prediction and SHapley Additive exPlanations (SHAP) values to improve model transparency. In phase 4, the energy saved from solar systems and smart scheduling is shared with the community grid. Finally, in phase 5, SDP security ensures the integrity and confidentiality of shared energy data. To evaluate the performance of energy sharing and scheduling for houses with and without solar support, we simulated the above phases using data obtained from the energy consumption of 17 household appliances in our IoT laboratory. The simulation results show that the proposed scheme reduces energy consumption and ensures secure and efficient distribution with peers, promoting more sustainable energy management and a resilient smart community.
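The phase-1 forecaster can be sketched with Keras; the window length and layer sizes are assumptions, the random series stands in for appliance data, and the attention module is omitted here (a plain LSTM is used) for brevity.

```python
# Minimal sketch of a windowed LSTM consumption forecaster (attention omitted).
import numpy as np
import tensorflow as tf

WINDOW = 24  # past 24 hourly readings predict the next one (assumed horizon)

def make_windows(series, window=WINDOW):
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y          # shape (n, window, 1) for the LSTM

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),       # next-hour consumption
])
model.compile(optimizer="adam", loss="mse")

series = np.random.rand(1000).astype("float32")   # placeholder for appliance data
X, y = make_windows(series)
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```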
Effective water distribution and transparency are critical to maintaining trustworthy urban infrastructure. With improved control systems in place to monitor leakage, pressure variability, and energy use, issues that previously went unnoticed are now being recognized. This paper presents a hybrid framework that combines Multi-Agent Deep Reinforcement Learning (MADRL) with Shapley Additive Explanations (SHAP)-based Explainable AI (XAI) for adaptive and interpretable water resource management. In the methodology, the agents perform decentralized learning of control policies for pumps and valves based on real-time network states, while SHAP provides human-understandable explanations of the agents' decisions. The framework has been validated on five diverse datasets, three of which are real-world scenarios involving actual water consumption from NYC and Alicante, with the other two being simulation-based standards such as LeakDB and the Water Distribution System Anomaly (WDSA) network. Empirical results demonstrate that the MADRL-SHAP hybrid system reduces water loss by up to 32%, improves energy efficiency by up to 25%, and maintains pressure stability between 91% and 93%, thereby outperforming traditional rule-based control, single-agent Deep Reinforcement Learning (DRL), and XGBoost-SHAP baselines. Furthermore, SHAP-based interpretation brings transparency to the proposed model, with the average explanation consistency across all prediction models reaching 88%, reinforcing the trustworthiness of the system's decision-making and empowering utility operators to derive actionable insights from the model. The proposed framework addresses the critical challenges of smart water distribution.
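The SHAP explanation step, shown here for the XGBoost-SHAP baseline rather than the MADRL agents, can be sketched with the shap and xgboost packages; the features, labels, and model settings below are placeholders.

```python
# Minimal SHAP sketch for a tree-model baseline; data are toy placeholders.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((500, 4))                                       # e.g., pressure, flow, hour, demand
y = (X @ np.array([0.5, -0.2, 0.1, 0.6]) > 0.5).astype(int)    # toy leak label

model = xgb.XGBClassifier(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)        # fast exact SHAP for tree models
shap_values = explainer.shap_values(X)       # per-feature contribution per sample
print(np.abs(shap_values).mean(axis=0))      # global importance summary
```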
基金Supported by the Austrian Science Fund(FWF),No.KLI 429-B13 to Vécsei A
文摘AIM: To further improve the endoscopic detection of intestinal mucosa alterations due to celiac disease(CD).METHODS: We assessed a hybrid approach based on the integration of expert knowledge into the computerbased classification pipeline. A total of 2835 endoscopic images from the duodenum were recorded in 290 children using the modified immersion technique(MIT). These children underwent routine upper endoscopy for suspected CD or non-celiac upper abdominal symptoms between August 2008 and December 2014. Blinded to the clinical data and biopsy results, three medical experts visually classified each image as normal mucosa(Marsh-0) or villous atrophy(Marsh-3). The experts' decisions were further integrated into state-of-the-arttexture recognition systems. Using the biopsy results as the reference standard, the classification accuracies of this hybrid approach were compared to the experts' diagnoses in 27 different settings.RESULTS: Compared to the experts' diagnoses, in 24 of 27 classification settings(consisting of three imaging modalities, three endoscopists and three classification approaches), the best overall classification accuracies were obtained with the new hybrid approach. In 17 of 24 classification settings, the improvements achieved with the hybrid approach were statistically significant(P < 0.05). Using the hybrid approach classification accuracies between 94% and 100% were obtained. Whereas the improvements are only moderate in the case of the most experienced expert, the results of the less experienced expert could be improved significantly in 17 out of 18 classification settings. Furthermore, the lowest classification accuracy, based on the combination of one database and one specific expert, could be improved from 80% to 95%(P < 0.001).CONCLUSION: The overall classification performance of medical experts, especially less experienced experts, can be boosted significantly by integrating expert knowledge into computer-aided diagnosis systems.
文摘BACKGROUND It was shown in previous studies that high definition endoscopy,high magnification endoscopy and image enhancement technologies,such as chromoendoscopy and digital chromoendoscopy[narrow-band imaging(NBI),iScan]facilitate the detection and classification of colonic polyps during endoscopic sessions.However,there are no comprehensive studies so far that analyze which endoscopic imaging modalities facilitate the automated classification of colonic polyps.In this work,we investigate the impact of endoscopic imaging modalities on the results of computer-assisted diagnosis systems for colonic polyp staging.AIM To assess which endoscopic imaging modalities are best suited for the computerassisted staging of colonic polyps.METHODS In our experiments,we apply twelve state-of-the-art feature extraction methods for the classification of colonic polyps to five endoscopic image databases of colonic lesions.For this purpose,we employ a specifically designed experimental setup to avoid biases in the outcomes caused by differing numbers of images per image database.The image databases were obtained using different imaging modalities.Two databases were obtained by high-definition endoscopy in combination with i-Scan technology(one with chromoendoscopy and one without chromoendoscopy).Three databases were obtained by highmagnification endoscopy(two databases using narrow band imaging and one using chromoendoscopy).The lesions are categorized into non-neoplastic and neoplastic according to the histological diagnosis.RESULTS Generally,it is feature-dependent which imaging modalities achieve high results and which do not.For the high-definition image databases,we achieved overall classification rates of up to 79.2%with chromoendoscopy and 88.9%without chromoendoscopy.In the case of the database obtained by high-magnification chromoendoscopy,the classification rates were up to 81.4%.For the combination of high-magnification endoscopy with NBI,results of up to 97.4%for one database and up to 84%for the other were achieved.Non-neoplastic lesions were classified more accurately in general than non-neoplastic lesions.It was shown that the image recording conditions highly affect the performance of automated diagnosis systems and partly contribute to a stronger effect on the staging results than the used imaging modality.CONCLUSION Chromoendoscopy has a negative impact on the results of the methods.NBI is better suited than chromoendoscopy.High-definition and high-magnification endoscopy are equally suited.
基金The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code:(22UQU4310373DSR03).
文摘Early detection of lung cancer can help for improving the survival rate of the patients.Biomedical imaging tools such as computed tomography(CT)image was utilized to the proper identification and positioning of lung cancer.The recently developed deep learning(DL)models can be employed for the effectual identification and classification of diseases.This article introduces novel deep learning enabled CAD technique for lung cancer using biomedical CT image,named DLCADLC-BCT technique.The proposed DLCADLC-BCT technique intends for detecting and classifying lung cancer using CT images.The proposed DLCADLC-BCT technique initially uses gray level co-occurrence matrix(GLCM)model for feature extraction.Also,long short term memory(LSTM)model was applied for classifying the existence of lung cancer in the CT images.Moreover,moth swarm optimization(MSO)algorithm is employed to optimally choose the hyperparameters of the LSTM model such as learning rate,batch size,and epoch count.For demonstrating the improved classifier results of the DLCADLC-BCT approach,a set of simulations were executed on benchmark dataset and the outcomes exhibited the supremacy of the DLCADLC-BCT technique over the recent approaches.
基金funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Fast-track Research Funding Program.
文摘Limbal Stem Cell Deficiency(LSCD)is an eye disease that can cause corneal opacity and vascularization.In its advanced stage it can lead to a degree of visual impairment.It involves the changing in the semispherical shape of the cornea to a drooping shape to downwards direction.LSCD is hard to be diagnosed at early stages.The color and texture of the cornea surface can provide significant information about the cornea affected by LSCD.Parameters such as shape and texture are very crucial to differentiate normal from LSCD cornea.Although several medical approaches exist,most of them requires complicated procedure and medical devices.Therefore,in this paper,we pursued the development of a LSCD detection technique(LDT)utilizing image processing methods.Early diagnosis of LSCD is very crucial for physicians to arrange for effective treatment.In the proposed technique,we developed a method for LSCD detection utilizing frontal eye images.A dataset of 280 eye images of frontal and lateral LSCD and normal patients were used in this research.First,the cornea region of both frontal and lateral images is segmented,and the geometric features are extracted through the automated active contour model and the spline curve.While the texture features are extracted using the feature selection algorithm.The experimental results exhibited that the combined features of the geometric and texture will exhibit accuracy of 95.95%,sensitivity of 97.91% and specificity of 94.05% with the random forest classifier of n=40.As a result,this research developed a Limbal stem cell deficiency detection system utilizing features’fusion using image processing techniques for frontal and lateral digital images of the eyes.
基金N.I.R.R.and K.I.M.have received a grant from the Malaysian Ministry of Higher Education.Grant number:203/PKOMP/6712025,http://portal.mygrants.gov.my/main.php.
文摘This study offers a framework for a breast cancer computer-aided treat-ment prediction(CATP)system.The rising death rate among women due to breast cancer is a worldwide health concern that can only be addressed by early diagno-sis and frequent screening.Mammography has been the most utilized breast ima-ging technique to date.Radiologists have begun to use computer-aided detection and diagnosis(CAD)systems to improve the accuracy of breast cancer diagnosis by minimizing human errors.Despite the progress of artificial intelligence(AI)in the medical field,this study indicates that systems that can anticipate a treatment plan once a patient has been diagnosed with cancer are few and not widely used.Having such a system will assist clinicians in determining the optimal treatment plan and avoid exposing a patient to unnecessary hazardous treatment that wastes a significant amount of money.To develop the prediction model,data from 336,525 patients from the SEER dataset were split into training(80%),and testing(20%)sets.Decision Trees,Random Forest,XGBoost,and CatBoost are utilized with feature importance to build the treatment prediction model.The best overall Area Under the Curve(AUC)achieved was 0.91 using Random Forest on the SEER dataset.
文摘Proactive Semantic Interference (PSI) and failure to recover from PSI (frPSI), are novel constructs assessed by the LASSI-L. These measures are sensitive to cognitive changes in early Mild Cognitive Impairment (MCI) and preclinical AD determined by Aβ load using PET. The goal of this study was to compare a new computerized version of the LASSI-L (LASSI-Brief Computerized) to the standard paper-and-pencil version of the test. In this study, we examined 110 cognitively unimpaired (CU) older adults and 79 with amnestic MCI (aMCI) who were administered the paper-and-pencil form of the LASSI-L. Their performance was compared with 62 CU older adults and 52 aMCI participants examined using the LASSI-BC. After adjustment for covariates (degree of initial learning, sex, education, and language of evaluation) both the standard and computerized versions distinguished between aMCI and CU participants. The performance of CU and aMCI groups using either form was relatively commensurate. Importantly, an optimal combination of Cued B2 recall and Cued B1 intrusions on the LASSI-BC yielded an area under the ROC curve of .927, a sensitivity of 92.3% and specificity of 88.1%, relative to an area under the ROC curve of .815, a sensitivity of 72.5%, and a specificity of 79.1% obtained for the paper-and-pencil LASSI-L. Overall, the LASSI-BC was comparable, and in some ways, superior to the paper-and-pencil LASSI-L. Advantages of the LASSI-BC include a more standardized administration, suitability for remote assessment, and an automated scoring mechanism that can be verified by a built-in audio recording of responses.
文摘Deep learning-based approaches are applied successfully in manyfields such as deepFake identification,big data analysis,voice recognition,and image recognition.Deepfake is the combination of deep learning in fake creation,which states creating a fake image or video with the help of artificial intelligence for political abuse,spreading false information,and pornography.The artificial intel-ligence technique has a wide demand,increasing the problems related to privacy,security,and ethics.This paper has analyzed the features related to the computer vision of digital content to determine its integrity.This method has checked the computer vision features of the image frames using the fuzzy clustering feature extraction method.By the proposed deep belief network with loss handling,the manipulation of video/image is found by means of a pairwise learning approach.This proposed approach has improved the accuracy of the detection rate by 98%on various datasets.
基金supported by the Spanish Ministry of Science and Innovation under Projects PID2022-137680OB-C32 and PID2022-139187OB-I00.
文摘Customer segmentation according to load-shape profiles using smart meter data is an increasingly important application to vital the planning and operation of energy systems and to enable citizens’participation in the energy transition.This study proposes an innovative multi-step clustering procedure to segment customers based on load-shape patterns at the daily and intra-daily time horizons.Smart meter data is split between daily and hourly normalized time series to assess monthly,weekly,daily,and hourly seasonality patterns separately.The dimensionality reduction implicit in the splitting allows a direct approach to clustering raw daily energy time series data.The intraday clustering procedure sequentially identifies representative hourly day-unit profiles for each customer and the entire population.For the first time,a step function approach is applied to reduce time series dimensionality.Customer attributes embedded in surveys are employed to build external clustering validation metrics using Cramer’s V correlation factors and to identify statistically significant determinants of load-shape in energy usage.In addition,a time series features engineering approach is used to extract 16 relevant demand flexibility indicators that characterize customers and corresponding clusters along four different axes:available Energy(E),Temporal patterns(T),Consistency(C),and Variability(V).The methodology is implemented on a real-world electricity consumption dataset of 325 Small and Medium-sized Enterprise(SME)customers,identifying 4 daily and 6 hourly easy-to-interpret,well-defined clusters.The application of the methodology includes selecting key parameters via grid search and a thorough comparison of clustering distances and methods to ensure the robustness of the results.Further research can test the scalability of the methodology to larger datasets from various customer segments(households and large commercial)and locations with different weather and socioeconomic conditions.
文摘Metaheuristics are commonly used in various fields,including real-life problem-solving and engineering applications.The present work introduces a novel metaheuristic algorithm named the Artificial Circulatory System Algorithm(ACSA).The control of the circulatory system inspires it and mimics the behavior of hormonal and neural regulators involved in this process.The work initially evaluates the effectiveness of the suggested approach on 16 two-dimensional test functions,identified as classical benchmark functions.The method was subsequently examined by application to 12 CEC 2022 benchmark problems of different complexities.Furthermore,the paper evaluates ACSA in comparison to 64 metaheuristic methods that are derived from different approaches,including evolutionary,human,physics,and swarm-based.Subsequently,a sequence of statistical tests was undertaken to examine the superiority of the suggested algorithm in comparison to the 7 most widely used algorithms in the existing literature.The results show that the ACSA strategy can quickly reach the global optimum,avoid getting trapped in local optima,and effectively maintain a balance between exploration and exploitation.ACSA outperformed 42 algorithms statistically,according to post-hoc tests.It also outperformed 9 algorithms quantitatively.The study concludes that ACSA offers competitive solutions in comparison to popüler methods.
基金Supported by NSFC(No.12561023)partly by the Provincial Natural Science Foundation of Jiangxi,China(Nos.20232BAB201001,20202BAB211004)。
文摘In this paper,we study the existence of least energy solutions for the following nonlinear fractional Schrodinger–Poisson system{(−∆)^(s)u+V(x)u+φu=f(u)in R^(3),(−∆)^(t)φ=u^(2)in R^(3),where s∈(3/4,1),t∈(0,1).Under some assumptions on V(x)and f,using Nehari–Pohozaev identity and the arguments of Brezis–Nirenberg,the monotonic trick and global compactness lemma,we prove the existence of a nontrivial least energy solution.
基金funding of the Deanship of Graduate Studies and Scientific Research,Jazan University,Saudi Arabia,through Project number:JU-20250230-DGSSR-RP-2025.
文摘Fatigue crack growth is a critical phenomenon in engineering structures,accounting for a significant percentage of structural failures across various industries.Accurate prediction of crack initiation,propagation paths,and fatigue life is essential for ensuring structural integrity and optimizing maintenance schedules.This paper presents a comprehensive finite element approach for simulating two-dimensional fatigue crack growth under linear elastic conditionswith adaptivemesh generation.The source code for the programwas developed in Fortran 95 and compiled with Visual Fortran.To achieve high-fidelity simulations,the methodology integrates several key features:it employs an automatic,adaptive meshing technique that selectively refines the element density near the crack front and areas of significant stress concentration.Specialized singular elements are used at the crack tip to ensure precise stress field representation.The direction of crack advancement is predicted using the maximum tangential stress criterion,while stress intensity factors are determined through either the displacement extrapolation technique or the J-integral method.The simulation models crack growth as a series of linear increments,with solution stability maintained by a consistent transfer algorithm and a crack relaxation method.The framework’s effectiveness is demonstrated across various geometries and loading scenarios.Through rigorous validation against both experimental data and established numerical benchmarks,the approach is proven to accurately forecast crack trajectories and fatigue life.Furthermore,the detailed description of the program’s architecture offers a foundational blueprint,serving as a valuable guide for researchers aiming to develop their specialized software for fracture mechanics analysis.
文摘The increasing adoption of Industrial Internet of Things(IIoT)systems in smart manufacturing is leading to raise cyberattack numbers and pressing the requirement for intrusion detection systems(IDS)to be effective.However,existing datasets for IDS training often lack relevance to modern IIoT environments,limiting their applicability for research and development.To address the latter gap,this paper introduces the HiTar-2024 dataset specifically designed for IIoT systems.As a consequence,that can be used by an IDS to detect imminent threats.Likewise,HiTar-2024 was generated using the AREZZO simulator,which replicates realistic smart manufacturing scenarios.The generated dataset includes five distinct classes:Normal,Probing,Remote to Local(R2L),User to Root(U2R),and Denial of Service(DoS).Furthermore,comprehensive experiments with popular Machine Learning(ML)models using various classifiers,including BayesNet,Logistic,IBK,Multiclass,PART,and J48 demonstrate high accuracy,precision,recall,and F1-scores,exceeding 0.99 across all ML metrics.The latter result is reached thanks to the rigorous applied process to achieve this quite good result,including data pre-processing,features extraction,fixing the class imbalance problem,and using a test option for model robustness.This comprehensive approach emphasizes meticulous dataset construction through a complete dataset generation process,a careful labelling algorithm,and a sophisticated evaluation method,providing valuable insights to reinforce IIoT system security.Finally,the HiTar-2024 dataset is compared with other similar datasets in the literature,considering several factors such as data format,feature extraction tools,number of features,attack categories,number of instances,and ML metrics.
基金pported by the National Natural Science Foundation of China(62265011 and 62122033)Jiangxi Provincial Natural Science Foundation(20224BAB212006 and 20232BAB 202038)National Key Research and Develop-ment Program of China(2023YFF1204302)。
文摘Acoustic-resolution photoacoustic microscopy(AR-PAM)suffers from degraded lateral resolution due to acoustic diffraction.Here,a resolution enhancement strategy for AR-PAM via a mean-reverting diffusion model was proposed to achieve the transition from acoustic resolution to optical resolution.By modeling the degradation process from high-resolution image to low-resolution AR-PAM image with stable Gaussian noise(i.e.,mean state),a mean-reverting diffusion model is trained to learn prior information of the data distribution.Then the learned prior is employed to generate a high-resolution image from the AR-PAM image by iteratively sampling the noisy state.The performance of the proposed method was validated utilizing the simulated and in vivo experimental data under varying lateral resolutions and noise levels.The results show that an over 3.6-fold enhancement in lateral resolution was achieved.The image quality can be effectively improved,with a notable enhancement of∼66%in PSNR and∼480%in SSIM for in vivo data.
基金Natural Science Foundation of Shandong Province,Grant/Award Number:ZR202103010903Doctoral Fund of Shandong Jianzhu University,Grant/Award Number:X21101Z。
文摘To guarantee safe and efficient tunneling of a tunnel boring machine(TBM),rapid and accurate judgment of the rock mass condition is essential.Based on fuzzy C-means clustering,this paper proposes a grouped machine learning method for predicting rock mass parameters.An elaborate data set on field rock mass is collected,which also matches field TBM tunneling.Meanwhile,target stratum samples are divided into several clusters by fuzzy C-means clustering,and multiple submodels are trained by samples in different clusters with the input of pretreated TBM tunneling data and the output of rock mass parameter data.Each testing sample or newly encountered tunneling condition can be predicted by multiple submodels with the weight of the membership degree of the sample to each cluster.The proposed method has been realized by 100 training samples and verified by 30 testing samples collected from the C1 part of the Pearl Delta water resources allocation project.The average percentage error of uniaxial compressive strength and joint frequency(Jf)of the 30 testing samples predicted by the pure back propagation(BP)neural network is 13.62%and 12.38%,while that predicted by the BP neural network combined with fuzzy C-means is 7.66%and6.40%,respectively.In addition,by combining fuzzy C-means clustering,the prediction accuracies of support vector regression and random forest are also improved to different degrees,which demonstrates that fuzzy C-means clustering is helpful for improving the prediction accuracy of machine learning and thus has good applicability.Accordingly,the proposed method is valuable for predicting rock mass parameters during TBM tunneling.
Abstract: Malicious operators can seize control of electrical control systems, for instance, the engine control unit of a driverless vehicle, through various vectors, e.g., the autonomic control system, remote vehicle access, or human drivers. To mitigate these potential risks, this paper provides an inaugural study proposing a theoretical framework spanning the physical, human, and cyber triad. Its goal is, at each point in time, to detect adversarial control behaviors and protect control systems against malicious operations by integrating a variety of methods. The paper proposes only a theoretical framework intended to indicate possible threats; with its support, a security system can modestly reduce the risk. The development and implementation of such a system are out of scope.
Abstract: Global security threats have motivated organizations to adopt robust and reliable security systems to ensure the safety of individuals and assets, and biometric authentication systems offer a strong solution. However, choosing the best security system requires a structured decision-making framework, especially in complex scenarios involving multiple criteria. To address this problem, we develop a novel quantum spherical fuzzy technique for order preference by similarity to ideal solution (QSF-TOPSIS) methodology, integrating quantum mechanics principles and fuzzy theory. The proposed approach enhances decision-making accuracy, handles uncertainty, and incorporates relationships among criteria. Criteria weights are determined using spherical fuzzy sets, and alternatives are ranked through the QSF-TOPSIS framework. This comprehensive multi-criteria decision-making (MCDM) approach is applied to identify the optimal gate security system for an organization, considering critical factors such as accuracy, cost, and reliability. The study also compares the proposed approach with other established MCDM methods; the resulting rankings align across methods, demonstrating the robustness and reliability of the QSF-TOPSIS framework. The study identifies the infrared recognition and identification system (IRIS) as the most effective security system among the evaluated alternatives, with a score of 0.5280. This research contributes to the growing literature on quantum-enhanced decision-making models and offers a practical framework for solving complex, real-world problems involving uncertainty and ambiguity.
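Since the abstract builds on TOPSIS, the crisp core of that method may help readers place the QSF extension. The sketch below implements classical TOPSIS only; the quantum spherical fuzzy weighting is not reproduced, and the decision matrix and weights are invented for illustration.

```python
import numpy as np

def topsis(D, w, benefit):
    """Classical TOPSIS. D: (alternatives x criteria) matrix; w: weights summing to 1;
    benefit[j]: True for benefit criteria, False for cost criteria."""
    R = D / np.linalg.norm(D, axis=0)            # vector-normalize each criterion column
    V = R * w                                    # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)     # distance to anti-ideal solution
    return d_neg / (d_pos + d_neg)               # closeness score; higher is better

# e.g., three security systems scored on accuracy (benefit), cost (cost), reliability (benefit)
D = np.array([[0.95, 120.0, 0.90], [0.92, 80.0, 0.93], [0.97, 150.0, 0.88]])
print(topsis(D, np.array([0.5, 0.2, 0.3]), np.array([True, False, True])))
```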
Funding: Supported by the Fundamental Research Grant Scheme (FRGS) of Universiti Sains Malaysia, Research Number: FRGS/1/2024/ICT02/USM/02/1.
Abstract: Classroom behavior recognition is a hot research topic that plays a vital role in assessing and improving the quality of classroom teaching. However, existing classroom behavior recognition methods struggle to achieve high recognition accuracy on datasets with problems such as blurred scenes and inconsistent objects. To address this challenge, we propose an effective, lightweight object detector called the RFNet model (YOLO-FR). Specifically, for efficient multi-scale feature extraction, an effective feature pyramid shared convolution (FPSC) module was designed to improve feature extraction by applying convolutional layers with varying dilation rates to the input image in the backbone. Secondly, to address multi-scale variability in the scene, we design the RepGhost fusion Cross Stage Partial and Efficient Layer Aggregation Network (RGCSPELAN) to further improve network performance while reducing computation and parameter count. We evaluate the model experimentally on the SCB dataset3 and the STBD-08 dataset. The results indicate that, compared to the baseline model, RFNet increases mean average precision (mAP@50) from 69.6% to 71.0% on the SCB dataset3 and from 91.8% to 93.1% on the STBD-08 dataset. RFNet achieves a precision of 68.6%, surpassing the baseline method (YOLOv11) by 3.3%, while achieving the smallest model size (4.9 M) on the SCB dataset3. Finally, comparisons with other algorithms confirm that RFNet accurately detects student behavior in complex classroom environments and is well suited for real-time, efficient recognition of classroom behaviors.
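The abstract does not specify the FPSC layer layout, so the following PyTorch sketch is only one speculative reading of "shared convolution applied at varying dilation rates": a single 3x3 kernel reused at several dilations, with a 1x1 fusion convolution. It should not be taken as the actual YOLO-FR module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedDilatedConv(nn.Module):
    """Hypothetical FPSC-style block: one shared 3x3 kernel applied at several
    dilation rates, giving multi-scale context without multiplying parameters."""
    def __init__(self, ch, dilations=(1, 2, 3)):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(ch, ch, 3, 3))
        nn.init.kaiming_normal_(self.weight)
        self.dilations = dilations
        self.fuse = nn.Conv2d(ch * len(dilations), ch, 1)  # 1x1 fusion of the branches

    def forward(self, x):
        # padding=d with a 3x3 kernel at dilation d preserves the spatial size.
        feats = [F.conv2d(x, self.weight, padding=d, dilation=d)
                 for d in self.dilations]
        return self.fuse(torch.cat(feats, dim=1))

y = SharedDilatedConv(16)(torch.randn(1, 16, 64, 64))  # -> shape (1, 16, 64, 64)
```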
Funding: Supported in part by the National Natural Science Foundation of China (Grant No. 12371088), the Innovative Research Group Project of the Natural Science Foundation of Hunan Province of China (Grant No. 2024JJ1008), and in part by the Australian Research Council (ARC) through the Discovery Projects scheme (Grant No. DP220100580).
Abstract: Accurately modeling real network dynamics is a grand challenge in network science. Network dynamics arise from node interactions, which are shaped by network topology, and real networks tend to exhibit compact or highly optimized topologies. Two key problems then arise: how should a network be compressed to best enhance its compactness, and what does the compression limit of a network reflect? We abstract the topological compression of complex networks as a dynamic process of making them more compact and propose a local compression modulus that plays a key role in the effective compression evolution of networks. We then identify topological compressibility, a general property of complex networks that characterizes the extent to which a network can be compressed, and provide an approximate quantification of it. We anticipate that our findings and the established theory will provide valuable insights into both the dynamics and the various applications of complex networks.
Funding: Funded by the Kuwait Foundation for the Advancement of Sciences (KFAS) under project code PN23-15EM-1901.
Abstract: Rapid advancements in distributed generation technologies, the widespread adoption of distributed energy resources, and the integration of 5G technology have spurred sharing-economy businesses within the electricity sector. Revolutionary technologies such as blockchain, 5G connectivity, and Internet of Things (IoT) devices have enabled peer-to-peer distribution and real-time response to fluctuations in supply and demand. Nevertheless, sharing electricity within a smart community presents numerous challenges, including intricate design considerations, equitable allocation, and accurate forecasting in the absence of well-organized temporal parameters. To address these challenges, the proposed system focuses on sharing surplus electricity within the smart community and operates in five main phases. In phase 1, we develop a model to forecast appliance energy consumption using a Long Short-Term Memory (LSTM) network integrated with an attention module. In phase 2, based on the predicted energy consumption, we design a smart scheduler with an attention-induced Genetic Algorithm (GA) to schedule appliances and reduce energy consumption. In phase 3, a dynamic Feed-in Tariff (dFIT) algorithm makes real-time tariff adjustments, using LSTM for demand prediction and SHapley Additive exPlanations (SHAP) values to improve model transparency. In phase 4, the energy saved through solar systems and smart scheduling is shared with the community grid. Finally, in phase 5, SDP security ensures the integrity and confidentiality of shared energy data. To evaluate energy sharing and scheduling for houses with and without solar support, we simulated the above phases using energy-consumption data from 17 household appliances in our IoT laboratory. The simulation results show that the proposed scheme reduces energy consumption and ensures secure, efficient distribution among peers, promoting more sustainable energy management and a more resilient smart community.
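Phase 1's forecaster can be sketched as an LSTM encoder with simple additive attention pooling over time steps. The PyTorch sketch below is a minimal illustration with assumed dimensions (17 appliance channels, a 24-step history), not the authors' architecture.

```python
import torch
import torch.nn as nn

class AttnLSTMForecaster(nn.Module):
    """LSTM encoder with attention pooling over time, predicting next-step consumption."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)   # attention score per time step
        self.head = nn.Linear(hidden, 1)    # regression head

    def forward(self, x):                   # x: (batch, time, n_features)
        h, _ = self.lstm(x)                 # (batch, time, hidden)
        a = torch.softmax(self.score(h), dim=1)
        ctx = (a * h).sum(dim=1)            # attention-weighted context vector
        return self.head(ctx).squeeze(-1)

model = AttnLSTMForecaster(n_features=17)   # e.g., 17 appliance channels
pred = model(torch.randn(8, 24, 17))        # 24-step history -> next-step forecast
```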
Funding: Supported via funding from Prince Sattam bin Abdulaziz University, project number PSAU/2025/R/1446.
Abstract: Effective water distribution and transparency risk being undermined unless the integrity of urban infrastructure is maintained. With improved control systems in place to monitor leakage, pressure variability, and energy use, issues that previously went unnoticed are now being recognized. This paper presents a hybrid framework that combines Multi-Agent Deep Reinforcement Learning (MADRL) with SHapley Additive exPlanations (SHAP)-based Explainable AI (XAI) for adaptive and interpretable water resource management. In the methodology, agents learn decentralized control policies for pumps and valves based on real-time network states, while SHAP provides human-understandable explanations of the agents' decisions. The framework has been validated on five diverse datasets: three real-world scenarios involving actual water consumption from NYC and Alicante, and two simulation-based benchmarks, LeakDB and the Water Distribution System Anomaly (WDSA) network. Empirical results demonstrate that the MADRL-SHAP hybrid system reduces water loss by up to 32%, improves energy efficiency by up to 25%, and maintains pressure stability between 91% and 93%, thereby outperforming traditional rule-based control, single-agent Deep Reinforcement Learning (DRL), and XGBoost-SHAP baselines. Furthermore, SHAP-based interpretation brings transparency to the proposed model, with average explanation consistency across all prediction models reaching 88%, reinforcing the trustworthiness of the system's decision-making and empowering utility operators to derive actionable insights from the model. The proposed framework thus addresses critical challenges of smart water distribution.
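The SHAP side of the framework can be illustrated independently of the MADRL agents: the sketch below explains a surrogate gradient-boosted controller with a TreeExplainer. The feature set, synthetic data, and the XGBoost stand-in are assumptions for illustration; the paper's actual policy networks are not reproduced.

```python
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((500, 4))                       # e.g., pressure, flow, demand, hour (assumed)
y = 0.6 * X[:, 0] + 0.3 * X[:, 2] + 0.1 * rng.random(500)  # synthetic valve setting

model = xgb.XGBRegressor(n_estimators=100).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)         # per-feature contribution per sample

# Operators can audit which inputs drove each control decision:
print(np.abs(shap_values).mean(axis=0))        # global feature-importance ranking
```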