Deep learning-based systems for finger vein recognition have gained rising attention in recent years due to improved efficiency and enhanced security. The performance of existing CNN-based methods is limited by the poor generalization of learned features and the shortage of finger vein training images. To address these concerns, this work develops a simplified deep transfer learning-based framework for finger-vein recognition using an EfficientNet model with a self-attention mechanism. Data augmentation using various geometric transformations is employed to address the shortage of training data required for a deep learning model. The proposed model is tested using K-fold cross-validation on three publicly available datasets: HKPU, FVUSM, and SDUMLA. The developed network is also compared with other modern deep networks to check its effectiveness, and the proposed method is further compared with other existing finger vein recognition (FVR) methods. The experimental results show superior recognition accuracy compared to existing methods, and the developed method proves more effective, and simpler, at extracting robust features. The proposed EffAttenNet achieves an accuracy of 98.14% on HKPU, 99.03% on FVUSM, and 99.50% on SDUMLA.
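The K-fold evaluation protocol mentioned above can be sketched as follows. This is a minimal illustration of the general technique, not code from the paper; the function names and the idea of passing a per-fold evaluation callback are assumptions for illustration.

```python
def kfold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs partitioning n_samples into K folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    indices = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        test_idx = indices[start:start + size]
        train_idx = indices[:start] + indices[start + size:]
        yield train_idx, test_idx
        start += size

def cross_validate(evaluate_fold, n_samples, k=5):
    """Average the accuracy returned by evaluate_fold over all K folds."""
    scores = [evaluate_fold(tr, te) for tr, te in kfold_indices(n_samples, k)]
    return sum(scores) / len(scores)
```

Each fold serves once as the held-out test set, so every sample contributes to exactly one test evaluation and the averaged accuracy is less sensitive to a single lucky split.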
Energy-efficient communication is crucial for wireless sensor networks (WSNs), where energy consumption is constrained. Transmission and reception energy can be saved by applying network coding to many wireless communication systems. In this paper, we present a coded cooperation scheme that applies network coding to WSNs. In the scheme, the partner node forwards the combination of the source data and its own data instead of sending the source data alone. Both the system block error rate (BLER) and the energy performance are then evaluated. Experimental results show that the proposed scheme has higher energy efficiency: when the noise power spectral density is -171 dBm/Hz, the energy consumption of the coded cooperation scheme is 81.1% lower than that of the single-path scheme and 43.9% lower than that of the plain cooperation scheme for a target average BLER of 10^-2. As the channel condition worsens, the energy saving becomes more pronounced.
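The core idea of the scheme, forwarding a combination of two packets instead of one, can be illustrated with the simplest network code, a bitwise XOR. This is a generic sketch, not the paper's exact coding scheme:

```python
def xor_combine(a: bytes, b: bytes) -> bytes:
    """Bitwise-XOR two equal-length packets (the simplest network code)."""
    assert len(a) == len(b), "packets must have equal length"
    return bytes(x ^ y for x, y in zip(a, b))

# The partner transmits c = source ^ own in one slot instead of two
# separate transmissions; any receiver that already holds one of the
# two packets recovers the other with a single XOR.
```

Because one coded transmission replaces two plain ones, both transmission and reception energy are saved, which is the mechanism behind the reported gains.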
Organic room-temperature phosphorescence (RTP) materials are promising for bioimaging applications due to their tunable structures, excellent biocompatibility, and long-lived luminescence. However, developing highly efficient organic RTP materials for aqueous systems remains challenging, as organic phosphorescence is prone to quenching by dissolved oxygen in water. Herein, heteroaromatic carboxylic acids serve as guest ligands to construct a series of host-guest composites with nontoxic, dense EDTA-M (M = Ca, Mg, and Al) coordination polymers in water. These composites exhibit ultra-long pure RTP of the guest molecules, with a phosphorescence quantum yield up to 53% and a lifetime up to 589.7 ms, owing to the synergistic effect of a dual-network structure: a coordinatively cross-linked network of EDTA-M and a non-covalently bonded network formed by the ligands and water molecules. The phosphorescence intensity is more than three times that of a composite with a single coordination network. Notably, the dual-network configuration forms a rigid, dense structure that blocks the intrusion of external H2O and O2 molecules, avoiding phosphorescence quenching in water. As a result, the RTP of the composites remains unchanged after one month in water. Furthermore, nanoparticles fabricated from the composites and anionic surfactants are successfully applied to in vivo imaging in mice thanks to their stable RTP in water. This work provides a novel strategy for developing high-performance RTP materials in aqueous systems.
In wireless sensor networks (WSNs), survivability is a crucial issue that is greatly affected by energy efficiency. Solutions that satisfy application objectives while extending network lifetime are needed to address the severe energy constraints in WSNs. This paper presents an Adaptive Enhanced Grey Wolf Optimizer (AEGWO) for energy-efficient cluster head (CH) selection that mitigates the exploration-exploitation imbalance, preserves population diversity, and avoids the premature convergence inherent in the baseline GWO. AEGWO combines adaptive control of the search-pressure parameter to accelerate convergence without stagnation, a hybrid velocity-momentum update based on PSO dynamics, and an intelligent mutation operator to maintain population diversity. The search is guided by a multi-objective fitness function that maximizes residual energy, balances CH distribution, minimizes intra-cluster distance, favors proximity to the sink, and enhances coverage. Simulations on a 100-node homogeneous WSN compare AEGWO under identical conditions with LEACH, GWO, IGWO, PSO, WOA, and GA. AEGWO significantly improves stability and lifetime: it records the latest first, half, and last node deaths, higher residual energy, and lower communication overhead. The findings show that AEGWO provides sustainable energy management and better lifetime extension, making it a robust, flexible clustering protocol for large-scale WSNs.
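A multi-objective CH-selection fitness of the kind described can be sketched as a weighted sum of normalized criteria. The weights and the assumption that all inputs are pre-normalized to [0, 1] are illustrative choices, not values from the paper:

```python
def ch_fitness(residual_energy, intra_dist, sink_dist, coverage,
               w=(0.4, 0.2, 0.2, 0.2)):
    """Weighted multi-objective fitness for a CH candidate (higher is better).
    All inputs are assumed normalized to [0, 1]; weights are illustrative."""
    w1, w2, w3, w4 = w
    return (w1 * residual_energy      # maximize remaining energy
            + w2 * (1 - intra_dist)   # minimize intra-cluster distance
            + w3 * (1 - sink_dist)    # minimize distance to the sink
            + w4 * coverage)          # maximize coverage
```

Criteria to be minimized are folded in as (1 - value) so that a single scalar, to be maximized, guides the wolves' search.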
This paper aims to improve the energy efficiency (EE) of an integrated access and backhaul (IAB) aerial-terrestrial network, facilitating rapid and adjustable deployment of network infrastructure. This is challenging, as interference generated by backhaul and access links degrades network throughput, and the power imbalance between these links increases overall energy consumption. To this end, we jointly optimize aerial base station (ABS) deployment, user association, and downlink power allocation for both the terrestrial base station and the ABSs to maximize network EE. Specifically, using fractional programming, the EE maximization problem is transformed into a subtractive-form parametric problem and then decomposed into ABS deployment and resource allocation subproblems. A hybrid algorithm combining particle swarm optimization and simulated annealing is proposed to solve the ABS deployment subproblem, determining ABS spatial configurations and updating power allocation under a fixed user association. Meanwhile, a dynamic power allocation that responds to network load is designed to solve the resource allocation subproblem. Furthermore, considering the quality-of-service requirements of ground users and the transmit power constraints of base stations, a joint EE optimization algorithm is proposed to enhance network EE. Simulation results validate the effectiveness of the proposed methods in improving network EE, especially in scenarios with more deployed ABSs.
Sixth-generation (6G) networks will span multiple bands, including low-frequency, mid-frequency, millimeter wave, and terahertz bands, to meet various business requirements and networking scenarios. The dynamic complementarity of multiple bands is crucial for enhancing spectrum efficiency, reducing network energy consumption, and ensuring a consistent user experience. This paper surveys current research and the challenges associated with deploying multi-band integrated networks on existing infrastructure. An evolutionary path for integrated networking is then proposed, taking into account the maturity of emerging technologies and practical network deployment. The proposed design principles for 6G multi-band integrated networking aim to achieve on-demand networking objectives, while the architecture supports full-spectrum access and collaboration between high and low frequencies. In addition, potential key air-interface technologies and intelligent technologies for integrated networking are comprehensively discussed. This work provides a crucial basis for the subsequent standardization of 6G multi-band integrated networking technology.
Effective resource management in the Internet of Things (IoT) and fog computing is essential for efficient and scalable networks. However, existing methods often fail in dynamic, high-demand environments, leading to resource bottlenecks and increased energy consumption. This study addresses these limitations by proposing the Quantum-Inspired Adaptive Resource Management (QIARM) model, which introduces novel algorithms inspired by quantum principles for enhanced resource allocation. QIARM employs a quantum superposition-inspired technique for multi-state resource representation and an adaptive learning component that dynamically adjusts resources in real time. In addition, an energy-aware scheduling module minimizes power consumption by selecting optimal configurations based on energy metrics. Simulations were carried out over a 360-minute horizon with eight distinct scenarios. The proposed framework achieves up to 98% task offload success and reduces energy consumption by 20%, addressing critical challenges of scalability and efficiency in dynamic fog computing environments.
Sensitivity encoding (SENSE) is a parallel magnetic resonance imaging (MRI) reconstruction model that uses the sensitivity information of receiver coils to reconstruct images. Existing SENSE-based reconstruction algorithms usually use non-adaptive sparsifying transforms, which limits reconstruction accuracy. We therefore propose a new model for accurate parallel MRI reconstruction, called SOUPDIL-SENSE, which combines an L0-norm regularization term based on efficient sum-of-outer-products dictionary learning (SOUPDIL) with the SENSE model. The SOUPDIL-SENSE model is solved using variable splitting and the alternating direction method of multipliers. Experimental results on four human datasets show that the proposed algorithm effectively promotes image sparsity, removes noise and artifacts from the reconstructed images, and improves reconstruction accuracy.
Wireless sensor networks (WSNs), as a crucial component of the Internet of Things (IoT), are widely used in environmental monitoring, industrial control, and security surveillance. However, in practical deployments WSNs still face challenges such as inaccurate node clustering, low energy efficiency, and shortened network lifespan, which significantly limit their large-scale application. To address these issues, this paper proposes an Adaptive Chaotic Ant Colony Optimization algorithm (AC-ACO) that optimizes the energy utilization and system lifespan of WSNs. AC-ACO combines the path-planning capability of ant colony optimization (ACO) with the dynamic characteristics of chaotic maps and introduces an adaptive mechanism to enhance the algorithm's flexibility and adaptability. Efficient node clustering is achieved by dynamically adjusting the pheromone evaporation factor and heuristic weights, while a chaotic-map initialization strategy enhances population diversity and avoids premature convergence. To validate its performance, AC-ACO is compared with clustering methods such as Low-Energy Adaptive Clustering Hierarchy (LEACH), ACO, Particle Swarm Optimization (PSO), and a Genetic Algorithm (GA). Simulation results demonstrate that AC-ACO outperforms the compared algorithms in key metrics, including energy consumption, network lifetime, and communication delay, providing an efficient solution for improving energy efficiency and ensuring the long-term stable operation of WSNs.
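Two of the ingredients named above are easy to illustrate: chaotic-map initialization (here the logistic map, a common choice for such schemes, though the paper's exact map is not stated) and the pheromone evaporation step of ACO. Function names are illustrative:

```python
def logistic_map_init(n, x0=0.7, r=4.0):
    """Generate n values in [0, 1] from the logistic map x <- r*x*(1-x);
    r = 4 gives fully chaotic, well-spread sequences for population init."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

def evaporate(pheromone, rho):
    """One ACO evaporation step: tau <- (1 - rho) * tau for every edge.
    An adaptive scheme would vary rho over the run instead of fixing it."""
    return [(1 - rho) * t for t in pheromone]
```

The chaotic sequence spreads initial candidates across the search space more evenly than many pseudo-random draws, which is what helps delay premature convergence.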
Bird's-eye-view (BEV) perception has become a widely adopted approach in 3D object detection due to its spatial and dimensional consistency. However, the increasing complexity of neural network architectures has driven up training memory, limiting the scalability of model training. To address these challenges, we propose RevFB-BEV, a novel model based on a Reversible Swin Transformer (RevSwin) with Forward-Backward View Transformation (FBVT) and LiDAR-Guided Back Projection (LGBP). The RevSwin backbone employs a reversible architecture that minimizes training memory by recomputing intermediate activations. The FBVT module refines the BEV features extracted by forward projection, yielding denser and more precise camera BEV representations, and the LGBP module uses LiDAR BEV guidance during back projection to obtain more accurate camera BEV features. Extensive experiments on the nuScenes dataset demonstrate notable efficiency gains: our model achieves over a 4x reduction in training memory and a more than 12x decrease in single-backbone training memory, with the gains becoming even more pronounced for deeper architectures. RevFB-BEV also achieves 68.1 mAP (mean average precision) on the validation set and 68.9 mAP on the test set, nearly on par with the BEVFusion baseline, underscoring its effectiveness in resource-constrained scenarios.
In knowledge graph embedding, conventional approaches typically map entities and relations into continuous vector spaces. However, parameter efficiency becomes increasingly important for large-scale knowledge graphs containing vast numbers of entities and relations: resource-intensive embeddings increase computational cost and may limit scalability and adaptability in practical environments, such as low-resource settings and real-world applications. This paper explores a knowledge graph representation learning approach that leverages small reserved sets of entities and relations for parameter-efficient embedding. We introduce a hierarchical attention network designed to refine and maximize the representational quality of the embeddings by selectively focusing on these reserved sets, thereby reducing model complexity. Empirical evaluation shows that our model achieves high performance on the benchmark dataset with fewer parameters and smaller embedding dimensions, and ablation studies highlight the contribution of each component of the proposed hierarchical attention structure.
Cloud computing has become an essential technology for managing and processing large datasets, offering scalability, high availability, and fault tolerance. However, optimizing data replication across multiple data centers poses a significant challenge, especially when balancing opposing goals such as latency, storage cost, energy consumption, and network efficiency. This study introduces Dynamic Multi-Objective Gannet Optimization (DMGO), a dynamic optimization algorithm designed to enhance data replication efficiency in cloud environments. Unlike traditional static replication systems, DMGO adapts to variations in network conditions, system demand, and resource availability, using multi-objective optimization to balance data access latency, storage efficiency, and operational cost. DMGO continuously evaluates data center performance and adjusts the replication strategy in real time to maintain optimal system efficiency. Experimental evaluations in a simulated cloud environment demonstrate that DMGO significantly outperforms conventional static algorithms, achieving faster data access, lower storage overhead, reduced energy consumption, and improved scalability. The proposed methodology offers a robust, adaptable solution for modern cloud systems, ensuring efficient resource consumption while maintaining high performance.
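The balancing of opposing replication goals can be pictured as scoring candidate data centers by a weighted combination of normalized metrics and keeping the best k as replica sites. The metric names, weights, and dictionary layout below are illustrative assumptions, not DMGO itself:

```python
def select_replica_sites(centers, k, w_latency=0.5, w_cost=0.3, w_energy=0.2):
    """Rank data centers by a weighted sum of normalized latency, storage
    cost, and energy (lower is better) and return the k best candidates.
    A dynamic scheme would re-run this as the metrics change over time."""
    def score(c):
        return (w_latency * c["latency"]
                + w_cost * c["cost"]
                + w_energy * c["energy"])
    return sorted(centers, key=score)[:k]
```

Re-evaluating the scores periodically, as conditions drift, is what distinguishes a dynamic replication policy from a static one that fixes replica placement up front.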
Internet of Things (IoT) networks often suffer from early node failures and short lifespans due to energy limits, and traditional routing methods are not sufficient. This work proposes ACOGA, a new hybrid algorithm that combines Ant Colony Optimization (ACO) with a greedy algorithm: ACO finds efficient paths while the greedy component makes quick local decisions, improving energy use and performance. ACOGA outperforms the Hybrid Energy-Efficient (HEE) and Adaptive Lossless Data Compression (ALDC) algorithms. After 500 rounds, only 5% of ACOGA's nodes are dead, compared with 15% for HEE and 20% for ALDC. The network using ACOGA runs for 1200 rounds before the first node fails, whereas HEE lasts 900 rounds and ALDC only 850. ACOGA saves at least 15% more energy by distributing the load better, and it achieves a 98% packet delivery rate. The method works well in heterogeneous IoT networks such as Smart Water Management Systems (SWMS), where devices have different power levels and communication ranges. The proposed model was simulated in MATLAB, and the results show that it outperforms the existing models.
The deployment of multiple intelligent reflecting surfaces (IRSs) in blockage-prone millimeter wave (mmWave) communication networks has garnered considerable attention lately. Despite the remarkably low circuit power consumption per IRS element, the aggregate energy consumption becomes substantial if all elements of every IRS are turned on when many IRSs are deployed, lowering the overall energy efficiency (EE). To tackle this challenge, we propose a flexible and efficient approach that controls the status of each IRS element individually. Specifically, network EE is maximized by jointly optimizing the associations between base stations (BSs) and user equipments (UEs), transmit beamforming, the phase shifts of IRS elements, and the associations between individual IRS elements and UEs. The problem is addressed efficiently in two phases: first, the Gale-Shapley algorithm is applied for BS-UE association, followed by a block coordinate descent-based algorithm that iteratively solves the subproblems of active beamforming, phase shifts, and element-UE association. To reduce the enormous dimensionality of the optimization variables introduced by element-UE associations in large-scale IRS networks, we introduce an efficient algorithm for associating IRS elements with UEs. Numerical results show that the proposed element-wise control scheme improves EE by 34.24% compared to a network with an IRS-all-on scheme.
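The Gale-Shapley step used for BS-UE association is the classic deferred-acceptance matching. The sketch below assumes unit capacity per BS for brevity (a multi-UE BS would keep a bounded set of proposers instead); preference lists are hypothetical inputs:

```python
def gale_shapley(ue_prefs, bs_prefs):
    """Deferred acceptance: UEs propose to BSs in preference order; each BS
    tentatively keeps its most-preferred proposer. Returns {bs: ue}."""
    rank = {b: {u: i for i, u in enumerate(p)} for b, p in bs_prefs.items()}
    free = list(ue_prefs)            # UEs not yet matched
    nxt = {u: 0 for u in ue_prefs}   # index of next BS each UE proposes to
    match = {}                       # bs -> ue
    while free:
        u = free.pop()
        b = ue_prefs[u][nxt[u]]
        nxt[u] += 1
        if b not in match:
            match[b] = u                       # BS was free: accept
        elif rank[b][u] < rank[b][match[b]]:
            free.append(match[b])              # BS trades up, old UE re-enters
            match[b] = u
        else:
            free.append(u)                     # proposal rejected
    return match
```

The resulting association is stable: no BS-UE pair would both prefer each other over their assigned partners, which is why it serves as a principled first phase before the beamforming and phase-shift subproblems.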
Unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) has been deemed a promising solution for energy-constrained devices running smart applications with computation-intensive and latency-sensitive requirements, especially in infrastructure-limited areas or emergency scenarios. However, multi-UAV-assisted MEC networks remain largely unexplored. In this paper, dynamic trajectory optimization and computation offloading are studied in a multi-UAV-assisted MEC system where multiple UAVs fly over a target area along different trajectories to serve ground users. Considering dynamic channel conditions and random task arrivals, and jointly optimizing the UAVs' trajectories, user association, and subchannel assignment, we formulate the problem of minimizing the average long-term sum of user energy consumption. To handle both discrete and continuous variables, a hybrid-decision deep reinforcement learning (DRL)-based intelligent energy-efficient resource allocation and trajectory optimization algorithm, named HDRT, is proposed, in which a deep Q-network (DQN) processes the discrete variables and deep deterministic policy gradient (DDPG) handles the continuous ones. Simulation results show that the proposed HDRT algorithm converges quickly and outperforms other benchmarks in terms of user energy consumption and latency.
The rapid growth in available network bandwidth has directly contributed to an exponential increase in mobile data traffic, creating significant challenges for network energy consumption. With the extraordinary growth of mobile communications, expanding data traffic has led to massive grid power consumption and high operating expenditure (OPEX). Most current network designs struggle to manage massive amounts of data efficiently with little power, which degrades energy efficiency. An efficient mechanism is therefore needed to reduce power consumption when processing large amounts of data in network data centers. Powering the cloud radio access network (C-RAN) with renewable energy sources greatly reduces the need to purchase energy from the utility grid. In this paper, we propose a bandwidth-aware, hybrid-energy-powered C-RAN that improves throughput and energy efficiency (EE) by lowering grid usage. We examine the energy efficiency, spectral efficiency (SE), and average on-grid energy consumption, addressing the major challenges posed by the temporal and spatial variability of traffic and renewable energy generation across various network setups. A comprehensive simulation that varies the transmission bandwidth assesses the effectiveness of the proposed network, and the numerical findings support the efficacy of the suggested approach.
In rock engineering, natural cracks in rock masses subjected to external loads tend to initiate and propagate, leading to potential safety hazards. To investigate the effect of cracking behavior on the mechanical properties of rocks, the cracking processes of pre-cracked rocks have been extensively studied using numerical modeling. Peridynamics (PD) offers advantages over other numerical methods because it requires neither remeshing nor an external crack growth criterion. However, for modeling the cracking of pre-cracked rock under impact, current PD implementations lack generally applicable rock constitutive models and impact contact models, making it difficult to determine rock material parameters and to compute impact loads efficiently. This paper proposes a non-ordinary state-based peridynamics (NOSBPD) modeling method that integrates the Drucker-Prager (DP) plasticity model and an efficient contact model to address these problems. The DP plasticity model is embedded in the NOSBPD formulation, equipping it to accurately characterize the nonlinear stress-strain relationship of rocks. An efficient contact model between particles and meshes, essentially a coupling of PD with the finite element method (FEM), is designed to compute the impact loads. The effectiveness of the proposed method is verified by comparison with other numerical methods and experiments. The results indicate that the proposed method can effectively and accurately predict the 3D cracking processes of pre-cracked specimens under impact loading, and that the maximum principal stress is the key driver of wing crack formation in pre-cracked rocks.
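The Drucker-Prager yield criterion at the heart of the plasticity model can be evaluated directly from the stress invariants, f = alpha*I1 + sqrt(J2) - k, where I1 is the first invariant and J2 the second deviatoric invariant. The sketch below takes a principal stress state; the sample alpha and k values in the test are arbitrary, not calibrated rock parameters:

```python
import math

def drucker_prager_yield(stress, alpha, k):
    """Drucker-Prager yield function f = alpha*I1 + sqrt(J2) - k for a
    principal stress state (s1, s2, s3); f >= 0 indicates yielding."""
    s1, s2, s3 = stress
    i1 = s1 + s2 + s3                 # first stress invariant
    mean = i1 / 3.0                   # hydrostatic (mean) stress
    j2 = ((s1 - mean) ** 2 + (s2 - mean) ** 2 + (s3 - mean) ** 2) / 2.0
    return alpha * i1 + math.sqrt(j2) - k
```

Unlike the pressure-independent von Mises criterion, the alpha*I1 term makes yielding depend on confinement, which is what makes DP suitable for geomaterials such as rock.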
Personalized fine-tuning of large language models (LLMs) on edge devices is severely constrained by limited computational resources. Although split federated learning alleviates the on-device burden, its effectiveness diminishes in few-shot reasoning scenarios due to the low data efficiency of conventional supervised fine-tuning, which leads to excessive communication overhead. To address this, we propose Language-Empowered Split Fine-Tuning (LESFT), a framework that integrates split architectures with a contrastive-inspired fine-tuning paradigm. LESFT simultaneously learns from multiple logically equivalent but linguistically diverse reasoning chains, providing richer supervisory signals and improving data efficiency; this process-oriented training enables more effective reasoning adaptation from fewer samples. Extensive experiments show that LESFT consistently outperforms strong baselines such as SplitLoRA in task accuracy on GSM8K, CommonsenseQA, and AQUA_RAT, with the largest gains observed on Qwen2.5-3B. These results indicate that LESFT can effectively adapt large language models for reasoning tasks under the computational and communication constraints of edge environments.
Federated learning often experiences slow and unstable convergence due to data heterogeneity at the edge. This problem becomes more severe when the edge participation rate is low, as the information collected from different edge devices varies significantly; the resulting communication overhead further slows convergence. To address this challenge, we propose a simple yet effective federated learning framework that improves consistency among edge devices. The core idea is to cluster the lookahead gradients collected from edge devices on the cloud server to obtain a personalized momentum that steers local updates. In parallel, a global momentum is applied during model aggregation, enabling faster convergence while preserving personalization. This strategy efficiently propagates the estimated global update direction to all participating edge devices and keeps local training aligned, without introducing extra memory or communication overhead. Extensive experiments on benchmark datasets such as CIFAR-100 and Tiny-ImageNet confirm the effectiveness of our framework: on CIFAR-100, our method reaches 55% accuracy in 37 fewer rounds and achieves a competitive final accuracy of 65.46%, and it delivers significant improvements in both accuracy and communication efficiency even under extreme non-IID scenarios. The implementation is publicly available at https://github.com/sjmp525/CollaborativeComputing/tree/FedCCM (accessed on 20 October 2025).
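The two building blocks, a momentum-steered local step and FedAvg-style aggregation, can be sketched on plain weight lists. This is a generic illustration of momentum SGD plus averaging, not the framework's actual clustering logic; function names, the learning rate, and the beta value are assumptions:

```python
def local_update(weights, grad, momentum, lr=0.1, beta=0.9):
    """One momentum-steered local step: a server-provided (e.g. cluster)
    momentum buffer biases the raw gradient before the SGD update."""
    new_m = [beta * m + (1 - beta) * g for m, g in zip(momentum, grad)]
    new_w = [w - lr * m for w, m in zip(weights, new_m)]
    return new_w, new_m

def aggregate(client_weights):
    """FedAvg-style coordinate-wise mean of client weight vectors; a global
    momentum term could additionally smooth this aggregate across rounds."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]
```

Seeding each client's momentum buffer from its cluster, rather than from zero, is what lets the server's estimated update direction steer heterogeneous local training without extra communication beyond the buffer itself.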
This work demonstrates the potential of artificial neural network (ANN)-driven genetic algorithm (GA) methods for optimizing the energy efficiency and economic performance of energy efficiency measures in a multi-family house in Greece. The measures include different heating/cooling systems (such as low-temperature and high-temperature heat pumps, natural gas boilers, and split units), building envelope components for the floor, walls, roof, and windows with variable heat transfer coefficients, and the installation of solar thermal collectors and PVs. The building loads and the investment, operating, and maintenance costs of the measures are calculated according to the methodology defined in Directive 2010/31/EU, while the economic assumptions follow the EN 15459-1 standard. Multi-objective optimization of energy efficiency measures typically requires simulating a very large number of candidate combinations, resulting in an intense computational load. The results indicate that ANN-driven GA methods can serve as a valuable alternative tool for reliably predicting the optimal measures that minimize the primary energy consumption and life-cycle cost of the building with greatly reduced computational requirements: the GA methods cut the computational time needed to obtain the optimal solutions by 96.4%-96.8%.
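The surrogate-assisted GA idea, letting a cheap model stand in for the expensive building simulation when evaluating candidates, can be sketched for a single design variable. This is a minimal generic GA, not the study's method; the population size, operators, and the toy objective in the test are assumptions:

```python
import random

def genetic_minimize(surrogate, bounds, pop_size=20, generations=50, seed=0):
    """Minimal GA minimizing surrogate(x) on [lo, hi]. The surrogate plays
    the role of a pre-trained ANN replacing the costly energy simulation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=surrogate)
        parents = pop[:pop_size // 2]                 # truncation selection (elitist)
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                       # arithmetic crossover
            child += rng.gauss(0, 0.1 * (hi - lo))    # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return min(pop, key=surrogate)
```

Because each surrogate call costs microseconds instead of a full load calculation, the GA can afford thousands of evaluations, which is the source of the reported order-of-magnitude reduction in computation time.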
Funding: Supported in part by the National Natural Science Foundation of China (No. 60962002), the Program to Sponsor Teams for Innovation in the Construction of Talent Highlands in Guangxi Institutions of Higher Learning, the Foundation of Guangxi Key Laboratory of Information and Communication (No. 20904), and the Scientific Research Foundation of Guangxi University (Grant No. XBZ091006).
Abstract: Energy-efficient communication is crucial for wireless sensor networks (WSNs), where energy consumption is constrained. Transmission and reception energy can be saved by applying network coding to many wireless communication systems. In this paper, we present a coded cooperation scheme that applies network coding to WSNs. In the scheme, the partner node forwards a combination of the source data and its own data instead of forwarding the source data alone. Both the system block error rate (BLER) and the energy performance are then evaluated. Experimental results show that the proposed scheme has higher energy efficiency: with a noise power spectral density of -171 dBm/Hz, the energy consumption of the coded cooperation scheme is 81.1% lower than that of the single-path scheme and 43.9% lower than that of the conventional cooperation scheme when reaching the target average BLER of 10^-2. As the channel condition worsens, the energy saving becomes more pronounced.
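The relay-side combining that underpins such schemes is typically realized as a bitwise XOR of equal-length packets (the canonical network-coding primitive; this toy sketch is not the authors' implementation):

```python
def xor_combine(packet_a, packet_b):
    """Combine two equal-length packets with bitwise XOR (simple network coding)."""
    return bytes(a ^ b for a, b in zip(packet_a, packet_b))

# The partner forwards xor_combine(source, own) in a single transmission; a sink
# that already holds one of the two packets recovers the other by XOR-ing again.
source = b"\x12\x34"
own = b"\xab\xcd"
coded = xor_combine(source, own)
recovered = xor_combine(coded, own)  # equals source
```

One coded transmission thus replaces two separate ones, which is where the transmission-energy saving originates.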
Funding: Supported by the Startup Funds for Introduced Talents of Wuyi University (YJ202304) and the National Natural Science Foundation of China (22375044).
Abstract: Organic room-temperature phosphorescence (RTP) materials are promising for bioimaging applications due to their tunable structures, excellent biocompatibility, and long-lived luminescence. However, developing highly efficient organic RTP materials for aqueous systems remains challenging, as organic phosphorescence is prone to quenching by the oxygen dissolved in water. Herein, heteroaromatic carboxylic acids serve as ligand vips to construct a series of host-vip composites with nontoxic, dense EDTA-M (M = Ca, Mg, and Al) coordination polymers in water. These composites exhibit ultra-long pure RTP of the vip molecules, with a phosphorescence quantum yield up to 53% and a lifetime up to 589.7 ms, owing to the synergistic effect of a dual-network structure: a coordinatively cross-linked network of EDTA-M, and a non-covalently bonded network formed by ligands and water molecules. The phosphorescence intensity is more than three times that of a composite with a single coordination network. Notably, the dual-network configuration forms a rigid, dense structure that blocks the intrusion of external H2O and O2 molecules, avoiding phosphorescence quenching in water; as a result, the RTP of the composites remains unchanged after one month in water. Furthermore, nanoparticles fabricated from the composites and anionic surfactants can be successfully applied to in vivo imaging of mice thanks to their stable RTP in water. This work provides a novel strategy for developing high-performance RTP materials in aqueous systems.
Funding: The Open Access publication fee for this article was fully covered by Abu Dhabi University.
Abstract: In wireless sensor networks (WSNs), survivability is a crucial issue that is greatly impacted by energy efficiency. Solutions that satisfy application objectives while extending network lifetime are needed to address the severe energy constraints in WSNs. This paper presents an Adaptive Enhanced Grey Wolf Optimizer (AEGWO) for energy-efficient cluster head (CH) selection that mitigates the exploration-exploitation imbalance, preserves population diversity, and avoids the premature convergence inherent in the baseline GWO. AEGWO combines adaptive control of the search-pressure parameter to accelerate convergence without stagnation, a hybrid velocity-momentum update based on PSO dynamics, and an intelligent mutation operator to maintain population diversity. The search is guided by a multi-objective fitness function that maximizes residual energy, balances CH distribution, minimizes intra-cluster distance, favors proximity to the sink, and enhances coverage. Simulations on a homogeneous 100-node WSN compared AEGWO under identical conditions with LEACH, GWO, IGWO, PSO, WOA, and GA. AEGWO significantly increases stability and lifetime relative to LEACH and the other tested algorithms: it achieves the latest first, half, and last node death, higher residual energy, and smaller communication overhead. The findings show that AEGWO provides sustainable energy management and better lifetime extension, making it a robust, flexible clustering protocol for large-scale WSNs.
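The baseline GWO update that AEGWO enhances can be sketched in one dimension (a textbook form of the position update, not the paper's code; population size, iteration count, and the toy objective are placeholders):

```python
import random

def gwo_step(wolves, fitness, a, rng):
    """One canonical Grey Wolf Optimizer update (1-D positions for brevity).

    Each wolf moves toward the mean of positions estimated from the three best
    wolves (alpha, beta, delta); `a` decays from 2 to 0 over the iterations.
    """
    ranked = sorted(wolves, key=fitness)
    alpha, beta, delta = ranked[0], ranked[1], ranked[2]
    new_wolves = []
    for x in wolves:
        estimates = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(), rng.random()
            A = 2 * a * r1 - a       # |A| > 1 explores, |A| < 1 exploits
            C = 2 * r2
            D = abs(C * leader - x)  # distance to the leader
            estimates.append(leader - A * D)
        new_wolves.append(sum(estimates) / 3)
    return new_wolves

rng = random.Random(1)
wolves = [rng.uniform(-10, 10) for _ in range(20)]
f = lambda x: x * x  # toy objective: minimize x^2
for it in range(50):
    a = 2 - 2 * it / 50
    wolves = gwo_step(wolves, f, a, rng)
best = min(wolves, key=f)
```

AEGWO replaces the fixed linear decay of `a` with adaptive control and adds the PSO-style momentum and mutation described above.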
Funding: Supported in part by the Natural Science Foundation of China (Grant No. 62121001), in part by the Key Research and Development Program of Shaanxi (Grant No. 2024CY2-GJHX-82), and in part by the Qin Chuangyuan "Scientist + Engineer" Team Construction Program of Shaanxi (Grant No. 2024QCYKXJ-156).
Abstract: This paper aims to improve the energy efficiency (EE) of the integrated access and backhaul (IAB) aerial-terrestrial network, facilitating rapid and adjustable deployment of network infrastructure. This is challenging, as interference between backhaul and access links degrades network throughput, and the power imbalance between these links increases overall energy consumption. To this end, we jointly optimize aerial base station (ABS) deployment, user association, and downlink power allocation for both the terrestrial base station and the ABSs to maximize network EE. Specifically, using fractional programming, the EE maximization problem is transformed into a subtractive-form parametric problem and then decomposed into ABS deployment and resource allocation subproblems. A hybrid algorithm combining particle swarm optimization and simulated annealing is proposed to solve the ABS deployment subproblem, determining ABS spatial configurations and updating power allocation for a fixed user association. Meanwhile, a dynamic power allocation responsive to network load is designed to solve the resource allocation subproblem. Furthermore, considering the quality-of-service requirements of ground users and the transmit power constraints of base stations, a joint EE optimization algorithm is proposed to enhance network EE. Simulation results validate the effectiveness of the proposed methods in improving network EE, especially in scenarios involving more deployed ABSs.
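The fractional-programming step, converting the EE ratio into a subtractive parametric problem, is classically done with Dinkelbach's method; a generic sketch with a toy rate/power model follows (the functions and candidate grid are placeholders, not the paper's formulation):

```python
import math

def dinkelbach(rate, power, x0, candidates, tol=1e-9, max_iter=100):
    """Dinkelbach's method: maximize rate(x)/power(x) by repeatedly solving the
    subtractive parametric problem max_x rate(x) - q * power(x)."""
    q = rate(x0) / power(x0)
    for _ in range(max_iter):
        # Inner problem solved here by brute force over a finite candidate set.
        x = max(candidates, key=lambda c: rate(c) - q * power(c))
        f = rate(x) - q * power(x)
        if abs(f) < tol:          # f = 0 at the optimal ratio q*
            return x, q
        q = rate(x) / power(x)    # q increases monotonically toward q*
    return x, q

# Toy energy-efficiency example: logarithmic rate, affine power model.
rate = lambda x: math.log(1 + 10 * x)
power = lambda x: 1.0 + 2.0 * x
cands = [i / 1000 for i in range(1, 2001)]  # transmit-power grid on (0, 2]
x_star, ee = dinkelbach(rate, power, 1.0, cands)
```

In the paper, each inner subtractive problem is what gets decomposed into the deployment and resource-allocation subproblems.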
Funding: Supported by China's National Key R&D Program (Project Number: 2022YFB2902100).
Abstract: Sixth-generation (6G) networks will span multiple bands, including low-frequency, mid-frequency, millimeter wave, and terahertz bands, to meet various business requirements and networking scenarios. The dynamic complementarity of these bands is crucial for enhancing spectrum efficiency, reducing network energy consumption, and ensuring a consistent user experience. This paper surveys current research and the challenges associated with deploying multi-band integrated networks on existing infrastructure. An evolutionary path for integrated networking is then proposed, taking into account the maturity of emerging technologies and practical network deployment. The proposed design principles for 6G multi-band integrated networking aim to achieve on-demand networking objectives, while the architecture supports full-spectrum access and collaboration between high and low frequencies. In addition, the potential key air-interface technologies and intelligent technologies for integrated networking are comprehensively discussed. This work provides a crucial basis for the subsequent standardization of 6G multi-band integrated networking technology.
Funding: Funded by Researchers Supporting Project Number RSPD2025R947, King Saud University, Riyadh, Saudi Arabia.
Abstract: Effective resource management in the Internet of Things and fog computing is essential for efficient and scalable networks. However, existing methods often fail in dynamic, high-demand environments, leading to resource bottlenecks and increased energy consumption. This study addresses these limitations by proposing the Quantum-Inspired Adaptive Resource Management (QIARM) model, which introduces novel algorithms inspired by quantum principles for enhanced resource allocation. QIARM employs a quantum superposition-inspired technique for multi-state resource representation and an adaptive learning component that adjusts resources dynamically in real time. In addition, an energy-aware scheduling module minimizes power consumption by selecting optimal configurations based on energy metrics. The simulation was carried out over a 360-minute horizon across eight distinct scenarios. The proposed framework achieves up to 98% task-offload success and reduces energy consumption by 20%, addressing critical challenges of scalability and efficiency in dynamic fog computing environments.
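The energy-aware scheduling module is described only at a high level; one plausible reading, picking the cheapest configuration that still meets demand, can be sketched as follows (the field names and values are hypothetical, not from the paper):

```python
def select_config(configs, demand):
    """Pick the lowest-energy configuration whose capacity meets the demand."""
    feasible = [c for c in configs if c["capacity"] >= demand]
    if not feasible:
        return None  # no configuration can serve this load
    return min(feasible, key=lambda c: c["energy"])

# Hypothetical fog-node configurations with capacity and energy cost.
configs = [
    {"name": "low",  "capacity": 10, "energy": 5},
    {"name": "mid",  "capacity": 20, "energy": 9},
    {"name": "high", "capacity": 40, "energy": 20},
]
best = select_config(configs, demand=15)
```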
Funding: Supported by the National Natural Science Foundation of China (No. 61861023) and the Yunnan Fundamental Research Project (No. 202301AT070452).
Abstract: Sensitivity encoding (SENSE) is a parallel magnetic resonance imaging (MRI) reconstruction model that utilizes the sensitivity information of receiver coils to achieve image reconstruction. Existing SENSE-based reconstruction algorithms usually use non-adaptive sparsifying transforms, resulting in limited reconstruction accuracy. We therefore propose a new model for accurate parallel MRI reconstruction that combines an L0-norm regularization term based on efficient sum-of-outer-products dictionary learning (SOUPDIL) with the SENSE model, called SOUPDIL-SENSE. The SOUPDIL-SENSE model is solved mainly with variable splitting and the alternating direction method of multipliers. Experimental results on four human datasets show that the proposed algorithm effectively promotes image sparsity, eliminates noise and artifacts in the reconstructed images, and improves reconstruction accuracy.
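The L0 penalty in such models is commonly handled with a hard-thresholding proximal step inside the variable-splitting loop; a scalar sketch follows (a generic operator, not the authors' solver):

```python
import math

def hard_threshold(values, lam):
    """Proximal operator of the L0 penalty lam * ||x||_0:
    keep entries with |x| > sqrt(2 * lam), zero out the rest."""
    t = math.sqrt(2 * lam)
    return [x if abs(x) > t else 0.0 for x in values]

# With lam = 0.5 the threshold is 1.0: small coefficients are discarded,
# which is how the L0 term promotes sparse dictionary codes.
sparse = hard_threshold([0.1, -2.0, 0.5, 3.0], lam=0.5)
```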
Funding: Funded by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (No. 22D01B148), the Bidding Topics for the Center for Integration of Education and Production and Development of New Business in 2024 (No. 2024-KYJD05), the Basic Scientific Research Business Fee Project of Colleges and Universities in the Autonomous Region (No. XJEDU2025P126), and the Xinjiang College of Science & Technology School-level Scientific Research Fund Project (No. 2024-KYTD01).
Abstract: Wireless sensor networks (WSNs), as a crucial component of the Internet of Things (IoT), are widely used in environmental monitoring, industrial control, and security surveillance. However, WSNs still face challenges in practical deployments, such as inaccurate node clustering, low energy efficiency, and shortened network lifespan, which significantly limit their large-scale application. To address these issues, this paper proposes an Adaptive Chaotic Ant Colony Optimization algorithm (AC-ACO) that optimizes the energy utilization and system lifespan of WSNs. AC-ACO combines the path-planning capability of Ant Colony Optimization (ACO) with the dynamic characteristics of chaotic mapping, and introduces an adaptive mechanism to enhance the algorithm's flexibility and adaptability. Efficient node clustering is achieved by dynamically adjusting the pheromone evaporation factor and heuristic weights, while a chaotic-map initialization strategy enhances population diversity and avoids premature convergence. To validate its performance, AC-ACO is compared with clustering methods such as Low-Energy Adaptive Clustering Hierarchy (LEACH), ACO, Particle Swarm Optimization (PSO), and Genetic Algorithm (GA). Simulation results demonstrate that AC-ACO outperforms the compared algorithms on key metrics such as energy consumption, network lifetime extension, and communication delay, providing an efficient solution for improving energy efficiency and ensuring long-term stable operation of wireless sensor networks.
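Chaotic-map initialization of this kind is typically built on the logistic map; a generic sketch follows (the seed x0 and parameter mu are conventional choices, not taken from the paper):

```python
def chaotic_population(n, dim, lower, upper, x0=0.7, mu=4.0):
    """Initialize a population with the logistic map x <- mu * x * (1 - x);
    for mu = 4 the iterates fill (0, 1) more evenly than clustered random seeds,
    improving initial diversity."""
    pop, x = [], x0
    for _ in range(n):
        individual = []
        for _ in range(dim):
            x = mu * x * (1 - x)                       # one chaotic iteration
            individual.append(lower + (upper - lower) * x)  # map into the search box
        pop.append(individual)
    return pop

pop = chaotic_population(n=30, dim=2, lower=0.0, upper=100.0)
```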
Funding: Supported by the Baima Lake Laboratory Joint Funds of the Zhejiang Provincial Natural Science Foundation of China under Grant LBMHD25F030001, and in part by NSFC Grant No. 62088101 for the 'Autonomous Intelligent Unmanned Systems' project. The authors certify that there are no competing financial interests or personal relationships influencing this work; financial support originated exclusively from the public research funds listed above.
Abstract: Bird's Eye View (BEV) perception has become a widely adopted approach in 3D object detection due to its spatial and dimensional consistency. However, the increasing complexity of neural network architectures has raised training memory requirements, limiting the scalability of model training. To address these challenges, we propose a novel model, RevFB-BEV, based on a Reversible Swin Transformer (RevSwin) with Forward-Backward View Transformation (FBVT) and LiDAR-Guided Back Projection (LGBP). The RevSwin backbone employs a reversible architecture that minimises training memory by recomputing intermediate activations. The FBVT module refines BEV features extracted by forward projection, yielding denser and more precise camera BEV representations, while the LGBP module uses LiDAR BEV guidance during back projection to obtain more accurate camera BEV features. Extensive experiments on the nuScenes dataset demonstrate notable efficiency gains: our model achieves over a 4x reduction in training memory and a more than 12x decrease in single-backbone training memory, with the gains becoming even more pronounced for deeper architectures. RevFB-BEV achieves 68.1 mAP (mean Average Precision) on the validation set and 68.9 mAP on the test set, nearly on par with the BEVFusion baseline, underscoring its effectiveness in resource-constrained scenarios.
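The memory saving of a reversible backbone comes from recomputing intermediate activations rather than storing them, which requires an exactly invertible coupling; a generic RevNet-style sketch follows (the functions f and g stand in for learned sub-networks and are not the paper's blocks):

```python
def rev_forward(x1, x2, f, g):
    """Reversible coupling: the outputs can be inverted exactly, so the inputs
    need not be stored for the backward pass."""
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def rev_inverse(y1, y2, f, g):
    """Recompute the inputs from the outputs during backpropagation."""
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

f = lambda v: 2 * v + 1  # stand-ins for learned sub-networks
g = lambda v: v * v
y1, y2 = rev_forward(3.0, 4.0, f, g)
x1, x2 = rev_inverse(y1, y2, f, g)  # recovers (3.0, 4.0) exactly
```

Because inversion is exact, activation memory stays roughly constant in depth, which matches the reported reduction for deeper networks.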
Funding: Supported by the National Science and Technology Council (NSTC), Taiwan, under Grant Numbers 112-2622-E-029-009 and 112-2221-E-029-019.
Abstract: In knowledge graph embedding, conventional approaches typically transform entities and relations into continuous vector spaces. However, parameter efficiency becomes increasingly crucial when dealing with large-scale knowledge graphs containing vast numbers of entities and relations: resource-intensive embeddings raise computational costs and may limit scalability and adaptability in practical environments, such as low-resource settings or real-world applications. This paper explores an approach to knowledge graph representation learning that leverages small, reserved entity and relation sets for parameter-efficient embedding. We introduce a hierarchical attention network designed to refine and maximize the representational quality of embeddings by selectively focusing on these reserved sets, thereby reducing model complexity. Empirical assessments validate that our model achieves high performance on the benchmark dataset with fewer parameters and smaller embedding dimensions. Ablation studies further highlight the contribution of each component in the proposed hierarchical attention structure.
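The reserved-set idea, composing embeddings from a small shared pool via attention, can be illustrated with plain scaled dot-product attention (a generic sketch; the pool contents and dimensions are placeholders, not the paper's hierarchical architecture):

```python
import math

def attention_pool(query, keys, values):
    """Scaled dot-product attention over a small set of reserved embeddings."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)                              # stabilized softmax
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# An entity embedding composed as an attention-weighted mix of a small reserved
# set, rather than a row in a full per-entity embedding table.
reserved = [[1.0, 0.0], [0.0, 1.0]]
emb = attention_pool([2.0, 0.0], reserved, reserved)
```

Parameter count then scales with the reserved pool size instead of the number of entities.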
Abstract: Cloud computing has become an essential technology for managing and processing large datasets, offering scalability, high availability, and fault tolerance. However, optimizing data replication across multiple data centers poses a significant challenge, especially when balancing opposing goals such as latency, storage cost, energy consumption, and network efficiency. This study introduces a dynamic optimization algorithm called Dynamic Multi-Objective Gannet Optimization (DMGO), designed to enhance data replication efficiency in cloud environments. Unlike traditional static replication systems, DMGO adapts dynamically to variations in network conditions, system demand, and resource availability. The approach uses multi-objective optimization to balance data access latency, storage efficiency, and operational cost. DMGO continuously evaluates data center performance and adjusts replication strategies in real time to maintain optimal system efficiency. Experimental evaluations in a simulated cloud environment demonstrate that DMGO significantly outperforms conventional static algorithms, achieving faster data access, lower storage overhead, reduced energy consumption, and improved scalability. The proposed methodology offers a robust and adaptable solution for modern cloud systems, ensuring efficient resource consumption while maintaining high performance.
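The multi-objective balancing can be illustrated with a simple weighted scalarization over candidate data centers (an illustration only; DMGO's gannet-inspired search and its actual objective weights are not specified here, and all names and values below are hypothetical):

```python
def replication_score(dc, weights):
    """Weighted multi-objective score for a candidate data center: lower is
    better for latency, cost, and energy; higher is better for availability."""
    return (weights["latency"] * dc["latency_ms"]
            + weights["cost"] * dc["storage_cost"]
            + weights["energy"] * dc["energy_kwh"]
            - weights["availability"] * dc["availability"])

def pick_replicas(centres, weights, k):
    """Choose the k best data centers under the current weighting."""
    return sorted(centres, key=lambda dc: replication_score(dc, weights))[:k]

centres = [
    {"name": "dc1", "latency_ms": 20, "storage_cost": 5, "energy_kwh": 3, "availability": 0.99},
    {"name": "dc2", "latency_ms": 80, "storage_cost": 2, "energy_kwh": 2, "availability": 0.95},
    {"name": "dc3", "latency_ms": 35, "storage_cost": 4, "energy_kwh": 6, "availability": 0.97},
]
w = {"latency": 1.0, "cost": 2.0, "energy": 1.5, "availability": 50.0}
best = pick_replicas(centres, w, k=2)
```

A dynamic optimizer like DMGO would re-tune the weights and re-run the selection as network conditions change.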
Abstract: Internet of Things networks often suffer from early node failures and short lifespans due to energy limits, and traditional routing methods are not enough. This work proposes a new hybrid algorithm called ACOGA, which combines Ant Colony Optimization (ACO) with a Greedy Algorithm (GA): ACO finds efficient paths while the greedy component makes quick local decisions, improving energy use and performance. ACOGA outperforms the Hybrid Energy-Efficient (HEE) and Adaptive Lossless Data Compression (ALDC) algorithms. After 500 rounds, only 5% of ACOGA's nodes are dead, compared to 15% for HEE and 20% for ALDC. The network using ACOGA runs for 1200 rounds before the first node fails, whereas HEE lasts 900 rounds and ALDC only 850. ACOGA saves at least 15% more energy by distributing the load more evenly, and it achieves a 98% packet delivery rate. The method works well in heterogeneous IoT networks such as Smart Water Management Systems (SWMS), whose devices differ in power levels and communication ranges. The proposed model was simulated in MATLAB, and the results show that it outperforms the existing models.
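The greedy component is described only at a high level; a common energy-aware greedy forwarding step, shown here as one plausible reading (node positions and energies are hypothetical):

```python
def greedy_next_hop(current, sink, neighbors):
    """Greedy forwarding step: among neighbors closer to the sink than the
    current node, prefer the one with the most residual energy."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    closer = [n for n in neighbors if dist(n["pos"], sink) < dist(current, sink)]
    if not closer:
        return None  # local minimum: no neighbor makes progress toward the sink
    return max(closer, key=lambda n: n["energy"])

neighbors = [
    {"id": 1, "pos": (4, 0), "energy": 0.8},
    {"id": 2, "pos": (5, 1), "energy": 0.9},
    {"id": 3, "pos": (9, 9), "energy": 1.0},  # farther from the sink, excluded
]
hop = greedy_next_hop(current=(2, 2), sink=(10, 0), neighbors=neighbors)
```

Preferring high-energy forwarders is what spreads the load and delays the first node death.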
Funding: Supported by the National Natural Science Foundation of China under Grants U22A2003 and 62271515, the Shenzhen Science and Technology Program under Grant ZDSYS20210623091807023, and the National Natural Science Foundation of China under Grant 62301300.
Abstract: The deployment of multiple intelligent reflecting surfaces (IRSs) in blockage-prone millimeter wave (mmWave) communication networks has garnered considerable attention lately. Despite the remarkably low circuit power consumption per IRS element, the aggregate energy consumption becomes substantial when all elements of every IRS are turned on across a considerable number of IRSs, lowering the overall energy efficiency (EE). To tackle this challenge, we propose a flexible and efficient approach that controls the status of each IRS element individually. Specifically, the network EE is maximized by jointly optimizing the associations of base stations (BSs) and user equipments (UEs), the transmit beamforming, the phase shifts of IRS elements, and the associations between individual IRS elements and UEs. The problem is addressed efficiently in two phases: first, the Gale-Shapley algorithm is applied for BS-UE association; then a block coordinate descent-based algorithm iteratively solves the subproblems of active beamforming, phase shifts, and element-UE associations. To reduce the enormous dimensionality of the optimization variables introduced by element-UE associations in large-scale IRS networks, we introduce an efficient algorithm for solving these associations. Numerical results show that the proposed element-wise control scheme improves EE by 34.24% compared to a network in which all IRS elements are on.
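The first phase applies the Gale-Shapley algorithm to BS-UE association; a minimal capacity-aware deferred-acceptance sketch follows (the preference lists and capacities are placeholders, and in the paper the preferences would be derived from channel quality):

```python
def gale_shapley(ue_prefs, bs_prefs, capacity):
    """Deferred-acceptance matching: UEs propose to BSs in preference order;
    each BS keeps only its `capacity` most-preferred proposers."""
    rank = {bs: {ue: i for i, ue in enumerate(prefs)} for bs, prefs in bs_prefs.items()}
    next_choice = {ue: 0 for ue in ue_prefs}
    accepted = {bs: [] for bs in bs_prefs}
    free = list(ue_prefs)
    while free:
        ue = free.pop()
        if next_choice[ue] >= len(ue_prefs[ue]):
            continue  # UE exhausted its list and stays unmatched
        bs = ue_prefs[ue][next_choice[ue]]
        next_choice[ue] += 1
        accepted[bs].append(ue)
        accepted[bs].sort(key=lambda u: rank[bs][u])
        if len(accepted[bs]) > capacity[bs]:
            rejected = accepted[bs].pop()  # worst-ranked proposer is bumped
            free.append(rejected)
    return accepted

ue_prefs = {"u1": ["b1", "b2"], "u2": ["b1", "b2"], "u3": ["b1", "b2"]}
bs_prefs = {"b1": ["u3", "u1", "u2"], "b2": ["u2", "u1", "u3"]}
match = gale_shapley(ue_prefs, bs_prefs, {"b1": 1, "b2": 2})
```

The resulting matching is stable: no BS-UE pair would both prefer each other over their assigned partners.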
Funding: Supported by the National Natural Science Foundation of China (Nos. 62471254 and 92367302).
Abstract: Unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) has been deemed a promising solution for energy-constrained devices running smart applications with computation-intensive and latency-sensitive requirements, especially in infrastructure-limited areas or emergency scenarios. However, multi-UAV-assisted MEC networks remain largely unexplored. In this paper, dynamic trajectory optimization and computation offloading are studied in a multi-UAV-assisted MEC system where multiple UAVs fly over a target area along different trajectories to serve ground users. Considering dynamic channel conditions and random task arrivals, and jointly optimizing the UAVs' trajectories, user association, and subchannel assignment, we formulate the problem of minimizing the long-term average sum of user energy consumption. To address this problem, which involves both discrete and continuous variables, a hybrid-decision deep reinforcement learning (DRL)-based intelligent energy-efficient resource allocation and trajectory optimization algorithm, named HDRT, is proposed, in which a deep Q-network (DQN) and deep deterministic policy gradient (DDPG) handle the discrete and continuous variables, respectively. Simulation results show that the proposed HDRT algorithm converges fast and outperforms other benchmarks in terms of user energy consumption and latency.
Abstract: The rapid growth in available network bandwidth has contributed directly to an exponential increase in mobile data traffic, which has led to massive grid power consumption and high operating expenditure (OPEX). Most current network designs struggle to manage massive amounts of data efficiently while using little power, which degrades energy efficiency. An efficient mechanism is therefore needed to reduce power consumption when processing large amounts of data in network data centers. Powering the Cloud Radio Access Network (C-RAN) with renewable energy sources greatly reduces the need to purchase energy from the utility grid. In this paper, we propose a bandwidth-aware, hybrid-energy-powered C-RAN that focuses on throughput and energy efficiency (EE) by lowering grid usage. The paper examines energy efficiency, spectral efficiency (SE), and average on-grid energy consumption, addressing the major challenges posed by the temporal and spatial variability of traffic and renewable energy generation across various network setups. A comprehensive simulation with varying transmission bandwidth assesses the effectiveness of the proposed network, and the numerical findings support the efficacy of the suggested approach.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42277161 and 42230709).
Abstract: In rock engineering, natural cracks in rock masses subjected to external loads tend to initiate and propagate, leading to potential safety hazards. To investigate the effect of cracking behavior on the mechanical properties of rocks, the cracking processes of pre-cracked rocks have been extensively studied using numerical modeling methods. Peridynamics (PD) offers advantages over other numerical methods because it requires neither remeshing nor an external crack growth criterion. However, for modeling the cracking of pre-cracked rock under impact, current PD implementations lack generally applicable rock constitutive models and impact contact models, which makes it difficult to determine rock material parameters and to compute impact loads efficiently. This paper proposes a non-ordinary state-based peridynamics (NOSBPD) modeling method that integrates the Drucker-Prager (DP) plasticity model and an efficient contact model to address these problems. The Drucker-Prager plasticity model is integrated into NOSBPD, equipping it to accurately characterize the nonlinear stress-strain relationship inherent in rocks. An efficient contact model between particles and meshes is designed to calculate the impact loads, which is essentially a coupling of PD with the finite element method (FEM). The effectiveness of the proposed method is verified by comparison with other numerical methods and with experiments. The results indicate that the proposed method can effectively and accurately predict the 3D cracking processes of pre-cracked specimens under impact loading, and that the maximum principal stress is the key driver of wing crack formation in pre-cracked rocks.
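The Drucker-Prager criterion integrated into NOSBPD has the standard form f = sqrt(J2) + alpha*I1 - k; a small check on a principal stress state follows (sign conventions vary by field; tension is taken positive here, and the parameter values are illustrative, not calibrated to any rock):

```python
import math

def drucker_prager_yield(sigma, alpha, k):
    """Drucker-Prager yield function f = sqrt(J2) + alpha * I1 - k for a
    principal stress state sigma = (s1, s2, s3); f < 0 means elastic."""
    s1, s2, s3 = sigma
    i1 = s1 + s2 + s3                        # first stress invariant
    mean = i1 / 3.0
    dev = (s1 - mean, s2 - mean, s3 - mean)  # deviatoric stresses
    j2 = 0.5 * sum(d * d for d in dev)       # second deviatoric invariant
    return math.sqrt(j2) + alpha * i1 - k

# Hydrostatic compression stays elastic (no deviatoric stress, negative I1),
# while a pure-shear-like state can exceed yield.
f_elastic = drucker_prager_yield((-1.0, -1.0, -1.0), alpha=0.2, k=5.0)
f_shear = drucker_prager_yield((10.0, 0.0, -10.0), alpha=0.2, k=5.0)
```

Each PD material point would evaluate this function per step to decide whether a plastic return-mapping correction is needed.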
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under Grant 62276109. The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through Research Group Project No. ORF-2025-585.
Abstract: Personalized fine-tuning of large language models (LLMs) on edge devices is severely constrained by limited computational resources. Although split federated learning alleviates on-device burdens, its effectiveness diminishes in few-shot reasoning scenarios due to the low data efficiency of conventional supervised fine-tuning, which leads to excessive communication overhead. To address this, we propose Language-Empowered Split Fine-Tuning (LESFT), a framework that integrates split architectures with a contrastive-inspired fine-tuning paradigm. LESFT simultaneously learns from multiple logically equivalent but linguistically diverse reasoning chains, providing richer supervisory signals and improving data efficiency; this process-oriented training allows more effective reasoning adaptation with fewer samples. Extensive experiments demonstrate that LESFT consistently outperforms strong baselines such as SplitLoRA in task accuracy across GSM8K, CommonsenseQA, and AQUA_RAT, with the largest gains observed on Qwen2.5-3B. These results indicate that LESFT can effectively adapt large language models for reasoning tasks under the computational and communication constraints of edge environments.
Funding: Supported by the National Natural Science Foundation of China (62462040), the Yunnan Fundamental Research Projects (202501AT070345), and the Major Science and Technology Projects in Yunnan Province (202202AD080013).
Abstract: Federated learning often experiences slow and unstable convergence due to edge-side data heterogeneity. This problem becomes more severe when the edge participation rate is low, as the information collected from different edge devices varies significantly; the resulting communication overhead further slows convergence. To address this challenge, we propose a simple yet effective federated learning framework that improves consistency among edge devices. The core idea is to cluster the lookahead gradients collected from edge devices on the cloud server to obtain personalized momentum for steering local updates. In parallel, a global momentum is applied during model aggregation, enabling faster convergence while preserving personalization. This strategy efficiently propagates the estimated global update direction to all participating edge devices and maintains alignment in local training, without introducing extra memory or communication overhead. We conduct extensive experiments on benchmark datasets such as CIFAR-100 and Tiny-ImageNet, and the results confirm the effectiveness of our framework. On CIFAR-100, our method reaches 55% accuracy in 37 fewer rounds and achieves a competitive final accuracy of 65.46%. Even under extreme non-IID scenarios, it delivers significant improvements in both accuracy and communication efficiency. The implementation is publicly available at https://github.com/sjmp525/CollaborativeComputing/tree/FedCCM (accessed on 20 October 2025).
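The global momentum applied at aggregation can be sketched as a server-side momentum step over averaged client deltas (a simplified stand-in: the paper's personalized, cluster-based momentum is not reproduced here, and the toy two-parameter model is a placeholder):

```python
def server_aggregate(global_model, client_updates, momentum, beta=0.9, lr=1.0):
    """FedAvg-style aggregation with server momentum: average the client deltas,
    fold them into a momentum buffer, then step the global model."""
    n = len(client_updates)
    dim = len(global_model)
    avg_delta = [sum(u[i] for u in client_updates) / n for i in range(dim)]
    momentum = [beta * m + d for m, d in zip(momentum, avg_delta)]
    new_model = [w + lr * m for w, m in zip(global_model, momentum)]
    return new_model, momentum

model = [0.0, 0.0]
mom = [0.0, 0.0]
updates = [[1.0, -1.0], [3.0, 1.0]]  # per-client model deltas for one round
model, mom = server_aggregate(model, updates, mom)
```

Carrying `mom` across rounds smooths out the round-to-round variation caused by heterogeneous client data.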
Abstract: The goal of the present work is to demonstrate the potential of Artificial Neural Network (ANN)-driven Genetic Algorithm (GA) methods for optimizing the energy efficiency and economic performance of energy efficiency measures in a multi-family house building in Greece. The energy efficiency measures include different heating/cooling systems (such as low-temperature and high-temperature heat pumps, natural gas boilers, and split units), building envelope components for the floor, walls, roof, and windows with variable heat transfer coefficients, and the installation of solar thermal collectors and PVs. The calculations of the building loads and of the investment, operating, and maintenance costs of the measures are based on the methodology defined in Directive 2010/31/EU, while the economic assumptions follow the EN 15459-1 standard. Multi-objective optimization of energy efficiency measures typically requires the simulation of very large numbers of cases covering numerous possible combinations, resulting in an intense computational load. The results of the study indicate that ANN-driven GA methods can be used as an alternative, valuable tool for reliably predicting the optimal measures that minimize the primary energy consumption and life cycle cost of the building with greatly reduced computational requirements: the computational time needed to obtain the optimal solutions is reduced by 96.4%-96.8%.
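The GA side of such an ANN-driven workflow can be sketched with a minimal real-coded genetic algorithm driven by a cheap surrogate (the analytic quadratic below stands in for the trained ANN; all hyperparameters and the one-dimensional search space are placeholders):

```python
import random

def genetic_minimize(fitness, bounds, pop_size=30, gens=60, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        new_pop = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)
            p1 = a if fitness(a) < fitness(b) else b  # tournament selection
            a, b = rng.sample(pop, 2)
            p2 = a if fitness(a) < fitness(b) else b
            w = rng.random()
            child = w * p1 + (1 - w) * p2             # blend crossover
            child += rng.gauss(0, 0.01 * (hi - lo))   # Gaussian mutation
            new_pop.append(min(max(child, lo), hi))   # clamp to bounds
        pop = new_pop
    return min(pop, key=fitness)

# A cheap surrogate (here an analytic stand-in for the trained ANN) lets the
# GA evaluate thousands of candidate measures without full building simulations.
surrogate = lambda x: (x - 3.0) ** 2 + 1.0
best = genetic_minimize(surrogate, bounds=(0.0, 10.0))
```

Replacing the building simulation with a surrogate at each fitness evaluation is what yields the reported order-of-magnitude reduction in computational time.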