The key parameters that characterize the morphological quality of multi-layer and multi-pass metal laser deposited parts are the surface roughness and the error between the actual printing height and the theoretical model height. The Taguchi method was employed to establish the correlations between process parameter combinations and the multi-objective characterization of metal deposition morphology (height error and roughness). Results show that, using the signal-to-noise ratio and grey relational analysis, the optimal parameter combination for multi-layer and multi-pass deposition is determined as follows: laser power of 800 W, powder feeding rate of 0.3 r/min, step distance of 1.6 mm, and scanning speed of 20 mm/s. Subsequently, a Genetic Bayesian-back propagation (GB-BP) network is constructed to predict the multi-objective responses. Compared with the traditional back propagation network, the GB-BP network improves the prediction accuracy of height error and surface roughness by 43.14% and 71.43%, respectively. This network can accurately predict the multi-objective characterization of the morphological quality of multi-layer and multi-pass metal deposited parts.
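The grey relational step used to pick the optimal parameter combination can be sketched numerically. The snippet below is a minimal illustration only, not the paper's actual computation: the four experimental runs and their height-error/roughness values are invented, and equal response weights with a distinguishing coefficient of 0.5 are assumed.

```python
import math

def sn_smaller_better(values):
    # Taguchi smaller-the-better signal-to-noise ratio: -10*log10(mean(y^2))
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

def grey_relational_grades(matrix, zeta=0.5):
    # matrix: rows = experiments, cols = responses (all smaller-the-better)
    cols = list(zip(*matrix))
    # normalize each response to [0, 1], where smaller raw value -> 1
    norm = []
    for col in cols:
        lo, hi = min(col), max(col)
        norm.append([(hi - v) / (hi - lo) for v in col])
    rows = list(zip(*norm))
    # deviation of each run from the ideal sequence (all ones)
    deltas = [[abs(1.0 - v) for v in row] for row in rows]
    dmin = min(min(r) for r in deltas)
    dmax = max(max(r) for r in deltas)
    coeff = [[(dmin + zeta * dmax) / (d + zeta * dmax) for d in row] for row in deltas]
    # grade = equal-weight mean of the grey relational coefficients
    return [sum(row) / len(row) for row in coeff]

# hypothetical (invented) height-error (mm) and roughness (um) results for 4 runs
runs = [[0.42, 12.1], [0.30, 9.8], [0.55, 14.3], [0.25, 10.6]]
grades = grey_relational_grades(runs)
best = grades.index(max(grades))  # run with the highest grade wins
```

The run with the largest grey relational grade is the multi-objective optimum; in the real study each run corresponds to one Taguchi orthogonal-array parameter combination.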
At present, emerging solid-phase friction-based additive manufacturing technologies, including friction rolling additive manufacturing (FRAM), can only manufacture simple single-pass components. In this study, multi-layer multi-pass FRAM-deposited aluminum alloy samples were successfully prepared using a non-shoulder tool head. The material flow behavior and microstructure of the overlapped zone between adjacent layers and passes during multi-layer multi-pass FRAM deposition were studied using hybrid 6061 and 5052 aluminum alloys. The results showed that a mechanical interlocking structure was formed between the adjacent layers and the adjacent passes in the overlapped center area. Repeated friction and rolling of the tool head led to different degrees of lateral flow and plastic deformation of the materials in the overlapped zone, which made the recrystallization degree highest in the left and right edge zones of the overlapped zone, followed by the overlapped center zone and the non-overlapped zone. The tensile strength of the overlapped zone exceeded 90% of that of the single-pass deposition sample. This proves that although uneven grooves form on the surface of the overlapping area during multi-layer and multi-pass deposition, they can be filled by the flow of material during deposition of the next layer, ensuring a dense microstructure and excellent mechanical properties in the overlapping area. Multi-layer multi-pass FRAM deposition overcomes the limitation on deposition width and lays the foundation for the future deposition of large-scale, high-performance components.
When the huge impact kinetic energy cannot be quickly dissipated by the energy-absorbing structure, it is transferred to the other vehicle through the car body structure, causing structural damage and threatening the lives of the occupants. Therefore, it is necessary to understand the laws of energy conversion, dissipation, and transfer during train collisions. This study proposes a multi-layer progressive analysis method for energy flow during train collisions that considers the characteristics of the train. In this method, the train collision system is divided into conversion, dissipation, and transfer layers from the perspectives of the train, the collision interface, and the car body structure to analyze the energy conversion, dissipation, and transfer characteristics. Taking the collision process of a rail train as an example, a train collision energy transfer path analysis model was established based on power flow theory. The results show that even when the maximum mean acceleration of the vehicle meets the standard requirements, the jerk may exceed the allowable limit of the human body, and occupants risk injury from a secondary collision. The decay rate of the collision energy along the direction of train operation reaches 79%. As the collision progresses, the collision energy gradually converges in the structure with holes, and the structure deforms when the gathered energy exceeds the maximum energy the structure can withstand. The proposed method helps in understanding the energy flow law of train collisions and provides theoretical support for future train crashworthiness design.
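The distinction the abstract draws between mean acceleration and jerk is easy to make concrete: jerk is the time derivative of acceleration, so a deceleration pulse can satisfy a mean-acceleration limit while its slope violates a jerk limit. A minimal sketch with an invented 100 Hz deceleration pulse (not data from the paper):

```python
def mean_acceleration(acc):
    # time-averaged acceleration over the pulse (m/s^2)
    return sum(acc) / len(acc)

def max_jerk(acc, dt):
    # jerk = da/dt, approximated with forward differences (m/s^3)
    return max(abs(acc[i + 1] - acc[i]) / dt for i in range(len(acc) - 1))

# hypothetical deceleration samples at dt = 0.01 s
acc = [0.0, -30.0, -60.0, -55.0, -20.0, 0.0]
dt = 0.01
a_mean = mean_acceleration(acc)   # modest average...
j_max = max_jerk(acc, dt)         # ...but a very steep instantaneous slope
```

Here the mean deceleration is only 27.5 m/s² while the peak jerk is thousands of m/s³, which is the kind of mismatch the study flags for occupant injury risk.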
The growing incidence of cyberattacks necessitates robust and effective Intrusion Detection Systems (IDSs) for enhanced network security. Because conventional IDSs can be unsuitable for detecting diverse and emerging attacks, better techniques are needed to improve detection reliability. This study introduces a new method, the Deep Adaptive Multi-Layer Attention Network (DAMLAN), to improve intrusion detection on network data. Through its multi-scale attention mechanisms and graph features, DAMLAN aims to address both known and unknown intrusions. The real-world NSL-KDD dataset, a popular choice among IDS researchers, is used to assess the proposed model. The training set contains 67,343 normal samples and 58,630 intrusion attacks; the test set contains 12,833 normal samples and 9,711 intrusion attacks. The proposed DAMLAN method is more effective than standard models because its attention layers capture patterns in the data. Experimentally, the model achieves 99.26% training accuracy and 90.68% testing accuracy, with precision reaching 98.54% on the training set and 96.64% on the testing set. The recall and F1 scores further support the model, with training-set values of 99.90% and 99.21% and testing-set values of 86.65% and 91.37%. These results provide a strong basis for the claims regarding the model's ability to identify intrusion attacks and affirm its strong overall performance, irrespective of attack type. Future work will aim to extend the scalability and applicability of DAMLAN for real-time use in intrusion detection systems.
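DAMLAN's specific multi-scale attention design is not spelled out in this abstract. As background, the core building block of any attention network is scaled dot-product attention, sketched below in plain Python (toy vectors, a single head, no learned projections; this is generic machinery, not the paper's architecture):

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    # scaled dot-product attention: out_q = sum_k softmax(q.k / sqrt(d)) * v_k
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

# with identical keys the weights are uniform, so output = mean of values
uniform = attention([[1.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]], [[2.0, 0.0], [4.0, 0.0]])
# with a strongly matching key, nearly all weight goes to its value
focused = attention([[10.0, 0.0]], [[10.0, 0.0], [0.0, 10.0]], [[1.0], [0.0]])
```

A "multi-scale" variant would run such attention over feature neighborhoods of several sizes and fuse the outputs.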
In practical engineering construction, multi-layered barriers containing geomembranes are extensively applied to retard the migration of pollutants. However, the associated analytical theory of pollutant diffusion still needs further improvement. In this work, general analytical solutions are derived for one-dimensional diffusion of a degradable organic contaminant (DOC) in multi-layered media containing geomembranes under a time-varying concentration boundary condition, employing variable substitution and separation-of-variables approaches. These analytical solutions, with their explicit expressions, can be used not only to study the diffusion behavior of DOC in bottom and vertical composite barrier systems but also to verify more complex numerical models. The proposed general analytical solutions are fully validated via three comparative analyses: comparisons with experimental measurements, with an existing analytical solution, and with a finite-difference solution. Finally, the influences of different factors on the service performance of a composite cutoff wall (CCW, consisting of two soil-bentonite layers and a geomembrane) are investigated through a composite vertical barrier system as an application example. The findings provide scientific guidance for barrier performance evaluation and the engineering design of CCWs. The application example also demonstrates the necessity and effectiveness of the developed analytical solutions.
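The abstract validates the analytical solutions against a finite-difference solution. A minimal explicit (FTCS) finite-difference sketch for 1D diffusion with first-order decay, the kind of scheme such a comparison might use, looks like this. The nondimensional parameters and grid are invented for illustration and are not the paper's model:

```python
def diffuse_1d(c0, D, dx, dt, steps, lam=0.0, left=1.0):
    # explicit FTCS scheme for dc/dt = D * d2c/dx2 - lam * c
    # stable only when r = D*dt/dx^2 <= 0.5
    c = list(c0)
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for these parameters"
    for _ in range(steps):
        new = c[:]
        new[0] = left                         # fixed-concentration source boundary
        for i in range(1, len(c) - 1):
            new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1]) - lam * dt * c[i]
        new[-1] = new[-2]                     # zero-flux outlet boundary
        c = new
    return c

# toy nondimensional run: source at x=0 diffusing into a clean, degrading medium
profile = diffuse_1d([0.0] * 21, D=1.0, dx=1.0, dt=0.4, steps=200, lam=0.02)
```

In a layered-barrier version, D and the decay rate would change per layer and interface continuity conditions would be enforced, which is exactly where the analytical solutions earn their keep as a benchmark.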
Transportation structures such as composite pavements and railway foundations typically consist of multi-layered media designed to provide high bearing capacity. A theoretical understanding of load transfer mechanisms in these multi-layer composites is essential, as it offers intuitive insights into parametric influences and facilitates enhanced structural performance. This paper employs an improved transfer matrix method to address the limitations of existing theoretical approaches for analyzing multi-layer composite structures. By establishing a two-dimensional composite pavement model, it investigates load transfer characteristics and validates their accuracy through finite element simulation. The proposed method offers a straightforward analytical approach for examining internal interactions between structural layers. Case studies indicate that the concrete surface layer is the main load-bearing layer, carrying most of the vertical normal and shear stresses. The soil base layer reduces the overall mechanical response of the substructure, while horizontal actions increase the risk of interfacial slip and cracking. Structural optimization analysis demonstrates that increasing the thickness of the concrete surface layer, enhancing the thickness and stiffness of the soil base layer, or incorporating gradient layers can significantly mitigate these risks of interfacial slip and cracking. The findings of this study can guide the optimization design, parameter analysis, and damage prevention of multi-layer composite structures.
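The transfer matrix idea is that each layer maps a state vector at its top to one at its bottom, and the product of the per-layer matrices couples the whole stack. The paper's pavement model is two-dimensional and far richer; the sketch below is a deliberately simple 1D axial analogue (state = [displacement u, axial force N], so each layer is T = [[1, h/EA], [0, 1]]), shown only to illustrate the chaining:

```python
def matmul2(a, b):
    # 2x2 matrix product a @ b
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

def layer_matrix(EA, h):
    # 1D axial layer: u2 = u1 + N*h/EA, N2 = N1
    return [[1.0, h / EA], [0.0, 1.0]]

def system_matrix(layers):
    # chain the layers: T_total = T_n @ ... @ T_1
    T = [[1.0, 0.0], [0.0, 1.0]]
    for EA, h in layers:
        T = matmul2(layer_matrix(EA, h), T)
    return T

# two hypothetical layers (EA, h); compliances 1/2 and 2/4 add in series to 1.0
T = system_matrix([(2.0, 1.0), (4.0, 2.0)])
u_top, N = 0.0, 10.0
u_bottom = T[0][0] * u_top + T[0][1] * N   # = N * total compliance
```

The 2D elastic version replaces the 2x2 matrices with larger layer matrices in the transformed domain, but the assembly by matrix product is the same.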
This study proposes a general imperfect thermal contact model to predict the thermal contact resistance at the interfaces of multi-layered composite structures. Based on the Green-Lindsay (GL) thermoelastic theory, semi-analytical solutions for the temperature increment and displacement of multi-layered composite structures are obtained using the Laplace transform method, upon which the effects of the thermal resistance coefficient, partition coefficient, thermal conductivity ratio, and heat capacity ratio on the responses are studied. The results show that the generalized imperfect thermal contact model can realistically describe the imperfect thermal contact problem. Accordingly, it may degenerate into other thermal contact models by adjusting the thermal resistance coefficient and partition coefficient.
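Semi-analytical Laplace-domain solutions like these are typically brought back to the time domain numerically. The Gaver-Stehfest algorithm is one standard choice for such inversions (whether this particular paper uses it is not stated in the abstract); a self-contained sketch:

```python
import math

def stehfest_coeffs(N):
    # Gaver-Stehfest weights V_k for even N
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + half) * s)
    return V

def invert_laplace(F, t, N=12):
    # f(t) ~ (ln2/t) * sum_k V_k * F(k * ln2 / t)
    ln2t = math.log(2.0) / t
    V = stehfest_coeffs(N)
    return ln2t * sum(V[k - 1] * F(k * ln2t) for k in range(1, N + 1))
```

For smooth, non-oscillatory responses (temperature rise, displacement) Stehfest with N around 10-14 in double precision is usually accurate to several digits; it degrades for oscillatory transforms.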
Low Earth Orbit (LEO) mega-constellation networks, exemplified by Starlink, are poised to play a pivotal role in future mobile communication networks due to their low latency and high capacity. With satellites massively deployed, ground users can now be covered by multiple visible satellites, but they also face complex handover issues with such massive, high-mobility, multi-layer constellations. End-to-end routing is also affected by handover behavior. In this paper, we propose an intelligent handover strategy dedicated to multi-layer LEO mega-constellation networks. First, an analytic model is used to rapidly estimate the end-to-end propagation latency as a key handover factor in a multi-objective optimization model. Next, an intelligent handover strategy is proposed for single-layer constellations, employing a deep reinforcement learning algorithm based on the Dueling Double Deep Q Network (D3QN). Moreover, an optimal cross-layer handover scheme is proposed by predicting the latency jitter and minimizing the cross-layer overhead. Simulation results demonstrate the superior performance of the proposed method in multi-layer LEO mega-constellations, showing reductions of up to 8.2% and 59.5% in end-to-end latency and jitter, respectively, compared with existing handover strategies.
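Two ingredients give D3QN its name: the dueling decomposition Q = V + A − mean(A), and the Double-DQN target that lets the online network pick the next action while the target network evaluates it. Both are small enough to sketch with toy values (no neural network, just the arithmetic the heads perform):

```python
def dueling_q(value, advantages):
    # dueling head: Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a')
    m = sum(advantages) / len(advantages)
    return [value + a - m for a in advantages]

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    # Double DQN: online net selects the action, target net evaluates it,
    # which reduces the overestimation bias of vanilla DQN
    if done:
        return reward
    a_star = max(range(len(q_online_next)), key=q_online_next.__getitem__)
    return reward + gamma * q_target_next[a_star]

q = dueling_q(1.0, [0.0, 2.0])                              # -> [0.0, 2.0]
y = double_dqn_target(1.0, 0.9, [0.1, 0.5], [2.0, 1.0], False)
```

In the handover setting, the actions would index candidate satellites and the reward would encode the latency/jitter/overhead objectives described above.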
Stab-resistant textiles play a critical role in personal protection, necessitating a deeper understanding of how structural and layering factors influence their performance. The current study experimentally examines the effects of textile structure, layering, and ply orientation on the stab resistance of multi-layer textiles. Three 3D warp interlock (3DWI) structures ({f1}, {f2}, {f3}) and a 2D woven fabric ({f4}), all made of high-performance p-aramid yarns, were engineered and manufactured. Multi-layer specimens were prepared and subjected to drop-weight stabbing tests following HOSDB standards. Stabbing performance metrics, including Depth of Trauma (DoT), Depth of Penetration (DoP), and trauma deformation (Ymax, Xmax), were investigated and analyzed. Statistical analyses (two-way and one-way ANOVA) indicated that fabric type and layer number significantly impacted DoP (P < 0.05), while ply orientation significantly affected DoP (P < 0.05) but not DoT (P > 0.05). Further detailed analysis revealed that the 2D woven fabric exhibited greater trauma deformation than the 3DWI structures. Increasing the number of layers reduced both DoP and DoT across all fabric structures, with f3 demonstrating the best performance in multi-layer configurations. Aligned ply orientations also enhanced stab resistance, underscoring the importance of alignment in dissipating impact energy.
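The one-way ANOVA used in the statistical analysis reduces to an F statistic comparing between-group to within-group variance. A self-contained sketch, with invented DoP-like measurements (mm) for three fabric types; the P-value would then come from the F distribution with (k−1, n−k) degrees of freedom:

```python
def one_way_anova_f(groups):
    # F = (between-group mean square) / (within-group mean square)
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical DoP measurements (mm) for three fabric types
f_stat = one_way_anova_f([[18.0, 19.0, 20.0], [12.0, 13.0, 14.0], [8.0, 9.0, 10.0]])
```

A large F (here, group means far apart relative to within-group scatter) is what drives the P < 0.05 findings for fabric type and layer number.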
With the rapid expansion of the Internet of Things (IoT), user data has grown exponentially, raising increasing concerns about the security and integrity of data stored in the cloud. Traditional schemes relying on untrusted third-party auditors suffer from both security and efficiency issues, while existing decentralized blockchain-based auditing solutions still have shortcomings in correctness and security. This paper proposes an improved blockchain-based cloud auditing scheme with the following core contributions: identifying critical logical contradictions in the original scheme, thereby establishing the foundation for the correctness of cloud auditing; designing an enhanced mechanism that integrates multiple hashing with dynamic aggregate signatures, binds encrypted blocks through bilinear pairings and BLS signatures, and sets parameters based on the Computational Diffie-Hellman (CDH) problem, significantly strengthening data integrity protection and anti-forgery capabilities; and introducing a random challenge mechanism and a dynamic parameter adjustment strategy, effectively resisting attacks such as forgery, tampering, and deletion, significantly improving the detection probability of malicious Cloud Service Providers (CSPs), and reducing the proof generation overhead for CSPs while maintaining the same computational cost for Data Owners. Theoretical analysis and performance evaluation experiments demonstrate that the proposed scheme achieves significant improvements in both security and efficiency. Finally, the paper explores potential applications of the enhanced security scheme in fields such as healthcare, drone swarms, and government office attendance systems, providing an effective approach for building secure, efficient, and decentralized cloud auditing systems.
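The random-challenge idea can be illustrated without real pairings: the verifier challenges a subset of block indices and checks each returned block against its stored tag. The keyed SHA-256 tag below is only a stand-in for the scheme's BLS signatures and bilinear-pairing verification (which allow aggregation and public verifiability that a plain hash cannot provide); the sketch shows just the challenge structure and why spot checks catch tampering probabilistically:

```python
import hashlib

SECRET = b"owner-signing-key"  # stand-in for the data owner's secret key

def tag(block, secret=SECRET):
    # stand-in for a BLS signature binding the block content
    return hashlib.sha256(secret + block).hexdigest()

def prove(blocks, tags, challenge):
    # the CSP answers a challenge with the requested (block, tag) pairs
    return [(blocks[i], tags[i]) for i in challenge]

def verify(proof, secret=SECRET):
    # the verifier recomputes each tag; any tampered block fails the check
    return all(tag(b, secret) == t for b, t in proof)

blocks = [b"chunk-0", b"chunk-1", b"chunk-2", b"chunk-3"]
tags = [tag(b) for b in blocks]
tampered = [b"chunk-0", b"EVIL", b"chunk-2", b"chunk-3"]
```

Note that tampering with block 1 is detected only if index 1 is challenged; widening or randomizing the challenge set is exactly what raises the detection probability against a malicious CSP.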
The long-awaited cloud computing concept is now a reality due to the transformation of computer generations. However, security challenges have become the biggest obstacle to the advancement of this emerging technology. A well-established policy framework is defined in this paper to generate security policies that are compliant with requirements and capabilities. Moreover, a federated policy management schema is introduced, based on the policy definition framework and a multi-level policy application, to create and manage virtual clusters with identical or common security levels. The proposed model consists of a well-established ontology of security mechanisms, a procedure that classifies nodes with common policies into virtual clusters, a policy engine that enhances the process of mapping requests to a specific node and its associated cluster, and a matchmaker engine that eliminates inessential mapping processes. The suggested model has been evaluated on performance and security parameters to prove the efficiency and reliability of this multi-layered engine in cloud computing environments during policy definition, application, and mapping procedures.
Clouds play an important role in global atmospheric energy and water vapor budgets, and low cloud simulations suffer from large biases in many atmospheric general circulation models. In this study, cloud microphysical processes such as raindrop evaporation and cloud water accretion in a double-moment six-class cloud microphysics scheme were revised to enhance the simulation of low clouds using the Global-Regional Integrated Forecast System (GRIST) model. Validation of the revised scheme using a single-column version of GRIST demonstrated a reasonable reduction in liquid water biases. The revised parameterization simulated medium- and low-level cloud fractions that were in better agreement with the observations than the original scheme. Long-term global simulations indicate mitigation of the originally overestimated low-level cloud fraction and cloud-water mixing ratio in mid- to high-latitude regions, primarily owing to enhanced accretion processes and weakened raindrop evaporation. The reduced low clouds with the revised scheme showed better consistency with satellite observations, particularly at mid- and high-latitudes. Further improvements can be observed in the simulated cloud shortwave radiative forcing and the vertical distribution of total cloud cover. Annual precipitation in mid-latitude regions has also improved, particularly over the oceans, with significantly increased large-scale and decreased convective precipitation.
In recent years, fog computing has become an important environment for handling the Internet of Things. Fog computing was developed to handle large-scale big data by scheduling tasks via cloud computing. Task scheduling is crucial for efficiently handling IoT user requests, thereby improving system performance, cost, and energy consumption across nodes in cloud computing. Given the large amount of data and user requests, achieving the optimal solution to the task scheduling problem is challenging, particularly in terms of cost and energy efficiency. In this paper, we develop novel strategies to save energy across nodes in fog computing when users execute tasks through the least-cost paths. Task scheduling is developed using a modified Artificial Ecosystem Optimization (AEO) combined with the swarm operators of the Salp Swarm Algorithm (SSA), so that the two competitively optimize their capabilities during the exploitation phase of the optimal search process. The proposed strategy, the Enhanced Artificial Ecosystem Optimization Salp Swarm Algorithm (EAEOSSA), attempts to find the most suitable solution to the multi-objective task scheduling problem that combines cost and energy. The knapsack problem is also incorporated to improve both cost and energy in the iFogSim implementation. The proposed strategy was compared with other strategies in terms of time, cost, energy, and productivity. Experimental results showed that the proposed strategy improved energy consumption, cost, and time over other algorithms. Simulation results demonstrate that the proposed algorithm reduces the average cost, average energy consumption, and mean service time in most scenarios, with average reductions of up to 21.15% in cost and 25.8% in energy consumption.
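A weighted cost-energy objective of the kind EAEOSSA optimizes can be shown with a deliberately naive greedy baseline. Everything here is invented for illustration (per-unit node costs and energies, equal objective weights), and the toy ignores queueing, deadlines, and paths, so it piles work onto one node, which is exactly the kind of shortcoming metaheuristics address:

```python
def greedy_schedule(tasks, nodes, w_cost=0.5, w_energy=0.5):
    # tasks: workload sizes; nodes: list of (cost_per_unit, energy_per_unit)
    # each task goes to the node minimizing the weighted cost+energy score
    assign, total_cost, total_energy = [], 0.0, 0.0
    for t in tasks:
        scores = [w_cost * t * c + w_energy * t * e for c, e in nodes]
        i = scores.index(min(scores))
        assign.append(i)
        total_cost += t * nodes[i][0]
        total_energy += t * nodes[i][1]
    return assign, total_cost, total_energy

# hypothetical fog nodes and tasks
nodes = [(0.10, 0.30), (0.20, 0.10), (0.30, 0.30)]
tasks = [5.0, 3.0, 8.0]
assign, cost, energy = greedy_schedule(tasks, nodes)
```

A population-based scheduler explores many such assignments at once and trades cost against energy instead of collapsing them into one greedy pick per task.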
Task scheduling in cloud computing is a multi-objective optimization problem, often involving conflicting objectives such as minimizing execution time, reducing operational cost, and maximizing resource utilization. However, traditional approaches frequently rely on single-objective optimization methods, which are insufficient for capturing the complexity of such problems. To address this limitation, we introduce MDMOSA (Multi-objective Dwarf Mongoose Optimization with Simulated Annealing), a hybrid algorithm that integrates multi-objective optimization for efficient task scheduling in Infrastructure-as-a-Service (IaaS) cloud environments. MDMOSA harmonizes the exploration capabilities of the biologically inspired Dwarf Mongoose Optimization (DMO) with the exploitation strengths of Simulated Annealing (SA), achieving a balanced search process. The algorithm aims to optimize task allocation by reducing makespan and financial cost while improving system resource utilization. We evaluate MDMOSA through extensive simulations using the real-world Google Cloud Jobs (GoCJ) dataset within the CloudSim environment. Comparative analysis against benchmark algorithms such as SMOACO, MOTSGWO, and MFPAGWO reveals that MDMOSA consistently achieves superior performance in terms of scheduling efficiency, cost-effectiveness, and scalability. These results confirm the potential of MDMOSA as a robust and adaptable solution for resource scheduling in dynamic and heterogeneous cloud computing infrastructures.
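The simulated-annealing half of MDMOSA is easy to sketch for the makespan objective alone. This is a single-objective toy (random single-task moves, geometric cooling, invented task sizes and VM speeds); the DMO exploration phase and the multi-objective cost/utilization handling are omitted:

```python
import math
import random

def makespan(assign, tasks, speeds):
    # completion time of the busiest VM
    loads = [0.0] * len(speeds)
    for t, v in zip(tasks, assign):
        loads[v] += t / speeds[v]
    return max(loads)

def sa_schedule(tasks, speeds, iters=2000, T0=1.0, alpha=0.995, seed=0):
    rng = random.Random(seed)
    n = len(speeds)
    cur = [rng.randrange(n) for _ in tasks]
    best = cur[:]
    T = T0
    for _ in range(iters):
        cand = cur[:]
        cand[rng.randrange(len(tasks))] = rng.randrange(n)   # move one task
        d = makespan(cand, tasks, speeds) - makespan(cur, tasks, speeds)
        # accept improvements always; accept worsenings with prob exp(-d/T)
        if d <= 0 or rng.random() < math.exp(-d / T):
            cur = cand
            if makespan(cur, tasks, speeds) < makespan(best, tasks, speeds):
                best = cur[:]
        T *= alpha
    return best

tasks = [4.0, 3.0, 2.0, 1.0]
speeds = [1.0, 1.0]
best = sa_schedule(tasks, speeds)
```

For this tiny instance the optimal makespan is 5 (pair 4 with 1 and 3 with 2); SA's occasional uphill moves are what let it escape the 6- and 7-valued local optima.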
Recently, large-scale deep learning models have been increasingly adopted for point cloud classification. However, these methods typically require collecting extensive datasets from multiple clients, which may lead to privacy leaks. Federated learning provides an effective solution to data leakage by eliminating the need for data transmission, relying instead on the exchange of model parameters. However, the uneven distribution of client data can still affect the model's ability to generalize effectively. To address these challenges, we propose a new framework for point cloud classification called the Federated Dynamic Aggregation Selection Strategy-based Multi-Receptive Field Fusion Classification Framework (FDASS-MRFCF). Specifically, we tackle these challenges with two key innovations: (1) During the client local training phase, we propose a Multi-Receptive Field Fusion Classification Model (MRFCM), which captures local and global structures in point cloud data through dynamic convolution and multi-scale feature fusion, enhancing the robustness of point cloud classification. (2) In the server aggregation phase, we introduce a Federated Dynamic Aggregation Selection Strategy (FDASS), which employs a hybrid strategy to average client model parameters, skip aggregation, or reallocate local models to different clients, thereby balancing global consistency and local diversity. We evaluate our framework using the ModelNet40 and ShapeNetPart benchmarks, demonstrating its effectiveness. The proposed method is expected to significantly advance point cloud classification in secure environments.
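FDASS decides when to average, skip, or reallocate, but the baseline operation it builds on is size-weighted parameter averaging (the standard FedAvg update). A minimal sketch over plain Python parameter vectors (toy numbers, no selection logic):

```python
def fed_avg(client_params, client_sizes):
    # size-weighted average of flat parameter vectors: the FedAvg server update
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(p[j] * n for p, n in zip(client_params, client_sizes)) / total
            for j in range(dim)]

# two hypothetical clients; the second holds 3x more data, so it dominates
global_params = fed_avg([[0.0, 0.0], [2.0, 4.0]], [1, 3])
```

Uneven client data skews exactly this weighted average toward data-rich clients, which is the generalization problem the paper's dynamic selection strategy targets.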
The Internet of Things (IoT) interconnects devices via network protocols to enable intelligent sensing and control. Resource-constrained IoT devices rely on cloud servers for data storage and processing. However, this cloud-assisted architecture faces two critical challenges: untrusted cloud services and the separation of data ownership from control. Although Attribute-Based Searchable Encryption (ABSE) provides fine-grained access control and keyword search over encrypted data, existing schemes lack error tolerance in exact multi-keyword matching. In this paper, we propose an attribute-based multi-keyword fuzzy searchable encryption scheme with forward ciphertext search (FCS-ABMSE) that avoids computationally expensive bilinear pairing operations on the IoT device side. The scheme supports multi-keyword fuzzy search without requiring explicit keyword fields, thereby significantly enhancing error tolerance in search operations. It further incorporates forward-secure ciphertext search to mitigate trapdoor abuse, as well as offline encryption and verifiable outsourced decryption to minimize user-side computational costs. Formal security analysis proves that the FCS-ABMSE scheme achieves both indistinguishability of ciphertext under chosen keyword attacks (IND-CKA) and indistinguishability of ciphertext under chosen plaintext attacks (IND-CPA). In addition, we construct an enhanced variant based on Type-3 pairings. Results demonstrate that the proposed scheme outperforms existing ABSE approaches in terms of functionality, computational cost, and communication cost.
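The error tolerance that fuzzy multi-keyword search provides is usually stated in terms of edit distance: a query word matches a stored keyword if they differ by at most a few edits. The sketch below shows this matching on plaintext only; the paper's actual contribution is doing it over encrypted data via trapdoors, which this toy does not attempt:

```python
def edit_distance(a, b):
    # one-row Levenshtein distance between strings a and b
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete ca
                                     dp[j - 1] + 1,      # insert cb
                                     prev + (ca != cb))  # substitute
    return dp[-1]

def fuzzy_match(query_words, doc_keywords, max_dist=1):
    # every query word must be within max_dist edits of some document keyword
    return all(any(edit_distance(q, k) <= max_dist for k in doc_keywords)
               for q in query_words)
```

With max_dist = 1, the misspelled query "clowd" still hits a document tagged "cloud", which is the user-facing behavior the abstract calls error tolerance.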
The interactions between clouds and aerosols represent one of the largest uncertainties in assessing the Earth's radiation budget, highlighting the importance of research on the transition zone (TZ) within the cloud-aerosol continuum. This study assesses the global distribution of TZ conditions, analyzes their optical characteristics, and determines the cloud or aerosol types most commonly associated with them, using the cloud-aerosol discrimination (CAD) score of the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) instrument on the CALIPSO satellite. The CAD score classifies clouds and aerosols by the probability density functions of attenuated backscatter, total color ratio, volume depolarization ratio, altitude, and latitude. After applying several filters to avoid artifacts, the TZ was identified as those atmospheric layers that cannot be clearly classified as clouds or aerosols: layers within the no-confidence range (NCR) of the CAD score, and cirrus fringes. The optical characteristics of NCR layers exhibit two main clusters: Cluster 1, with properties between high-altitude ice clouds and aerosols (e.g., wispy cloud fragments), and Cluster 2, with properties between water clouds and aerosols at lower altitudes (e.g., large hydrated aerosols). Our results highlight the significant ubiquity of TZ conditions, which appear in 9.5% of all profiles and comprise 6.4% of the detected layers. Cluster 1 and cirrus-fringe layers predominate near the ITCZ and in mid-latitudes, whereas Cluster 2 layers are more frequent over the oceans along the central West African and East Asian coasts, where elevated smoke and dusty marine aerosols are common.
Funding: National Natural Science Foundation of China (52175237).
Funding: supported by the National Key Research and Development Program of China (No. 2022YFB3404700), the National Natural Science Foundation of China (Nos. 52105313 and 52275299), the Research and Development Program of the Beijing Municipal Education Commission, China (No. KM202210005036), the Natural Science Foundation of Chongqing, China (No. CSTB2023NSCQ-MSX0701), and the National Defense Basic Research Projects of China (No. JCKY2022405C002).
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52172409), the Postdoctoral Innovation Talents Support Program (Grant No. BX20240298), the Fundamental Research Funds for the Central Universities (Grant No. 2682024GF023), and the Heilongjiang Province Postdoctoral Foundation Project (Grant No. LBH-Z23041).
Abstract: In a train collision, impact kinetic energy that cannot be quickly dissipated by the energy-absorbing structure is transferred to the other vehicle through the car body structure, causing structural damage and threatening the lives of the occupants. It is therefore necessary to understand the laws of energy conversion, dissipation, and transfer during train collisions. This study proposes a multi-layer progressive analysis method for the energy flow during train collisions that accounts for the characteristics of the train. In this method, the train collision system is divided into conversion, dissipation, and transfer layers, corresponding to the train, the collision interface, and the car body structure, to analyze the energy conversion, dissipation, and transfer characteristics. Taking the collision process of a rail train as an example, a train collision energy transfer path analysis model was established based on power flow theory. The results show that even when the maximum mean acceleration of the vehicle meets the standard requirements, the jerk may exceed the allowable limit of the human body, putting occupants at risk of injury from a secondary collision. The decay rate of the collision energy along the direction of train operation reaches 79%. As the collision progresses, the collision energy gradually converges in structures with holes, and a structure deforms when the accumulated energy exceeds the maximum energy it can withstand. The proposed method helps in understanding the energy flow law of train collisions and provides theoretical support for future train crashworthiness design.
Funding: The authors thank Princess Nourah bint Abdulrahman University for funding this project through the Researchers Supporting Project (PNURSP2025R319), Riyadh, Saudi Arabia, and Prince Sultan University for covering the article processing charges (APC) associated with this publication. Special acknowledgement to the Automated Systems & Soft Computing Lab (ASSCL), Prince Sultan University, Riyadh, Saudi Arabia.
Abstract: The growing incidence of cyberattacks necessitates robust and effective Intrusion Detection Systems (IDS) for enhanced network security. As conventional IDSs can be unsuitable for detecting diverse and emerging attacks, there is a demand for better techniques to improve detection reliability. This study introduces a new method, the Deep Adaptive Multi-Layer Attention Network (DAMLAN), to improve intrusion detection on network data. Through its multi-scale attention mechanisms and graph features, DAMLAN aims to address both known and unknown intrusions. The real-world NSL-KDD dataset, a popular choice among IDS researchers, is used to assess the proposed model. The training set contains 67,343 normal samples and 58,630 intrusion attacks; the test set contains 12,833 normal samples and 9711 intrusion attacks. The proposed DAMLAN method is more effective than standard models owing to the patterns captured by its attention layers. Experiments demonstrate that the model achieves 99.26% training accuracy and 90.68% testing accuracy, with precision reaching 98.54% on the training set and 96.64% on the testing set. The recall and F1 scores further support the model, with training-set values of 99.90% and 99.21% and testing-set values of 86.65% and 91.37%. These results provide a strong basis for the claims made regarding the model's potential to identify intrusion attacks and affirm its relatively strong overall performance, irrespective of attack type. Future work will aim to extend the scalability and applicability of DAMLAN for real-time use in intrusion detection systems.
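The reported F1 scores follow directly from the reported precision and recall as their harmonic mean; a minimal sketch (using only the test-set figures quoted in the abstract) confirms the arithmetic:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Test-set precision and recall reported for DAMLAN in the abstract.
test_precision, test_recall = 0.9664, 0.8665
print(round(f1_score(test_precision, test_recall), 4))  # → 0.9137, matching the reported 91.37%
```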
Funding: Project (2023YFC3707800) supported by the National Key Research and Development Program of China.
Abstract: In practical engineering construction, multi-layered barriers containing geomembranes are extensively applied to retard the migration of pollutants. However, the associated analytical theory of pollutant diffusion still needs further improvement. In this work, general analytical solutions are derived for the one-dimensional diffusion of a degradable organic contaminant (DOC) in multi-layered media containing geomembranes under a time-varying concentration boundary condition, employing variable substitution and separation-of-variables approaches. These analytical solutions, with their explicit expressions, can be used not only to study the diffusion behavior of DOC in bottom and vertical composite barrier systems but also to verify more complex numerical models. The proposed general analytical solutions are fully validated via three comparative analyses: against experimental measurements, an existing analytical solution, and a finite-difference solution. Finally, the influence of different factors on the service performance of a composite cutoff wall (CCW, consisting of two soil-bentonite layers and a geomembrane) is investigated through a composite vertical barrier system as an application example. The findings can provide scientific guidance for barrier performance evaluation and the engineering design of CCWs, and the application example demonstrates the necessity and effectiveness of the developed analytical solutions.
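The finite-difference solutions used above for cross-validation can be illustrated with a minimal sketch of explicit time-stepping for one-dimensional diffusion with first-order degradation, ∂c/∂t = D ∂²c/∂x² − λc, in a single homogeneous layer. All parameter values here are hypothetical, chosen only for illustration, and this single-layer scheme omits the geomembrane interface conditions treated in the paper:

```python
import numpy as np

# Hypothetical single-layer parameters (not from the paper):
D, lam = 1e-9, 1e-8         # diffusion coefficient (m^2/s), first-order decay rate (1/s)
L, nx = 0.05, 51            # layer thickness (m), number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D        # explicit-scheme stability requires dt <= dx^2 / (2D)

c = np.zeros(nx)
c[0] = 1.0                  # constant unit source concentration at the top boundary

for _ in range(20000):      # march far enough to approach steady state
    c[1:-1] += dt * (D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2 - lam * c[1:-1])
    c[0], c[-1] = 1.0, 0.0  # fixed-concentration boundaries at top and bottom

print(c[nx // 2])           # concentration at mid-depth, between 0 and 1
```

With these values the profile relaxes to a nearly linear steady state with a slight sag from decay, so the mid-depth concentration sits just under 0.5.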
Funding: Supported by the Fundamental Research Funds for the Central Universities (No. lzujbky-2024-05), the Innovation Foundation of the Provincial Education Department of Gansu (2024B-005), the Scientific Department of Gansu (24CXGA083, 24CXGA024, JK2024-28, JK2024-32, and 23CXJA0007), the Industrial Support Plan Project of the Provincial Education Department of Gansu (2025CYZC-003 and CYZC-2024-10), and the Hunan Natural Science Foundation Science and Education Joint Fund Project (2022JJ60109).
Abstract: Transportation structures such as composite pavements and railway foundations typically consist of multi-layered media designed to provide high bearing capacity. A theoretical understanding of the load transfer mechanisms in these multi-layer composites is essential, as it offers intuitive insight into parametric influences and facilitates enhanced structural performance. This paper employs an improved transfer matrix method to address the limitations of existing theoretical approaches for analyzing multi-layer composite structures. By establishing a two-dimensional composite pavement model, it investigates load transfer characteristics and validates their accuracy through finite element simulation. The proposed method offers a straightforward analytical approach for examining the internal interactions between structural layers. Case studies indicate that the concrete surface layer is the main load-bearing layer, carrying most of the vertical normal and shear stresses. The soil base layer reduces the overall mechanical response of the substructure, while horizontal actions increase the risk of interfacial slip and cracking. Structural optimization analysis demonstrates that increasing the thickness of the concrete surface layer, enhancing the thickness and stiffness of the soil base layer, or incorporating gradient layers can significantly mitigate these risks of interfacial slip and cracking. The findings of this study can guide the optimization design, parameter analysis, and damage prevention of multi-layer composite structures.
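The core idea of a transfer matrix method is that each layer maps a state vector at its top to the state at its bottom via a matrix, so a whole stack reduces to one matrix product. A minimal one-dimensional illustration (not the paper's two-dimensional pavement formulation, and with entirely hypothetical layer properties) uses the state [displacement u, axial force N] for an axially loaded column of layers:

```python
import numpy as np

# For a layer of thickness h, modulus E and cross-section A under constant axial force:
# u_bottom = u_top + N*h/(E*A) and N is unchanged, giving a 2x2 upper-triangular matrix.
def layer_matrix(h: float, E: float, A: float) -> np.ndarray:
    return np.array([[1.0, h / (E * A)],
                     [0.0, 1.0]])

# Hypothetical three-layer stack: stiff concrete surface, base layer, soft subgrade.
layers = [(0.2, 30e9, 1.0), (0.3, 1e9, 1.0), (1.0, 50e6, 1.0)]  # (h in m, E in Pa, A in m^2)

T = np.eye(2)
for h, E, A in layers:
    T = layer_matrix(h, E, A) @ T   # chain the layers top-to-bottom

s_top = np.array([0.0, 1e5])        # zero datum displacement, 100 kN axial force
u_bottom, N_bottom = T @ s_top
print(u_bottom, N_bottom)           # force is transmitted unchanged; displacement accumulates
```

The same chaining pattern carries over to the two-dimensional case, where the state vector collects stresses and displacements and the layer matrices come from the governing elasticity equations.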
Funding: Projects (42477162, 52108347, 52178371, 52168046, 52178321, 52308383) supported by the National Natural Science Foundation of China; Projects (2023C03143, 2022C01099, 2024C01219, 2022C03151) supported by the Zhejiang Key Research and Development Plan, China; Project (LQ22E080010) supported by the Exploring Youth Project of the Zhejiang Natural Science Foundation, China; Project (LR21E080005) supported by the Outstanding Youth Project of the Natural Science Foundation of Zhejiang Province, China; Project (2022M712964) supported by the Postdoctoral Science Foundation of China; Project (2023AFB008) supported by the Natural Science Foundation of Hubei Province for Youth, China; Project (202203) supported by the Engineering Research Centre of Rock-Soil Drilling & Excavation and Protection, Ministry of Education, China; Project (202305-2) supported by the Science and Technology Project of the Zhejiang Provincial Communication Department, China; Project (2021K256) supported by the Construction Research Funds of the Department of Housing and Urban-Rural Development of Zhejiang Province, China.
Abstract: This study proposes a general imperfect thermal contact model to predict the thermal contact resistance at the interfaces of multi-layered composite structures. Based on Green-Lindsay (GL) thermoelastic theory, semi-analytical solutions for the temperature increment and displacement of multi-layered composite structures are obtained using the Laplace transform method, upon which the effects of the thermal resistance coefficient, partition coefficient, thermal conductivity ratio, and heat capacity ratio on the responses are studied. The results show that the generalized imperfect thermal contact model realistically describes the imperfect thermal contact problem. Accordingly, it can degenerate into other thermal contact models by adjusting the thermal resistance coefficient and partition coefficient.
Funding: Supported by the National Natural Science Foundation of China (No. 62401597), the Natural Science Foundation of Hunan Province, China (No. 2024JJ6469), and the Research Project of the National University of Defense Technology, China (No. ZK22-02).
Abstract: Low Earth Orbit (LEO) mega-constellation networks, exemplified by Starlink, are poised to play a pivotal role in future mobile communication networks due to their low latency and high capacity. With massively deployed satellites, ground users can now be covered by multiple visible satellites, but they also face complex handover issues with such massive, high-mobility satellites across multiple layers. End-to-end routing is also affected by handover behavior. In this paper, we propose an intelligent handover strategy dedicated to multi-layer LEO mega-constellation networks. First, an analytic model is used to rapidly estimate the end-to-end propagation latency as a key handover factor in constructing a multi-objective optimization model. An intelligent handover strategy is then proposed for single-layer constellations, employing a deep reinforcement learning algorithm based on the Dueling Double Deep Q Network (D3QN). Moreover, an optimal cross-layer handover scheme is proposed by predicting latency jitter and minimizing the cross-layer overhead. Simulation results demonstrate the superior performance of the proposed method in multi-layer LEO mega-constellations, showing reductions of up to 8.2% and 59.5% in end-to-end latency and jitter, respectively, compared with existing handover strategies.
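The "dueling" part of D3QN refers to splitting the Q-value estimate into a state value V(s) and per-action advantages A(s, a), recombined as Q(s, a) = V(s) + A(s, a) − mean_a A(s, a) for identifiability. A minimal sketch of that aggregation step (with toy numbers standing in for network outputs, which are not from the paper):

```python
import numpy as np

def dueling_q(value: float, advantages: np.ndarray) -> np.ndarray:
    """Combine a state value and per-action advantages into Q-values,
    subtracting the mean advantage so V and A are identifiable."""
    return value + advantages - advantages.mean()

# Toy example: one network evaluation, four candidate handover targets.
v = 2.0
adv = np.array([0.5, -0.3, 0.1, -0.3])
q = dueling_q(v, adv)
print(q, int(q.argmax()))   # → [2.5 1.7 2.1 1.7] 0 (greedy choice = target 0)
```

In a handover setting the greedy action would pick the candidate satellite with the highest Q-value at each decision epoch.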
Abstract: Stab-resistant textiles play a critical role in personal protection, necessitating a deeper understanding of how structural and layering factors influence their performance. The current study experimentally examines the effects of textile structure, layering, and ply orientation on the stab resistance of multi-layer textiles. Three 3D warp interlock (3DWI) structures ({f1}, {f2}, {f3}) and a 2D woven fabric ({f4}), all made of high-performance p-aramid yarns, were engineered and manufactured. Multi-layer specimens were prepared and subjected to drop-weight stabbing tests following HOSDB standards. Stabbing performance metrics, including Depth of Trauma (DoT), Depth of Penetration (DoP), and trauma deformation (Ymax, Xmax), were investigated and analyzed. Statistical analyses (two- and one-way ANOVA) indicated that fabric type and layer number significantly impacted DoP (P<0.05), while ply orientation significantly affected DoP (P<0.05) but not DoT (P>0.05). Further detailed analysis revealed that the 2D woven fabric exhibited greater trauma deformation than the 3DWI structures. Increasing the number of layers reduced both DoP and DoT across all fabric structures, with f3 demonstrating the best performance in multi-layer configurations. Aligned ply orientations also enhanced stab resistance, underscoring the importance of alignment in dissipating impact energy.
Funding: Funded by the National Natural Science Foundation of China (New Design and Analysis of Fully Homomorphic Signatures, Grant No. 62172436).
Abstract: With the rapid expansion of the Internet of Things (IoT), user data has experienced exponential growth, leading to increasing concerns about the security and integrity of data stored in the cloud. Traditional schemes relying on untrusted third-party auditors suffer from both security and efficiency issues, while existing decentralized blockchain-based auditing solutions still fall short in correctness and security. This paper proposes an improved blockchain-based cloud auditing scheme with the following core contributions: identifying critical logical contradictions in the original scheme, thereby establishing the foundation for the correctness of cloud auditing; designing an enhanced mechanism that integrates multiple hashing with dynamic aggregate signatures, binding encrypted blocks through bilinear pairings and BLS signatures, and improving the scheme by setting parameters based on the Computational Diffie-Hellman (CDH) problem, significantly strengthening data integrity protection and anti-forgery capabilities; and introducing a random challenge mechanism and a dynamic parameter adjustment strategy, effectively resisting attacks such as forgery, tampering, and deletion, significantly improving the probability of detecting malicious Cloud Service Providers (CSPs), and significantly reducing the proof generation overhead for CSPs while maintaining the same computational cost for Data Owners. Theoretical analysis and performance evaluation experiments demonstrate that the proposed scheme achieves significant improvements in both security and efficiency. Finally, the paper explores potential applications of the enhanced security scheme in fields such as healthcare, drone swarms, and government office attendance systems, providing an effective approach for building secure, efficient, and decentralized cloud auditing systems.
Abstract: The long-awaited cloud computing concept is now a reality due to the transformation of computer generations. However, security challenges have become the biggest obstacle to the advancement of this emerging technology. A well-established policy framework is defined in this paper to generate security policies that are compliant with requirements and capabilities. Moreover, a federated policy management schema is introduced based on the policy definition framework and a multi-level policy application to create and manage virtual clusters with identical or common security levels. The proposed model consists of a well-established ontology for security mechanisms, a procedure that classifies nodes with common policies into virtual clusters, a policy engine to enhance the process of mapping requests to a specific node and its associated cluster, and a matchmaker engine to eliminate inessential mapping processes. The suggested model has been evaluated in terms of performance and security parameters to prove the efficiency and reliability of this multi-layered engine in cloud computing environments during policy definition, application, and mapping procedures.
Funding: National Natural Science Foundation of China (42375153, 42105153, 42205157); Development of Science and Technology at the Chinese Academy of Meteorological Sciences (2023KJ038).
Abstract: Clouds play an important role in global atmospheric energy and water vapor budgets, and low cloud simulations suffer from large biases in many atmospheric general circulation models. In this study, cloud microphysical processes such as raindrop evaporation and cloud water accretion in a double-moment six-class cloud microphysics scheme were revised to enhance the simulation of low clouds using the Global-Regional Integrated Forecast System (GRIST) model. Validation of the revised scheme using a single-column version of GRIST demonstrated a reasonable reduction in liquid water biases. The revised parameterization simulated medium- and low-level cloud fractions that were in better agreement with the observations than the original scheme. Long-term global simulations indicate mitigation of the originally overestimated low-level cloud fraction and cloud-water mixing ratio in mid- to high-latitude regions, primarily owing to enhanced accretion processes and weakened raindrop evaporation. The reduced low clouds with the revised scheme showed better consistency with satellite observations, particularly at mid- and high latitudes. Further improvements can be observed in the simulated cloud shortwave radiative forcing and the vertical distribution of total cloud cover. Annual precipitation in mid-latitude regions has also improved, particularly over the oceans, with significantly increased large-scale and decreased convective precipitation.
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2503).
Abstract: In recent years, fog computing has become an important environment for the Internet of Things. Fog computing was developed to handle large-scale big data by scheduling tasks via cloud computing. Task scheduling is crucial for efficiently handling IoT user requests, thereby improving system performance, cost, and energy consumption across nodes. Given the large volume of data and user requests, achieving the optimal solution to the task scheduling problem is challenging, particularly in terms of cost and energy efficiency. In this paper, we develop novel strategies to save energy consumption across nodes in fog computing when users execute tasks through the least-cost paths. Task scheduling is developed using a modified Artificial Ecosystem Optimization (AEO) combined with operators from the Salp Swarm Algorithm (SSA) to competitively optimize their capabilities during the exploitation phase of the optimal search process. The proposed strategy, the Enhancement Artificial Ecosystem Optimization Salp Swarm Algorithm (EAEOSSA), attempts to find the most suitable solution to the multi-objective task scheduling optimization problem that combines cost and energy. The knapsack problem is also incorporated to improve both cost and energy in the iFogSim implementation. A comparison was made between the proposed strategy and other strategies in terms of time, cost, energy, and productivity. Simulation results demonstrate that the proposed algorithm reduces the average cost, average energy consumption, and mean service time in most scenarios, with average reductions of up to 21.15% in cost and 25.8% in energy consumption.
Abstract: Task scheduling in cloud computing is a multi-objective optimization problem, often involving conflicting objectives such as minimizing execution time, reducing operational cost, and maximizing resource utilization. However, traditional approaches frequently rely on single-objective optimization methods, which are insufficient for capturing the complexity of such problems. To address this limitation, we introduce MDMOSA (Multi-objective Dwarf Mongoose Optimization with Simulated Annealing), a hybrid algorithm that integrates multi-objective optimization for efficient task scheduling in Infrastructure-as-a-Service (IaaS) cloud environments. MDMOSA harmonizes the exploration capabilities of the biologically inspired Dwarf Mongoose Optimization (DMO) with the exploitation strengths of Simulated Annealing (SA), achieving a balanced search process. The algorithm optimizes task allocation by reducing makespan and financial cost while improving system resource utilization. We evaluate MDMOSA through extensive simulations using the real-world Google Cloud Jobs (GoCJ) dataset within the CloudSim environment. Comparative analysis against benchmark algorithms such as SMOACO, MOTSGWO, and MFPAGWO reveals that MDMOSA consistently achieves superior performance in terms of scheduling efficiency, cost-effectiveness, and scalability. These results confirm the potential of MDMOSA as a robust and adaptable solution for resource scheduling in dynamic and heterogeneous cloud computing infrastructures.
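The exploitation strength that simulated annealing contributes to hybrids like MDMOSA comes from the Metropolis acceptance rule: improving candidate schedules are always accepted, while worsening ones are accepted with probability exp(−Δ/T), which shrinks as the temperature T cools. A minimal sketch of that rule (the costs and temperature values are illustrative, not from the paper):

```python
import math
import random

def sa_accept(delta: float, temperature: float, rng: random.Random) -> bool:
    """Accept a candidate schedule: always if it improves cost (delta <= 0),
    otherwise with probability exp(-delta / temperature)."""
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)

rng = random.Random(42)
print(sa_accept(-1.0, 1.0, rng))   # improving move → True
print(sa_accept(5.0, 1e-9, rng))   # worsening move at near-zero temperature → False
```

At high temperature the rule behaves almost like a random walk (broad exploration); as T decreases it converges toward strict hill-climbing, which is why SA pairs naturally with a population-based explorer such as DMO.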
Funding: Supported in part by the National Key Research and Development Program of China under Grant 2021YFB3101100; in part by the National Natural Science Foundation of China under Grants 42461057, 62272123, and 42371470; in part by the Fundamental Research Program of Shanxi Province under Grant 202303021212164; and in part by the Postgraduate Education Innovation Program of Shanxi Province under Grant 2024KY474.
Abstract: Recently, large-scale deep learning models have been increasingly adopted for point cloud classification. However, these methods typically require collecting extensive datasets from multiple clients, which may lead to privacy leaks. Federated learning provides an effective solution to data leakage by eliminating the need for data transmission, relying instead on the exchange of model parameters. However, the uneven distribution of client data can still affect the model's ability to generalize effectively. To address these challenges, we propose a new framework for point cloud classification called the Federated Dynamic Aggregation Selection Strategy-based Multi-Receptive Field Fusion Classification Framework (FDASS-MRFCF). Specifically, we tackle these challenges with two key innovations: (1) During the client local training phase, we propose a Multi-Receptive Field Fusion Classification Model (MRFCM), which captures local and global structures in point cloud data through dynamic convolution and multi-scale feature fusion, enhancing the robustness of point cloud classification. (2) In the server aggregation phase, we introduce a Federated Dynamic Aggregation Selection Strategy (FDASS), which employs a hybrid strategy to average client model parameters, skip aggregation, or reallocate local models to different clients, thereby balancing global consistency and local diversity. We evaluate our framework on the ModelNet40 and ShapeNetPart benchmarks, demonstrating its effectiveness. The proposed method is expected to significantly advance point cloud classification in secure environments.
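The baseline server-side step that selection strategies like FDASS modify is federated averaging: a sample-count-weighted mean of client model parameters. A minimal sketch of that aggregation (client parameter vectors and sample counts are illustrative, not from the paper):

```python
import numpy as np

def fed_avg(client_params: list, n_samples: list) -> np.ndarray:
    """Sample-count-weighted average of client model parameter vectors."""
    total = sum(n_samples)
    return sum((n / total) * p for n, p in zip(n_samples, client_params))

# Three hypothetical clients, each contributing a 2-parameter model.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
counts = [100, 100, 200]            # client 3 holds twice as much data
print(fed_avg(clients, counts))     # → [3.5 4.5]
```

A dynamic selection strategy would then decide, per round, whether to apply this average, skip it, or redistribute local models, rather than always averaging.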
Abstract: The Internet of Things (IoT) interconnects devices via network protocols to enable intelligent sensing and control. Resource-constrained IoT devices rely on cloud servers for data storage and processing. However, this cloud-assisted architecture faces two critical challenges: untrusted cloud services and the separation of data ownership from control. Although attribute-based searchable encryption (ABSE) provides fine-grained access control and keyword search over encrypted data, existing schemes lack error tolerance in exact multi-keyword matching. In this paper, we propose an attribute-based multi-keyword fuzzy searchable encryption scheme with forward ciphertext search (FCS-ABMSE) that avoids computationally expensive bilinear pairing operations on the IoT device side. The scheme supports multi-keyword fuzzy search without requiring explicit keyword fields, thereby significantly enhancing error tolerance in search operations. It further incorporates forward-secure ciphertext search to mitigate trapdoor abuse, as well as offline encryption and verifiable outsourced decryption to minimize user-side computational costs. Formal security analysis proves that the FCS-ABMSE scheme achieves both indistinguishability of ciphertexts under chosen keyword attacks (IND-CKA) and indistinguishability of ciphertexts under chosen plaintext attacks (IND-CPA). In addition, we construct an enhanced variant based on type-3 pairings. Results demonstrate that the proposed scheme outperforms existing ABSE approaches in terms of functionality, computational cost, and communication cost.
Funding: Funded through project NUBOLOSYTI (PID2023149972NB-100) of the Spanish Ministry of Science and Innovation (MICINN), and supported by an IFUdG 2022 fellowship.
Abstract: The interactions between clouds and aerosols represent one of the largest uncertainties in assessing the Earth's radiation budget, highlighting the importance of research on the transition zone (TZ) within the cloud-aerosol continuum. This study assesses the global distribution of TZ conditions, analyzes their optical characteristics, and determines the cloud or aerosol types most commonly associated with them, using the cloud-aerosol discrimination (CAD) score of the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) instrument on the CALIPSO satellite. The CAD score classifies clouds and aerosols by probability density functions of attenuated backscatter, total color ratio, volume depolarization ratio, altitude, and latitude. After applying several filters to avoid artifacts, the TZ was identified as those atmospheric layers that cannot be clearly classified as clouds or aerosols: layers within the no-confidence range (NCR) of the CAD score and cirrus fringes. The optical characteristics of NCR layers exhibit two main clusters: Cluster 1, with properties between high-altitude ice clouds and aerosols (e.g., wispy cloud fragments), and Cluster 2, with properties between water clouds and aerosols at lower altitudes (e.g., large hydrated aerosols). Our results highlight the significant ubiquity of TZ conditions, which appear in 9.5% of all profiles and comprise 6.4% of the detected layers. Cluster 1 and cirrus-fringe layers predominate near the ITCZ and in mid-latitudes, whereas Cluster 2 layers are more frequent over the oceans along the central West African and East Asian coasts, where elevated smoke and dusty marine aerosols are common.