Zero-day attacks exploit unknown vulnerabilities that evade identification by cybersecurity detection tools. This study indicates that zero-day attacks have a significant impact on computer security. A conventional signature-based detection algorithm is not efficient at recognizing zero-day attacks, as the signatures of zero-day attacks are usually not previously accessible. A machine learning (ML)-based detection algorithm is proficient at capturing the statistical features of attacks and is, therefore, promising for zero-day attack detection. ML and deep learning (DL) are employed for designing intrusion detection systems (IDSs). The rapid emergence of large numbers of novel cyberattacks poses significant challenges for IDS solutions that depend on datasets of prior attack signatures. This manuscript presents a zero-day attack detection method employing an equilibrium optimizer with deep learning (ZDAD-EODL) to ensure cybersecurity. The ZDAD-EODL technique combines meta-heuristic feature subset selection with an optimal DL-based classification technique for zero-day attacks. Initially, the min-max scaler is utilized for normalizing the input data. For feature selection (FS), the ZDAD-EODL method utilizes the equilibrium optimizer (EO) model to choose feature subsets. In addition, the ZDAD-EODL technique employs the bi-directional gated recurrent unit (BiGRU) technique for the classification and identification of zero-day attacks. Finally, the detection performance of the BiGRU technique is further enhanced through a subtraction-average-based optimizer (SABO)-based tuning process. The performance of the ZDAD-EODL approach is investigated on a benchmark dataset. The comparison study of the ZDAD-EODL approach portrayed a superior accuracy value of 98.47% over existing techniques.
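The min-max scaling step described in this abstract can be sketched in a few lines; the feature matrix below is illustrative, not from the paper's dataset.

```python
def min_max_scale(rows):
    """Scale each feature column of `rows` to [0, 1] independently."""
    cols = list(zip(*rows))
    scaled_cols = []
    for col in cols:
        lo, hi = min(col), max(col)
        span = hi - lo or 1.0  # guard against constant columns
        scaled_cols.append([(v - lo) / span for v in col])
    return [list(r) for r in zip(*scaled_cols)]

data = [[2.0, 100.0], [4.0, 300.0], [6.0, 200.0]]
print(min_max_scale(data))  # [[0.0, 0.0], [0.5, 1.0], [1.0, 0.5]]
```

Each column is rescaled independently, so features on very different scales (e.g., packet counts vs. durations) contribute comparably to the downstream FS and classification stages.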
Network-on-Chip (NoC) systems are progressively deployed to connect massively parallel megacore systems in new computing architectures. As a result, application mapping has become an important aspect of performance and scalability, as current trends require the distribution of computation across network nodes. In this paper, we survey a large number of mapping and scheduling techniques designed for NoC architectures, concentrating in particular on 3D systems. We take a systematic literature review approach to analyse existing methods across static, dynamic, hybrid, and machine-learning-based approaches, alongside preliminary AI-based dynamic models in recent works. We classify them along several main aspects covering power-aware mapping, fault tolerance, load balancing, and adaptivity to dynamic workloads. We also assess the efficacy of each method against performance parameters such as latency, throughput, response time, and error rate. Key challenges, including energy efficiency, real-time adaptability, and reinforcement learning integration, are highlighted as well. To the best of our knowledge, this is one of the few recent reviews that covers both traditional and AI-based algorithms for mapping onto modern NoCs and opens research challenges. Finally, we provide directions for future work toward improved adaptability and scalability via lightweight learned models and hierarchical mapping frameworks.
Federated learning is a distributed framework that trains a centralised model using data from multiple clients without transferring that data to a central server. Despite rapid progress, federated learning still faces several unsolved challenges. Specifically, communication costs and system heterogeneity, such as non-identical data distribution, hinder federated learning's progress. Several approaches have recently emerged for federated learning involving heterogeneous clients with varying computational capabilities (namely, heterogeneous federated learning). However, heterogeneous federated learning faces two key challenges: optimising model size and determining client selection ratios. Moreover, efficiently aggregating local models from clients with diverse capabilities is crucial for addressing system heterogeneity and communication efficiency. This paper proposes an evolutionary multiobjective optimisation framework for heterogeneous federated learning (MOHFL) to address these issues. Our approach formulates and solves a biobjective optimisation problem that minimises communication cost and model error rate. The decision variables in this framework comprise model sizes and client selection ratios for each of Q client clusters, yielding a total of 2×Q optimisation parameters to be tuned. We develop a partition-based strategy for MOHFL that segregates clients into clusters based on their communication and computation capabilities. Additionally, we implement an adaptive model sizing mechanism that dynamically assigns appropriate subnetwork architectures to clients based on their computational constraints. We also propose a unified aggregation framework to combine models of varying sizes from heterogeneous clients effectively. Extensive experiments on multiple datasets demonstrate the effectiveness and superiority of our proposed method compared to existing approaches.
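The biobjective trade-off this abstract describes (communication cost vs. model error rate) can be illustrated with a minimal Pareto-dominance filter; the candidate points below are invented for illustration, and the evolutionary search over the 2×Q decision variables is not reproduced here.

```python
def dominates(a, b):
    """a Pareto-dominates b if it is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points (both objectives are minimised)."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Each point: (communication cost, model error rate) for one candidate setting
# of the decision variables (model sizes + client selection ratios per cluster).
candidates = [(10.0, 0.30), (12.0, 0.20), (8.0, 0.35), (11.0, 0.25), (15.0, 0.22)]
print(pareto_front(candidates))  # the four non-dominated trade-off points
```

An evolutionary multiobjective optimiser would evolve such candidate settings and return the surviving front, from which a deployment-specific trade-off is picked.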
Photovoltaic (PV) systems are electrical systems designed to convert solar energy into electrical energy. The power output of photovoltaic cells, a crucial component of PV systems, is influenced by harsh weather conditions, panel temperature, and solar irradiance. Therefore, accurately identifying the parameters of PV models is essential for simulating, controlling, and evaluating PV systems. In this study, we propose an enhanced weighted-mean-of-vectors optimisation (EINFO) for efficiently determining the unknown parameters in PV systems. EINFO introduces a Lambert W-based explicit objective function for the PV model, enhancing the computational accuracy of the algorithm's population fitness and addressing the challenge of improving the identification accuracy of metaheuristic algorithms for unknown parameters in PV models. We experimentally apply EINFO to three types of PV models (single-diode, double-diode, and PV-module models) to validate its accuracy and stability in parameter identification. The results demonstrate that EINFO achieves root mean square errors (RMSEs) of 7.7301E-04, 6.8553E-04, and 2.0608E-03 for the single-diode, double-diode, and PV-module models, respectively, surpassing the original INFO algorithm as well as other methods in terms of convergence speed, accuracy, and stability. Furthermore, comprehensive experimental findings on three commercial PV modules (ST40, SM55, and KC200GT) indicate that EINFO consistently maintains high accuracy across varying temperatures and irradiation levels. In conclusion, EINFO emerges as a highly competitive and practical approach for parameter identification in diverse types of PV models.
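The implicit single-diode equation, I = Iph - I0·(exp((V + I·Rs)/(n·Vt)) - 1) - (V + I·Rs)/Rsh, becomes explicit via the Lambert W function, which is the key ingredient of the objective function described above. A minimal pure-Python Newton-iteration sketch of the principal branch follows (in practice `scipy.special.lambertw` would typically be used):

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch of Lambert W (solves w * exp(w) = x) for x >= 0,
    by Newton iteration with f(w) = w*exp(w) - x, f'(w) = exp(w)*(w + 1)."""
    w = math.log1p(x)  # reasonable starting guess on [0, inf)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

print(lambert_w(1.0))  # ~0.5671 (the omega constant)
```

With W available, the diode current can be evaluated directly from the voltage and the candidate parameters, so each fitness evaluation avoids an inner nonlinear solve.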
Distributed Quantum Computing (DQC) provides a means of scaling available quantum computation by interconnecting multiple quantum processing units (QPUs). A key challenge in this domain is efficiently allocating logical qubits from quantum circuits to the physical qubits within QPUs, a task known to be NP-hard. Traditional approaches, primarily focused on graph partitioning strategies, have sought to reduce the number of Bell pairs required for executing non-local CNOT operations, a form of gate teleportation. However, these methods have limitations in terms of efficiency and scalability. Addressing this, our work jointly considers gate and qubit teleportations, introducing a novel meta-heuristic algorithm to minimise the network cost of executing a quantum circuit. By allowing dynamic reallocation of qubits along with gate teleportations during circuit execution, our method significantly enhances the overall efficacy and potential scalability of DQC frameworks. In our numerical analysis, we demonstrate that integrating qubit teleportations into our genetic algorithm for optimising circuit blocking reduces the required resources, specifically the number of EPR pairs, compared to traditional graph partitioning methods. Our results, derived from both benchmark and randomly generated circuits, show that as circuit complexity increases, demanding more qubit teleportations, our approach effectively optimises these teleportations throughout the execution, thereby enhancing performance through strategic circuit partitioning. This is a step forward in the pursuit of a global quantum compiler, which will ultimately enable the efficient use of a 'quantum data center' in the future.
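A minimal version of the network-cost model being minimised: under a fixed (static) qubit-to-QPU assignment, each two-qubit gate whose operands sit on different QPUs consumes one EPR (Bell) pair. The gate list and assignment below are illustrative; the paper's genetic algorithm additionally permits mid-circuit reassignments (qubit teleportations), which this sketch does not model.

```python
def epr_cost(two_qubit_gates, assignment):
    """Count non-local two-qubit gates: each needs one EPR pair when its
    operand qubits are assigned to different QPUs."""
    return sum(1 for a, b in two_qubit_gates if assignment[a] != assignment[b])

gates = [(0, 1), (1, 2), (2, 3), (0, 3)]               # CNOTs between logical qubits
static = {0: "QPU0", 1: "QPU0", 2: "QPU1", 3: "QPU1"}  # hypothetical partition
print(epr_cost(gates, static))  # gates (1, 2) and (0, 3) are cross-QPU -> 2
```

A search over assignments (or, as in the paper, over assignments that may change between circuit blocks) then minimises this count subject to QPU capacity constraints.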
This study proposes a new component of the composite loss function minimised during training of super-resolution (SR) algorithms: the normalised structural similarity index loss LSSIMN, which has the potential to improve the natural appearance of reconstructed images. Deep learning-based SR algorithms reconstruct high-resolution images from low-resolution inputs, offering a practical means of enhancing image quality without requiring superior imaging hardware, which is particularly important in medical applications where diagnostic accuracy is critical. Although recent SR methods employing convolutional and generative adversarial networks achieve high pixel fidelity, visual artefacts may persist, making the design of the training loss function essential for ensuring reliable and naturalistic image reconstruction. Using two models, an SR network and an Invertible Rescaling Neural Network (IRN), trained on multiple benchmark datasets, our research shows that LSSIMN contributes significantly to visual quality while preserving structural fidelity on the reference datasets. Quantitative analysis shows that including this loss component improves the final structural similarity of reconstructed images in the validation set by a mean of 2.88% compared with omitting it, and by 0.218% compared with its non-normalised counterpart.
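For reference, the structural similarity index underlying such a loss is standard (this is the textbook SSIM definition and the usual unnormalised loss form, not the paper's specific LSSIMN normalisation):

```latex
\mathrm{SSIM}(x, y) =
  \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}
       {(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},
\qquad
\mathcal{L}_{\mathrm{SSIM}} = 1 - \mathrm{SSIM}(x, y),
```

where μx, μy are local means, σx², σy² local variances, σxy the local covariance of the reconstructed and reference patches, and C1, C2 small stabilising constants; the paper's contribution is the normalisation of this term within the composite training loss.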
Estimating probability density functions (PDFs) is critical in data analysis, particularly for complex multimodal distributions. Traditional kernel density estimator (KDE) methods often face challenges in accurately capturing multimodal structures due to their uniform weighting scheme, leading to mode loss and degraded estimation accuracy. This paper presents the flexible kernel density estimator (F-KDE), a novel nonparametric approach designed to address these limitations. F-KDE introduces the concept of kernel unit inequivalence, assigning adaptive weights to each kernel unit, which better models local density variations in multimodal data. The method optimises an objective function that integrates estimation error and log-likelihood, using a particle swarm optimisation (PSO) algorithm that automatically determines optimal weights and bandwidths. Through extensive experiments on synthetic and real-world datasets, we demonstrate that (1) the weights and bandwidths in F-KDE stabilise as the optimisation algorithm iterates, (2) F-KDE effectively captures multimodal characteristics, and (3) F-KDE outperforms state-of-the-art density estimation methods in terms of accuracy and robustness. The results confirm that F-KDE provides a valuable solution for accurately estimating multimodal PDFs.
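The core idea of per-kernel weights and bandwidths ("kernel unit inequivalence") can be sketched as follows; the sample data, weights, and bandwidths are illustrative, and the PSO step that would tune them is omitted.

```python
import math

def weighted_kde(x, samples, weights, bandwidths):
    """Density estimate with a per-sample weight and bandwidth, so kernels
    near dense modes can be sharpened and down/up-weighted independently."""
    total = sum(weights)
    dens = 0.0
    for s, w, h in zip(samples, weights, bandwidths):
        dens += (w / total) * math.exp(-0.5 * ((x - s) / h) ** 2) / (h * math.sqrt(2 * math.pi))
    return dens

# A bimodal toy sample with hand-picked weights/bandwidths (PSO would tune these).
samples = [-2.0, -1.9, 2.0, 2.1]
weights = [1.0, 1.0, 1.5, 1.5]
bandwidths = [0.4, 0.4, 0.3, 0.3]
print(weighted_kde(2.05, samples, weights, bandwidths))  # density near the right-hand mode
```

Because the per-kernel weights are normalised by their sum and each Gaussian kernel integrates to one, the estimate remains a valid density regardless of the chosen weights.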
The integration of physics-based modelling and data-driven artificial intelligence (AI) has emerged as a transformative paradigm in computational mechanics. This perspective reviews the development and current status of AI-empowered frameworks, including data-driven methods, physics-informed neural networks, and neural operators. While these approaches have demonstrated significant promise, challenges remain in terms of robustness, generalisation, and computational efficiency. We delineate four promising research directions: (1) modular neural architectures inspired by traditional computational mechanics, (2) physics-informed neural operators for resolution-invariant operator learning, (3) intelligent frameworks for multiphysics and multiscale biomechanics problems, and (4) structural optimisation strategies based on physics constraints and reinforcement learning. These directions represent a shift toward foundational frameworks that combine the strengths of physics and data, opening new avenues for the modelling, simulation, and optimisation of complex physical systems.
With an optimised hall layout, progressive design collaborations, inspiring trends and AI-driven innovations, Heimtextil 2026 reacts to the current market situation and offers the industry a reliable constant in challenging times. Under the motto 'Lead the Change', the leading trade fair for home and contract textiles and textile design shows how challenges can be turned into opportunities. From 13 to 16 January, more than 3,100 exhibitors from 65 countries will provide a comprehensive market overview with new collections and textile solutions. As a knowledge hub, Heimtextil delivers new strategies and concrete solutions for future business success.
The support structure, a critical component in design for additive manufacturing (DfAM), has been largely overlooked by additive manufacturing (AM) communities. The support structure stabilises overhanging sections, aids in heat dissipation, and reduces the risk of thermal warping, residual stress, and distortion, particularly in the fabrication of complex geometries that challenge traditional manufacturing methods. Despite the importance of support structures in AM, a systematic review covering all aspects of the design, optimisation, and removal of support structures remains lacking. This review provides an overview of various support structure types (contact and non-contact, as well as identical and dissimilar material configurations) and outlines optimisation methods, including geometric, topology, simulation-driven, data-driven, and multi-objective approaches. Additionally, the mechanisms of support removal, such as mechanical milling and chemical dissolution, and innovations like dissolvable supports and sensitised interfaces, are discussed. Future research directions are outlined, emphasising artificial intelligence (AI)-driven intelligent design, multi-material supports, sustainable support materials, support-free AM techniques, and innovative support removal methods, all of which are essential for advancing AM technology. Overall, this review aims to serve as a foundational reference for the design and optimisation of support structures in AM.
The challenge of optimising multimodal functions within high-dimensional domains constitutes a notable difficulty in evolutionary computation research. Addressing this issue, this study introduces the Deep Backtracking Bare-Bones Particle Swarm Optimisation (DBPSO) algorithm, an innovative approach built upon the integration of the Deep Memory Storage Mechanism (DMSM) and the Dynamic Memory Activation Strategy (DMAS). The DMSM enhances memory retention for the globally optimal particle, promoting interaction between standard particles and their historically optimal counterparts. In parallel, the DMAS ensures the updated position of the globally optimal particle is appropriately aligned with the deep memory repository. The efficacy of DBPSO was rigorously assessed through a series of simulations employing the CEC2017 benchmark suite. A comparative analysis juxtaposed DBPSO's performance against five contemporary evolutionary algorithms across two experimental conditions: dimension 50 and dimension 100. In the 50D trials, DBPSO attained an average ranking of 2.03, whereas in the 100D scenarios, it improved to an average ranking of 1.9. Further examination utilising the CEC2019 benchmark functions revealed DBPSO's robustness, securing four first-place finishes, three second-place standings, and three third-place positions, culminating in an unmatched average ranking of 1.9 across all algorithms. These empirical results corroborate DBPSO's proficiency in delivering precise solutions for complex, high-dimensional optimisation challenges.
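A plain bare-bones PSO, the foundation DBPSO builds on, can be sketched in a few lines: each coordinate is resampled from a Gaussian centred at the midpoint of the personal and global bests, with standard deviation equal to their distance. The deep memory mechanisms (DMSM/DMAS) are not reproduced here, and the test function is the simple sphere, not a CEC benchmark.

```python
import random

def bbpso(f, dim, n=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Bare-bones PSO: parameter-free position update via Gaussian sampling."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                mu = 0.5 * (pbest[i][d] + gbest[d])       # midpoint of bests
                sigma = abs(pbest[i][d] - gbest[d])        # shrinks as swarm converges
                pos[i][d] = rng.gauss(mu, sigma)
            v = f(pos[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, pos[i][:]
                if v < gval:
                    gval, gbest = v, pos[i][:]
    return gbest, gval

best, val = bbpso(lambda x: sum(c * c for c in x), dim=5)
print(val)  # close to 0 for the sphere function
```

Because the sampling variance collapses as personal bests approach the global best, the swarm self-terminates its exploration, which is exactly the stagnation behaviour that DBPSO's deep memory backtracking is designed to counteract.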
Impact ground pressure events occur frequently in coal mining processes, significantly affecting the personal safety of construction workers. Real-time microseismic monitoring of coal-rock rupture information can provide early warnings, and the seismic source location method is an essential indicator for evaluating a microseismic monitoring system. Building on this monitoring technique, this paper proposes a nonlinear hybrid optimal particle swarm optimisation (PSO) microseismic positioning method. The method first improves the PSO algorithm, using its global search performance to quickly find a feasible solution and provide a better initial solution for the subsequent nonlinear optimal microseismic positioning step. This approach effectively prevents the microseismic positioning method from falling into a local optimum because of over-reliance on the initial value. In addition, the nonlinear optimal microseismic positioning method further narrows the localisation error relative to the PSO algorithm alone. A simulation test demonstrates that the new method has a good positioning effect, and engineering application examples also show that it has high accuracy and strong positioning stability. The new method outperforms the separate positioning methods, both overall and in all three directions, making it more suitable for solving the microseismic positioning problem.
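The objective that such positioning methods minimise is typically a travel-time residual misfit. A sketch under a homogeneous-velocity assumption follows; the sensor coordinates, wave speed, and source location are illustrative, and the PSO/nonlinear search over candidate sources is omitted.

```python
import math

def travel_time_residuals(src, t0, sensors, arrivals, v):
    """Residuals between observed arrival times and those predicted from a
    candidate source `src` and origin time `t0`, for a uniform velocity v."""
    res = []
    for sensor, t_obs in zip(sensors, arrivals):
        dist = math.dist(src, sensor)
        res.append(t_obs - (t0 + dist / v))
    return res

def misfit(src, t0, sensors, arrivals, v):
    """Sum of squared residuals: the function a locator minimises over (src, t0)."""
    return sum(r * r for r in travel_time_residuals(src, t0, sensors, arrivals, v))

sensors = [(0, 0, 0), (100, 0, 0), (0, 100, 0), (0, 0, 100)]   # geophone positions [m]
true_src, v, t0 = (30.0, 40.0, 20.0), 5000.0, 0.0              # P-wave speed ~5 km/s
arrivals = [t0 + math.dist(true_src, s) / v for s in sensors]  # synthetic picks
print(misfit(true_src, t0, sensors, arrivals, v))  # 0.0 at the true source
```

A global optimiser such as PSO searches (src, t0) for the minimum of this misfit; the nonlinear refinement stage then polishes the PSO solution locally.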
Highly efficient electrochemical treatment of dye-polluted wastewater is one of the hot research topics in industrial wastewater treatment. This study reports a three-dimensional electrochemical treatment process integrating graphite intercalation compound (GIC) adsorption, direct anodic oxidation, and ·OH oxidation for decolourising Reactive Black 5 (RB5) from aqueous solutions. The electrochemical process was optimised using a novel progressive central composite design-response surface methodology (CCD-NPRSM), a hybrid artificial neural network-extreme gradient boosting model (hybrid ANN-XGBoost), and classification and regression trees (CART). CCD-NPRSM and hybrid ANN-XGBoost were employed to minimise errors in evaluating the electrochemical process involving three manipulated operational parameters: current density, electrolysis (treatment) time, and initial dye concentration. The optimised decolourisation efficiencies were 99.30%, 96.63%, and 99.14% for CCD-NPRSM, hybrid ANN-XGBoost, and CART, respectively, compared to the 98.46% RB5 removal rate observed experimentally under optimum conditions: approximately 20 mA/cm² of current density, 20 min of electrolysis time, and 65 mg/L of RB5. The optimised mineralisation efficiencies ranged between 89% and 92% for the different models based on total organic carbon (TOC). Experimental studies confirmed that the predictive efficiency of the optimised models ranked, in descending order, as hybrid ANN-XGBoost, CCD-NPRSM, and CART. Model validation using analysis of variance (ANOVA) revealed that hybrid ANN-XGBoost had a mean squared error (MSE) and a coefficient of determination (R²) of approximately 0.014 and 0.998, respectively, for the RB5 removal efficiency, outperforming CCD-NPRSM with an MSE and R² of 0.518 and 0.998, respectively. Overall, the hybrid ANN-XGBoost approach is the most feasible technique for assessing electrochemical treatment efficiency in RB5 dye wastewater decolourisation.
This review synthesises and assesses the most recent developments in Unmanned Aerial Vehicles (UAVs) and swarm robotics, with a specific emphasis on optimisation strategies, path planning, and formation control. Through a comprehensive analysis of seven critical publications, the study identifies key methodologies that are driving progress in the field. These include sensor-based platforms that facilitate effective obstacle avoidance, cluster-based hierarchical path planning for efficient navigation, and adaptive hybrid controllers for dynamic environments. The review emphasises the substantial contribution of optimisation techniques, including Max-Min Ant Colony Optimisation (MMACO), to improving convergence rates and path efficiency. Comparative analysis demonstrates the effectiveness of various navigation systems in diverse operational contexts, providing valuable insights into their adaptability and performance. The primary findings underscore the strengths and limitations of current methodologies, thereby identifying gaps in research and practical applications. By addressing these challenges, this review offers actionable insights for academics and practitioners striving to advance UAV and swarm robotics technology. The study concludes with a discussion of future directions, underscoring the potential for innovative solutions to enhance UAV systems in complex, dynamic environments.
This article presents the design of a microfabricated bio-inspired flapping-wing Nano Aerial Vehicle (NAV) driven by an electromagnetic system. Our approach is based on artificial wings composed of rigid bodies connected by compliant links, which optimise aerodynamic forces by replicating the complex wing kinematics of insects. The originality of this article lies in a new design methodology based on a triple equivalence between a 3D model, a multibody model, and a mass/spring (0D) model, which reduces the number of parameters in the problem. This approach facilitates NAV optimisation using only the mass/spring model, thereby simplifying the design process while maintaining high accuracy. Two wing geometries are studied and optimised to produce large-amplitude wing motions (approximately 40°), enabling flapping and twisting motion in quadrature. The results are validated through experimental measurements for the large amplitudes and through finite element simulations for the combined motion, confirming the effectiveness of this strategy for a NAV weighing less than 40 mg with a wingspan of under 3 cm.
With the energy problem becoming increasingly severe, industrial energy efficiency issues urgently need to be solved. As the most widely used device for energy recovery and utilisation, the shell-and-tube heat exchanger has made optimal design a significantly important research topic. Optimal design removes the tedium of traditional design and can easily and accurately deliver the desired design results. Several industrial heat duty targets and pressure drop limitations for the design of shell-and-tube heat exchangers, such as high heat transfer efficiency, a large temperature correction coefficient, and high capacity, cannot be met by single-shell units and require series or parallel arrangements. This paper uses Set Trimming to optimise the design of double shell-and-tube heat exchangers, considering series, parallel, or series-parallel arrangements. Minimum heat transfer area and minimum total annualised cost are used as objective functions to optimise the design of a single- or double-shell heat exchanger that better meets the objectives.
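The essence of Set Trimming, discarding candidate designs that violate any constraint before optimising the objective over the survivors, can be sketched as follows. The candidate designs, constraint bounds, and cost figures below are invented for illustration and bear no relation to the paper's case studies.

```python
# Hypothetical shell-and-tube candidates: (heat transfer area [m^2],
# shell-side pressure drop [kPa], total annualised cost [$/yr]).
candidates = [
    (120.0, 40.0, 9500.0),
    (150.0, 28.0, 11000.0),
    (100.0, 55.0, 8700.0),
    (135.0, 35.0, 9900.0),
]

def set_trim(designs, min_area, max_dp):
    """Set Trimming idea: trim infeasible candidates (insufficient area or
    excessive pressure drop), then take the cheapest feasible survivor."""
    feasible = [d for d in designs if d[0] >= min_area and d[1] <= max_dp]
    return min(feasible, key=lambda d: d[2])

print(set_trim(candidates, min_area=110.0, max_dp=45.0))  # (120.0, 40.0, 9500.0)
```

In the full method, the enumeration covers discrete geometry choices (tube counts, lengths, shell arrangements) and the trimming rules are applied successively, so most of the design space is discarded without evaluating the objective.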
This paper presents an investigation of the tribological performance of AA2024–B₄C composites, with a specific focus on the influence of reinforcement and processing parameters. In this study, three input parameters were varied (B₄C weight percentage, milling time, and normal load) to evaluate their effects on two output parameters: wear loss and the coefficient of friction. AA2024 alloy was used as the matrix, while B₄C particles were used as reinforcement. Due to the high hardness and wear resistance of B₄C, the optimised composite shows strong potential for use in aerospace structural elements and automotive brake components. The optimisation of tribological behaviour was conducted using Taguchi-Grey Relational Analysis (Taguchi-GRA) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). A total of 27 combinations of input parameters were analysed, varying the B₄C content (0, 10, and 15 wt.%), milling time (0, 15, and 25 h), and normal load (1, 5, and 10 N). Wear loss and the coefficient of friction were numerically evaluated and selected as optimisation criteria. Artificial Neural Networks (ANNs) were also applied to model both outputs simultaneously. TOPSIS identified Alternative 1 as the optimal solution, confirming the results obtained using the Taguchi-GRA method. The optimal condition (10 wt.% B₄C, 25 h milling time, 10 N load) resulted in a minimum wear loss of 1.7 mg and a coefficient of friction of 0.176, confirming a significant enhancement in tribological behaviour. Based on the results, both the B₄C content and the applied processing conditions have a significant impact on wear loss and frictional properties. This approach demonstrates high reliability and confidence, enabling the design of future composite materials with optimal properties for specific applications.
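A compact TOPSIS sketch for two cost criteria (wear loss and friction coefficient, both to be minimised) follows; the three runs and the equal weights are illustrative rather than the paper's 27 experimental combinations.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives: vector-normalise each criterion, apply weights,
    then score by relative closeness to the ideal vs. anti-ideal solution."""
    n_crit = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[w * row[j] / norms[j] for j, w in enumerate(weights)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to ideal solution
        d_neg = math.dist(row, worst)   # distance to anti-ideal solution
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Illustrative runs: [wear loss in mg, coefficient of friction], cost criteria.
runs = [[1.7, 0.176], [2.4, 0.210], [3.1, 0.195]]
scores = topsis(runs, weights=[0.5, 0.5], benefit=[False, False])
print(scores.index(max(scores)))  # run 0 dominates both criteria -> 0
```

Since run 0 is best on both criteria, it coincides with the ideal solution and receives closeness 1; with conflicting criteria the ranking instead reflects the weighted trade-off.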
An excellent cardinality estimation can help the query optimiser produce a good execution plan. Although there are some studies on cardinality estimation, the predictions of existing cardinality estimators are inaccurate, and query efficiency cannot be guaranteed either. In particular, they find it difficult to accurately capture the complex relationships between multiple tables in complex database systems, and they cannot achieve good results when dealing with complex queries. In this study, a novel cardinality estimator is proposed. Its core technique is a BiLSTM network structure augmented with an attention mechanism. First, the columns involved in the query statements in the training set are sampled and compressed into bitmaps. Then, the Word2vec model is used to embed the query statements as word vectors. Finally, the BiLSTM network and attention mechanism are employed to process the word vectors. The proposed model takes into consideration not only the correlations between tables but also the processing of complex predicates. Extensive experiments and an evaluation of the BiLSTM-Attention Cardinality Estimator (BACE) on the IMDB datasets are conducted. The results show that the deep learning model can significantly improve the quality of cardinality estimation, which plays a vital role in query optimisation for complex databases.
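The bitmap-sampling featurisation mentioned in this abstract can be sketched simply: evaluate a query's predicate on a materialised row sample and record the outcomes as bits. The table schema and predicate below are hypothetical; the BiLSTM-attention network that consumes such bitmaps is not reproduced here.

```python
def predicate_bitmap(sample_rows, predicate):
    """Bit i is 1 iff the predicate holds on sampled row i; such bitmaps are
    one way to featurise query predicates for a learned cardinality estimator."""
    return [1 if predicate(row) else 0 for row in sample_rows]

# Hypothetical sampled rows of a movies(year, rating) table.
sample = [(1994, 8.8), (2001, 6.2), (2010, 7.9), (1977, 8.6)]
bits = predicate_bitmap(sample, lambda r: r[0] > 2000 and r[1] > 7.0)
print(bits)  # [0, 0, 1, 0]
```

The fraction of set bits (here 1/4) is itself a crude selectivity estimate; the learned model goes further by combining such bitmaps with embedded query text to capture cross-table correlations.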
Expansive soils are problematic due to the behaviour of their clay mineral constituents, which makes them exhibit shrink-swell characteristics. These shrink-swell behaviours make expansive soils inappropriate for direct engineering application in their natural form. In an attempt to make them more feasible for construction purposes, numerous materials and techniques have been used to stabilise such soils. This study focuses on the additives and techniques applied for stabilising expansive soils, with respect to their efficiency in improving the engineering properties of the soils. We then discuss the microstructural interactions, chemical processes, economic implications, nanotechnology applications, as well as waste reuse and sustainability. Issues regarding the effective application of emerging trends in expansive soil stabilisation are presented in three categories, namely geoenvironmental, standardisation, and optimisation issues. Techniques such as predictive modelling, along with methods such as reliability-based design optimisation, response surface methodology, dimensional analysis, and artificial intelligence, are also proposed to ensure that expansive soil stabilisation is efficient.
Self-piercing riveting (SPR) is a cold forming technique used to fasten together two or more sheets of material with a rivet without the need to pre-drill a hole. The application of SPR in the automotive sector has become increasingly popular, mainly due to the growing use of lightweight materials in transportation applications. However, SPR joining of these advanced light materials remains a challenge, as they often lack a good combination of high strength and ductility to resist the large plastic deformation induced by the SPR process. In this paper, SPR joints of advanced materials and their corresponding failure mechanisms are discussed, aiming to provide the foundation for future improvement of SPR joint quality. This paper is divided into three major sections: 1) joint failures, focusing on joint defects originating from the SPR process and joint failure modes under different mechanical loading conditions, 2) joint corrosion issues, and 3) joint optimisation via process parameters and advanced techniques.
Funding: Deanship of Research and Graduate Studies at King Khalid University for funding this work through Large Research Project under grant number RGP2/286/46; Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R732), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; Ongoing Research Funding program (ORFFT-2025-100-7), King Saud University, Riyadh, Saudi Arabia, for financial support; the Deanship of Scientific Research at Northern Border University, Arar, Saudi Arabia, for funding this research work through the project number "NBU-FFR-2025-2913-07"; and the Deanship of Graduate Studies and Scientific Research at the University of Bisha for supporting this work through the Fast-Track Research Support Program.
Abstract: Zero-day attacks exploit unknown vulnerabilities and therefore evade identification by cybersecurity detection tools. This study indicates that zero-day attacks have a significant impact on computer security. Conventional signature-based detection algorithms are not effective at recognising zero-day attacks, as their signatures are usually not previously available. Machine learning (ML)-based detection algorithms can capture the statistical features of attacks and are therefore promising for zero-day attack detection. ML and deep learning (DL) are employed for designing intrusion detection systems (IDSs). The emergence of vast varieties of novel cyberattacks poses significant challenges for IDS solutions that depend on datasets of prior attack signatures. This manuscript presents a zero-day attack detection method employing an equilibrium optimizer with deep learning (ZDAD-EODL) to ensure cybersecurity. The ZDAD-EODL technique combines meta-heuristic feature subset selection with an optimised DL-based classification technique for zero-day attacks. Initially, the min-max scaler is utilised for normalising the input data. For feature selection (FS), the ZDAD-EODL method utilises the equilibrium optimizer (EO) model to choose feature subsets. In addition, the ZDAD-EODL technique employs the bi-directional gated recurrent unit (BiGRU) technique for the classification and identification of zero-day attacks. Finally, the detection performance of the BiGRU technique is further enhanced through a subtraction-average-based optimizer (SABO)-based tuning process. The performance of the ZDAD-EODL approach is investigated on a benchmark dataset. The comparison study of the ZDAD-EODL approach demonstrated a superior accuracy of 98.47% over existing techniques.
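The min-max normalisation used in the preprocessing step above can be sketched as follows. This is a minimal plain-Python illustration, not the authors' implementation; the packet-length values are hypothetical:

```python
def min_max_scale(column):
    """Rescale a list of numeric feature values linearly into the [0, 1] range."""
    lo, hi = min(column), max(column)
    if hi == lo:                       # constant feature: no spread to rescale
        return [0.0 for _ in column]
    return [(x - lo) / (hi - lo) for x in column]

# Hypothetical packet-length feature from a network-traffic dataset.
lengths = [40, 1500, 576, 40, 1200]
scaled = min_max_scale(lengths)        # smallest value maps to 0.0, largest to 1.0
```

Each feature column is scaled independently, so features with very different natural ranges (packet counts vs. byte totals, say) contribute comparably to the downstream classifier.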
Funding: the Deanship of Graduate Studies and Scientific Research at the University of Bisha for supporting this work through the Fast-Track Research Support Program; and the Deanship of Scientific Research at Northern Border University, Arar, KSA, for funding this research work through the project number "NBU-FFR-2025-2903-09".
Abstract: Network-on-Chip (NoC) systems are progressively deployed to connect massively parallel megacore systems in new computing architectures. As a result, application mapping has become an important aspect of performance and scalability, as current trends require the distribution of computation across network nodes/points. In this paper, we survey a large number of mapping and scheduling techniques designed for NoC architectures, concentrating this time on 3D systems. We take a systematic literature review approach to analyse existing methods across static, dynamic, hybrid, and machine-learning-based approaches, alongside preliminary AI-based dynamic models in recent works. We classify them into several main aspects covering power-aware mapping, fault tolerance, load balancing, and adaptive mapping for dynamic workloads. Also, we assess the efficacy of each method against performance parameters such as latency, throughput, response time, and error rate. Key challenges, including energy efficiency, real-time adaptability, and reinforcement learning integration, are highlighted as well. To the best of our knowledge, this is one of the recent reviews that identifies both traditional and AI-based algorithms for mapping over a modern NoC and opens research challenges. Finally, we provide directions for future work toward improved adaptability and scalability via lightweight learned models and hierarchical mapping frameworks.
Funding: supported by the National Research Foundation of Korea grant funded by the Korea government (RS-2023-00217116).
Abstract: Federated learning is a distributed framework that trains a centralised model using data from multiple clients without transferring that data to a central server. Despite rapid progress, federated learning still faces several unsolved challenges. Specifically, communication costs and system heterogeneity, such as non-identical data distribution, hinder federated learning's progress. Several approaches have recently emerged for federated learning involving heterogeneous clients with varying computational capabilities (namely, heterogeneous federated learning). However, heterogeneous federated learning faces two key challenges: optimising model size and determining client selection ratios. Moreover, efficiently aggregating local models from clients with diverse capabilities is crucial for addressing system heterogeneity and communication efficiency. This paper proposes an evolutionary multiobjective optimisation framework for heterogeneous federated learning (MOHFL) to address these issues. Our approach elegantly formulates and solves a biobjective optimisation problem that minimises communication cost and model error rate. The decision variables in this framework comprise model sizes and client selection ratios for each of Q client clusters, yielding a total of 2×Q optimisation parameters to be tuned. We develop a partition-based strategy for MOHFL that segregates clients into clusters based on their communication and computation capabilities. Additionally, we implement an adaptive model sizing mechanism that dynamically assigns appropriate subnetwork architectures to clients based on their computational constraints. We also propose a unified aggregation framework to combine models of varying sizes from heterogeneous clients effectively. Extensive experiments on multiple datasets demonstrate the effectiveness and superiority of our proposed method compared to existing approaches.
Funding: partially supported by MRC (MC_PC_17171); Royal Society (RP202G0230); BHF (AA/18/3/34220); Hope Foundation for Cancer Research (RM60G0680); GCRF (P202PF11); Sino-UK Industrial Fund (RP202G0289); Sino-UK Education Fund (OP202006); LIAS (P202ED10, P202RE969); Data Science Enhancement Fund (P202RE237); Fight for Sight (24NN201); and BBSRC (RM32G0178B8).
Abstract: Photovoltaic (PV) systems are electrical systems designed to convert solar energy into electrical energy. The power output of photovoltaic cells, the crucial components of PV systems, is influenced by harsh weather conditions, panel temperature, and solar irradiance. Therefore, accurately identifying the parameters of PV models is essential for simulating, controlling and evaluating PV systems. In this study, we propose an enhanced weighted-mean-of-vectors optimisation (EINFO) for efficiently determining the unknown parameters in PV systems. EINFO introduces a Lambert W-based explicit objective function for the PV model, enhancing the computational accuracy of the algorithm's population fitness. This addresses the challenge of improving the identification accuracy of metaheuristic algorithms for unknown parameters in PV models. We experimentally apply EINFO to three types of PV models (single-diode, double-diode and PV-module models) to validate its accuracy and stability in parameter identification. The results demonstrate that EINFO achieves root mean square errors (RMSEs) of 7.7301E-04, 6.8553E-04 and 2.0608E-03 for the single-diode model, double-diode model and PV-module model, respectively, surpassing the INFO algorithm as well as other methods in terms of convergence speed, accuracy and stability. Furthermore, comprehensive experimental findings on three commercial PV modules (ST40, SM55 and KC200GT) indicate that EINFO consistently maintains high accuracy across varying temperatures and irradiation levels. In conclusion, EINFO emerges as a highly competitive and practical approach for parameter identification in diverse types of PV models.
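The Lambert W function is what turns the implicit single-diode equation I = Iph − I0·(exp((V + I·Rs)/(n·Vt)) − 1) − (V + I·Rs)/Rsh into an explicit expression for the current, making an explicit objective function possible. A minimal sketch of that transformation, not the authors' EINFO code; the parameter values are hypothetical, loosely in the range reported for small silicon cells:

```python
import math

def lambert_w(z, tol=1e-12):
    """Principal branch of the Lambert W function via Newton iteration (z >= 0)."""
    w = math.log1p(z)                        # reasonable starting guess for z >= 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def single_diode_current(V, Iph, I0, Rs, Rsh, n, Vt=0.02585):
    """Explicit terminal current of the single-diode PV model.

    Substituting x = (V + I*Rs)/a with a = n*Vt reduces the implicit model to
    c*x + Rs*I0*exp(x) = b, whose solution is x = b/c - W((Rs*I0/c)*exp(b/c)).
    """
    a = n * Vt                               # modified ideality factor
    c = a * (Rs + Rsh) / Rsh
    b = V + Rs * (Iph + I0)
    x = b / c - lambert_w((Rs * I0 / c) * math.exp(b / c))
    return (a * x - V) / Rs                  # back-substitute I = (a*x - V)/Rs

# Hypothetical parameter set: Iph [A], I0 [A], Rs [ohm], Rsh [ohm], ideality n.
V, Iph, I0, Rs, Rsh, n = 0.5, 0.76, 3.2e-7, 0.036, 53.7, 1.48
I = single_diode_current(V, Iph, I0, Rs, Rsh, n)
```

With the current available in closed form, the RMSE objective over measured I–V pairs can be evaluated directly, with no inner root-finding loop per data point.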
Abstract: Distributed Quantum Computing (DQC) provides a means for scaling available quantum computation by interconnecting multiple quantum processor units (QPUs). A key challenge in this domain is efficiently allocating logical qubits from quantum circuits to the physical qubits within QPUs, a task known to be NP-hard. Traditional approaches, primarily focused on graph partitioning strategies, have sought to reduce the number of required Bell pairs for executing non-local CNOT operations, a form of gate teleportation. However, these methods have limitations in terms of efficiency and scalability. Addressing this, our work jointly considers gate and qubit teleportations, introducing a novel meta-heuristic algorithm to minimise the network cost of executing a quantum circuit. By allowing dynamic reallocation of qubits along with gate teleportations during circuit execution, our method significantly enhances the overall efficacy and potential scalability of DQC frameworks. In our numerical analysis, we demonstrate that integrating qubit teleportations into our genetic algorithm for optimising circuit blocking reduces the required resources, specifically the number of EPR pairs, compared to traditional graph partitioning methods. Our results, derived from both benchmark and randomly generated circuits, show that as circuit complexity increases, demanding more qubit teleportations, our approach effectively optimises these teleportations throughout the execution, thereby enhancing performance through strategic circuit partitioning. This is a step forward in the pursuit of a global quantum compiler, which will ultimately enable the efficient use of a 'quantum data center' in the future.
Funding: support from the following institutional grant: Internal Grant Agency of the Faculty of Economics and Management, Czech University of Life Sciences Prague, grant no. 2023A0004 (https://iga.pef.czu.cz/, accessed on 6 June 2025).
Abstract: This study proposes a new component of the composite loss function minimised during training of Super-Resolution (SR) algorithms: the normalised structural similarity index loss L_SSIMN, which has the potential to improve the natural appearance of reconstructed images. Deep learning-based SR algorithms reconstruct high-resolution images from low-resolution inputs, offering a practical means to enhance image quality without requiring superior imaging hardware, which is particularly important in medical applications where diagnostic accuracy is critical. Although recent SR methods employing convolutional and generative adversarial networks achieve high pixel fidelity, visual artefacts may persist, making the design of the training loss function essential for ensuring reliable and naturalistic image reconstruction. Our research shows, on two models (SR and the Invertible Rescaling Neural Network, IRN) trained on multiple benchmark datasets, that the function L_SSIMN significantly contributes to visual quality, preserving structural fidelity on the reference datasets. The quantitative analysis shows that including this loss function component improves the final structural similarity of the reconstructed images in the validation set by a mean of 2.88% compared with leaving it out, and by 0.218% compared with a non-normalised version of this component.
Funding: supported by the Natural Science Foundation of Guangdong Province (Grant 2023A1515011667); the Science and Technology Major Project of Shenzhen (Grant KJZD20230923114809020); and the Key Basic Research Foundation of Shenzhen (Grant JCYJ20220818100205012).
Abstract: Estimating probability density functions (PDFs) is critical in data analysis, particularly for complex multimodal distributions. Traditional kernel density estimator (KDE) methods often face challenges in accurately capturing multimodal structures due to their uniform weighting scheme, leading to mode loss and degraded estimation accuracy. This paper presents the flexible kernel density estimator (F-KDE), a novel nonparametric approach designed to address these limitations. F-KDE introduces the concept of kernel unit inequivalence, assigning adaptive weights to each kernel unit, which better models local density variations in multimodal data. The method optimises an objective function that integrates estimation error and log-likelihood, using a particle swarm optimisation (PSO) algorithm that automatically determines optimal weights and bandwidths. Through extensive experiments on synthetic and real-world datasets, we demonstrated that (1) the weights and bandwidths in F-KDE stabilise as the optimisation algorithm iterates, (2) F-KDE effectively captures multimodal characteristics, and (3) F-KDE outperforms state-of-the-art density estimation methods regarding accuracy and robustness. The results confirm that F-KDE provides a valuable solution for accurately estimating multimodal PDFs.
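The core idea of per-kernel weights and bandwidths can be illustrated with a hand-rolled weighted Gaussian KDE. This is a generic sketch of kernel unit inequivalence, not the F-KDE objective or its PSO tuning; the sample points, weights and bandwidths are hypothetical:

```python
import math

def weighted_kde(x, samples, weights, bandwidths):
    """Density estimate with per-kernel weights and bandwidths (Gaussian kernels).

    Unlike the classic KDE, each sample point carries its own weight w_i and
    bandwidth h_i; the weights are assumed to sum to 1, so the estimate still
    integrates to 1.
    """
    total = 0.0
    for s, w, h in zip(samples, weights, bandwidths):
        total += w * math.exp(-0.5 * ((x - s) / h) ** 2) / (h * math.sqrt(2 * math.pi))
    return total

# Hypothetical bimodal sample: the two clusters get different bandwidths.
samples    = [0.0, 0.1, -0.1, 5.0, 5.2]
weights    = [0.2, 0.2, 0.2, 0.2, 0.2]
bandwidths = [0.3, 0.3, 0.3, 0.5, 0.5]
density_at_modes = (weighted_kde(0.0, samples, weights, bandwidths),
                    weighted_kde(5.1, samples, weights, bandwidths))
```

In an adaptive scheme such as F-KDE, an optimiser would tune `weights` and `bandwidths` rather than leaving them uniform, letting sharp and broad modes be fitted at different scales.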
Funding: supported by the Australian Research Council (Grant No. IC190100020); the Australian Research Council Industry Fellowship (Grant No. IE230100435); and the National Natural Science Foundation of China (Grant Nos. 12032014 and T2488101).
Abstract: The integration of physics-based modelling and data-driven artificial intelligence (AI) has emerged as a transformative paradigm in computational mechanics. This perspective reviews the development and current status of AI-empowered frameworks, including data-driven methods, physics-informed neural networks, and neural operators. While these approaches have demonstrated significant promise, challenges remain in terms of robustness, generalisation, and computational efficiency. We delineate four promising research directions: (1) modular neural architectures inspired by traditional computational mechanics, (2) physics-informed neural operators for resolution-invariant operator learning, (3) intelligent frameworks for multiphysics and multiscale biomechanics problems, and (4) structural optimisation strategies based on physics constraints and reinforcement learning. These directions represent a shift toward foundational frameworks that combine the strengths of physics and data, opening new avenues for the modelling, simulation, and optimisation of complex physical systems.
Abstract: With an optimised hall layout, progressive design collaborations, inspiring trends and AI-driven innovations, Heimtextil 2026 reacts to the current market situation and offers the industry a reliable constant in challenging times. Under the motto 'Lead the Change', the leading trade fair for home and contract textiles and textile design shows how challenges can be turned into opportunities. From 13 to 16 January, more than 3,100 exhibitors from 65 countries will provide a comprehensive market overview with new collections and textile solutions. As a knowledge hub, Heimtextil delivers new strategies and concrete solutions for future business success.
Funding: supported by the Advanced Research and Technology Innovation Centre (ARTIC), the National University of Singapore under Grant (Project Number: ADTRP1), and the sponsorship of the China Scholarship Council (No. 202306130143).
Abstract: Support structure, a critical component in design for additive manufacturing (DfAM), has been largely overlooked by additive manufacturing (AM) communities. The support structure stabilises overhanging sections, aids in heat dissipation, and reduces the risk of thermal warping, residual stress, and distortion, particularly in the fabrication of complex geometries that challenge traditional manufacturing methods. Despite the importance of support structures in AM, a systematic review covering all aspects of the design, optimisation, and removal of support structures remains lacking. This review provides an overview of various support structure types (contact and non-contact, as well as identical and dissimilar material configurations) and outlines optimisation methods, including geometric, topology, simulation-driven, data-driven, and multi-objective approaches. Additionally, the mechanisms of support removal, such as mechanical milling and chemical dissolution, and innovations like dissolvable supports and sensitised interfaces, are discussed. Future research directions are outlined, emphasising artificial intelligence (AI)-driven intelligent design, multi-material supports, sustainable support materials, support-free AM techniques, and innovative support removal methods, all of which are essential for advancing AM technology. Overall, this review aims to serve as a foundational reference for the design and optimisation of support structures in AM.
Funding: supported by the Artificial Intelligence Innovation Project of Wuhan Science and Technology Bureau (2023010402040016); the Natural Science Foundation of Hubei Province of China (2022CFB076); JSPS KAKENHI (JP25K15279); the Natural Science Foundation of Hubei Province (2023AFB003); the National Natural Science Foundation of China (52201363); and the Education Department Scientific Research Programme Project of Hubei Province of China (Q20222208).
Abstract: The challenge of optimising multimodal functions within high-dimensional domains constitutes a notable difficulty in evolutionary computation research. Addressing this issue, this study introduces the Deep Backtracking Bare-Bones Particle Swarm Optimisation (DBPSO) algorithm, an innovative approach built upon the integration of the Deep Memory Storage Mechanism (DMSM) and the Dynamic Memory Activation Strategy (DMAS). The DMSM enhances memory retention for the globally optimal particle, promoting interaction between standard particles and their historically optimal counterparts. In parallel, DMAS ensures the updated position of the globally optimal particle is appropriately aligned with the deep memory repository. The efficacy of DBPSO was rigorously assessed through a series of simulations employing the CEC2017 benchmark suite. A comparative analysis juxtaposed DBPSO's performance against five contemporary evolutionary algorithms across two experimental conditions: Dimension-50 and Dimension-100. In the 50D trials, DBPSO attained an average ranking of 2.03, whereas in the 100D scenarios, it improved to an average ranking of 1.9. Further examination utilising the CEC2019 benchmark functions revealed DBPSO's robustness, securing four first-place finishes, three second-place standings, and three third-place positions, culminating in an unmatched average ranking of 1.9 across all algorithms. These empirical results corroborate DBPSO's proficiency in delivering precise solutions for complex, high-dimensional optimisation challenges.
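The bare-bones particle swarm that DBPSO builds on replaces the usual velocity update with Gaussian sampling around the personal and global bests. A minimal sketch of the canonical bare-bones PSO (Kennedy's formulation, not DBPSO itself; the deep-memory mechanisms DMSM/DMAS are omitted, and the test function and settings are illustrative):

```python
import random

def bare_bones_pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Bare-bones PSO: each coordinate is resampled from a Gaussian whose mean
    is the midpoint of the personal and global bests and whose standard
    deviation is their separation. There is no velocity term."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                mu = 0.5 * (pbest[i][d] + gbest[d])
                sigma = abs(pbest[i][d] - gbest[d])
                pos[i][d] = rng.gauss(mu, sigma) if sigma > 0 else mu
            fi = f(pos[i])
            if fi < pbest_f[i]:                 # greedy personal-best update
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:                # and global-best update
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

sphere = lambda x: sum(v * v for v in x)        # unimodal sanity check
best, best_f = bare_bones_pso(sphere, dim=5)
```

DBPSO's contribution, per the abstract, is to augment exactly this sampling loop with a deep memory of historical bests; the sketch shows only the parameter-free baseline it backtracks from.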
Funding: supported by the Natural Science Foundation of Henan Province, China (No. 222300420596).
Abstract: Impact ground pressure events occur frequently in coal mining processes, significantly affecting the personal safety of construction workers. Real-time microseismic monitoring of coal rock body rupture information can provide early warnings, and the seismic source location method is an essential indicator for evaluating a microseismic monitoring system. This paper proposes a nonlinear hybrid optimal particle swarm optimisation (PSO) microseismic positioning method. The method first improves the PSO algorithm, using its global search performance to quickly find a feasible solution and provide a better initial solution for the subsequent solution of the nonlinear optimal microseismic positioning method. This approach effectively prevents the microseismic positioning method from falling into a local optimum because of an over-reliance on the initial value. In addition, the nonlinear optimal microseismic positioning method further narrows the localisation error based on the PSO algorithm. A simulation test demonstrates that the new method has a good positioning effect, and engineering application examples also show that the proposed method has high accuracy and strong positioning stability. The new method is better than the separate positioning methods, both overall and in three directions, making it more suitable for solving the microseismic positioning problem.
Abstract: Highly efficient electrochemical treatment of dye-polluted wastewater is one of the hot research topics in industrial wastewater treatment. This study reported a three-dimensional electrochemical treatment process integrating graphite intercalation compound (GIC) adsorption, direct anodic oxidation, and ·OH oxidation for decolourising Reactive Black 5 (RB5) from aqueous solutions. The electrochemical process was optimised using the novel progressive central composite design-response surface methodology (CCD-NPRSM), hybrid artificial neural network-extreme gradient boosting (hybrid ANN-XGBoost), and classification and regression trees (CART). CCD-NPRSM and hybrid ANN-XGBoost were employed to minimise errors in evaluating the electrochemical process involving three manipulated operational parameters: current density, electrolysis (treatment) time, and initial dye concentration. The optimised decolourisation efficiencies were 99.30%, 96.63%, and 99.14% for CCD-NPRSM, hybrid ANN-XGBoost, and CART, respectively, compared to the 98.46% RB5 removal rate observed experimentally under optimum conditions: approximately 20 mA/cm² of current density, 20 min of electrolysis time, and 65 mg/L of RB5. The optimised mineralisation efficiencies ranged between 89% and 92% for the different models based on total organic carbon (TOC). Experimental studies confirmed that the predictive efficiency of the optimised models ranked in the descending order of hybrid ANN-XGBoost, CCD-NPRSM, and CART. Model validation using analysis of variance (ANOVA) revealed that hybrid ANN-XGBoost had a mean squared error (MSE) and a coefficient of determination (R²) of approximately 0.014 and 0.998, respectively, for the RB5 removal efficiency, outperforming CCD-NPRSM with MSE and R² of 0.518 and 0.998, respectively. Overall, the hybrid ANN-XGBoost approach is the most feasible technique for assessing the electrochemical treatment efficiency in RB5 dye wastewater decolourisation.
Abstract: This review synthesises and assesses the most recent developments in Unmanned Aerial Vehicles (UAVs) and swarm robotics, with a specific emphasis on optimisation strategies, path planning, and formation control. The study identifies key methodologies that are driving progress in the field by conducting a comprehensive analysis of seven critical publications. These include sensor-based platforms that facilitate effective obstacle avoidance, cluster-based hierarchical path planning for efficient navigation, and adaptive hybrid controllers for dynamic environments. The review emphasises the substantial contribution of optimisation techniques, including Max-Min Ant Colony Optimisation (MMACO), to the improvement of convergence rates and the enhancement of path efficiency. The effectiveness of various navigation systems in diverse operational contexts is demonstrated through comparative analysis, which provides valuable insights into system adaptability and performance. The primary findings underscore the strengths and limitations of current methodologies, thereby identifying gaps in research and practical applications. This review offers actionable insights for academics and practitioners who are striving to advance UAV and swarm robotics technology by addressing these challenges. The study concludes with a discussion of future directions, which underscores the potential for innovative solutions to enhance UAV systems in complex, dynamic environments.
基金supported by ANR-ASTRID NANOFLY(ANR-19-ASTR-0023)and French AID(Defense Innovation Agency).
Abstract: This article presents the design of a microfabricated bio-inspired flapping-wing Nano Aerial Vehicle (NAV) driven by an electromagnetic system. Our approach is based on artificial wings composed of rigid bodies connected by compliant links, which optimise aerodynamic forces through replicating the complex wing kinematics of insects. The originality of this article lies in a new design methodology based on a triple equivalence between a 3D model, a multibody model, and a mass/spring model (0D), which reduces the number of parameters in the problem. This approach facilitates NAV optimisation by using only the mass/spring model, thereby simplifying the design process while maintaining high accuracy. Two wing geometries are studied and optimised in this article to produce large-amplitude wing motions (approximately 40°) and enable flapping and twisting motion in quadrature. The results are validated through experimental measurements for the large amplitude and through finite element simulations for the combined motion, confirming the effectiveness of this strategy for a NAV weighing less than 40 mg with a wingspan of under 3 cm.
Funding: financial support from the National Natural Science Foundation of China under Grant (22393954) is gratefully acknowledged.
Abstract: With the energy problem becoming increasingly severe, industrial energy efficiency issues need to be solved urgently. As the shell-and-tube heat exchanger is the most widely used device for energy recovery and utilisation, its optimal design has become an important research topic. Optimal design avoids the tediousness of traditional design and can easily and accurately give the desired design results. Several industrial heat duty targets and pressure drop limitations for the design of shell-and-tube heat exchangers, such as high heat transfer efficiency, a large temperature correction coefficient, and high capacity, cannot be met by single-shell units and require series or parallel arrangements. This paper uses Set Trimming to optimise the design of double shell-and-tube heat exchangers, assuming the possibility of series, parallel, or series-parallel arrangements. Minimum heat transfer area and minimum total annualised cost are used as objective functions to optimise the design of a single- or double-shell heat exchanger that better meets the objectives.
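For context, the basic sizing relation behind any shell-and-tube design, Q = U·A·F·ΔT_lm, connects the heat transfer area objective to the temperature correction coefficient F mentioned above. A sketch with purely illustrative numbers (the duty, overall coefficient and correction factor below are hypothetical, not taken from the paper):

```python
import math

def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log-mean temperature difference for a counter-current arrangement."""
    dt1 = t_hot_in - t_cold_out        # terminal temperature difference, hot end
    dt2 = t_hot_out - t_cold_in        # terminal temperature difference, cold end
    if abs(dt1 - dt2) < 1e-9:          # equal terminal differences: limit is dt1
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

def required_area(Q, U, F, dt_lm):
    """Heat-transfer area from Q = U * A * F * dT_lm."""
    return Q / (U * F * dt_lm)

# Hypothetical duty: 500 kW, U = 850 W/(m^2 K), correction factor F = 0.9,
# hot stream 150 -> 90 degC against cold stream 30 -> 70 degC.
dt = lmtd(150.0, 90.0, 30.0, 70.0)
A = required_area(500e3, 850.0, 0.9, dt)
```

When F for a single shell falls too low (commonly below about 0.75–0.8 in design practice), the required area blows up, which is exactly the situation that forces the series arrangements the paper optimises.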
Abstract: This paper presents an investigation of the tribological performance of AA2024–B₄C composites, with a specific focus on the influence of reinforcement and processing parameters. In this study, three input parameters were varied: B₄C weight percentage, milling time, and normal load, to evaluate their effects on two output parameters: wear loss and the coefficient of friction. AA2024 alloy was used as the matrix alloy, while B₄C particles were used as reinforcement. Due to the high hardness and wear resistance of B₄C, the optimised composite shows strong potential for use in aerospace structural elements and automotive brake components. The optimisation of tribological behaviour was conducted using Taguchi-Grey Relational Analysis (Taguchi-GRA) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). A total of 27 combinations of input parameters were analysed, varying the B₄C content (0, 10, and 15 wt.%), milling time (0, 15, and 25 h), and normal load (1, 5, and 10 N). Wear loss and the coefficient of friction were numerically evaluated and selected as criteria for optimisation. Artificial Neural Networks (ANNs) were also applied to model the two outputs simultaneously. TOPSIS identified Alternative 1 as the optimal solution, confirming the results obtained using the Taguchi-Grey method. The optimal condition obtained (10 wt.% B₄C, 25 h milling time, 10 N load) resulted in a minimum wear loss of 1.7 mg and a coefficient of friction of 0.176, confirming significant enhancement in tribological behaviour. Based on the results, both the B₄C content and the applied processing conditions have a significant impact on wear loss and frictional properties. This approach demonstrates high reliability and confidence, enabling the design of future composite materials with optimal properties for specific applications.
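The TOPSIS ranking used above can be sketched in a few lines: normalise the decision matrix, locate the ideal and anti-ideal points, and score each alternative by its relative closeness to the ideal. A generic illustration with hypothetical alternative values, not the paper's 27-run dataset:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix  : rows = alternatives, columns = criteria
    weights : criterion weights summing to 1
    benefit : True where a criterion is maximised, False where minimised
    Returns closeness coefficients in [0, 1]; higher is better.
    """
    ncols = len(weights)
    # Vector (Euclidean) normalisation of each column, then weighting.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti  = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.dist(row, ideal)       # distance to the ideal solution
        d_worst = math.dist(row, anti)       # distance to the anti-ideal solution
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Hypothetical alternatives: (wear loss [mg], coefficient of friction); both are costs.
alts = [(1.7, 0.176), (2.4, 0.210), (3.1, 0.195)]
scores = topsis(alts, weights=[0.5, 0.5], benefit=[False, False])
best = scores.index(max(scores))             # alternative closest to the ideal
```

Because the first alternative here dominates on both cost criteria, it coincides with the ideal point and scores exactly 1.0, mirroring how Alternative 1 emerged as optimal in the study.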
Funding: supported by the National Natural Science Foundation of China under grant nos. 61772091, 61802035, 61962006, 61962038, U1802271, U2001212, and 62072311; the Sichuan Science and Technology Program under grant nos. 2021JDJQ0021 and 22ZDYF2680; the CCF-Huawei Database System Innovation Research Plan under grant no. CCF-HuaweiDBIR2020004A; Digital Media Art, Key Laboratory of Sichuan Province, Sichuan Conservatory of Music, Chengdu, China under grant no. 21DMAKL02; the Chengdu Major Science and Technology Innovation Project under grant no. 2021-YF08-00156-GX; the Chengdu Technology Innovation and Research and Development Project under grant no. 2021-YF05-00491-SN; the Natural Science Foundation of Guangxi under grant no. 2018GXNSFDA138005; the Guangdong Basic and Applied Basic Research Foundation under grant no. 2020B1515120028; the Science and Technology Innovation Seedling Project of Sichuan Province under grant no. 2021006; and the College Student Innovation and Entrepreneurship Training Program of Chengdu University of Information Technology under grant nos. 202110621179 and 202110621186.
Abstract: An excellent cardinality estimation can make the query optimiser produce a good execution plan. Although there are some studies on cardinality estimation, the prediction results of existing cardinality estimators are inaccurate, and query efficiency cannot be guaranteed either. In particular, it is difficult for them to accurately capture the complex relationships between multiple tables in complex database systems, and when dealing with complex queries the existing cardinality estimators cannot achieve good results. In this study, a novel cardinality estimator is proposed. It uses the core techniques of the BiLSTM network structure and adds an attention mechanism. First, the columns involved in the query statements in the training set are sampled and compressed into bitmaps. Then, the Word2vec model is used to embed the word vectors of the query statements. Finally, the BiLSTM network and attention mechanism are employed to process the word vectors. The proposed model takes into consideration not only the correlation between tables but also the processing of complex predicates. Extensive experiments and an evaluation of the BiLSTM-Attention Cardinality Estimator (BACE) on the IMDB datasets are conducted. The results show that the deep learning model can significantly improve the quality of cardinality estimation, which plays a vital role in query optimisation for complex databases.
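A common way to quantify the accuracy such estimators are judged on is the q-error: the multiplicative factor by which an estimate deviates from the true cardinality. This is a generic metric sketch, not output from BACE, and the estimate/truth pairs are hypothetical:

```python
def q_error(estimate, truth):
    """Q-error of a cardinality estimate.

    A perfect estimate gives 1.0, and over- and under-estimation are penalised
    symmetrically (estimating 10x too high scores the same as 10x too low).
    Cardinalities are clamped to at least 1 to avoid division by zero.
    """
    est, tru = max(estimate, 1.0), max(truth, 1.0)
    return max(est / tru, tru / est)

# Hypothetical (estimated, true) result sizes for three queries.
pairs = [(120, 100), (80, 100), (1, 1000)]
errors = [q_error(e, t) for e, t in pairs]
```

The third pair illustrates why the metric matters for plan quality: a 1000x under-estimate can push the optimiser toward a nested-loop join that is catastrophic at the true cardinality.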