Zero-day attacks exploit unknown vulnerabilities that evade identification by cybersecurity detection tools. This study indicates that zero-day attacks have a significant impact on computer security. Conventional signature-based detection algorithms are not efficient at recognizing zero-day attacks, as the signatures of zero-day attacks are usually not previously available. A machine learning (ML)-based detection algorithm is proficient at capturing the statistical features of attacks and is therefore promising for zero-day attack detection. ML and deep learning (DL) are employed for designing intrusion detection systems (IDS). The continual emergence of new varieties of cyberattacks poses significant challenges for IDS solutions that depend on datasets of prior attack signatures. This manuscript presents a zero-day attack detection method employing an equilibrium optimizer with deep learning (ZDAD-EODL) to ensure cybersecurity. The ZDAD-EODL technique combines meta-heuristic feature subset selection with an optimized DL-based classification technique for zero-day attacks. Initially, the min-max scaler is utilized to normalize the input data. For feature selection (FS), the ZDAD-EODL method utilizes the equilibrium optimizer (EO) model to choose feature subsets. In addition, the ZDAD-EODL technique employs the bi-directional gated recurrent unit (BiGRU) technique for the classification and identification of zero-day attacks. Finally, the detection performance of the BiGRU technique is further enhanced through a subtraction average-based optimizer (SABO)-based tuning process. The performance of the ZDAD-EODL approach is investigated on a benchmark dataset. The comparison study showed that the ZDAD-EODL approach achieves a superior accuracy of 98.47% over existing techniques.
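For illustration, a minimal sketch of the classification stage described above, assuming placeholder data: features are normalised with a min-max scaler and fed to a BiGRU classifier. The EO feature-selection and SABO tuning stages are omitted, and all shapes and hyperparameters are hypothetical rather than the paper's settings.

```python
# Illustrative sketch of the ZDAD-EODL classification stage: min-max scaling
# followed by a BiGRU classifier. EO feature selection and SABO tuning are
# omitted; data, shapes, and hyperparameters are placeholders.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras import layers, models

X = np.random.rand(1000, 20)           # placeholder traffic features
y = np.random.randint(0, 2, 1000)      # 1 = zero-day attack, 0 = benign

X = MinMaxScaler().fit_transform(X)    # normalise each feature to [0, 1]
X = X[:, np.newaxis, :]                # BiGRU expects (samples, timesteps, features)

model = models.Sequential([
    layers.Input(shape=(1, 20)),
    layers.Bidirectional(layers.GRU(64)),   # bi-directional gated recurrent unit
    layers.Dense(1, activation="sigmoid"),  # attack / benign probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```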
This study proposes a new component of the composite loss function minimised during training of Super-Resolution (SR) algorithms: the normalised structural similarity index loss L_SSIMN, which has the potential to improve the natural appearance of reconstructed images. Deep learning-based SR algorithms reconstruct high-resolution images from low-resolution inputs, offering a practical means to enhance image quality without requiring superior imaging hardware, which is particularly important in medical applications where diagnostic accuracy is critical. Although recent SR methods employing convolutional and generative adversarial networks achieve high pixel fidelity, visual artefacts may persist, making the design of the training loss function essential for reliable and naturalistic image reconstruction. Our research shows, on two models (an SR network and an Invertible Rescaling Neural Network, IRN) trained on multiple benchmark datasets, that the L_SSIMN term contributes significantly to visual quality while preserving structural fidelity on the reference datasets. Quantitative analysis shows that including this loss component improves the final structural similarity of reconstructed images in the validation set by a mean of 2.88% compared with leaving it out, and by 0.218% compared with using a non-normalised version of the component.
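The paper's exact loss construction is not reproduced in this listing; a common way to build an SSIM-based loss and a normalised variant, assuming SSIM's native range of [-1, 1] is mapped to a [0, 1] loss, is:

```latex
\mathcal{L}_{\mathrm{SSIM}} = 1 - \mathrm{SSIM}(\hat{y},\, y), \qquad
\mathcal{L}_{\mathrm{SSIMN}} = \frac{1 - \mathrm{SSIM}(\hat{y},\, y)}{2} \in [0,\, 1]
```

where ŷ is the reconstructed image and y the high-resolution reference; the normalised term would then be weighted into the composite training loss.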
The integration of physics-based modelling and data-driven artificial intelligence (AI) has emerged as a transformative paradigm in computational mechanics. This perspective reviews the development and current status of AI-empowered frameworks, including data-driven methods, physics-informed neural networks, and neural operators. While these approaches have demonstrated significant promise, challenges remain in terms of robustness, generalisation, and computational efficiency. We delineate four promising research directions: (1) modular neural architectures inspired by traditional computational mechanics, (2) physics-informed neural operators for resolution-invariant operator learning, (3) intelligent frameworks for multiphysics and multiscale biomechanics problems, and (4) structural optimisation strategies based on physics constraints and reinforcement learning. These directions represent a shift toward foundational frameworks that combine the strengths of physics and data, opening new avenues for the modelling, simulation, and optimisation of complex physical systems.
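For context, the canonical composite loss behind the physics-informed neural networks reviewed here balances a data misfit against a PDE-residual penalty (λ is a weighting hyperparameter and F the governing differential operator; individual methods vary in the details):

```latex
\mathcal{L}(\theta) = \frac{1}{N_d} \sum_{i=1}^{N_d} \bigl\| u_\theta(x_i) - u_i \bigr\|^2
\; + \; \lambda \, \frac{1}{N_r} \sum_{j=1}^{N_r} \bigl\| \mathcal{F}[u_\theta](x_j) \bigr\|^2
```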
Support structure, a critical component in design for additive manufacturing (DfAM), has been largely overlooked by the additive manufacturing (AM) community. The support structure stabilises overhanging sections, aids in heat dissipation, and reduces the risk of thermal warping, residual stress, and distortion, particularly in the fabrication of complex geometries that challenge traditional manufacturing methods. Despite the importance of support structures in AM, a systematic review covering all aspects of the design, optimisation, and removal of support structures remains lacking. This review provides an overview of various support structure types (contact and non-contact, as well as identical and dissimilar material configurations) and outlines optimisation methods, including geometric, topology, simulation-driven, data-driven, and multi-objective approaches. Additionally, the mechanisms of support removal, such as mechanical milling and chemical dissolution, and innovations like dissolvable supports and sensitised interfaces, are discussed. Future research directions are outlined, emphasising artificial intelligence (AI)-driven intelligent design, multi-material supports, sustainable support materials, support-free AM techniques, and innovative support removal methods, all of which are essential for advancing AM technology. Overall, this review aims to serve as a foundational reference for the design and optimisation of the support structure in AM.
Highly efficient electrochemical treatment of dye-polluted wastewater is one of the hot research topics in industrial wastewater treatment. This study reported a three-dimensional electrochemical treatment process integrating graphite intercalation compound (GIC) adsorption, direct anodic oxidation, and ·OH oxidation for decolourising Reactive Black 5 (RB5) from aqueous solutions. The electrochemical process was optimised using the novel progressive central composite design-response surface methodology (CCD-NPRSM), a hybrid artificial neural network-extreme gradient boosting model (hybrid ANN-XGBoost), and classification and regression trees (CART). CCD-NPRSM and hybrid ANN-XGBoost were employed to minimise errors in evaluating the electrochemical process involving three manipulated operational parameters: current density, electrolysis (treatment) time, and initial dye concentration. The optimised decolourisation efficiencies were 99.30%, 96.63%, and 99.14% for CCD-NPRSM, hybrid ANN-XGBoost, and CART, respectively, compared to the 98.46% RB5 removal rate observed experimentally under optimum conditions: approximately 20 mA/cm² current density, 20 min electrolysis time, and 65 mg/L RB5. The optimised mineralisation efficiencies ranged between 89% and 92% for the different models based on total organic carbon (TOC). Experimental studies confirmed that the predictive efficiency of the optimised models ranked, in descending order, hybrid ANN-XGBoost, CCD-NPRSM, and CART. Model validation using analysis of variance (ANOVA) revealed that hybrid ANN-XGBoost had a mean squared error (MSE) and a coefficient of determination (R²) of approximately 0.014 and 0.998, respectively, for the RB5 removal efficiency, outperforming CCD-NPRSM with an MSE of 0.518 and an R² of 0.998. Overall, the hybrid ANN-XGBoost approach is the most feasible technique for assessing the electrochemical treatment efficiency in RB5 dye wastewater decolourisation.
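The abstract does not spell out how the ANN and XGBoost stages are coupled; one plausible stacking arrangement, sketched on synthetic data with hypothetical coefficients, feeds the ANN's prediction to XGBoost as an extra feature:

```python
# Hypothetical stacking-style ANN + XGBoost coupling on synthetic data; the
# paper's exact hybridisation and experimental data are not reproduced here.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
# columns: current density (mA/cm^2), electrolysis time (min), dye conc. (mg/L)
X = rng.uniform([5.0, 5.0, 20.0], [30.0, 40.0, 110.0], size=(200, 3))
y = 100 - 0.10 * X[:, 2] + 0.50 * X[:, 0] + rng.normal(0, 1, 200)  # synthetic efficiency (%)

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
X_aug = np.column_stack([X, ann.predict(X)])     # stack the ANN prediction as a feature
xgb = XGBRegressor(n_estimators=100).fit(X_aug, y)

pred = xgb.predict(X_aug)
print(f"MSE = {mean_squared_error(y, pred):.3f}, R2 = {r2_score(y, pred):.3f}")
```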
This paper presents an investigation of the tribological performance of AA2024–B₄C composites, with a specific focus on the influence of reinforcement and processing parameters. In this study, three input parameters were varied (B₄C weight percentage, milling time, and normal load) to evaluate their effects on two output parameters: wear loss and the coefficient of friction. AA2024 alloy was used as the matrix, while B₄C particles were used as reinforcement. Owing to the high hardness and wear resistance of B₄C, the optimized composite shows strong potential for use in aerospace structural elements and automotive brake components. The optimisation of tribological behaviour was conducted using Taguchi-Grey Relational Analysis (Taguchi-GRA) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). A total of 27 combinations of input parameters were analysed, varying the B₄C content (0, 10, and 15 wt.%), milling time (0, 15, and 25 h), and normal load (1, 5, and 10 N). Wear loss and the coefficient of friction were numerically evaluated and selected as the optimisation criteria. Artificial Neural Networks (ANNs) were also applied to model both outputs simultaneously. TOPSIS identified Alternative 1 as the optimal solution, confirming the results obtained using the Taguchi-Grey method. The optimal condition (10 wt.% B₄C, 25 h milling time, 10 N load) resulted in a minimum wear loss of 1.7 mg and a coefficient of friction of 0.176, confirming a significant enhancement in tribological behaviour. Based on the results, both the B₄C content and the applied processing conditions have a significant impact on wear loss and frictional properties. This approach demonstrates high reliability and confidence, enabling the design of future composite materials with optimal properties for specific applications.
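For reference, the ranking step in TOPSIS scores each alternative i by its relative closeness to the ideal solution (the standard formulation, not anything specific to this paper):

```latex
C_i = \frac{D_i^{-}}{D_i^{+} + D_i^{-}}, \qquad
D_i^{\pm} = \sqrt{\sum_{j} \bigl( v_{ij} - v_j^{\pm} \bigr)^2}
```

where v_ij are the weighted normalised criteria values and v_j^+ / v_j^- the ideal and anti-ideal values; the alternative with the largest C_i is preferred, which is how Alternative 1 is identified above.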
Proportioning is an important part of sintering, as it affects the cost of sintering and the quality of the sintered ore. To address the problems posed by complex raw material information and the numerous constraints in the sintering process, a multi-objective optimisation model for sintering proportioning was established, taking the proportioning cost and TFe content as the optimisation objectives. Additionally, an improved multi-objective beluga whale optimisation (IMOBWO) algorithm was proposed to solve such nonlinear, multi-constrained multi-objective optimisation problems. The algorithm uses the constrained non-dominance criterion to handle the constraints in the model. Moreover, it employs an opposite learning strategy and a population guidance mechanism, based on angular competition and a two-population competition strategy, to enhance convergence and population diversity. Application to the actual proportioning of a steel plant indicates that the IMOBWO algorithm converges well on the ore proportioning process and obtains a uniformly distributed Pareto front. Meanwhile, compared with the actual proportioning scheme, the optimal compromise solution reduces the proportioning cost by 4.3361 ¥/t and increases the TFe content in the mixture by 0.0367%. Therefore, the proposed method effectively balances cost and total iron, facilitating the comprehensive utilisation of sintered iron ore resources while ensuring quality.
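In generic form (the paper's full constraint set is not reproduced here), the proportioning model trades blend cost against total iron content over the raw-material fractions x_i:

```latex
\min_{x} \; f_1(x) = \sum_{i} c_i x_i, \qquad
\max_{x} \; f_2(x) = \sum_{i} t_i x_i, \qquad
\text{s.t.} \;\; \sum_{i} x_i = 1, \;\; x_i \ge 0, \;\; g_k(x) \le b_k \;\;\forall k
```

where c_i and t_i are the unit cost and TFe content of raw material i, and the g_k collect the chemical-composition and process constraints that IMOBWO handles via the constrained non-dominance criterion.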
We evaluate an adaptive optimisation methodology, Bayesian optimisation (BO), for designing a minimum weight explosive reactive armour (ERA) for protection against a surrogate medium calibre kinetic energy (KE) long rod projectile and a surrogate shaped charge (SC) warhead. We perform the optimisation using a conventional BO methodology and compare it with a conventional trial-and-error approach from a human expert. A third approach, utilising a novel human-machine teaming framework for BO, is also evaluated. Data for the optimisation is generated using numerical simulations that are demonstrated to provide reasonable qualitative agreement with reference experiments. The human-machine teaming methodology is shown to identify the optimum ERA design in the fewest number of evaluations, outperforming both the stand-alone human and stand-alone BO methodologies. From a design space of almost 1800 configurations, the human-machine teaming approach identifies the minimum weight ERA design in 10 samples.
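A minimal Bayesian-optimisation loop of the kind evaluated here, shown on a toy one-dimensional objective standing in for the ERA simulations; the Gaussian-process surrogate, expected-improvement acquisition, and all settings are illustrative:

```python
# Minimal Bayesian-optimisation loop: GP surrogate + expected improvement,
# on a toy 1-D objective standing in for the numerical armour simulation.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):                       # stand-in for a numerical armour simulation
    return np.sin(3 * x) + 0.1 * x ** 2

grid = np.linspace(-3, 3, 200).reshape(-1, 1)   # discretised design space
X = np.array([[-2.0], [0.5], [2.5]])            # initial design points
y = objective(X).ravel()

for _ in range(10):                             # 10 adaptive evaluations
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)]                        # most promising design
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best design:", float(X[np.argmin(y)][0]), "objective:", float(y.min()))
```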
Decomposition of a complex multi-objective optimisation problem (MOP) into multiple simple subMOPs, known as M2M for short, is an effective approach to multi-objective optimisation. However, M2M facilitates little communication/collaboration between subMOPs, which limits its use in complex optimisation scenarios. This paper extends the M2M framework into a unified algorithm for both multi-objective and many-objective optimisation. Through bilevel decomposition, an MOP is divided into multiple subMOPs at the upper level, each of which is further divided into a number of single-objective subproblems at the lower level. Neighbouring subMOPs are allowed to share some subproblems so that the knowledge gained from solving one subMOP can be transferred to another, and eventually to all the subMOPs. The bilevel decomposition is readily combined with some new mating selection and population update strategies, leading to a high-performance algorithm that competes effectively against a number of state-of-the-art methods studied in this paper for both multi- and many-objective optimisation. Parameter analysis and component analysis have also been carried out to further justify the proposed algorithm.
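At the lower level, each single-objective subproblem is a scalarisation of the MOP; a standard choice in decomposition-based algorithms, shown here purely as an illustration, is the Tchebycheff form:

```latex
g^{\mathrm{te}}(x \mid \lambda, z^{*}) = \max_{1 \le i \le m} \; \lambda_i \left| f_i(x) - z_i^{*} \right|
```

where λ is a weight vector and z* the ideal point; sharing subproblems between neighbouring subMOPs then amounts to sharing scalarisations with similar weight vectors.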
Youxian Sun of Zhejiang University, an academician of the Chinese Academy of Engineering (CAE), initiated Digital Twins and Applications (ISSN 2995-2182). It is published by Zhejiang University Press and the Institution of Engineering and Technology and sponsored by Zhejiang University. Digital Twins and Applications aims to provide a specialised platform for researchers, practitioners, and industry experts to publish high-quality, state-of-the-art research on digital twin technologies and their applications.
To exploit the positive role of decentralised on-grid wind power in improving voltage stability and reducing losses in the distribution network, this paper proposes a multi-objective two-stage decentralised wind power planning method that accounts for network loss correction in extremely cold regions. First, an electro-thermal model is introduced to reflect the effect of temperature on conductor resistance and to correct the calculated active network loss. Second, a two-stage model for decentralised wind power siting and capacity allocation and reactive voltage optimisation control is constructed with this loss correction taken into account: the first stage establishes a multi-objective planning model that considers the whole-life-cycle investment cost of the wind turbine generators (WTGs), the system operating cost, and the voltage quality of the power supply, while the second stage develops the reactive voltage control strategy of the WTGs on this basis, yielding a distribution network loss reduction method based on WTG siting, capacity allocation, and reactive power control. Finally, the optimal configuration scheme is solved by the manta ray foraging optimisation (MRFO) algorithm, and the losses of each branch line and bus of the distribution network before and after applying this loss reduction method are calculated on the IEEE 33-bus distribution system, verifying the practicability and validity of the proposed method and providing a reference for distributed energy planning decisions in distribution networks.
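The electro-thermal correction rests on the standard linear temperature dependence of conductor resistance (shown for reference; α is the conductor material's temperature coefficient):

```latex
R(T) = R_{20}\,\bigl[\,1 + \alpha\,(T - 20\,^{\circ}\mathrm{C})\,\bigr]
```

so in extreme cold, branch resistances, and hence the computed active losses, differ markedly from the nameplate values used in uncorrected load-flow studies.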
Ionic Polymer Metal Composites (IPMCs) are considered important electroactive polymers that have recently attracted the attention of the scientific community owing to their simple structure, adaptable form, high degree of flexibility, and biocompatibility during their utilization as sensing elements. Along these lines, in this work, the recent developments in performance optimization, model construction, and applications of IPMC sensors are reported. Different methods are introduced to enhance the sensitivity, preparation efficiency, and stability of IPMC sensors, including optimising the electrode and substrate membrane preparation, as well as implementing structural and shape modifications. The IPMC sensing model, which serves as the theoretical foundation for the IPMC sensor, is summarized herein to offer directions for future application research activities. The applications of these sensors in a wide range of areas are also reviewed, such as wearable electronic devices, flow sensors, humidity sensors, and energy harvesting devices.
In recent years, there has been remarkable progress in the performance of metal halide perovskite solar cells. Studies have shown significant interest in lead-free perovskite solar cells (PSCs) due to concerns about the toxicity of lead in lead halide perovskites. CH₃NH₃SnI₃ emerges as a viable alternative to CH₃NH₃PbX₃. In this work, we studied the effect of various parameters on the performance of lead-free perovskite solar cells through simulation with the SCAPS-1D software. The cell structure consists of α-Fe₂O₃/CH₃NH₃SnI₃/PEDOT:PSS. We analyzed parameters such as layer thickness, doping concentration, and defect density. The study revealed that, without considering the other optimized parameters, the efficiency of the cell increased from 22% to 35% when the perovskite thickness varied from 100 to 1000 nm. After optimization, the solar cell efficiency reaches up to 42%. The optimized parameters are, for the perovskite layer, a thickness of 700 nm, a doping concentration of 10²⁰ cm⁻³, and a defect density of 10¹³ cm⁻³, and, for the hematite layer, a thickness of 5 nm, a doping concentration of 10²² cm⁻³, and a defect density of 10¹¹ cm⁻³. These results are encouraging because they highlight the good agreement between perovskite and hematite when used as the active and electron transport layers, respectively. It now remains to fabricate real, viable photovoltaic solar cells with the proposed material layer parameters.
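The efficiency figures quoted above follow the standard photovoltaic conversion relation that simulators such as SCAPS-1D report:

```latex
\eta = \frac{V_{\mathrm{oc}} \, J_{\mathrm{sc}} \, \mathrm{FF}}{P_{\mathrm{in}}}
```

where V_oc is the open-circuit voltage, J_sc the short-circuit current density, FF the fill factor, and P_in the incident power density (typically 100 mW/cm² under AM1.5G illumination).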
Over the last decade, the rapid growth in traffic and the number of network devices has implicitly led to an increase in network energy consumption. In this context, a new paradigm has emerged: Software-Defined Networking (SDN), an emerging technique that separates the control plane and the data plane of the deployed network, enabling centralized control while offering flexibility in data center network management. Some research is moving in the direction of optimizing the energy consumption of software-defined data center networks (SD-DCNs), but it still does not guarantee good performance and quality of service for SDN networks. To solve this problem, we propose a new mathematical model based on combinatorial optimization that dynamically decides which energy-consuming switches and unused links to activate or deactivate in SDN networks, while guaranteeing quality of service (QoS) and ensuring load balancing in the network.
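A combinatorial model of this kind can be sketched as follows; this is an illustrative formulation, not the paper's exact model. Binary variables y_s and z_l switch devices and links on or off, and traffic must still be routed within capacity:

```latex
\min \; \sum_{s \in S} P_s\, y_s \;+\; \sum_{l \in L} P_l\, z_l
\quad \text{s.t.} \quad
\sum_{d \in D} f_l^{d} \le c_l\, z_l \;\;\forall l \in L, \qquad
z_l \le y_{s(l)} \;\;\forall l \in L
```

together with flow conservation for every demand d ∈ D. Here P_s and P_l are device and link power draws, f_l^d the traffic of demand d on link l, and c_l the link capacity; tightening the capacity constraint (for example to μ·c_l with μ < 1) is one way to enforce QoS and load balancing while switching off idle equipment.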
Cloud computing has rapidly evolved into a critical technology, seamlessly integrating into various aspects of daily life. As user demand for cloud services continues to surge, the need for efficient virtualization and resource management becomes paramount. At the core of this efficiency lies task scheduling, a complex process that determines how tasks are allocated and executed across cloud resources. While extensive research has been conducted in the area of task scheduling, optimizing multiple objectives simultaneously remains a significant challenge due to the NP-complete (Non-deterministic Polynomial) nature of the problem. This study aims to address these challenges by providing a comprehensive review and experimental analysis of task scheduling approaches, with a particular focus on hybrid techniques that offer promising solutions. Utilizing the CloudSim simulation toolkit, we evaluated the performance of three hybrid algorithms: Estimation of Distribution Algorithm-Genetic Algorithm (EDA-GA), Hybrid Genetic Algorithm-Ant Colony Optimization (HGA-ACO), and Improved Discrete Particle Swarm Optimization (IDPSO). Our experimental results demonstrate that these hybrid methods significantly outperform traditional standalone algorithms in reducing makespan, a critical measure of task completion time. Notably, the IDPSO algorithm exhibited superior performance, achieving a makespan of just 0.64 milliseconds for a set of 150 tasks. These findings underscore the potential of hybrid algorithms to enhance task scheduling efficiency in cloud computing environments. This paper concludes with a discussion of the implications of our findings and offers recommendations for future research aimed at further improving task scheduling strategies, particularly in the context of increasingly complex and dynamic cloud environments.
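Makespan, the metric optimised above, is the completion time of the busiest resource. In CloudSim-style terms, where a task's execution time is roughly its length divided by the speed of its assigned virtual machine:

```latex
\mathrm{Makespan} = \max_{j \in \mathrm{VMs}} \; \sum_{i \in \mathrm{tasks}(j)} \frac{\ell_i}{s_j}
```

with ℓ_i the task length (e.g. in million instructions) and s_j the VM processing speed (MIPS).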
In real-world applications, datasets frequently contain outliers, which can hinder the generalization ability of machine learning models. Bayesian classifiers, a popular supervised learning method, rely on accurate probability density estimation for classifying continuous datasets. However, achieving precise density estimation with datasets containing outliers poses a significant challenge. This paper introduces a Bayesian classifier that utilizes optimized robust kernel density estimation to address this issue. Our proposed method enhances the accuracy of probability density estimation by mitigating the impact of outliers on the training sample's estimated distribution. Unlike the conventional kernel density estimator, our robust estimator can be seen as a weighted kernel mapping summary over the samples. This kernel mapping performs the inner product in a Hilbert space, allowing the kernel density estimate to be viewed as the average of the samples' mappings in the Hilbert space under a reproducing kernel. M-estimation techniques are used to obtain accurate mean values and to solve for the weights. Meanwhile, complete cross-validation is used as the objective function in the search for the optimal bandwidth, which strongly affects the estimator. Harris Hawks Optimisation optimizes this objective function to improve the estimation accuracy. The experimental results show that it outperforms other optimization algorithms in convergence speed and objective function value during the bandwidth search. The optimal robust kernel density estimator achieves better fitness than the traditional kernel density estimator when the training data contain outliers, and the naïve Bayesian classifier with optimal robust kernel density estimation improves generalization when classifying data with outliers.
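Concretely, the robust estimator described above takes the form of a weighted kernel density estimate, with the per-sample weights obtained by M-estimation so that outliers are down-weighted:

```latex
\hat{f}(x) = \sum_{i=1}^{n} w_i \, K_h(x - x_i), \qquad
\sum_{i=1}^{n} w_i = 1, \;\; w_i \ge 0
```

where K_h is a reproducing kernel with bandwidth h; setting w_i = 1/n recovers the conventional estimator, and h is the quantity tuned by Harris Hawks Optimisation against the cross-validation objective.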
The technological breakthroughs in generative artificial intelligence, represented by ChatGPT, have brought about significant social changes as well as new problems and challenges. Generative artificial intelligence has inherent flaws such as language imbalance, algorithmic black boxes, and algorithmic bias, and at the same time it carries external risks such as algorithmic comfort zones, data pollution, algorithmic infringement, and inaccurate output. These problems make legislation for the governance of generative artificial intelligence difficult. Taking the data contamination incident in Google Translate as an example, this article proposes that, in constructing machine translation ethics, the responsibility mechanism of generative artificial intelligence should be built around three elements: data processing, algorithmic optimisation, and ethical alignment.
To enhance the rationality of the layout of electric vehicle (EV) charging stations, meet the actual needs of users, and optimise the service range and coverage efficiency of charging stations, this paper proposes an optimisation strategy for the layout of EV charging stations that integrates Mini Batch K-Means and simulated annealing algorithms. By constructing a circle-like service area model, with the charging station as the centre and a certain distance as the radius, the strategy considers both the maximum coverage of EV charging stations in the region and the influence of different regional environments on charging demand. Based on real data on EV charging stations in Nanjing, Jiangsu Province, the proposed model is applied to optimise the layout of charging stations in the study area. The results show that the optimisation strategy incorporating Mini Batch K-Means and simulated annealing outperforms the existing charging station layout in terms of coverage and the number of stations served; compared to the original layout, the optimised layout has a flatter Lorenz curve and is closer to an even distribution. The proposed optimisation strategy not only improves the service efficiency and user satisfaction of EV charging stations but also provides a reference for the layout optimisation of EV charging stations in other cities, giving it important practical value and promotion potential.
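A compact sketch of the two-stage idea, assuming synthetic demand points, an illustrative service radius, and a simple geometric cooling schedule: Mini Batch K-Means proposes candidate sites, then simulated annealing perturbs them to raise circular-service-area coverage.

```python
# Sketch of the two-stage layout strategy: Mini Batch K-Means proposes sites,
# simulated annealing perturbs them to raise coverage. Demand points, service
# radius, and the annealing schedule are illustrative.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(1)
demand = rng.uniform(0, 10, size=(500, 2))      # EV charging demand points (km)
RADIUS = 1.0                                    # circle-like service radius (km)

def coverage(stations):
    d = np.linalg.norm(demand[:, None, :] - stations[None, :, :], axis=2)
    return (d.min(axis=1) <= RADIUS).mean()     # share of demand inside some circle

current = MiniBatchKMeans(n_clusters=15, n_init=3).fit(demand).cluster_centers_
cur_cov = coverage(current)
best, best_cov, T = current.copy(), cur_cov, 1.0

for _ in range(2000):                           # simulated-annealing refinement
    cand = current.copy()
    cand[rng.integers(len(cand))] += rng.normal(0, 0.3, 2)   # nudge one station
    c = coverage(cand)
    if c > cur_cov or rng.random() < np.exp((c - cur_cov) / T):
        current, cur_cov = cand, c
        if c > best_cov:
            best, best_cov = cand.copy(), c
    T *= 0.999                                  # geometric cooling

print(f"covered demand: {best_cov:.1%}")
```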
Commercial organisations commonly use operational research tools to solve vehicle routing problems. This practice is less commonplace in charity and voluntary organisations. In this paper, we provide an elementary approach for solving the Vehicle Routing Problem (VRP) that we believe can be easily implemented in these types of organisations. The proposed model leverages mixed integer linear programming to optimize the pickup sequence of all customers, each with distinct time windows and locations, transporting them to a final destination using a fleet of vehicles. To ensure ease of implementation, the model utilises Python, a user-friendly programming language, and integrates with the Google Maps API, which simplifies data input by eliminating the need for manual entry of travel times between locations. Troubleshooting methods are incorporated into the model design to ensure easy debugging of the model’s infeasibilities. Additionally, a computation time analysis is conducted to evaluate the efficiency of the code. A node partitioning approach is also discussed, which aims to reduce computational times, especially when handling larger datasets, ensuring this model is realistic and practical for real-world application. By implementing this optimized routing strategy, logistics companies or organisations can expect significant improvements in their day-to-day operations, with minimal computational cost or need for specialised expertise. This includes reduced travel times, minimized fuel consumption, and thus lower operational costs, while ensuring punctuality and meeting the demands of all passengers.
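A minimal single-vehicle instance in the spirit of the model described above, sketched with PuLP (an assumed solver library; the paper specifies only Python). All pickups are visited exactly once and total travel time is minimised; time windows, multiple vehicles, and Google-Maps-derived travel times are omitted for brevity:

```python
# Minimal single-vehicle MILP in this spirit, built with PuLP (an assumed
# library; the paper names only Python). Node 0 is the depot; travel times
# are toy values instead of Google Maps API responses.
import pulp

N = range(4)                                              # 0 = depot, 1..3 = pickups
t = {(i, j): abs(i - j) * 7 + 3 for i in N for j in N if i != j}  # minutes

prob = pulp.LpProblem("mini_vrp", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", list(t), cat="Binary")     # arc i -> j used
u = pulp.LpVariable.dicts("u", N, lowBound=0, upBound=3)  # visit order (MTZ)

prob += pulp.lpSum(t[a] * x[a] for a in t)                # total travel time
for k in N:
    prob += pulp.lpSum(x[i, k] for i in N if i != k) == 1  # enter each node once
    prob += pulp.lpSum(x[k, j] for j in N if j != k) == 1  # leave each node once
for i in N:
    for j in N:
        if i != j and i != 0 and j != 0:
            prob += u[i] - u[j] + 4 * x[i, j] <= 3        # MTZ subtour elimination

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("arcs in optimal tour:", sorted(a for a in t if x[a].value() > 0.5))
```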
An excellent cardinality estimate helps the query optimiser produce a good execution plan. Although there are some studies on cardinality estimation, the predictions of existing cardinality estimators are inaccurate and query efficiency cannot be guaranteed. In particular, they struggle to accurately capture the complex relationships between multiple tables in complex database systems, and when dealing with complex queries, the existing cardinality estimators cannot achieve good results. In this study, a novel cardinality estimator is proposed. Its core techniques are a BiLSTM network structure with an added attention mechanism. First, the columns involved in the query statements in the training set are sampled and compressed into bitmaps. Then, the Word2vec model is used to embed the query statements as word vectors. Finally, the BiLSTM network and attention mechanism are employed to process the word vectors. The proposed model takes into consideration not only the correlations between tables but also the processing of complex predicates. Extensive experiments and an evaluation of the BiLSTM-Attention Cardinality Estimator (BACE) on the IMDB datasets are conducted. The results show that the deep learning model can significantly improve the quality of cardinality estimation, which plays a vital role in query optimisation for complex databases.
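A schematic of a BiLSTM-with-attention estimator in this spirit; the vocabulary size, dimensions, and log-cardinality output convention are placeholders rather than BACE's actual architecture:

```python
# Schematic BiLSTM-with-attention estimator in the spirit of BACE: embedded
# query tokens -> BiLSTM -> attention pooling -> regression head. Vocabulary,
# dimensions, and the log-cardinality output are placeholders.
import torch
import torch.nn as nn

class CardinalityEstimator(nn.Module):
    def __init__(self, vocab=5000, emb=64, hid=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.bilstm = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hid, 1)
        self.head = nn.Linear(2 * hid, 1)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        h, _ = self.bilstm(self.emb(tokens))      # (batch, seq_len, 2 * hid)
        w = torch.softmax(self.attn(h), dim=1)    # attention weights over positions
        ctx = (w * h).sum(dim=1)                  # attention-pooled context vector
        return self.head(ctx).squeeze(-1)         # predicted log-cardinality

model = CardinalityEstimator()
print(model(torch.randint(0, 5000, (2, 12))).shape)   # torch.Size([2])
```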