Huge calculation burden and difficulty in convergence are the two central conundrums of nonlinear topology optimization (NTO). To this end, a multi-resolution nonlinear topology optimization (MR-NTO) method is proposed based on the multi-resolution design strategy (MRDS) and the additive hyperelasticity technique (AHT), taking into account both geometric and material nonlinearity. The MR-NTO strategy is established in the framework of the solid isotropic material with penalization (SIMP) method, while the Neo-Hookean hyperelastic material model characterizes the material nonlinearity. The coarse analysis grid is employed for the finite element (FE) calculation, and the fine material grid is applied to describe the material configuration. To alleviate the convergence problem and reduce the complexity of the sensitivity calculation, the software ANSYS coupled with AHT is utilized to perform the nonlinear FE calculation. A strategy for redistributing strain energy is proposed during the sensitivity analysis, i.e., transforming the strain energy of the analysis element into that of the material elements, covering both Neo-Hookean and second-order Yeoh materials. Numerical examples highlight three distinct advantages of the proposed method: it can (1) significantly improve computational efficiency, (2) overcome the convergence difficulties that AHT-based NTO may encounter, especially for 3D problems, and (3) successfully handle high-resolution, complex 3D NTO problems on a personal computer.
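To make the strain-energy redistribution step concrete, the sketch below shows one plausible way to split the strain energy of a coarse analysis element among the fine material elements it contains, weighted by their SIMP-penalized densities, and to recover an element-wise compliance sensitivity. The weighting rule, the function name redistribute_strain_energy, and the toy data are illustrative assumptions; the paper's formulation additionally handles Neo-Hookean and Yeoh hyperelastic energy, which is not reproduced here.

```python
import numpy as np

def redistribute_strain_energy(E_coarse, rho_fine, penal=3.0):
    """Illustrative sketch (not the authors' code): apportion the strain energy
    of each coarse analysis element to its fine material elements in proportion
    to their SIMP-penalized stiffness weight rho**penal, then form a SIMP-style
    compliance sensitivity from the redistributed energy."""
    w = rho_fine ** penal                           # penalized stiffness weights
    share = w / w.sum(axis=1, keepdims=True)        # energy share of each material element
    E_fine = E_coarse[:, None] * share              # redistributed strain energy
    E_unit = E_fine / np.maximum(w, 1e-12)          # energy at unit density (E_fine = rho^p * E_unit)
    dc_drho = -penal * rho_fine ** (penal - 1.0) * E_unit
    return E_fine, dc_drho

# toy usage: 2 analysis elements, each containing 4 material elements
E_A = np.array([1.0, 0.5])
rho = np.array([[1.0, 0.8, 0.3, 0.1],
                [0.6, 0.6, 0.2, 0.9]])
E_m, sens = redistribute_strain_energy(E_A, rho)
print(E_m.sum(axis=1))   # each row sums back to the analysis-element energy
```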
As a connection between the process and the circuit design, the device model is greatly desired for emerging devices, such as the double-gate MOSFET. Time efficiency is one of the most important requirements for device modeling. In this paper, an improvement to the computational efficiency of the drain current model for double-gate MOSFETs is extended, and different calculation methods are compared and discussed. The results show that the calculation speed of the improved model is substantially enhanced. A two-dimensional device simulation is performed to verify the improved model. Furthermore, the model is implemented in the HSPICE circuit simulator in Verilog-A for practical application.
The four-decade quest for synthesizing ambient-stable polymeric nitrogen, a promising high-energy-density material, remains an unsolved challenge in materials science. We develop a multi-stage computational strategy employing density functional tight-binding-based rapid screening combined with density functional theory refinement and global structure searching, effectively bridging computational efficiency with quantum accuracy. This integrated approach identifies four novel polymeric nitrogen phases (Fddd, P3221, I4m2, and P6522) that are thermodynamically stable at ambient pressure. Remarkably, the helical P6522 configuration demonstrates exceptional thermal resilience up to 1500 K, representing a predicted polymeric nitrogen structure that maintains stability under both atmospheric pressure and high-temperature extremes. Our methodology establishes a paradigm-shifting framework for the accelerated discovery of metastable energetic materials, resolving critical bottlenecks in theoretical predictions while providing experimentally actionable targets for polymeric nitrogen synthesis.
In this article, the secure computation efficiency (SCE) problem is studied in a massive multiple-input multiple-output (mMIMO)-assisted mobile edge computing (MEC) network. We first derive the secure transmission rate based on the mMIMO under imperfect channel state information. Based on this, the SCE maximization problem is formulated by jointly optimizing the local computation frequency, the offloading time, the downloading time, and the transmit power of the users and the base station. Since the formulated problem is difficult to solve directly, we first transform the fractional objective function into a subtractive form via the Dinkelbach method. Next, the original problem is transformed into a convex one by applying the successive convex approximation technique, and an iterative algorithm is proposed to obtain the solutions. Finally, simulations are conducted to show that the performance of the proposed schemes is superior to that of the other schemes.
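The Dinkelbach step mentioned above is a standard way to turn a fractional objective into a sequence of subtractive subproblems. The sketch below illustrates the iteration on a toy scalar problem; the inner solver, the toy functions, and the tolerances are assumptions for illustration, and the paper instead handles its inner problem with successive convex approximation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def dinkelbach(N, D, solve_inner, lam0=0.0, tol=1e-8, max_iter=50):
    """Generic Dinkelbach iteration for max_x N(x)/D(x) with D(x) > 0.
    `solve_inner(lam)` must return argmax_x of N(x) - lam * D(x); here it is
    a user-supplied routine standing in for the paper's SCA-based inner solver."""
    lam = lam0
    for _ in range(max_iter):
        x = solve_inner(lam)
        F = N(x) - lam * D(x)         # optimal value of the subtractive subproblem
        lam = N(x) / D(x)             # Dinkelbach parameter update
        if abs(F) < tol:
            break
    return x, lam

# toy example: maximize (1 + 4x - x**2) / (1 + x) on [0, 4]
N = lambda x: 1 + 4 * x - x ** 2
D = lambda x: 1 + x
inner = lambda lam: minimize_scalar(lambda x: -(N(x) - lam * D(x)),
                                    bounds=(0, 4), method="bounded").x
x_opt, ratio = dinkelbach(N, D, inner)
print(x_opt, ratio)   # converges to x = 1 with ratio 2
```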
In this paper, a new technique is introduced to construct higher-order iterative methods for solving nonlinear systems. The order of convergence of some iterative methods can be improved by three at the cost of only one additional evaluation of the function in each step. Furthermore, some new efficient methods with a higher order of convergence are obtained by using only a single matrix inversion in each iteration. The convergence properties and computational efficiency of these new methods are analyzed and verified on several numerical problems. By comparison, the new schemes are more efficient than the corresponding existing ones, particularly for large problem sizes.
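As a rough illustration of the "single matrix inversion per iteration" idea, the sketch below factors the Jacobian once per outer iteration and reuses the factorization across several cheap corrector sub-steps, each costing one extra function evaluation. This is a generic frozen-Jacobian scheme under assumed step counts and tolerances, not the specific higher-order methods constructed in the paper.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def frozen_jacobian_newton(F, J, x0, substeps=3, tol=1e-12, max_iter=50):
    """Illustrative multi-step scheme: factor the Jacobian once per iteration
    and reuse the factorization for several corrector sub-steps, so extra
    function evaluations are added without extra matrix inversions."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        lu, piv = lu_factor(J(x))              # single matrix factorization per iteration
        y = x.copy()
        for _ in range(substeps):              # cheap corrector sub-steps
            y = y - lu_solve((lu, piv), F(y))
        if np.linalg.norm(y - x) < tol:
            return y
        x = y
    return x

# toy system: x^2 + y^2 = 1, x - y = 0
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
print(frozen_jacobian_newton(F, J, [0.8, 0.2]))   # approaches (sqrt(2)/2, sqrt(2)/2)
```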
Feature selection (FS) plays a crucial role in medical imaging by reducing dimensionality, improving computational efficiency, and enhancing diagnostic accuracy. Traditional FS techniques, including filter, wrapper, and embedded methods, have been widely used but often struggle with high-dimensional and heterogeneous medical imaging data. Deep learning-based FS methods, particularly Convolutional Neural Networks (CNNs) and autoencoders, have demonstrated superior performance but lack interpretability. Hybrid approaches that combine classical and deep learning techniques have emerged as a promising solution, offering improved accuracy and explainability. Furthermore, integrating multi-modal imaging data (e.g., Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), and Ultrasound (US)) poses additional challenges in FS, necessitating advanced feature fusion strategies. Multi-modal feature fusion combines information from different imaging modalities to improve diagnostic accuracy. Recently, quantum computing has gained attention as a revolutionary approach for FS, providing the potential to handle high-dimensional medical data more efficiently. This systematic literature review comprehensively examines classical, Deep Learning (DL), hybrid, and quantum-based FS techniques in medical imaging. Key outcomes include a structured taxonomy of FS methods, a critical evaluation of their performance across modalities, and identification of core challenges such as computational burden, interpretability, and ethical considerations. Future research directions, such as explainable AI (XAI), federated learning, and quantum-enhanced FS, are also emphasized to bridge the current gaps. This review provides actionable insights for developing scalable, interpretable, and clinically applicable FS methods in the evolving landscape of medical imaging.
As data becomes increasingly complex, measuring dependence among variables is of great interest. However, most existing measures of dependence are limited to the Euclidean setting and cannot effectively characterize complex relationships. In this paper, we propose a novel method for constructing independence tests for random elements in Hilbert spaces, which includes functional data as a special case. Our approach uses the distance covariance of random projections to build a test statistic that is computationally efficient and exhibits strong power performance. We prove the equivalence between testing for independence on the original covariates and on the projected covariates, bridging the gap between measures of independence in Euclidean spaces and in Hilbert spaces. Implementation of the test involves calibration by permutation and combining several p-values from different projections using the false discovery rate method. Simulation studies and real data examples illustrate the finite-sample properties of the proposed method under a variety of scenarios.
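The sketch below illustrates the projection-plus-permutation recipe on discretised curves: each sample is projected onto random directions, a permutation test is run on the distance covariance of each projected pair, and the resulting p-values are combined with a Benjamini-Hochberg style adjustment. The projection scheme, permutation count, and combination rule are simplified assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def dcov2(x, y):
    """Squared sample distance covariance of two 1-D samples (biased version)."""
    def centered(v):
        D = squareform(pdist(v[:, None]))
        return D - D.mean(0) - D.mean(1)[:, None] + D.mean()
    return (centered(x) * centered(y)).mean()

def projection_dcov_test(X, Y, n_proj=10, n_perm=199, rng=None):
    """Toy projection-based independence test on functional data sampled on a
    grid: random projections, permutation calibration, FDR-style combination."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    q = Y.shape[1]
    pvals = []
    for _ in range(n_proj):
        u = rng.standard_normal(p); u /= np.linalg.norm(u)
        v = rng.standard_normal(q); v /= np.linalg.norm(v)
        x, y = X @ u, Y @ v
        stat = dcov2(x, y)
        perms = [dcov2(x, y[rng.permutation(n)]) for _ in range(n_perm)]
        pvals.append((1 + sum(s >= stat for s in perms)) / (n_perm + 1))
    pvals = np.sort(pvals)
    adjusted = pvals * n_proj / np.arange(1, n_proj + 1)   # Benjamini-Hochberg factor
    return adjusted.min()                                  # reject if below alpha

# toy usage: dependent functional-type data on a common grid
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 20))
Y = X ** 2 + 0.1 * rng.standard_normal((60, 20))
print(projection_dcov_test(X, Y, rng=1))
```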
Due to their resource constraints, Internet of Things (IoT) devices require authentication mechanisms that are both secure and efficient. Elliptic curve cryptography (ECC) meets these needs by providing strong security with shorter key lengths, which significantly reduces the computational overhead required for authentication algorithms. This paper introduces a novel ECC-based IoT authentication system utilizing our previously proposed efficient mapping and reverse mapping operations on elliptic curves over prime fields. By reducing reliance on costly point multiplication, the proposed algorithm significantly improves execution time, storage requirements, and communication cost across varying security levels. The proposed authentication protocol demonstrates superior performance when benchmarked against relevant ECC-based schemes, achieving reductions of up to 35.83% in communication overhead, 62.51% in device-side storage consumption, and 71.96% in computational cost. The security robustness of the scheme is substantiated through formal analysis using the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool and Burrows-Abadi-Needham (BAN) logic, complemented by a comprehensive informal analysis that confirms its resilience against various attack models, including impersonation, replay, and man-in-the-middle attacks. Empirical evaluation under simulated conditions demonstrates notable gains in efficiency and security. While these results indicate the protocol's strong potential for scalable IoT deployments, further validation on real-world embedded platforms is required to confirm its applicability and robustness at scale.
The local time-stepping (LTS) algorithm is an adaptive method that adjusts the time step by selecting suitable intervals for different regions based on the spatial scale of each cell and the water depth and flow velocity between cells. The method is optimized by advancing each region with the largest admissible power-of-two multiple of the global time step, allowing the optimal time step to be approached throughout the grid. To verify the acceleration and accuracy of LTS in storm surge simulations, we developed a model to simulate astronomical storm surges along the southern coast of China. This model employs the shallow water equations as governing equations, uses the finite volume method for numerical discretization, and calculates fluxes with the Roe solver. By comparing the simulation results of the traditional global time-stepping algorithm with those of the LTS algorithm, we find that the latter fits the measured data better. Taking the calculation results of Typhoon Sally in 1996 as an example, we show that compared with the traditional global time-stepping algorithm, the LTS algorithm reduces computation time by 2.05 h and increases computation efficiency by 2.64 times while maintaining good accuracy.
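A minimal sketch of the power-of-two step selection described above: each cell is assigned the largest power-of-two multiple of the global (minimum) time step that does not exceed its own stability-limited step. The cap on the number of levels and the toy per-cell steps are assumptions for illustration, not values from the storm surge model.

```python
import numpy as np

def lts_levels(dt_local, max_level=5):
    """Sketch of the local time-stepping idea: cells with a small admissible
    step update every base step, cells with a large admissible step update
    only every 2**level base steps."""
    dt_min = dt_local.min()                                  # global time step
    level = np.floor(np.log2(dt_local / dt_min)).astype(int)
    level = np.clip(level, 0, max_level)
    return dt_min * 2 ** level, level

# toy usage: CFL-limited steps estimated per cell from depth, velocity and size
dt_cell = np.array([0.6, 1.3, 2.5, 5.1, 0.7, 10.0])
dt_used, lvl = lts_levels(dt_cell)
print(dt_used)   # [0.6 1.2 2.4 4.8 0.6 9.6]
print(lvl)       # [0 1 2 3 0 4]
```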
The finite element (FE) method is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases, so the scale of the numerical substructure model can increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method, and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM outperforms the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes obvious as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even under a large time step and a large time delay.
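For reference, the sketch below is a textbook central difference integrator for a small linear system M u'' + C u' + K u = f(t); the system matrices, load, and step size are toy assumptions. With diagonal M and C the left-hand matrix becomes diagonal, which is the case in which the study finds CDM to be cheapest.

```python
import numpy as np

def central_difference(M, C, K, f, u0, v0, dt, n_steps):
    """Textbook explicit central difference method for M u'' + C u' + K u = f(t)."""
    A = M / dt ** 2 + C / (2 * dt)                 # diagonal if M and C are diagonal
    B = K - 2 * M / dt ** 2
    D = M / dt ** 2 - C / (2 * dt)
    a0 = np.linalg.solve(M, f(0.0) - C @ v0 - K @ u0)
    u_prev = u0 - dt * v0 + 0.5 * dt ** 2 * a0     # fictitious step at t = -dt
    u, hist = u0.copy(), [u0.copy()]
    for n in range(n_steps):
        u_next = np.linalg.solve(A, f(n * dt) - B @ u - D @ u_prev)
        u_prev, u = u, u_next
        hist.append(u.copy())
    return np.array(hist)

# toy 2-DOF system with a harmonic load on the second DOF
M = np.diag([1.0, 1.0]); C = 0.02 * np.eye(2)
K = np.array([[200.0, -100.0], [-100.0, 100.0]])
f = lambda t: np.array([0.0, 10.0 * np.sin(5 * t)])
resp = central_difference(M, C, K, f, np.zeros(2), np.zeros(2), 1e-3, 2000)
print(resp[-1])
```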
Declaration of Competing Interest statements were not included in the published version of the following articles that appeared in previous issues of Journal of Automation and Intelligence. The appropriate Declaration of Competing Interest statements, provided by the Authors, are included below. 1. "A survey on computationally efficient neural architecture search" [Journal of Automation and Intelligence, 1 (2022) 100002], 10.1016/j.jai.2022.100002.
Accurate brain tumor classification in medical imaging requires real-time processing and efficient computation, making hardware acceleration essential. Field Programmable Gate Arrays (FPGAs) offer parallelism and reconfigurability, making them well suited for such tasks. In this study, we propose a hardware-accelerated Convolutional Neural Network (CNN) for brain cancer classification, implemented on the PYNQ-Z2 FPGA. Our approach optimizes the first Conv2D layer using different numerical representations: 8-bit fixed-point (INT8), 16-bit fixed-point (FP16), and 32-bit fixed-point (FP32), while the remaining layers run on an ARM Cortex-A9 processor. Experimental results demonstrate that FPGA acceleration significantly outperforms the CPU (Central Processing Unit)-based approach. The obtained results emphasize the critical importance of selecting the appropriate numerical representation for hardware acceleration in medical imaging. On the PYNQ-Z2 FPGA, INT8 achieves a 16.8% reduction in latency and 22.2% power savings compared to FP32, making it ideal for real-time and energy-constrained applications. FP16 offers a strong balance, delivering only a 0.1% drop in accuracy compared to FP32 (94.1% vs. 94.2%) while improving latency by 5% and reducing power consumption by 11.1%. Compared to prior works, the proposed FPGA-based CNN model achieves the highest classification accuracy (94.2%) with a throughput of up to 1.562 FPS, outperforming GPU-based and traditional CPU methods in both accuracy and hardware efficiency. These findings demonstrate the effectiveness of FPGA-based AI acceleration for real-time, power-efficient, and high-performance brain tumor classification, showcasing its practical potential in next-generation medical imaging systems.
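To illustrate the precision trade-off being measured, the sketch below applies a generic symmetric fixed-point quantization to a bank of convolution weights and reports the round-trip error at 8 and 16 bits. The scaling rule and tensor shapes are assumptions; the actual PYNQ-Z2 data path and the fixed-point formats of the accelerated layer are not reproduced here.

```python
import numpy as np

def quantize_symmetric(w, n_bits=8):
    """Symmetric fixed-point quantization with a single per-tensor scale:
    values are mapped to integers in [-(2**(b-1)-1), 2**(b-1)-1]."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    w_q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return w_q, scale

# toy usage: quantize a 3x3 conv kernel bank and measure the round-trip error
rng = np.random.default_rng(0)
w = rng.standard_normal((16, 3, 3, 3)).astype(np.float32)
w_q8, s8 = quantize_symmetric(w, 8)
w_q16, s16 = quantize_symmetric(w, 16)
print("8-bit  max error:", np.abs(w - w_q8 * s8).max())
print("16-bit max error:", np.abs(w - w_q16 * s16).max())
```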
In order to improve the energy efficiency of large-scale data centers, a virtual machine (VM) deployment algorithm called the three-threshold energy saving algorithm (TESA), which is based on the linear relation between energy consumption and (processor) resource utilization, is proposed. In TESA, hosts in data centers are divided into four classes according to load: hosts with light load, proper load, middle load, and heavy load. Under TESA, VMs on lightly or heavily loaded hosts are migrated to another host with proper load, while VMs on properly or middle-loaded hosts are left in place. Then, based on TESA, five kinds of VM selection policies (the minimization of migrations policy based on TESA (MIMT), the maximization of migrations policy based on TESA (MAMT), the highest potential growth policy based on TESA (HPGT), the lowest potential growth policy based on TESA (LPGT), and the random choice policy based on TESA (RCT)) are presented, and MIMT is chosen as the representative policy through experimental comparison. Finally, five research directions are put forward for future energy management. The simulation results indicate that, compared with the single threshold (ST) algorithm and the minimization of migrations (MM) algorithm, MIMT significantly improves the energy efficiency of data centers.
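A minimal sketch of the three-threshold classification and the migration rule described above is given below; the threshold values, host names, and utilization figures are placeholders rather than the parameters used in the paper.

```python
def classify_host(utilization, t_low=0.3, t_mid=0.7, t_high=0.9):
    """Three utilization thresholds split hosts into four load classes
    (threshold values here are illustrative placeholders)."""
    if utilization < t_low:
        return "light"
    if utilization < t_mid:
        return "proper"
    if utilization < t_high:
        return "middle"
    return "heavy"

def migration_candidates(hosts):
    """VMs on lightly or heavily loaded hosts are migration candidates;
    proper- and middle-loaded hosts are left unchanged."""
    return {h: classify_host(u) in ("light", "heavy") for h, u in hosts.items()}

hosts = {"h1": 0.15, "h2": 0.55, "h3": 0.82, "h4": 0.95}
print({h: classify_host(u) for h, u in hosts.items()})
print(migration_candidates(hosts))
```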
The main ideas in the development of the solvent extraction mixer settler have focused on achieving clean phase separation, minimizing the loss of reagents, and decreasing the surface area of the settlers. The role of baffles in a mechanically agitated vessel is to ensure even distribution, reduce settler turbulence, promote the stability of the power drawn by the impeller, and prevent swirling and vortexing of the liquid, thus greatly improving liquid mixing. Inserting an appropriate number of baffles clearly improves the extent of liquid mixing; however, excessive baffling interrupts mixing and lengthens the mixing time. Computational fluid dynamics (CFD) provides a tool for obtaining detailed information on fluid flow (hydrodynamics), which is necessary for modeling subprocesses in the mixer settler. A total of 54 final CFD runs were carried out, representing different combinations of variables such as the number of baffles, density, and impeller speed. The CFD data show that the amount of separation increases with increasing baffle number and decreasing impeller speed.
Seismic finite-difference (FD) modeling suffers from numerical dispersion, including both temporal and spatial dispersion, which can decrease the accuracy of the numerical modeling. To improve the accuracy and efficiency of conventional numerical modeling, I develop a new seismic modeling method by combining the FD scheme with a numerical dispersion suppression neural network (NDSNN). This method involves the following steps. First, a training data set composed of a small number of wavefield snapshots is generated: low-accuracy and high-accuracy wavefield snapshots are paired, where the low-accuracy snapshots exhibit obvious numerical dispersion, both temporal and spatial. Second, the NDSNN is trained until the network converges to simultaneously suppress the temporal and spatial dispersion. Third, the entire set of low-accuracy wavefield data is computed quickly using FD modeling with a large time step and a coarse grid. Fourth, the NDSNN is applied to the entire set of low-accuracy wavefield data to suppress the numerical dispersion, including both the temporal and spatial dispersion. Numerical modeling examples verify the effectiveness of the proposed method in improving computational accuracy and efficiency.
Fail-safe topology optimization is valuable for ensuring that optimized structures remain operable even under damaged conditions. By selectively removing material stiffness in patches with a fixed shape, the complex phenomenon of local failure is modeled in fail-safe topology optimization. In this work, we first conduct a comprehensive study to explore the impact of patch size, shape, and distribution on the robustness of fail-safe designs. The findings suggest that larger sizes and a finer distribution of material patches can yield more robust fail-safe structures. However, a finer patch distribution can significantly increase computational costs, particularly for 3D structures. To keep the computational effort tractable, an efficient fail-safe topology optimization approach is established based on the framework of multi-resolution topology optimization (MTOP). Within the MTOP framework, the extended finite element method is introduced to establish a decoupling connection between the analysis mesh and the topology description model. Numerical examples demonstrate that the developed methodology is 2 times faster for 2D problems and over 25 times faster for 3D problems than traditional fail-safe topology optimization, while maintaining similar levels of robustness.
Simulations of contact problems involving at least one plastic solid may be costly due to their strong nonlinearity and stability requirements. In this work, we develop an explicit asynchronous variational integrator (AVI) for inelastic, frictionless contact problems involving a plastic solid. The AVI assigns each element in the mesh an independent time step and updates the solution at the elements and nodes asynchronously. This asynchrony makes the AVI highly efficient in solving such bi-material problems. Taking advantage of the AVI, the constitutive update is performed locally in one element at a time, and contact constraints are also enforced on only one element. The time step of the contact element is subdivided into multiple segments, and the fields are updated accordingly. During a contact event, only one element involving a few degrees of freedom is considered, leading to high efficiency. The proposed formulation is first verified with a pure elastodynamics benchmark and then applied to a contact problem involving an elastoplastic solid with non-associative volumetric hardening. The numerical results indicate that the AVI exhibits excellent energy behavior and high computational efficiency.
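The event-driven bookkeeping behind an asynchronous integrator can be pictured with a priority queue of per-element update times, as in the sketch below. The element time steps and end time are made up, and the actual AVI mechanics (variational update, constitutive integration, contact enforcement) are deliberately omitted; only the asynchronous scheduling idea is shown.

```python
import heapq

def asynchronous_march(dt_elem, t_end):
    """Schematic asynchronous schedule: each element keeps its own time step,
    and the element with the smallest next update time is popped and advanced."""
    queue = [(dt, e) for e, dt in enumerate(dt_elem)]   # (next update time, element id)
    heapq.heapify(queue)
    schedule = []
    while queue:
        t, e = heapq.heappop(queue)
        if t > t_end:
            continue                                    # element finished
        schedule.append((round(t, 6), e))               # here: advance element e, update its nodes
        heapq.heappush(queue, (t + dt_elem[e], e))
    return schedule

# two coarse elements and one fine element with a 4x smaller step
print(asynchronous_march([0.4, 0.4, 0.1], t_end=0.8))
```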
This paper considers the automatic carrier landing problem of carrier-based aircraft subjected to constraints, deck motion, measurement noises, and unknown disturbances. An iterative model predictive control (MPC) strategy with constraints is proposed for automatic landing control of the aircraft. First, a long short-term memory (LSTM) neural network is used to calculate the adaptive reference trajectories of the aircraft. Then, the Sage-Husa adaptive Kalman filter and a disturbance observer are introduced to design the composite compensator. Second, an iterative optimization algorithm is presented to quickly solve the receding-horizon optimal control problem of MPC based on Lagrange's theory. Moreover, some sufficient conditions are derived to guarantee the stability of the landing system in closed loop with the MPC. Finally, the simulation results for the F/A-18A aircraft show that, compared with conventional MPC, the presented MPC strategy improves computational efficiency by nearly 56% and satisfies the control performance requirements of carrier landing.
Hyperparameter tuning is a key step in developing high-performing machine learning models, but searching large hyperparameter spaces requires extensive computation using standard sequential methods. This work analyzes the performance gains from parallel versus sequential hyperparameter optimization. Using scikit-learn's RandomizedSearchCV, this project tuned a Random Forest classifier for fake news detection via randomized grid search. Setting n_jobs to -1 enabled full parallelization across CPU cores. Results show the parallel implementation achieved over 5× faster CPU times and 3× faster total run times compared to sequential tuning. However, test accuracy dropped slightly, from 99.26% sequentially to 99.15% with parallelism, indicating a trade-off between evaluation efficiency and model performance. Still, the significant computational gains allow more extensive hyperparameter exploration within reasonable timeframes, outweighing the small accuracy decrease. Further analysis could better quantify this trade-off across different models, tuning techniques, tasks, and hardware.
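The setup described above maps directly onto scikit-learn; the sketch below shows a parallel randomized search over a Random Forest with n_jobs=-1. The synthetic data and the parameter grid are placeholders standing in for the fake-news feature matrix and the search space actually used in the study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# synthetic stand-in for the fake-news feature matrix used in the study
X, y = make_classification(n_samples=5000, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

param_dist = {
    "n_estimators": [100, 200, 400],
    "max_depth": [None, 10, 20, 40],
    "min_samples_split": [2, 5, 10],
    "max_features": ["sqrt", "log2"],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_dist,
    n_iter=20,
    cv=5,
    n_jobs=-1,      # -1 parallelizes candidate evaluation across all CPU cores
    random_state=0,
)
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))
```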
It is important to calculate the reachable domain (RD) of a manned lunar mission to evaluate whether a lunar landing site can be reached by the spacecraft. In this paper, the RD of free return orbits is quickly evaluated and calculated via classification and regression neural networks. An efficient database-generation method is developed for obtaining eight types of free return orbits, and the RD is then defined by the orbit's inclination and right ascension of the ascending node (RAAN) at the perilune. A classification neural network and a regression network are trained respectively: the former classifies the type of the RD, and the latter calculates the inclination and RAAN of the RD. The simulation results show that both neural networks are well trained: the classification model has an accuracy of more than 99%, and the mean square error of the regression model is less than 0.01° on the test set. Moreover, a serial strategy is proposed to combine the two surrogate models, and a recognition tool is built to evaluate whether a lunar site can be reached. The proposed deep learning method shows superior computational efficiency compared with the traditional double two-body model.
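A toy sketch of the serial classification-then-regression surrogate idea is given below; the input parameters, labels, and network sizes are fabricated placeholders, not the trained models or the free-return-orbit database from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

# Stage 1 predicts the type of reachable domain from mission parameters;
# stage 2 predicts the perilune inclination and RAAN.  All data are synthetic.
rng = np.random.default_rng(0)
params = rng.uniform(size=(2000, 4))                       # fake mission parameters
rd_type = (params[:, 0] + params[:, 1] > 1.0).astype(int)  # fake orbit-type label
inc_raan = np.c_[90 * params[:, 2], 360 * params[:, 3]]    # fake (inclination, RAAN)

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
reg = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
clf.fit(params, rd_type)
reg.fit(params, inc_raan)

# serial use: classify the RD type first, then regress its geometry
query = rng.uniform(size=(1, 4))
print("RD type:", clf.predict(query)[0], "inclination/RAAN:", reg.predict(query)[0])
```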