Huge calculation burden and difficulty in convergence are the two central conundrums of nonlinear topology optimization (NTO). To this end, a multi-resolution nonlinear topology optimization (MR-NTO) method is proposed based on the multi-resolution design strategy (MRDS) and the additive hyperelasticity technique (AHT), taking into account both geometric and material nonlinearity. The MR-NTO strategy is established in the framework of the solid isotropic material with penalization (SIMP) method, while the Neo-Hookean hyperelastic material model characterizes the material nonlinearity. A coarse analysis grid is employed for the finite element (FE) calculation, and a fine material grid is applied to describe the material configuration. To alleviate the convergence problem and reduce the complexity of the sensitivity calculation, the software ANSYS coupled with AHT is utilized to perform the nonlinear FE calculation. A strategy for redistributing strain energy is proposed during the sensitivity analysis, i.e., transforming the strain energy of the analysis element into that of the material element, covering both Neo-Hookean and second-order Yeoh materials. Numerical examples highlight three distinct advantages of the proposed method: it can (1) significantly improve computational efficiency, (2) remedy the convergence difficulties that AHT-based NTO may encounter, especially for 3D problems, and (3) successfully handle high-resolution 3D complex NTO problems on a personal computer.
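The SIMP interpolation at the core of the framework can be sketched in a few lines; the modulus bounds and penalization exponent below are common illustrative defaults, not the paper's settings.

```python
# Minimal sketch of SIMP material interpolation (illustrative parameters,
# not the paper's settings).
def simp_modulus(rho, e0=1.0, e_min=1e-9, penal=3.0):
    """Penalized Young's modulus for an element density rho in [0, 1]."""
    return e_min + rho ** penal * (e0 - e_min)

def simp_sensitivity(rho, unit_strain_energy, e0=1.0, e_min=1e-9, penal=3.0):
    """Derivative of compliance w.r.t. rho for an element whose strain energy
    at unit modulus is unit_strain_energy (sign per the usual SIMP derivation)."""
    return -penal * rho ** (penal - 1) * (e0 - e_min) * unit_strain_energy
```

Penalization (penal > 1) makes intermediate densities structurally inefficient, steering the optimizer toward near-0/1 designs.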
As a connection between the process and the circuit design, the device model is greatly desired for emerging devices such as the double-gate MOSFET. Time efficiency is one of the most important requirements for device modeling. In this paper, an improvement to the computational efficiency of the drain current model for double-gate MOSFETs is presented, and different calculation methods are compared and discussed. The results show that the calculation speed of the improved model is substantially enhanced. A two-dimensional device simulation is performed to verify the improved model. Furthermore, the model is implemented in the HSPICE circuit simulator in Verilog-A for practical application.
The four-decade quest for synthesizing ambient-stable polymeric nitrogen, a promising high-energy-density material, remains an unsolved challenge in materials science. We develop a multi-stage computational strategy employing density functional tight-binding-based rapid screening combined with density functional theory refinement and global structure searching, effectively bridging computational efficiency with quantum accuracy. This integrated approach identifies four novel polymeric nitrogen phases (Fddd, P3221, I4m2, and P6522) that are thermodynamically stable at ambient pressure. Remarkably, the helical P6522 configuration demonstrates exceptional thermal resilience up to 1500 K, representing a predicted polymeric nitrogen structure that maintains stability under both atmospheric pressure and high-temperature extremes. Our methodology establishes a paradigm-shifting framework for the accelerated discovery of metastable energetic materials, resolving critical bottlenecks in theoretical predictions while providing experimentally actionable targets for polymeric nitrogen synthesis.
The finite element (FE) method is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases, so the scale of the numerical substructure model can increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method, and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes obvious as the mass ratio increases, and delay compensation methods can reduce the relative error of the displacement peak value to less than 5% even under a large time step and large time delay.
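For a single-DOF system, the central difference method compared in the study takes the standard textbook form below; this is a generic sketch, not the paper's substructure implementation.

```python
import math

def cdm_free_vibration(m, c, k, u0, v0, dt, nsteps):
    """Central difference method for the SDOF system m*u'' + c*u' + k*u = 0,
    started with the standard fictitious step u_{-1} = u0 - dt*v0 + dt^2*a0/2."""
    a0 = (-c * v0 - k * u0) / m
    u_prev = u0 - dt * v0 + 0.5 * dt * dt * a0
    u = u0
    for _ in range(nsteps):
        # (m/dt^2 + c/2dt) u_{n+1} = -(k - 2m/dt^2) u_n - (m/dt^2 - c/2dt) u_{n-1}
        rhs = -(k - 2.0 * m / dt ** 2) * u - (m / dt ** 2 - c / (2.0 * dt)) * u_prev
        u_next = rhs / (m / dt ** 2 + c / (2.0 * dt))
        u_prev, u = u, u_next
    return u

# One period of an undamped oscillator with natural frequency 2*pi rad/s.
u_end = cdm_free_vibration(1.0, 0.0, (2.0 * math.pi) ** 2, 1.0, 0.0, 1e-3, 1000)
```

Being explicit, CDM is only conditionally stable (dt must resolve the highest frequency), which is why its cost per step matters so much in real-time use.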
In order to improve the energy efficiency of large-scale data centers, a virtual machine (VM) deployment algorithm called the three-threshold energy saving algorithm (TESA), based on the linear relation between energy consumption and (processor) resource utilization, is proposed. In TESA, hosts in data centers are divided into four classes according to load: lightly loaded, properly loaded, moderately loaded, and heavily loaded hosts. Under TESA, VMs on a lightly or heavily loaded host are migrated to another host with proper load, while VMs on a properly or moderately loaded host are kept in place. Then, based on TESA, five VM selection policies (minimization of migrations policy based on TESA (MIMT), maximization of migrations policy based on TESA (MAMT), highest potential growth policy based on TESA (HPGT), lowest potential growth policy based on TESA (LPGT), and random choice policy based on TESA (RCT)) are presented, and MIMT is chosen as the representative policy through experimental comparison. Finally, five research directions on future energy management are put forward. Simulation results indicate that, compared with the single threshold (ST) algorithm and the minimization of migrations (MM) algorithm, MIMT significantly improves the energy efficiency of data centers.
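The three-threshold classification and migration rule described above can be sketched as follows; the threshold values are hypothetical, since the abstract does not specify them.

```python
def classify_host(util, t_low=0.3, t_mid=0.6, t_high=0.85):
    """Three thresholds (hypothetical values) split hosts into four load classes."""
    if util < t_low:
        return "light"
    if util < t_mid:
        return "proper"
    if util < t_high:
        return "middle"
    return "heavy"

def should_migrate(util):
    """Per the abstract: VMs leave lightly and heavily loaded hosts,
    while VMs on properly and moderately loaded hosts stay put."""
    return classify_host(util) in ("light", "heavy")
```

Evacuating lightly loaded hosts lets them be switched off, while relieving heavily loaded hosts avoids performance degradation; both reduce energy per unit of work.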
The main ideas in the development of the solvent extraction mixer settler have focused on achieving clean phase separation, minimizing the loss of reagents, and decreasing the surface area of the settlers. The role of baffles in a mechanically agitated vessel is to ensure even distribution, reduce settler turbulence, promote the stability of the power drawn by the impeller, and prevent swirling and vortexing of liquid, thus greatly improving liquid mixing. Inserting an appropriate number of baffles clearly improves the extent of liquid mixing; however, excessive baffling interrupts mixing and lengthens the mixing time. Computational fluid dynamics (CFD) provides a tool for determining detailed information on fluid flow (hydrodynamics), which is necessary for modeling subprocesses in the mixer settler. A total of 54 final CFD runs were carried out, representing different combinations of variables such as the number of baffles, density, and impeller speed. The CFD data show that the amount of separation increases with increasing baffle number and decreasing impeller speed.
In this article, the secure computation efficiency (SCE) problem is studied in a massive multiple-input multiple-output (mMIMO)-assisted mobile edge computing (MEC) network. We first derive the secure transmission rate based on mMIMO under imperfect channel state information. Based on this, the SCE maximization problem is formulated by jointly optimizing the local computation frequency, the offloading time, the downloading time, and the transmit power of the users and the base station. Since the formulated problem is difficult to solve directly, we first transform the fractional objective function into subtractive form via the Dinkelbach method. Next, the problem is transformed into a convex one by applying the successive convex approximation technique, and an iterative algorithm is proposed to obtain the solutions. Finally, simulations are conducted to show that the performance of the proposed schemes is superior to that of the other schemes.
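The Dinkelbach step mentioned above replaces a fractional objective f(x)/g(x) with the parametric subtractive problem max f(x) - lam*g(x); here is a toy scalar instance, with a brute-force inner solver standing in for the paper's convex subproblem.

```python
def dinkelbach(f, g, solve_inner, tol=1e-8, max_iter=100):
    """Maximize f(x)/g(x) (g > 0) by repeatedly solving the parametric
    problem max_x f(x) - lam*g(x) and updating lam to the achieved ratio."""
    lam = 0.0
    x = None
    for _ in range(max_iter):
        x = solve_inner(lam)            # inner solver for the subtractive problem
        if f(x) - lam * g(x) < tol:     # F(lam) -> 0 exactly at the optimal ratio
            break
        lam = f(x) / g(x)
    return lam, x

# Toy instance: maximize x / (1 + x^2) on [0, 3]; the optimum is x = 1, ratio 0.5.
xs = [i / 1000.0 for i in range(3001)]
f = lambda x: x
g = lambda x: 1.0 + x * x
solve_inner = lambda lam: max(xs, key=lambda x: f(x) - lam * g(x))
lam_opt, x_opt = dinkelbach(f, g, solve_inner)
```

The attraction of the transformation is that each inner problem is no longer fractional, so convex tools (here, a grid search stand-in) apply directly.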
In this paper, a new technique is introduced to construct higher-order iterative methods for solving nonlinear systems. The order of convergence of some iterative methods can be improved by three at the cost of only one additional evaluation of the function in each step. Furthermore, some new efficient methods with a higher order of convergence are obtained by using only a single matrix inversion in each iteration. The convergence properties and computational efficiency of these new methods are analyzed and verified on several numerical problems. By comparison, the new schemes are more efficient than the corresponding existing ones, particularly for large problem sizes.
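One standard way to raise the order while paying for only a single matrix inversion per iteration is to freeze the Jacobian (here, its inverse) across several cheap substeps; this scalar sketch illustrates the general idea, not the paper's exact scheme.

```python
def frozen_newton(F, J, x, substeps=3):
    """One outer iteration: invert the Jacobian once, then take several cheap
    substeps reusing that inverse (each substep costs one extra F-evaluation)."""
    Jinv = 1.0 / J(x)          # scalar stand-in for a single matrix inversion
    for _ in range(substeps):
        x = x - Jinv * F(x)
    return x

# Example: solve x^2 - 2 = 0 starting from x0 = 1.5.
root = frozen_newton(lambda t: t * t - 2.0, lambda t: 2.0 * t, 1.5)
```

For an n-dimensional system the inversion costs O(n^3) while each extra function evaluation is far cheaper, which is why such schemes pay off for large problem sizes.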
Feature selection (FS) plays a crucial role in medical imaging by reducing dimensionality, improving computational efficiency, and enhancing diagnostic accuracy. Traditional FS techniques, including filter, wrapper, and embedded methods, have been widely used but often struggle with high-dimensional and heterogeneous medical imaging data. Deep learning-based FS methods, particularly convolutional neural networks (CNNs) and autoencoders, have demonstrated superior performance but lack interpretability. Hybrid approaches that combine classical and deep learning techniques have emerged as a promising solution, offering improved accuracy and explainability. Furthermore, integrating multi-modal imaging data (e.g., magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US)) poses additional challenges in FS, necessitating advanced feature fusion strategies. Multi-modal feature fusion combines information from different imaging modalities to improve diagnostic accuracy. Recently, quantum computing has gained attention as a revolutionary approach for FS, with the potential to handle high-dimensional medical data more efficiently. This systematic literature review comprehensively examines classical, deep learning (DL), hybrid, and quantum-based FS techniques in medical imaging. Key outcomes include a structured taxonomy of FS methods, a critical evaluation of their performance across modalities, and identification of core challenges such as computational burden, interpretability, and ethical considerations. Future research directions, such as explainable AI (XAI), federated learning, and quantum-enhanced FS, are also emphasized to bridge the current gaps. This review provides actionable insights for developing scalable, interpretable, and clinically applicable FS methods in the evolving landscape of medical imaging.
As data becomes increasingly complex, measuring dependence among variables is of great interest. However, most existing measures of dependence are limited to the Euclidean setting and cannot effectively characterize complex relationships. In this paper, we propose a novel method for constructing independence tests for random elements in Hilbert spaces, which includes functional data as a special case. Our approach uses the distance covariance of random projections to build a test statistic that is computationally efficient and exhibits strong power performance. We prove the equivalence between testing for independence on the original and the projected covariates, bridging the gap between measures of independence in Euclidean spaces and in Hilbert spaces. Implementation of the test involves calibration by permutation and combining several p-values from different projections using the false discovery rate method. Simulation studies and real data examples illustrate the finite-sample properties of the proposed method under a variety of scenarios.
Due to their resource constraints, Internet of Things (IoT) devices require authentication mechanisms that are both secure and efficient. Elliptic curve cryptography (ECC) meets these needs by providing strong security with shorter key lengths, which significantly reduces the computational overhead of authentication algorithms. This paper introduces a novel ECC-based IoT authentication system utilizing our previously proposed efficient mapping and reverse mapping operations on elliptic curves over prime fields. By reducing reliance on costly point multiplication, the proposed algorithm significantly improves execution time, storage requirements, and communication cost across varying security levels. The proposed authentication protocol demonstrates superior performance when benchmarked against relevant ECC-based schemes, achieving reductions of up to 35.83% in communication overhead, 62.51% in device-side storage consumption, and 71.96% in computational cost. The security robustness of the scheme is substantiated through formal analysis using the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool and Burrows-Abadi-Needham (BAN) logic, complemented by a comprehensive informal analysis that confirms its resilience against various attack models, including impersonation, replay, and man-in-the-middle attacks. Empirical evaluation under simulated conditions demonstrates notable gains in efficiency and security. While these results indicate the protocol's strong potential for scalable IoT deployments, further validation on real-world embedded platforms is required to confirm its applicability and robustness at scale.
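The costly point multiplication that the scheme reduces reliance on is classically computed by double-and-add; a minimal sketch over a tiny prime field (toy curve parameters, nowhere near the key sizes used for real IoT security):

```python
def inv_mod(a, p):
    """Modular inverse via Fermat's little theorem (p prime)."""
    return pow(a, p - 2, p)

def point_add(P, Q, a, p):
    """Affine addition on y^2 = x^3 + a*x + b over F_p; None is the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                      # P + (-P) = infinity
    if P == Q:                           # tangent slope for doubling
        s = (3 * P[0] * P[0] + a) * inv_mod(2 * P[1], p) % p
    else:                                # chord slope for distinct points
        s = (Q[1] - P[1]) * inv_mod(Q[0] - P[0], p) % p
    x = (s * s - P[0] - Q[0]) % p
    return (x, (s * (P[0] - x) - P[1]) % p)

def scalar_mult(k, P, a, p):
    """Double-and-add computation of k*P, the operation such schemes minimize."""
    R = None
    while k:
        if k & 1:
            R = point_add(R, P, a, p)
        P = point_add(P, P, a, p)
        k >>= 1
    return R
```

Each scalar multiplication costs on the order of log2(k) doublings plus additions, so protocols that cut the number of such operations directly cut device-side computation.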
The growing demand for deployable phased-array antennas in space applications requires innovative solutions to optimize folded configurations and reduce computational complexity. Existing methods face limitations due to the low efficiency of traditional algorithms and the lack of effective constraint strategies, resulting in excessive solution spaces. This study proposes forward Shannon entropy wave function collapse (FSE-WFC), a novel method for designing panel configurations of one-dimensional deployable phased-array antennas using the wave function collapse algorithm. It addresses two key challenges: the excessive number of panel layout options and high computational costs. First, the method analyzes the relationship between the panel connection positions and the folded form to impose constraints on panel combinations. It then calculates the information entropy of the potential configurations to identify low-entropy solutions, thereby narrowing the solution space. Finally, boundary constraints and interference checks are applied to refine the results. This approach significantly reduces the calculation time while improving the folding state and envelope volume of the antenna. The results show that the FSE-WFC algorithm reduces the envelope area by 18.3% for a 350 mm high satellite and 9.0% for a 600 mm high satellite, while satisfying the connectivity constraints. As the first application of the wave function collapse algorithm to antenna folding design, this study introduces an information entropy-based constraint generation method that provides an efficient solution for deployable antenna optimization.
The local time-stepping (LTS) algorithm is an adaptive method that adjusts the time step by selecting suitable intervals for different regions based on the spatial scale of each cell and the water depth and flow velocity between cells. The method can be optimized by calculating the maximum power-of-two increment of the global time step in each region of the domain, allowing the optimal time step to be approached throughout the grid. To verify the acceleration and accuracy of LTS in storm surge simulations, we developed a model to simulate astronomical storm surges along the southern coast of China. This model employs the shallow water equations as the governing equations, discretized with the finite volume method, with fluxes calculated by the Roe solver. By comparing the simulation results of the traditional global time-stepping algorithm with those of the LTS algorithm, we find that the latter fits the measured data better. Taking the simulation of Typhoon Sally in 1996 as an example, we show that, compared with the traditional global time-stepping algorithm, the LTS algorithm reduces the computation time by 2.05 h and increases computational efficiency by a factor of 2.64 while maintaining good accuracy.
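Snapping each cell's local step to a power-of-two fraction of the global step, as described above, can be sketched as:

```python
import math

def lts_level(dt_local, dt_global):
    """Return (m, dt) where dt = dt_global / 2**m is the largest power-of-two
    fraction of the global step that does not exceed the cell's stable local step."""
    m = max(0, math.ceil(math.log2(dt_global / dt_local)))
    return m, dt_global / 2 ** m
```

A cell at level m then advances 2**m substeps for every global step, and neighboring cells at different levels synchronize at their shared step multiples.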
Declaration of Competing Interest statements were not included in the published version of the following articles that appeared in previous issues of Journal of Automation and Intelligence. The appropriate Declaration of Competing Interest statements, provided by the Authors, are included below. 1. "A survey on computationally efficient neural architecture search" [Journal of Automation and Intelligence, 1 (2022) 100002]. DOI: 10.1016/j.jai.2022.100002.
Contact detection is the most time-consuming stage in 3D discontinuous deformation analysis (3D-DDA) computation, and improving its efficiency is beneficial for applying 3D-DDA to large-scale computing. In this study, aiming at the continuous-discontinuous simulation of 3D-DDA, a highly efficient contact detection strategy is proposed. Firstly, the global direct search (GDS) method is integrated into the 3D-DDA framework to address intricate contact scenarios. Subsequently, all geometric elements, including blocks, faces, edges, and vertices, are divided into searchable and unsearchable parts. Contacts between unsearchable geometric elements are directly inherited, while only searchable geometric elements are involved in contact detection. This strategy significantly reduces the number of geometric elements involved in contact detection, thereby markedly enhancing computational efficiency. Several examples demonstrate the accuracy and efficiency of the improved 3D-DDA method. Rock pillars with different mesh sizes are simulated under self-weight; the deformation and stress are consistent with the analytical results, and the smaller the mesh size, the higher the accuracy, with a maximum speedup ratio of 38.46 for this case. Furthermore, a Brazilian splitting test on discs with different flaws is conducted; the failure pattern of the samples is consistent with the results obtained by other methods and experiments, and the maximum speedup ratio is 266.73. Finally, a large-scale impact test is performed, and an approximately 3.2-fold efficiency gain is obtained. The proposed contact detection strategy significantly improves efficiency when the rock has not completely failed, which makes it well suited for continuous-discontinuous simulation.
Accurate brain tumor classification in medical imaging requires real-time processing and efficient computation, making hardware acceleration essential. Field programmable gate arrays (FPGAs) offer parallelism and reconfigurability, making them well suited for such tasks. In this study, we propose a hardware-accelerated convolutional neural network (CNN) for brain cancer classification, implemented on the PYNQ-Z2 FPGA. Our approach optimizes the first Conv2D layer using different numerical representations: 8-bit fixed-point (INT8), 16-bit fixed-point (FP16), and 32-bit fixed-point (FP32), while the remaining layers run on an ARM Cortex-A9 processor. Experimental results demonstrate that FPGA acceleration significantly outperforms the CPU (central processing unit) based approach. These results emphasize the critical importance of selecting the appropriate numerical representation for hardware acceleration in medical imaging. On the PYNQ-Z2 FPGA, INT8 achieves a 16.8% reduction in latency and 22.2% power savings compared to FP32, making it ideal for real-time and energy-constrained applications. FP16 offers a strong balance, delivering only a 0.1% drop in accuracy compared to FP32 (94.1% vs. 94.2%) while improving latency by 5% and reducing power consumption by 11.1%. Compared to prior works, the proposed FPGA-based CNN model achieves the highest classification accuracy (94.2%) with a throughput of up to 1.562 FPS, outperforming GPU-based and traditional CPU methods in both accuracy and hardware efficiency. These findings demonstrate the effectiveness of FPGA-based AI acceleration for real-time, power-efficient, high-performance brain tumor classification, showcasing its practical potential in next-generation medical imaging systems.
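The trade-off between numerical representations can be illustrated with signed fixed-point quantization; the Q-format split below (for example, 4 fractional bits in an 8-bit word) is an assumption for illustration, not the layer configuration used in the paper.

```python
def quantize_fixed(x, frac_bits, total_bits=8):
    """Round x to a signed fixed-point grid with the given word size,
    saturating at the representable range (illustrative Q-format split)."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    q = max(lo, min(hi, round(x * scale)))   # integer code after rounding/clamping
    return q / scale                          # value the hardware actually represents
```

Narrower words shrink multipliers and memory traffic (hence the latency and power savings reported above), at the price of rounding and saturation error that must stay small enough not to hurt accuracy.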
Piles are long, slender structural elements used to transfer loads from the superstructure through weak strata onto stiffer soils or rocks. For driven piles, the impact of the piling hammer induces compression and tension stresses in the piles. Hence, an important design consideration is to check that the strength of the pile is sufficient to resist the stresses caused by the impact of the pile hammer. Due to its complexity, pile drivability lacks a precise analytical solution with regard to the phenomena involved. In situations where measured data or numerical hypothetical results are available, neural networks stand out in mapping the nonlinear interactions and relationships between a system's predictors and dependent responses. In addition, unlike most computational tools, no assumption about the mathematical relationship between the dependent and independent variables has to be made. Nevertheless, neural networks have been criticized for their long trial-and-error training process, since the optimal configuration is not known a priori. This paper investigates the use of a fairly simple nonparametric regression algorithm known as multivariate adaptive regression splines (MARS) as an alternative to neural networks, to approximate the relationship between the inputs and the dependent response, and to mathematically interpret the relationship between the various parameters. Back-propagation neural network (BPNN) and MARS models are developed for assessing pile drivability in relation to the prediction of maximum compressive stresses (MCS), maximum tensile stresses (MTS), and blows per foot (BPF). A database of more than four thousand piles is utilized for model development and a comparative performance analysis of BPNN and MARS predictions.
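MARS builds its approximation from piecewise-linear hinge (basis) functions; below is a minimal sketch of the basis and of evaluating a hand-assembled univariate model (the coefficients and knots are hypothetical, not fitted values from the pile database).

```python
def hinge(x, knot, sign=1):
    """MARS basis function: max(0, sign * (x - knot))."""
    return max(0.0, sign * (x - knot))

def mars_predict(x, intercept, terms):
    """Evaluate a univariate MARS-style model: intercept plus a weighted sum
    of hinge functions given as (coefficient, knot, sign) triples."""
    return intercept + sum(c * hinge(x, k, s) for c, k, s in terms)
```

Because the fitted model is an explicit sum of such terms, the contribution of each input can be read off directly, which is the interpretability advantage over a trained neural network noted above.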
A multi-scale narrow band correlated-k distribution (MSNBCK) model is developed to simulate infrared radiation (IR) from the exhaust system of a typical aircraft engine. In this model, an approximate approach, instead of the statistically uncorrelated assumption, is used to treat overlapping bands in the gas mixture. It significantly reduces the required computing power by converting the exponential growth of computational cost with the number of participating gas species into linear growth. Besides, the MSNBCK model has a great advantage over conventional methods in that it can estimate each species' contribution to the total gas mixture radiation intensity. Line-by-line (LBL) results, experimental data, and other results from the references are used to evaluate this new model, demonstrating its advantages in terms of accuracy and computing efficiency. By coupling this model and the finite volume method (FVM) into the radiative transfer equation (RTE), a comparative study is conducted to simulate the IR signature of the exhaust system. The results indicate that the wall's IR emission should be considered in both the 3-5 μm and 8-14 μm bands, while the gases' IR emission plays an important role only in the 3-5 μm band. For plume IR radiation, carbon dioxide's emission is much more significant than that of water vapor in both the 3-5 μm and 8-14 μm bands. Especially in the 3-5 μm band, the water vapor's IR signal can even be neglected compared with that of carbon dioxide.
Large and complex structures are divided into hundreds of thousands or even millions of degrees of freedom (DOF) when they are analyzed, which is time-consuming and extremely inefficient. Classical component modal synthesis methods (CMSM) are used extensively, but for many structures in engineering, such as high-rise buildings, aerospace systems, and marine oil platforms, a large amount of calculation is still needed. An improved hybrid interface substructural component modal synthesis method (HISCMSM) is proposed, and the parametric model of the mistuned blisk is built with it. Double coordinating conditions on displacement and force are introduced to ensure computational accuracy. Compared with the overall-structure finite element model method (FEMM), the computational time is shortened by 23.86%–31.56% and the modal deviation is 0.002%–0.157%, which meets the requirement of computational accuracy. The improved method is 4.46%–10.57% faster than the classical HISCMSM, so it outperforms both the classical HISCMSM and the overall-structure FEMM. Meanwhile, the frequency and the modal shape are studied, considering factors including rotational speed, gas temperature, and geometry size. Strong localization of the modal shape's maximum displacement and maximum stress is observed in the second frequency band, and it is most sensitive at the frequency veering, whereas the localization is relatively weak in the first and third frequency bands. The localization of the modal shape is more serious under geometric mistuning. With the improved HISCMSM, the computational efficiency for the mistuned blisk can be increased observably.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 11902085 and 11832009), the Science and Technology Association Young Scientific and Technological Talents Support Project of Guangzhou City (Grant No. SKX20210304), and the Natural Science Foundation of Guangdong Province (Grant No. 2021Al515010320).
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 60876027), the National Science Foundation for Distinguished Young Scholars of China (Grant No. 60925015), the National Basic Research Program of China (Grant No. 2011CBA00600), and the Fundamental Research Project of Shenzhen Science & Technology Foundation, China (Grant No. JC200903160353A).
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11974154 and 12304278); the Taishan Scholars Special Funding for Construction Projects (Grant No. tstp20230622); the Natural Science Foundation of Shandong Province (Grant Nos. ZR2022MA004, ZR2023QA127, and ZR2024QA121); and the Special Foundation of Yantai for Leading Talents above Provincial Level.
Abstract: The four-decade quest to synthesize ambient-stable polymeric nitrogen, a promising high-energy-density material, remains an unsolved challenge in materials science. We develop a multi-stage computational strategy employing density functional tight-binding-based rapid screening combined with density functional theory refinement and global structure searching, effectively bridging computational efficiency with quantum accuracy. This integrated approach identifies four novel polymeric nitrogen phases (Fddd, P3221, I4m2, and P6522) that are thermodynamically stable at ambient pressure. Remarkably, the helical P6522 configuration demonstrates exceptional thermal resilience up to 1500 K, representing a predicted polymeric nitrogen structure that maintains stability under both atmospheric pressure and high-temperature extremes. Our methodology establishes a paradigm-shifting framework for the accelerated discovery of metastable energetic materials, resolving critical bottlenecks in theoretical predictions while providing experimentally actionable targets for polymeric nitrogen synthesis.
Funding: National Natural Science Foundation of China under Grant Nos. 51639006 and 51725901.
Abstract: The finite element (FE) method is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure; in this way, the task execution time (TET) decreases, so the scale of the numerical substructure model can increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method, and the Gui-λ method, are comprehensively compared with respect to their computational time in solving the FE numerical substructure. The CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes obvious as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even under a large time step and large time delay.
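The advantage of the CDM with diagonal damping comes from its fully explicit update, which can be seen in a minimal single-DOF sketch (an illustration of the standard scheme, not the authors' substructure code; function and variable names are ours):

```python
import numpy as np

def central_difference(m, c, k, f, u0, v0, dt, nsteps):
    """Explicit central difference method for m*u'' + c*u' + k*u = f(t).

    For a single DOF (or a diagonal damping matrix in the MDOF case) each
    step needs no equation solving, which is why the CDM is fastest when
    the damping matrix is diagonal.
    """
    a0 = (f(0.0) - c * v0 - k * u0) / m
    u_prev = u0 - dt * v0 + 0.5 * dt ** 2 * a0  # fictitious step u_{-1}
    u = u0
    history = [u0]
    for n in range(nsteps):
        t = n * dt
        lhs = m / dt ** 2 + c / (2.0 * dt)
        rhs = (f(t) - (k - 2.0 * m / dt ** 2) * u
               - (m / dt ** 2 - c / (2.0 * dt)) * u_prev)
        u_prev, u = u, rhs / lhs
        history.append(u)
    return np.array(history)
```

For an undamped oscillator with natural period T the scheme is conditionally stable (dt < T/π) and reproduces free vibration closely for small steps.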
Funding: Project (61272148) supported by the National Natural Science Foundation of China; Project (20120162110061) supported by the Doctoral Programs of the Ministry of Education of China; Project (CX2014B066) supported by the Hunan Provincial Innovation Foundation for Postgraduate, China; Project (2014zzts044) supported by the Fundamental Research Funds for the Central Universities, China.
Abstract: In order to improve the energy efficiency of large-scale data centers, a virtual machine (VM) deployment algorithm called the three-threshold energy-saving algorithm (TESA), based on the linear relation between energy consumption and (processor) resource utilization, is proposed. In TESA, hosts in data centers are divided into four classes according to load: hosts with light load, proper load, middle load, and heavy load. Under TESA, VMs on lightly or heavily loaded hosts are migrated to other hosts with proper load, while VMs on properly or middling loaded hosts are kept in place. Then, based on TESA, five VM selection policies (minimization of migrations policy based on TESA (MIMT), maximization of migrations policy based on TESA (MAMT), highest potential growth policy based on TESA (HPGT), lowest potential growth policy based on TESA (LPGT), and random choice policy based on TESA (RCT)) are presented, and MIMT is chosen as the representative policy through experimental comparison. Finally, five research directions for future energy management are put forward. Simulation results indicate that, compared with the single-threshold (ST) algorithm and the minimization of migrations (MM) algorithm, MIMT significantly improves the energy efficiency of data centers.
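The four-way load classification that drives TESA's migration decisions can be sketched as follows (the threshold values here are placeholders of ours; the paper's actual thresholds are not stated in this abstract):

```python
def classify_host(cpu_util, light=0.2, proper=0.5, heavy=0.8):
    """Three thresholds split hosts into four load classes."""
    if cpu_util < light:
        return "light"    # consolidate: migrate all VMs away
    if cpu_util < proper:
        return "proper"   # keep VMs in place
    if cpu_util < heavy:
        return "middle"   # keep VMs in place
    return "heavy"        # offload: migrate selected VMs away

def needs_migration(cpu_util):
    """TESA migrates VMs only from lightly and heavily loaded hosts."""
    return classify_host(cpu_util) in ("light", "heavy")
```

A VM selection policy such as MIMT would then decide, among the hosts flagged by `needs_migration`, which specific VMs to move so as to minimize the number of migrations.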
Abstract: The main ideas in the development of the solvent extraction mixer-settler have focused on achieving clean phase separation, minimizing the loss of reagents, and decreasing the surface area of the settlers. The role of baffles in a mechanically agitated vessel is to ensure even distribution, reduce settler turbulence, promote the stability of the power drawn by the impeller, and prevent swirling and vortexing of the liquid, thus greatly improving liquid mixing. Inserting an appropriate number of baffles clearly improves the extent of liquid mixing; however, excessive baffling interrupts mixing and lengthens the mixing time. Computational fluid dynamics (CFD) provides a tool for obtaining detailed information on fluid flow (hydrodynamics), which is necessary for modeling subprocesses in the mixer-settler. A total of 54 final CFD runs were carried out, representing different combinations of variables such as the number of baffles, density, and impeller speed. The CFD data show that the amount of separation increases with an increasing number of baffles and a decreasing impeller speed.
Funding: The Natural Science Foundation of Henan Province (No. 232300421097); the Program for Science & Technology Innovation Talents in Universities of Henan Province (Nos. 23HASTIT019 and 24HASTIT038); the China Postdoctoral Science Foundation (Nos. 2023T160596 and 2023M733251); the Open Research Fund of the National Mobile Communications Research Laboratory, Southeast University (No. 2023D11); and the Song Shan Laboratory Foundation (No. YYJC022022003).
Abstract: In this article, the secure computation efficiency (SCE) problem is studied in a massive multiple-input multiple-output (mMIMO)-assisted mobile edge computing (MEC) network. We first derive the secure transmission rate of the mMIMO system under imperfect channel state information. Based on this, the SCE maximization problem is formulated by jointly optimizing the local computation frequency, the offloading time, the downloading time, and the transmit powers of the users and the base station. Because the formulated problem is difficult to solve directly, we first transform the fractional objective function into subtractive form via the Dinkelbach method. Next, the problem is transformed into a convex one by applying the successive convex approximation technique, and an iterative algorithm is proposed to obtain the solution. Finally, simulations are conducted to show that the performance of the proposed scheme is superior to that of the other schemes.
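The Dinkelbach transformation mentioned above replaces a fractional objective max f(x)/g(x) with a sequence of subtractive problems max f(x) − λ·g(x). A minimal sketch with a grid-search inner solver (a toy setup of ours, not the paper's resource-allocation problem):

```python
def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
    """Maximize f(x)/g(x) with g(x) > 0 over a finite candidate set.

    Each iteration solves the parametric problem max_x [f(x) - lam*g(x)]
    and updates lam = f(x*)/g(x*); the optimum is reached when the
    parametric objective value drops to zero.
    """
    lam = 0.0
    x_best = candidates[0]
    for _ in range(max_iter):
        x_best = max(candidates, key=lambda x: f(x) - lam * g(x))
        gap = f(x_best) - lam * g(x_best)
        lam = f(x_best) / g(x_best)
        if abs(gap) < tol:
            break
    return x_best, lam
```

For example, maximizing x / (x² + 1) over a grid on [0, 3] converges to x* = 1 with ratio 1/2. In the paper's setting the inner subtractive problem is instead convexified by successive convex approximation rather than solved by enumeration.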
Funding: Supported by the National Natural Science Foundation of China (12061048) and the Natural Science Foundation of Jiangxi Province (20232BAB201026 and 20232BAB201018).
Abstract: In this paper, a new technique is introduced to construct higher-order iterative methods for solving nonlinear systems. The order of convergence of some iterative methods can be improved by three at the cost of only one additional evaluation of the function in each step. Furthermore, some new efficient methods with higher order of convergence are obtained by using only a single matrix inversion in each iteration. The convergence properties and computational efficiency of these new methods are analyzed and verified on several numerical problems. By comparison, the new schemes are more efficient than the corresponding existing ones, particularly for large problem sizes.
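A common device behind such "single matrix inversion per iteration" schemes is reusing one Jacobian factorization across several substeps while paying only extra function evaluations. A hedged sketch of this general frozen-Jacobian idea (not necessarily the authors' exact scheme or order gain):

```python
import numpy as np

def frozen_jacobian_solve(F, J, x0, tol=1e-12, max_iter=50):
    """Two-step iteration with a single Jacobian inversion per iteration:

        y  = x - J(x)^{-1} F(x)    (Newton predictor)
        x+ = y - J(x)^{-1} F(y)    (corrector reusing the same Jacobian)

    The extra F-evaluation raises the order above plain Newton while the
    cost per step stays at one matrix factorization.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Jinv = np.linalg.inv(J(x))   # the only inversion this iteration
        y = x - Jinv @ F(x)
        x_new = y - Jinv @ F(y)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

In production code one would keep an LU factorization and call a triangular solve twice instead of forming the explicit inverse; the inverse is used here only to keep the sketch short.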
Abstract: Feature selection (FS) plays a crucial role in medical imaging by reducing dimensionality, improving computational efficiency, and enhancing diagnostic accuracy. Traditional FS techniques, including filter, wrapper, and embedded methods, have been widely used but often struggle with high-dimensional and heterogeneous medical imaging data. Deep learning-based FS methods, particularly convolutional neural networks (CNNs) and autoencoders, have demonstrated superior performance but lack interpretability. Hybrid approaches that combine classical and deep learning techniques have emerged as a promising solution, offering improved accuracy and explainability. Furthermore, integrating multi-modal imaging data (e.g., Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), and Ultrasound (US)) poses additional challenges for FS, necessitating advanced feature fusion strategies. Multi-modal feature fusion combines information from different imaging modalities to improve diagnostic accuracy. Recently, quantum computing has gained attention as a revolutionary approach for FS, with the potential to handle high-dimensional medical data more efficiently. This systematic literature review comprehensively examines classical, deep learning (DL), hybrid, and quantum-based FS techniques in medical imaging. Key outcomes include a structured taxonomy of FS methods, a critical evaluation of their performance across modalities, and identification of core challenges such as computational burden, interpretability, and ethical considerations. Future research directions, such as explainable AI (XAI), federated learning, and quantum-enhanced FS, are also emphasized to bridge the current gaps. This review provides actionable insights for developing scalable, interpretable, and clinically applicable FS methods in the evolving landscape of medical imaging.
Funding: Supported by the National Science Foundation of China (11971433); the Zhejiang Gongshang University "Digital+" Disciplinary Construction Management Project (SZJ2022B004); the Institute for International People-to-People Exchange in Artificial Intelligence and Advanced Manufacturing (CCIPERGZN202439); and the Development Fund for Zhejiang College of Shanghai University of Finance and Economics (2023FZJJ15).
Abstract: As data become increasingly complex, measuring dependence among variables is of great interest. However, most existing measures of dependence are limited to the Euclidean setting and cannot effectively characterize complex relationships. In this paper, we propose a novel method for constructing independence tests for random elements in Hilbert spaces, which includes functional data as a special case. Our approach uses the distance covariance of random projections to build a test statistic that is computationally efficient and exhibits strong power. We prove the equivalence between testing for independence on the original and the projected covariates, bridging the gap between measures of independence in Euclidean spaces and Hilbert spaces. Implementation of the test involves calibration by permutation and combining several p-values from different projections using the false discovery rate method. Simulation studies and real data examples illustrate the finite-sample properties of the proposed method under a variety of scenarios.
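A toy version of the projection device for finite-dimensional data: project each sample onto random unit vectors and apply the univariate sample distance covariance (biased V-statistic form). This sketch is ours and omits the permutation calibration and FDR combination described in the abstract:

```python
import numpy as np

def dcov(x, y):
    """Sample distance covariance (biased V-statistic) of two 1-D samples."""
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    # double-center the pairwise distance matrices
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    return np.sqrt(max((A * B).mean(), 0.0))

def projected_dcov(X, Y, n_proj=20, seed=0):
    """Average dcov over random one-dimensional projections of X and Y,
    reducing a multivariate dependence measure to univariate dcov."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_proj):
        u = rng.standard_normal(X.shape[1]); u /= np.linalg.norm(u)
        v = rng.standard_normal(Y.shape[1]); v /= np.linalg.norm(v)
        vals.append(dcov(X @ u, Y @ v))
    return float(np.mean(vals))
```

For dependent samples the projected statistic is systematically larger than for independent ones, which is what the permutation calibration exploits.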
Abstract: Due to their resource constraints, Internet of Things (IoT) devices require authentication mechanisms that are both secure and efficient. Elliptic curve cryptography (ECC) meets these needs by providing strong security with shorter key lengths, which significantly reduces the computational overhead of authentication algorithms. This paper introduces a novel ECC-based IoT authentication system utilizing our previously proposed efficient mapping and reverse mapping operations on elliptic curves over prime fields. By reducing reliance on costly point multiplication, the proposed algorithm significantly improves execution time, storage requirements, and communication cost across varying security levels. The proposed authentication protocol demonstrates superior performance when benchmarked against relevant ECC-based schemes, achieving reductions of up to 35.83% in communication overhead, 62.51% in device-side storage consumption, and 71.96% in computational cost. The security robustness of the scheme is substantiated through formal analysis using the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool and Burrows-Abadi-Needham (BAN) logic, complemented by a comprehensive informal analysis that confirms its resilience against various attack models, including impersonation, replay, and man-in-the-middle attacks. Empirical evaluation under simulated conditions demonstrates notable gains in efficiency and security. While these results indicate the protocol's strong potential for scalable IoT deployments, further validation on real-world embedded platforms is required to confirm its applicability and robustness at scale.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52105035 and 62203094); the Special Central Funds for Guiding Local Scientific and Technological Development (Grant No. 236Z1801G); the Higher Education Youth Top Talent Project of Hebei Province of China (Grant No. BJK2024042); the Natural Science Foundation of Hebei Province of China (Grant Nos. E2021203109 and F2023501021); and the Graduate Student Innovation Capability Training and Support Project of Hebei Province (Grant No. CXZZBS2024053).
Abstract: The growing demand for deployable phased-array antennas in space applications calls for innovative solutions that optimize the folded configuration and reduce computational complexity. Existing methods are limited by the low efficiency of traditional algorithms and the lack of effective constraint strategies, resulting in excessive solution spaces. This study proposes forward Shannon entropy wave function collapse (FSE-WFC), a novel method for designing panel configurations of one-dimensional deployable phased-array antennas using the wave function collapse algorithm, addressing two key challenges: the excessive number of panel layout options and high computational cost. First, the method analyzes the relationship between the panel connection positions and the folded form to impose constraints on panel combinations. It then calculates the information entropy of the potential configurations to identify low-entropy solutions, thereby narrowing the solution space. Finally, boundary constraints and interference checks are applied to refine the results. This approach significantly reduces the calculation time while improving the folding state and envelope volume of the antenna. The results show that the FSE-WFC algorithm reduces the envelope area by 18.3% for a 350 mm high satellite and 9.0% for a 600 mm high satellite while satisfying the connectivity constraints. As the first application of the wave function collapse algorithm to antenna folding design, this study introduces an information-entropy-based constraint generation method that provides an efficient solution for deployable antenna optimization.
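The low-entropy selection at the heart of a wave function collapse step can be sketched as follows (cell and pattern names are illustrative; the paper's panel-connection constraints are not modeled here):

```python
import math

def shannon_entropy(weights):
    """Shannon entropy (bits) of a cell's remaining candidate patterns."""
    total = sum(weights)
    return -sum((w / total) * math.log2(w / total) for w in weights if w > 0)

def next_cell_to_collapse(cells):
    """WFC observation step: pick the undecided cell with minimum entropy.

    `cells` maps a cell id to a dict of {pattern: weight}; cells with a
    single remaining pattern are already collapsed and are skipped.
    """
    undecided = {cid: pats for cid, pats in cells.items() if len(pats) > 1}
    if not undecided:
        return None  # everything is already decided
    return min(undecided,
               key=lambda cid: shannon_entropy(list(undecided[cid].values())))
```

After collapsing the chosen cell to one pattern, a propagation pass would remove incompatible candidates from its neighbors, and the loop repeats until every cell is decided or a contradiction forces backtracking.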
Funding: National Natural Science Foundation of China (No. 52071306); the Natural Science Foundation of Shandong Province (No. ZR2019MEE050); and the Natural Science Foundation of Zhejiang Province (No. LZ22E090003).
Abstract: The local time-stepping (LTS) algorithm is an adaptive method that adjusts the time step by selecting suitable intervals for different regions, based on the spatial scale of each cell and the water depth and flow velocity between cells. The method can be optimized by restricting each local step to a power-of-two multiple of the global minimum time step, allowing the optimal time step to be approached throughout the grid. To verify the acceleration and accuracy of LTS in storm surge simulations, we developed a model of astronomical tide and storm surge along the southern coast of China. This model employs the shallow water equations as governing equations, discretized with the finite volume method, with fluxes calculated by the Roe solver. Comparing the simulation results of the traditional global time-stepping algorithm with those of the LTS algorithm, we find that the latter fits the measured data better. Taking the simulation of Typhoon Sally in 1996 as an example, the LTS algorithm reduces the computation time by 2.05 h and increases the computational efficiency by a factor of 2.64 relative to the traditional global time-stepping algorithm, while maintaining good accuracy.
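The power-of-two grouping of local steps can be sketched as follows: each cell gets the largest step dt_min·2^m not exceeding its own stability limit, so cells on different levels still meet at common synchronization times (a simplified sketch of the general LTS idea, not the authors' solver):

```python
import math

def lts_levels(dt_stable, dt_min=None):
    """Assign each cell a level m with local step dt_min * 2**m.

    dt_stable: per-cell maximum stable (CFL-limited) time step.
    Returns (levels, steps); every local step divides the largest one,
    so all cells synchronize at shared time points.
    """
    base = min(dt_stable) if dt_min is None else dt_min
    levels = [int(math.floor(math.log2(dt / base))) for dt in dt_stable]
    steps = [base * 2 ** m for m in levels]
    return levels, steps
```

During time marching, a cell at level m is advanced once for every 2^m advances of the level-0 cells, with interface fluxes between levels evaluated at the finer cell's substeps to conserve mass.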
Abstract: Declaration of Competing Interest statements were not included in the published version of the following articles that appeared in previous issues of the Journal of Automation and Intelligence. The appropriate Declaration of Competing Interest statements, provided by the authors, are included below. 1. "A survey on computationally efficient neural architecture search" [Journal of Automation and Intelligence, 1 (2022) 100002], doi: 10.1016/j.jai.2022.100002.
Funding: Financially supported by the National Key R&D Program of China (Grant No. 2023YFC3081200) and the National Natural Science Foundation of China (Grant Nos. U21A20159 and 52179117).
Abstract: Contact detection is the most time-consuming stage in three-dimensional discontinuous deformation analysis (3D-DDA) computation, so improving the efficiency of 3D-DDA is beneficial for its application in large-scale computing. In this study, a highly efficient contact detection strategy is proposed for continuous-discontinuous simulation with 3D-DDA. First, the global direct search (GDS) method is integrated into the 3D-DDA framework to address intricate contact scenarios. Subsequently, all geometric elements, including blocks, faces, edges, and vertices, are divided into searchable and unsearchable parts. Contacts between unsearchable geometric elements are directly inherited, while only searchable geometric elements are involved in contact detection. This strategy significantly reduces the number of geometric elements involved in contact detection, thereby markedly enhancing the computational efficiency. Several examples demonstrate the accuracy and efficiency of the improved 3D-DDA method. Rock pillars with different mesh sizes are simulated under self-weight; the deformation and stress are consistent with the analytical results, and the smaller the mesh size, the higher the accuracy, with a maximum speedup ratio of 38.46 for this case. Furthermore, a Brazilian splitting test on discs with different flaws is conducted. The failure patterns of the samples are consistent with the results obtained by other methods and experiments, and the maximum speedup ratio is 266.73. Finally, a large-scale impact test is performed, yielding approximately a 3.2-fold efficiency gain. The proposed contact detection strategy significantly improves efficiency when the rock has not completely failed, which makes it well suited to continuous-discontinuous simulation.
Funding: Supported by the Northern Border University Researchers Supporting Project (No. NBU-FFR-2025-432-03), Northern Border University, Arar, Saudi Arabia.
Abstract: Accurate brain tumor classification in medical imaging requires real-time processing and efficient computation, making hardware acceleration essential. Field-programmable gate arrays (FPGAs) offer parallelism and reconfigurability, making them well suited for such tasks. In this study, we propose a hardware-accelerated convolutional neural network (CNN) for brain cancer classification, implemented on the PYNQ-Z2 FPGA. Our approach optimizes the first Conv2D layer using different numerical representations: 8-bit fixed point (INT8), 16-bit fixed point (FP16), and 32-bit fixed point (FP32), while the remaining layers run on an ARM Cortex-A9 processor. Experimental results demonstrate that FPGA acceleration significantly outperforms the CPU (central processing unit)-based approach and emphasize the critical importance of selecting the appropriate numerical representation for hardware acceleration in medical imaging. On the PYNQ-Z2 FPGA, INT8 achieves a 16.8% reduction in latency and 22.2% power savings compared with FP32, making it ideal for real-time and energy-constrained applications. FP16 offers a strong balance, delivering only a 0.1% drop in accuracy relative to FP32 (94.1% vs. 94.2%) while improving latency by 5% and reducing power consumption by 11.1%. Compared with prior works, the proposed FPGA-based CNN achieves the highest classification accuracy (94.2%) with a throughput of up to 1.562 FPS, outperforming GPU-based and traditional CPU methods in both accuracy and hardware efficiency. These findings demonstrate the effectiveness of FPGA-based AI acceleration for real-time, power-efficient, and high-performance brain tumor classification, showcasing its practical potential in next-generation medical imaging systems.
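The fixed-point representations compared above trade fractional precision against representable range. A small sketch of signed fixed-point quantization with rounding and saturation (our illustration, independent of the authors' FPGA implementation; the choice of 4 fractional bits below is arbitrary):

```python
import numpy as np

def quantize_fixed(x, total_bits, frac_bits):
    """Quantize to signed fixed-point with `frac_bits` fractional bits,
    rounding to nearest and saturating at the representable range."""
    scale = 2 ** frac_bits
    lo, hi = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    return np.clip(np.round(np.asarray(x) * scale), lo, hi).astype(np.int32)

def dequantize_fixed(q, frac_bits):
    """Map the stored integers back to real values."""
    return np.asarray(q, dtype=np.float64) / 2 ** frac_bits
```

For instance, an 8-bit format with 4 fractional bits covers [-8, 7.9375] in steps of 1/16, which is why INT8 weights need careful scaling, while wider fixed-point formats simply carry more fractional bits at higher latency and power cost.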
Abstract: Piles are long, slender structural elements used to transfer loads from the superstructure through weak strata onto stiffer soils or rocks. For driven piles, the impact of the piling hammer induces compression and tension stresses in the piles; hence, an important design consideration is to check that the strength of the pile is sufficient to resist the stresses caused by the impact of the pile hammer. Due to its complexity, pile drivability lacks a precise analytical solution with regard to the phenomena involved. In situations where measured data or numerical hypothetical results are available, neural networks stand out in mapping the nonlinear interactions and relationships between the system's predictors and dependent responses; in addition, unlike most computational tools, no assumption about the mathematical relationship between the dependent and independent variables has to be made. Nevertheless, neural networks have been criticized for their long trial-and-error training process, since the optimal configuration is not known a priori. This paper investigates the use of a fairly simple nonparametric regression algorithm known as multivariate adaptive regression splines (MARS) as an alternative to neural networks, to approximate the relationship between the inputs and the dependent response and to mathematically interpret the relationship between the various parameters. In this paper, back-propagation neural network (BPNN) and MARS models are developed for assessing pile drivability in relation to the prediction of the maximum compressive stresses (MCS), maximum tensile stresses (MTS), and blows per foot (BPF). A database of more than four thousand piles is utilized for model development and a comparison of performance between the BPNN and MARS predictions.
Abstract: A multi-scale narrow band correlated-k distribution (MSNBCK) model is developed to simulate infrared radiation (IR) from the exhaust system of a typical aircraft engine. In this model, an approximate approach, instead of the statistically uncorrelated assumption, is used to treat overlapping bands in a gas mixture. It significantly reduces the requirement for computing power by converting the exponential growth of computing cost with the number of participating gas species into linear growth. Besides, the MSNBCK model has a notable advantage over conventional methods in that it can estimate each species' contribution to the total gas mixture radiation intensity. Line-by-line (LBL) results, experimental data, and other results in the references are used to evaluate this new model, demonstrating its advantages in terms of accuracy and computational efficiency. By coupling this model and the finite volume method (FVM) into the radiative transfer equation (RTE), a comparative study is conducted to simulate the IR signature of the exhaust system. The results indicate that the wall's IR emission should be considered in both the 3-5 μm and 8-14 μm bands, while the gases' IR emission plays an important role only in the 3-5 μm band. For plume IR radiation, carbon dioxide's emission is much more significant than that of water vapor in both the 3-5 μm and 8-14 μm bands; especially in the 3-5 μm band, the water vapor's IR signal can even be neglected compared with that of carbon dioxide.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 51375032 and 51335003).
Abstract: Large and complex structures are discretized into hundreds of thousands or millions of degrees of freedom (DOF) when analyzed, which is time-consuming and extremely inefficient. Classical component modal synthesis methods (CMSM) are used extensively, but for many engineering structures, such as high-rise buildings, aerospace systems, and marine oil platforms, a large amount of calculation is still needed. An improved hybrid-interface substructural component modal synthesis method (HISCMSM) is proposed, and the parametric model of a mistuned blisk is built with it. Double coordinating conditions on the displacement and the force are introduced to ensure computational accuracy. Compared with the overall-structure finite element model method (FEMM), the computational time is shortened by 23.86%-31.56% while the modal deviation is only 0.002%-0.157%, which meets the accuracy requirement; the improved method is also 4.46%-10.57% faster than the classical HISCMSM. The improved HISCMSM is therefore better than both the classical HISCMSM and the overall-structure FEMM. Meanwhile, the frequency and the modal shape are studied, considering factors including rotational speed, gas temperature, and geometry size. Strong localization of the modal shape's maximum displacement and maximum stress is observed in the second frequency band, and it is most sensitive at the frequency veering, while the localization is relatively weak in the first and third frequency bands; the localization of the modal shape is more serious under geometric-dimension mistuning. With the proposed improved HISCMSM, the computational efficiency for the mistuned blisk can be increased considerably.