Based on the Neumann series and the epsilon-algorithm, an efficient computation method for the dynamic responses of systems with arbitrary time-varying characteristics is investigated. By avoiding the inversion of the equivalent stiffness matrix in each time step, the proposed method reduces computational effort compared with a full analysis by the Newmark method. The validity and applications of the proposed method are illustrated by a 4-DOF spring-mass system with periodically time-varying stiffness and a truss structure with arbitrarily time-varying lumped mass. The results show that the proposed method yields good approximations to the responses obtained by the full Newmark analysis.
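The core trick in this abstract, replacing a per-step matrix inversion with a truncated Neumann series around a constant, pre-factored matrix, can be sketched as follows. The 4-DOF matrices, the number of series terms, and the helper name `neumann_solve` are illustrative assumptions, not the paper's actual data, and the epsilon-algorithm acceleration of the partial sums is omitted:

```python
import numpy as np

def neumann_solve(K0_inv, dK, f, n_terms=8):
    """Approximate (K0 + dK)^-1 @ f via the Neumann series
    sum_k (-K0^-1 dK)^k K0^-1 f, valid when the spectral radius
    of K0^-1 dK is below one. Only matrix-vector products are
    needed once K0 has been inverted (or factored) once."""
    y = K0_inv @ f              # zeroth term of the series
    x = y.copy()
    for _ in range(n_terms - 1):
        y = -K0_inv @ (dK @ y)  # next term, built from the previous one
        x += y
    return x

# hypothetical 4-DOF stiffness with a small time-varying perturbation
rng = np.random.default_rng(0)
K0 = np.diag([4.0, 5.0, 6.0, 7.0]) + 0.5 * np.eye(4, k=1) + 0.5 * np.eye(4, k=-1)
dK = 0.1 * np.diag(rng.random(4))   # perturbation at the current time step
f = np.ones(4)

K0_inv = np.linalg.inv(K0)          # constant part is inverted only once
x_series = neumann_solve(K0_inv, dK, f)
x_exact = np.linalg.solve(K0 + dK, f)
print(np.allclose(x_series, x_exact, atol=1e-8))
```

In a Newmark loop, `K0_inv` would be reused across all time steps while only `dK` changes, which is where the savings over refactoring the equivalent stiffness matrix come from.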
In this paper, the complexity and performance of Auxiliary Vector (AV) based reduced-rank filtering are addressed. The AV filters presented in previous papers have the general form of the sum of the signature vector of the desired signal and a set of weighted AVs, and can be classified into three categories according to the orthogonality of their AVs and the optimality of the AV weight coefficients. The AV filter with orthogonal AVs and optimal weight coefficients has the best performance, but requires considerable computational complexity and suffers from numerically unstable operation. To reduce its computational load while retaining its superior performance, several low-complexity algorithms are proposed to efficiently calculate the AVs and their weight coefficients. The diagonal loading technique is also introduced to solve the numerical instability problem without increasing complexity. The performance of the three types of AV filters is compared through their application to Direct Sequence Code Division Multiple Access (DS-CDMA) systems for interference suppression.
The four-decade quest for synthesizing ambient-stable polymeric nitrogen, a promising high-energy-density material, remains an unsolved challenge in materials science. We develop a multi-stage computational strategy employing density functional tight-binding-based rapid screening combined with density functional theory refinement and global structure searching, effectively bridging computational efficiency with quantum accuracy. This integrated approach identifies four novel polymeric nitrogen phases (Fddd, P3221, I4m2, and P6522) that are thermodynamically stable at ambient pressure. Remarkably, the helical P6522 configuration demonstrates exceptional thermal resilience up to 1500 K, representing a predicted polymeric nitrogen structure that maintains stability under both atmospheric pressure and high-temperature extremes. Our methodology establishes a paradigm-shifting framework for the accelerated discovery of metastable energetic materials, resolving critical bottlenecks in theoretical predictions while providing experimentally actionable targets for polymeric nitrogen synthesis.
Research on efficient computation focuses on special structures of NP-hard problem instances and seeks to provide a reasonable estimate of an instance's computing cost in polynomial time. Based on the theory of combinatorial optimization, by studying cluster partition and cluster complexity measurement in the N-vehicle exploration problem, we build a framework for efficient computation and provide an application of tractability for an NP-hard problem. Three N-vehicle examples show that when the efficient computation mechanism is applied to the N-vehicle problem, decision makers can obtain, through polynomially many steps of tractability analysis, the computing cost of searching for the optimal solution before the actual calculation.
In this article, the secure computation efficiency (SCE) problem is studied in a massive multiple-input multiple-output (mMIMO)-assisted mobile edge computing (MEC) network. We first derive the secure transmission rate based on the mMIMO channel under imperfect channel state information. Based on this, the SCE maximization problem is formulated by jointly optimizing the local computation frequency, the offloading time, the downloading time, and the transmit power of the users and the base station. Since the formulated problem is difficult to solve directly, we first transform the fractional objective function into subtractive form via the Dinkelbach method. Next, the problem is transformed into a convex one by applying the successive convex approximation technique, and an iterative algorithm is proposed to obtain the solutions. Finally, simulations show that the performance of the proposed schemes is superior to that of the other schemes.
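The Dinkelbach method mentioned above turns a fractional objective into a sequence of subtractive subproblems: solve max f(x) - lam*g(x), update lam to the achieved ratio, and repeat until the subproblem's optimum is zero. A minimal sketch on a toy scalar ratio over a finite grid; the functions here are invented stand-ins, not the paper's SCE objective:

```python
import numpy as np

def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
    """Maximize f(x)/g(x) (g > 0) over a finite candidate set by the
    Dinkelbach transform: each step solves the subtractive problem
    max_x f(x) - lam*g(x), then sets lam to the new ratio. At the
    optimum the subtractive maximum is zero."""
    lam = 0.0
    for _ in range(max_iter):
        vals = f(candidates) - lam * g(candidates)
        x = candidates[np.argmax(vals)]
        if vals.max() < tol:          # F(lam) ~ 0 -> lam is the optimal ratio
            return x, lam
        lam = float(f(x) / g(x))      # ratio update
    return x, lam

xs = np.linspace(0.0, 4.0, 4001)
f = lambda x: np.log1p(x)             # stand-in "rate" numerator
g = lambda x: 0.5 + 0.25 * x**2       # stand-in "power cost" denominator
x_opt, ratio = dinkelbach(f, g, xs)
brute = np.max(f(xs) / g(xs))         # brute-force check of the best ratio
print(abs(ratio - brute) < 1e-6)
```

In the paper's setting the inner subtractive problem is itself non-convex and is handled by successive convex approximation; here the grid search plays that role.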
Practical real-world scenarios such as the Internet, social networks, and biological networks present the challenges of data scarcity and complex correlations, which limit the applications of artificial intelligence. The graph structure is a typical tool used to formulate such correlations; however, it is incapable of modeling high-order correlations among different objects in systems, and thus cannot fully convey the intricate correlations among objects. Confronted with these two challenges, hypergraph computation models high-order correlations among data, knowledge, and rules through hyperedges and leverages these high-order correlations to enhance the data. Additionally, hypergraph computation achieves collaborative computation using data and high-order correlations, thereby offering greater modeling flexibility. In particular, we introduce three types of hypergraph computation methods: ① hypergraph structure modeling, ② hypergraph semantic computing, and ③ efficient hypergraph computing. We then specify how to adopt hypergraph computation in practice by focusing on specific tasks such as three-dimensional (3D) object recognition, revealing that hypergraph computation can reduce the data requirement by 80% while achieving comparable performance, or improve performance by 52% given the same data, compared with a traditional data-based method. A comprehensive overview of the applications of hypergraph computation in diverse domains, such as intelligent medicine and computer vision, is also provided. Finally, we introduce an open-source deep learning library, DeepHypergraph (DHG), which can serve as a tool for the practical usage of hypergraph computation.
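The basic primitive behind hypergraph computation is two-stage aggregation over an incidence matrix: vertex features are gathered onto hyperedges, then scattered back to vertices, so that all members of a hyperedge influence each other at once. A minimal sketch with a hypothetical 4-vertex, 3-hyperedge incidence matrix; the degree-normalized smoothing shown is one common convention, not necessarily DHG's exact operator:

```python
import numpy as np

# hypothetical incidence matrix H (rows: vertices, columns: hyperedges);
# H[v, e] = 1 means vertex v belongs to hyperedge e
H = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 1, 1]], dtype=float)
X = np.array([[1.0], [2.0], [3.0], [4.0]])   # one feature per vertex

De_inv = np.diag(1.0 / H.sum(axis=0))   # inverse hyperedge degrees
Dv_inv = np.diag(1.0 / H.sum(axis=1))   # inverse vertex degrees

# vertex -> hyperedge -> vertex smoothing: each hyperedge averages its
# members' features, and each vertex averages its hyperedges' features
X_new = Dv_inv @ H @ De_inv @ H.T @ X
print(X_new.ravel())
```

Note that a single hyperedge can couple any number of vertices (here hyperedge 2 couples vertices 0, 2, and 3), which is exactly the high-order correlation a pairwise graph edge cannot express.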
Poisson's equation is solved numerically by two direct methods, viz. the Block Cyclic Reduction (BCR) method and the Fourier method. Qualitative and quantitative comparison between the numerical solutions obtained by the two methods indicates that the BCR method is superior to the Fourier method in terms of speed and accuracy. Therefore, the BCR method is applied to solve ∇²ψ = ζ and ∇²χ = D from observed vorticity and divergence values. Thereafter the rotational and divergent components of the horizontal monsoon wind in the lower troposphere are reconstructed and compared with the results obtained by the Successive Over-Relaxation (SOR) method, as this indirect method is more commonly used for obtaining the streamfunction (ψ) and velocity potential (χ) fields in NWP models. It is found that the results of the BCR method are more reliable than those of the SOR method.
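For the Fourier-method side of this comparison, a Poisson solve such as ∇²ψ = ζ on a periodic domain reduces to division by the Laplacian eigenvalues in spectral space. A minimal sketch with a manufactured vorticity field; the grid size and doubly periodic domain are assumptions, and the paper's BCR solver is not reproduced here:

```python
import numpy as np

def poisson_fft(zeta, dx):
    """Solve del^2 psi = zeta on a doubly periodic square grid by the
    Fourier method: transform, divide by -(kx^2 + ky^2), transform back.
    The k = 0 mode is set to zero, fixing the arbitrary constant."""
    ny, nx = zeta.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)   # angular wavenumbers
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                   # avoid division by zero for the mean mode
    psi_hat = np.fft.fft2(zeta) / (-k2)
    psi_hat[0, 0] = 0.0              # zero-mean solution
    return np.real(np.fft.ifft2(psi_hat))

# manufactured solution: psi = sin(x) cos(2y), so zeta = -(1 + 4) psi
n, L = 64, 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x)
psi_true = np.sin(X) * np.cos(2 * Y)
zeta = -5.0 * psi_true
psi = poisson_fft(zeta, L / n)
print(np.max(np.abs(psi - psi_true)) < 1e-10)
```

Because the manufactured field is band-limited, the spectral solve recovers it to machine precision; on real vorticity data the choice of boundary treatment is what separates the methods compared in the paper.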
In this paper, a new technique is introduced to construct higher-order iterative methods for solving nonlinear systems. The order of convergence of some iterative methods can be improved by three at the cost of only one additional evaluation of the function in each step. Furthermore, some new efficient methods with a higher order of convergence are obtained by using only a single matrix inversion in each iteration. Analyses of the convergence properties and computational efficiency of these new methods are made and verified on several numerical problems. By comparison, the new schemes are more efficient than the corresponding existing ones, particularly for large problem sizes.
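A standard way to obtain higher order from a single matrix inversion per iteration is to freeze the Jacobian and reuse its inverse across several corrector sweeps; each extra function evaluation raises the order while the linear-algebra cost stays fixed. A sketch on an invented 2x2 system; this is a generic frozen-Jacobian scheme, not necessarily the paper's exact method:

```python
import numpy as np

def F(x):
    """Toy nonlinear system: a circle intersected with an exponential curve."""
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     np.exp(x[0]) + x[1] - 1.0])

def J(x):
    """Analytic Jacobian of F."""
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [np.exp(x[0]), 1.0]])

def frozen_jacobian_step(x, n_sub=3):
    """One outer iteration: invert the Jacobian once, then take n_sub
    Newton-like sweeps reusing it. Extra function evaluations raise
    the order of the composite step; the matrix work does not grow."""
    Jinv = np.linalg.inv(J(x))       # single inversion per outer iteration
    for _ in range(n_sub):
        x = x - Jinv @ F(x)
    return x

x = np.array([-1.8, 0.8])            # start near a root of the system
for _ in range(6):
    x = frozen_jacobian_step(x)
print(np.max(np.abs(F(x))) < 1e-12)
```

In practice one would factor the Jacobian (LU) rather than form its inverse; the sketch keeps the inverse explicit to make the "one inversion, several function evaluations" structure visible.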
Accurate brain tumor classification in medical imaging requires real-time processing and efficient computation, making hardware acceleration essential. Field Programmable Gate Arrays (FPGAs) offer parallelism and reconfigurability, making them well suited for such tasks. In this study, we propose a hardware-accelerated Convolutional Neural Network (CNN) for brain cancer classification, implemented on the PYNQ-Z2 FPGA. Our approach optimizes the first Conv2D layer using different numerical representations: 8-bit fixed-point (INT8), 16-bit fixed-point (FP16), and 32-bit fixed-point (FP32), while the remaining layers run on an ARM Cortex-A9 processor. Experimental results demonstrate that FPGA acceleration significantly outperforms the CPU (Central Processing Unit) based approach. The results emphasize the critical importance of selecting the appropriate numerical representation for hardware acceleration in medical imaging. On the PYNQ-Z2 FPGA, INT8 achieves a 16.8% reduction in latency and 22.2% power savings compared with FP32, making it ideal for real-time and energy-constrained applications. FP16 offers a strong balance, delivering only a 0.1% drop in accuracy compared with FP32 (94.1% vs. 94.2%) while improving latency by 5% and reducing power consumption by 11.1%. Compared with prior works, the proposed FPGA-based CNN model achieves the highest classification accuracy (94.2%) with a throughput of up to 1.562 FPS, outperforming GPU-based and traditional CPU methods in both accuracy and hardware efficiency. These findings demonstrate the effectiveness of FPGA-based AI acceleration for real-time, power-efficient, and high-performance brain tumor classification, showcasing its practical potential in next-generation medical imaging systems.
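The fixed-point representations compared above can be mimicked in software by scale-round-clip quantization, which makes the accuracy/width trade-off easy to see before committing to hardware. A rough sketch; the Q-format splits and sample weights are assumptions, not the paper's FPGA datapath:

```python
import numpy as np

def quantize_fixed(x, total_bits=8, frac_bits=6):
    """Round to signed fixed-point with `total_bits` total and
    `frac_bits` fractional bits: scale by 2^frac, round, clip to the
    representable integer range, and rescale back to real values."""
    scale = 2.0 ** frac_bits
    lo = -2 ** (total_bits - 1)
    hi = 2 ** (total_bits - 1) - 1
    return np.clip(np.round(x * scale), lo, hi) / scale

# hypothetical first-layer conv weights
w = np.array([0.7071, -0.1234, 0.0009, -0.9999])
w8 = quantize_fixed(w, total_bits=8, frac_bits=6)    # INT8-style
w16 = quantize_fixed(w, total_bits=16, frac_bits=14) # 16-bit fixed-point

# wider fractional fields shrink the worst-case rounding error
print(np.max(np.abs(w - w16)) <= np.max(np.abs(w - w8)))
```

The 8-bit step here is 1/64 versus 1/16384 for the 16-bit format, which is the quantization-error gap that the reported 0.1% accuracy drop trades against latency and power.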
Declaration of Competing Interest statements were not included in the published version of the following articles that appeared in previous issues of Journal of Automation and Intelligence. The appropriate Declaration of Competing Interest statements, provided by the Authors, are included below. 1. "A survey on computationally efficient neural architecture search" [Journal of Automation and Intelligence, 1 (2022) 100002]. DOI: 10.1016/j.jai.2022.100002.
The local time-stepping (LTS) algorithm is an adaptive method that adjusts the time step by selecting suitable intervals for different regions, based on the spatial scale of each cell and the water depth and flow velocity between cells. The method can be optimized by assigning each region the largest power-of-two multiple of the global (minimum) time step that its local limit allows, so that the optimal time step is approached throughout the grid. To verify the acceleration and accuracy of LTS in storm surge simulations, we developed a model of the astronomical tide and storm surge along the southern coast of China. This model employs the shallow water equations as governing equations, with numerical discretization by the finite volume method and fluxes calculated by the Roe solver. Comparing the simulation results of the traditional global time-stepping algorithm with those of the LTS algorithm, we find that the latter fits the measured data better. Taking the simulation of Typhoon Sally in 1996 as an example, the LTS algorithm reduces computation time by 2.05 h and increases computational efficiency by a factor of 2.64 while maintaining good accuracy, compared with the traditional global time-stepping algorithm.
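The power-of-two level assignment at the heart of LTS can be sketched as: compute each cell's CFL-limited step from its size, depth, and velocity, then snap the ratio to the global minimum step down to a power of two. The cell sizes, depths, and velocities below are invented illustrations:

```python
import numpy as np

def lts_levels(dx, h, u, g=9.81, cfl=0.9, max_level=5):
    """Assign each cell a power-of-two local time-step level from its
    shallow-water CFL limit dt_i = cfl * dx_i / (|u_i| + sqrt(g h_i)).
    A cell at level m advances with dt_min * 2**m; all cells
    synchronise every dt_min * 2**max_level."""
    dt = cfl * dx / (np.abs(u) + np.sqrt(g * h))   # per-cell CFL limit
    dt_min = dt.min()                              # global minimum step
    levels = np.floor(np.log2(dt / dt_min)).astype(int)
    return np.clip(levels, 0, max_level), dt_min

# hypothetical mixed-resolution strip of cells (fine nearshore, coarse offshore)
dx = np.array([10.0, 10.0, 40.0, 160.0, 160.0])   # cell sizes (m)
h  = np.array([2.0, 2.0, 2.0, 8.0, 8.0])          # water depths (m)
u  = np.array([1.0, 1.0, 0.5, 0.2, 0.2])          # flow speeds (m/s)
levels, dt_min = lts_levels(dx, h, u)
print(levels.tolist())   # coarse deep cells earn larger power-of-two steps
```

The speed-up comes from the coarse offshore cells taking 4x and 8x fewer flux evaluations than the fine nearshore cells, rather than the whole grid marching at the nearshore limit.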
As data become increasingly complex, measuring dependence among variables is of great interest. However, most existing measures of dependence are limited to the Euclidean setting and cannot effectively characterize complex relationships. In this paper, we propose a novel method for constructing independence tests for random elements in Hilbert spaces, which includes functional data as a special case. Our approach uses the distance covariance of random projections to build a test statistic that is computationally efficient and exhibits strong power. We prove the equivalence between tests of independence expressed on the original and on the projected covariates, bridging the gap between measures of independence in Euclidean spaces and in Hilbert spaces. Implementation of the test involves calibration by permutation and combining several p-values from different projections using the false discovery rate method. Simulation studies and real data examples illustrate the finite-sample properties of the proposed method under a variety of scenarios.
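The projected distance-covariance statistic can be sketched as: draw random directions, project both samples to one dimension, and aggregate the empirical distance covariances. A minimal version, with the permutation calibration and FDR combination steps omitted:

```python
import numpy as np

def dcov(x, y):
    """Empirical distance covariance of two 1-D samples, computed from
    double-centred pairwise distance matrices (Szekely-Rizzo form)."""
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    return np.sqrt(max(float((A * B).mean()), 0.0))

def projected_dcov(X, Y, n_proj=20, seed=0):
    """Average distance covariance over random one-dimensional
    projections: the cheap surrogate used to test independence of
    multivariate (or, with basis coefficients, functional) data."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_proj):
        u = rng.standard_normal(X.shape[1]); u /= np.linalg.norm(u)
        v = rng.standard_normal(Y.shape[1]); v /= np.linalg.norm(v)
        vals.append(dcov(X @ u, Y @ v))
    return float(np.mean(vals))

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
Y_dep = X + 0.1 * rng.standard_normal((200, 3))   # strongly dependent on X
Y_ind = rng.standard_normal((200, 3))             # independent of X
print(projected_dcov(X, Y_dep) > projected_dcov(X, Y_ind))
```

Each projection costs O(n^2) instead of the higher cost of a full multivariate distance covariance, which is the computational appeal of the projection approach.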
Feature selection (FS) plays a crucial role in medical imaging by reducing dimensionality, improving computational efficiency, and enhancing diagnostic accuracy. Traditional FS techniques, including filter, wrapper, and embedded methods, have been widely used but often struggle with high-dimensional and heterogeneous medical imaging data. Deep learning-based FS methods, particularly Convolutional Neural Networks (CNNs) and autoencoders, have demonstrated superior performance but lack interpretability. Hybrid approaches that combine classical and deep learning techniques have emerged as a promising solution, offering improved accuracy and explainability. Furthermore, integrating multi-modal imaging data (e.g., Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), and Ultrasound (US)) poses additional challenges for FS, necessitating advanced feature fusion strategies. Multi-modal feature fusion combines information from different imaging modalities to improve diagnostic accuracy. Recently, quantum computing has gained attention as a revolutionary approach to FS, with the potential to handle high-dimensional medical data more efficiently. This systematic literature review comprehensively examines classical, Deep Learning (DL), hybrid, and quantum-based FS techniques in medical imaging. Key outcomes include a structured taxonomy of FS methods, a critical evaluation of their performance across modalities, and identification of core challenges such as computational burden, interpretability, and ethical considerations. Future research directions, such as explainable AI (XAI), federated learning, and quantum-enhanced FS, are also emphasized to bridge the current gaps. This review provides actionable insights for developing scalable, interpretable, and clinically applicable FS methods in the evolving landscape of medical imaging.
Disease forecasting and surveillance often involve fitting models to a tremendous volume of historical testing data collected over space and time. Bayesian spatio-temporal regression models fit with Markov chain Monte Carlo (MCMC) methods are commonly used for such data. When the spatio-temporal support of the model is large, implementing an MCMC algorithm becomes a significant computational burden. This research proposes a computationally efficient gradient boosting algorithm for fitting a Bayesian spatio-temporal mixed effects binomial regression model. We demonstrate our method on a disease forecasting model and compare it to a computationally optimized MCMC approach. Both methods are used to produce monthly forecasts for Lyme disease, anaplasmosis, ehrlichiosis, and heartworm disease in domestic dogs for the contiguous United States. The data have a spatial support of 3108 counties and a temporal support of 108–138 months, with 71–135 million test results. The proposed estimation approach is several orders of magnitude faster than the optimized MCMC algorithm, with a similar mean absolute prediction error.
Due to their resource constraints, Internet of Things (IoT) devices require authentication mechanisms that are both secure and efficient. Elliptic curve cryptography (ECC) meets these needs by providing strong security with shorter key lengths, which significantly reduces the computational overhead of authentication algorithms. This paper introduces a novel ECC-based IoT authentication system utilizing our previously proposed efficient mapping and reverse-mapping operations on elliptic curves over prime fields. By reducing reliance on costly point multiplication, the proposed algorithm significantly improves execution time, storage requirements, and communication cost across varying security levels. The proposed authentication protocol demonstrates superior performance when benchmarked against relevant ECC-based schemes, achieving reductions of up to 35.83% in communication overhead, 62.51% in device-side storage consumption, and 71.96% in computational cost. The security robustness of the scheme is substantiated through formal analysis using the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool and Burrows–Abadi–Needham (BAN) logic, complemented by a comprehensive informal analysis that confirms its resilience against various attack models, including impersonation, replay, and man-in-the-middle attacks. Empirical evaluation under simulated conditions demonstrates notable gains in efficiency and security. While these results indicate the protocol's strong potential for scalable IoT deployments, further validation on real-world embedded platforms is required to confirm its applicability and robustness at scale.
A multiphysics model of a production-scale planar solid oxide fuel cell (SOFC) stack is important for SOFC technology, but usually requires an impractical amount of computing resources. The major cause of the huge computing requirement is identified as the need to solve the cathode O2 transport and the associated electrochemistry. To overcome this technical obstacle, an analytical model for the O2 transport and its coupling with the electrochemistry is derived. The analytical model greatly reduces the numerical mesh complexity of the multiphysics model. Numerical tests show that the analytical approximation is highly accurate and stable. A multiphysics numerical modeling tool taking advantage of the analytical solution is then developed in Fluent. The numerical efficiency and stability of this modeling tool are further demonstrated by simulating a 30-cell stack with a production-scale cell size. Detailed information about the stack performance is revealed and briefly discussed. The multiphysics modeling tool can be used to guide stack design and select operating parameters.
A huge calculation burden and difficulty in convergence are the two central conundrums of nonlinear topology optimization (NTO). To this end, a multi-resolution nonlinear topology optimization (MR-NTO) method is proposed based on the multi-resolution design strategy (MRDS) and the additive hyperelasticity technique (AHT), taking into account both geometric and material nonlinearity. The MR-NTO strategy is established in the framework of the solid isotropic material with penalization (SIMP) method, while the Neo-Hookean hyperelastic material model characterizes the material nonlinearity. A coarse analysis grid is employed for the finite element (FE) calculation, and a fine material grid is applied to describe the material configuration. To alleviate the convergence problem and reduce the complexity of the sensitivity calculation, the software ANSYS coupled with AHT is utilized to perform the nonlinear FE calculation. A strategy for redistributing strain energy is proposed during the sensitivity analysis, i.e., transforming the strain energy of the analysis element into that of the material element, for both Neo-Hookean and second-order Yeoh materials. Numerical examples highlight three distinct advantages of the proposed method: it can (1) significantly improve computational efficiency, (2) make up for the shortcoming that AHT-based NTO may have difficulty converging, especially for 3D problems, and (3) successfully cope with high-resolution 3D complex NTO problems on a personal computer.
The main ideas in the development of the solvent extraction mixer-settler focused on achieving clean phase separation, minimizing the loss of reagents, and decreasing the surface area of the settlers. The role of baffles in a mechanically agitated vessel is to ensure even distribution, reduce settler turbulence, promote the stability of the power drawn by the impeller, and prevent swirling and vortexing of the liquid, thus greatly improving mixing. Inserting an appropriate number of baffles clearly improves the extent of liquid mixing; however, excessive baffling interrupts mixing and lengthens the mixing time. Computational fluid dynamics (CFD) provides a tool for determining detailed information on fluid flow (hydrodynamics), which is necessary for modeling subprocesses in the mixer-settler. A total of 54 final CFD runs were carried out, representing different combinations of variables such as the number of baffles, density, and impeller speed. The CFD data show that the amount of separation increases with an increasing number of baffles and a decreasing impeller speed.
Attribute-based encryption (ABE) supports fine-grained sharing of encrypted data. In some common designs, attributes are managed by an attribute authority that is supposed to be fully trustworthy. This implies that the attribute authority can access all encrypted data, which is known as the key escrow problem. In addition, because all access privileges are defined over a single attribute universe and attributes are shared among multiple data users, user revocation is inefficient in existing ABE schemes. In this paper, we propose a novel scheme that solves the key escrow problem and supports efficient user revocation. First, an access controller is introduced into the existing scheme, and secret keys are then generated jointly by the attribute authority and the access controller. Second, an efficient user revocation mechanism is achieved using a version key that supports forward and backward security. The analysis proves that our scheme is secure and efficient in user authorization and revocation.
In this article, we construct a powerful family of simultaneous iterative methods with global convergence behavior for finding all roots of nonlinear equations. Convergence analysis proves that the order of convergence of this family of derivative-free simultaneous iterative methods is nine. Our main aim is to evaluate the most regularly used simultaneous iterative methods for finding all roots of nonlinear equations by studying their dynamical planes, numerical experiments, and CPU time. Dynamical planes of the iterative methods are drawn using MATLAB to compare the global convergence properties of the simultaneous methods. The convergence behavior of the higher-order simultaneous methods is also illustrated by residual graphs obtained from several numerical test examples. Numerical test examples, dynamical behavior, and computational efficiency are provided to demonstrate the performance and dominant efficiency of the newly constructed derivative-free family over existing higher-order simultaneous methods in the literature.
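A classical example of a simultaneous iteration that converges to all roots at once is the Weierstrass (Durand-Kerner) method. It is only second order, far below the ninth-order family constructed here, but it shows the shared structure of such methods: every root estimate is corrected using its distances to all the other estimates:

```python
import numpy as np

def durand_kerner(coeffs, tol=1e-12, max_iter=200):
    """Weierstrass / Durand-Kerner simultaneous iteration: all n roots
    of a polynomial are refined together; each update divides the
    polynomial residual by the product of distances to the other
    current root estimates."""
    coeffs = np.asarray(coeffs, dtype=complex)
    coeffs = coeffs / coeffs[0]                  # make the polynomial monic
    n = len(coeffs) - 1
    z = (0.4 + 0.9j) ** np.arange(n)             # standard non-symmetric start
    for _ in range(max_iter):
        p = np.polyval(coeffs, z)                # residuals at all estimates
        w = np.array([np.prod(z[i] - np.delete(z, i)) for i in range(n)])
        dz = p / w                               # Weierstrass corrections
        z = z - dz
        if np.max(np.abs(dz)) < tol:
            break
    return z

# roots of z^3 - 6z^2 + 11z - 6 = (z - 1)(z - 2)(z - 3)
roots = np.sort_complex(durand_kerner([1.0, -6.0, 11.0, -6.0]))
print(np.allclose(roots, [1.0, 2.0, 3.0]))
```

Higher-order simultaneous families replace the simple Weierstrass correction with more elaborate correction terms, trading extra evaluations per sweep for far fewer sweeps.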
Funding: supported by the Science and Technology Foundation of Jilin Province (20070541), 985-Automotive Engineering of Jilin University, and the Innovation Fund for 985 Engineering of Jilin University (20080104).
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 11974154 and 12304278), the Taishan Scholars Special Funding for Construction Projects (Grant No. tstp20230622), the Natural Science Foundation of Shandong Province (Grant Nos. ZR2022MA004, ZR2023QA127, and ZR2024QA121), and the Special Foundation of Yantai for Leading Talents above Provincial Level.
Funding: supported by the Key Laboratory of Management, Decision and Information Systems, Chinese Academy of Sciences.
Funding: supported by the Natural Science Foundation of Henan Province (No. 232300421097), the Program for Science & Technology Innovation Talents in Universities of Henan Province (Nos. 23HASTIT019 and 24HASTIT038), the China Postdoctoral Science Foundation (Nos. 2023T160596 and 2023M733251), the Open Research Fund of the National Mobile Communications Research Laboratory, Southeast University (No. 2023D11), and the Song Shan Laboratory Foundation (No. YYJC022022003).
文摘Abstract: Practical real-world scenarios such as the Internet, social networks, and biological networks present the challenges of data scarcity and complex correlations, which limit the applications of artificial intelligence. The graph structure is a typical tool used to formulate such correlations, but it is incapable of modeling high-order correlations among different objects in systems; thus, the graph structure cannot fully convey the intricate correlations among objects. Confronted with these two challenges, hypergraph computation models high-order correlations among data, knowledge, and rules through hyperedges and leverages these high-order correlations to enhance the data. Additionally, hypergraph computation achieves collaborative computation using data and high-order correlations, thereby offering greater modeling flexibility. In particular, we introduce three types of hypergraph computation methods: ① hypergraph structure modeling, ② hypergraph semantic computing, and ③ efficient hypergraph computing. We then specify how to adopt hypergraph computation in practice by focusing on specific tasks such as three-dimensional (3D) object recognition, revealing that, compared with a traditional data-based method, hypergraph computation can reduce the data requirement by 80% while achieving comparable performance, or improve the performance by 52% given the same data. A comprehensive overview of the applications of hypergraph computation in diverse domains, such as intelligent medicine and computer vision, is also provided. Finally, we introduce an open-source deep learning library, DeepHypergraph (DHG), which can serve as a tool for the practical usage of hypergraph computation.
文摘Abstract: Poisson's equation is solved numerically by two direct methods, viz. the Block Cyclic Reduction (BCR) method and the Fourier method. Qualitative and quantitative comparison between the numerical solutions obtained by the two methods indicates that the BCR method is superior to the Fourier method in terms of speed and accuracy. Therefore, the BCR method is applied to solve ∇²ψ = ζ and ∇²χ = D from observed vorticity and divergence values. Thereafter the rotational and divergent components of the horizontal monsoon wind in the lower troposphere are reconstructed and are compared with the results obtained by the Successive Over-Relaxation (SOR) method, as this indirect method is generally in more common use for obtaining the streamfunction (ψ) and velocity potential (χ) fields in NWP models. It is found that the results of the BCR method are more reliable than those of the SOR method.
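To make the comparison concrete, here is a minimal pure-Python sketch of the indirect SOR approach mentioned in this abstract, solving ∇²ψ = ζ on a small unit-square grid with a manufactured right-hand side. The grid size and relaxation factor ω are our illustrative choices, not values from the paper.

```python
# Illustrative SOR solve of the Poisson problem lap(psi) = zeta on the unit
# square with Dirichlet psi = 0 on the boundary.
import math

def sor_poisson(zeta, h, omega=1.7, tol=1e-10, max_sweeps=20000):
    n = len(zeta)                          # (n x n) nodes, boundary included
    psi = [[0.0] * n for _ in range(n)]
    for _ in range(max_sweeps):
        diff = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # Gauss-Seidel value for the 5-point Laplacian, over-relaxed by omega.
                gs = 0.25 * (psi[i + 1][j] + psi[i - 1][j] +
                             psi[i][j + 1] + psi[i][j - 1] - h * h * zeta[i][j])
                new = psi[i][j] + omega * (gs - psi[i][j])
                diff = max(diff, abs(new - psi[i][j]))
                psi[i][j] = new
        if diff < tol:                     # stop when updates stagnate
            break
    return psi

# Manufactured solution: psi = sin(pi x) sin(pi y)  =>  zeta = -2 pi^2 psi.
n, h = 17, 1.0 / 16
zeta = [[-2 * math.pi ** 2 * math.sin(math.pi * i * h) * math.sin(math.pi * j * h)
         for j in range(n)] for i in range(n)]
psi = sor_poisson(zeta, h)
```

Because the exact solution peaks at 1 in the grid centre, the computed ψ there should match to within the O(h²) discretization error, which is the accuracy baseline the BCR method is being compared against.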
基金Funding: Supported by the National Natural Science Foundation of China (12061048) and the NSF of Jiangxi Province (20232BAB201026, 20232BAB201018).
文摘Abstract: In this paper, a new technique is introduced to construct higher-order iterative methods for solving nonlinear systems. The order of convergence of some iterative methods can be improved by three at the cost of introducing only one additional evaluation of the function in each step. Furthermore, some new efficient methods with a higher order of convergence are obtained by using only a single matrix inversion in each iteration. Analyses of the convergence properties and computational efficiency of these new methods are made and verified by several numerical problems. By comparison, the new schemes are more efficient than the corresponding existing ones, particularly for large problem sizes.
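The "single matrix inversion per iteration" idea can be illustrated with the classical frozen-Jacobian two-step Newton scheme, a known third-order method in which the corrector step reuses the predictor's Jacobian. This is a textbook sketch on a toy 2x2 system, not the paper's specific family of methods.

```python
# Two-step Newton-type iteration that reuses one Jacobian factorization per
# step (third-order convergence); toy 2x2 system, helper names are ours.
def solve2(J, r):
    """Solve the 2x2 linear system J d = r by Cramer's rule."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return ((r[0] * J[1][1] - r[1] * J[0][1]) / det,
            (J[0][0] * r[1] - J[1][0] * r[0]) / det)

def frozen_newton_step(F, Jac, x):
    J = Jac(x)                           # single Jacobian "inversion" per step
    d1 = solve2(J, F(x))
    y = (x[0] - d1[0], x[1] - d1[1])     # Newton predictor
    d2 = solve2(J, F(y))                 # corrector reuses the same J
    return (y[0] - d2[0], y[1] - d2[1])

# Toy system: x^2 + y^2 = 1, x = y  ->  root (1/sqrt(2), 1/sqrt(2)).
F = lambda v: (v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1])
Jac = lambda v: [[2 * v[0], 2 * v[1]], [1.0, -1.0]]
x = (1.0, 0.5)
for _ in range(6):
    x = frozen_newton_step(F, Jac, x)
```

The corrector costs one extra function evaluation but no new Jacobian, which is exactly the trade-off the abstract describes: a higher convergence order at the price of one additional F evaluation per step.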
基金Funding: Supported by Northern Border University Researchers Supporting Project number (NBU-FFR-2025-432-03), Northern Border University, Arar, Saudi Arabia.
文摘Abstract: Accurate brain tumor classification in medical imaging requires real-time processing and efficient computation, making hardware acceleration essential. Field Programmable Gate Arrays (FPGAs) offer parallelism and reconfigurability, making them well suited for such tasks. In this study, we propose a hardware-accelerated Convolutional Neural Network (CNN) for brain cancer classification, implemented on the PYNQ-Z2 FPGA. Our approach optimizes the first Conv2D layer using different numerical representations: 8-bit fixed-point (INT8), 16-bit fixed-point (FP16), and 32-bit fixed-point (FP32), while the remaining layers run on an ARM Cortex-A9 processor. Experimental results demonstrate that FPGA acceleration significantly outperforms the CPU (Central Processing Unit) based approach, and they emphasize the critical importance of selecting the appropriate numerical representation for hardware acceleration in medical imaging. On the PYNQ-Z2 FPGA, INT8 achieves a 16.8% reduction in latency and 22.2% power savings compared to FP32, making it ideal for real-time and energy-constrained applications. FP16 offers a strong balance, delivering only a 0.1% drop in accuracy compared to FP32 (94.1% vs. 94.2%) while improving latency by 5% and reducing power consumption by 11.1%. Compared to prior works, the proposed FPGA-based CNN model achieves the highest classification accuracy (94.2%) with a throughput of up to 1.562 FPS, outperforming GPU-based and traditional CPU methods in both accuracy and hardware efficiency. These findings demonstrate the effectiveness of FPGA-based AI acceleration for real-time, power-efficient, and high-performance brain tumor classification, showcasing its practical potential in next-generation medical imaging systems.
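The accuracy/latency trade-off between the 8-bit and 16-bit word widths in this abstract comes down to fixed-point rounding error. Below is a generic sketch of symmetric fixed-point quantization; note the abstract's labels INT8/FP16 both denote fixed-point words here, and the particular Q-formats (Q3.4, Q3.12) and sample weights are our illustrative choices, not taken from the paper.

```python
# Symmetric fixed-point (Q-format) quantization sketch, as used when mapping
# CNN weights to narrow hardware words.
def to_fixed(x, total_bits, frac_bits):
    """Round x to a signed fixed-point value with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    q = round(x * scale)
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    q = max(lo, min(hi, q))              # saturate to the representable range
    return q / scale                      # dequantized value actually used in MACs

weights = [0.731, -0.052, 0.004, -1.218]           # illustrative conv weights
int8 = [to_fixed(w, 8, 4) for w in weights]        # Q3.4:  coarse, cheap
fix16 = [to_fixed(w, 16, 12) for w in weights]     # Q3.12: near-lossless here
err8 = max(abs(a - b) for a, b in zip(weights, int8))
err16 = max(abs(a - b) for a, b in zip(weights, fix16))
```

With 4 fractional bits the worst-case rounding error is 2⁻⁵ per weight, versus 2⁻¹³ with 12 fractional bits, which is why the 8-bit path trades a little accuracy for latency and power.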
文摘Abstract: Declaration of Competing Interest statements were not included in the published version of the following articles that appeared in previous issues of Journal of Automation and Intelligence. The appropriate Declaration of Competing Interest statements, provided by the Authors, are included below. 1. "A survey on computationally efficient neural architecture search" [Journal of Automation and Intelligence, 1 (2022) 100002], 10.1016/j.jai.2022.100002.
基金Funding: National Natural Science Foundation of China (No. 52071306), the Natural Science Foundation of Shandong Province (No. ZR2019MEE050), and the Natural Science Foundation of Zhejiang Province (No. LZ22E090003).
文摘Abstract: The local time-stepping (LTS) algorithm is an adaptive method that adjusts the time step by selecting suitable intervals for different regions, based on the spatial scale of each cell and the water depth and flow velocity between cells. The method can be optimized by calculating the maximum power-of-two increments of the global time step in the domain, allowing the optimal time step to be approached throughout the grid. To verify the acceleration and accuracy of LTS in storm surge simulations, we developed a model to simulate astronomical storm surges along the southern coast of China. This model employs the shallow water equations as governing equations, numerical discretization using the finite volume method, and fluxes calculated by the Roe solver. Comparing the simulation results of the traditional global time-stepping algorithm with those of the LTS algorithm, we find that the latter fit the measured data better. Taking the calculated results for Typhoon Sally in 1996 as an example, we show that, compared with the traditional global time-stepping algorithm, the LTS algorithm reduces computation time by 2.05 h and increases computational efficiency by 2.64 times while maintaining good accuracy.
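The power-of-two step selection described in this abstract can be sketched as a level assignment: each cell's locally stable step is rounded down to dt_min · 2^m. The cell step values below are illustrative, not from the model.

```python
# Local time-stepping (LTS) level assignment sketch: round each cell's stable
# step down to the nearest power-of-two multiple of the global minimum step.
import math

def lts_levels(dt_local, max_level=None):
    """Return dt_min and per-cell levels m with dt_cell = dt_min * 2**m."""
    dt_min = min(dt_local)
    levels = []
    for dt in dt_local:
        m = int(math.floor(math.log2(dt / dt_min)))
        if max_level is not None:
            m = min(m, max_level)        # optionally cap the level spread
        levels.append(m)
    return dt_min, levels

# Stable steps from a CFL-like condition, e.g. dt = C * dx / (|u| + sqrt(g*h)):
dt_cells = [0.10, 0.25, 0.43, 0.81, 1.70]
dt_min, levels = lts_levels(dt_cells)
dt_used = [dt_min * 2 ** m for m in levels]
```

Rounding down keeps every cell's actual step at or below its stable limit, while the power-of-two hierarchy lets coarse and fine regions synchronize at shared multiples of dt_min.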
基金Funding: Supported by the Grant of National Science Foundation of China (11971433), the Zhejiang Gongshang University "Digital+" Disciplinary Construction Management Project (SZJ2022B004), the Institute for International People-to-People Exchange in Artificial Intelligence and Advanced Manufacturing (CCIPERGZN202439), and the Development Fund for Zhejiang College of Shanghai University of Finance and Economics (2023FZJJ15).
文摘Abstract: As data becomes increasingly complex, measuring dependence among variables is of great interest. However, most existing measures of dependence are limited to the Euclidean setting and cannot effectively characterize complex relationships. In this paper, we propose a novel method for constructing independence tests for random elements in Hilbert spaces, which includes functional data as a special case. Our approach uses the distance covariance of random projections to build a test statistic that is computationally efficient and exhibits strong power performance. We prove the equivalence between testing for independence expressed on the original and the projected covariates, bridging the gap between measures for testing independence in Euclidean spaces and Hilbert spaces. Implementation of the test involves calibration by permutation and combining several p-values from different projections using the false discovery rate method. Simulation studies and real data examples illustrate the finite-sample properties of the proposed method under a variety of scenarios.
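The two computational ingredients named here, the sample distance covariance and permutation calibration, can be sketched for scalar data as follows. This assumes a single fixed projection (the paper combines several projections via the FDR method), and the simulated data are ours.

```python
# Pure-Python sketch: squared sample distance covariance plus a
# permutation-calibrated p-value, for paired scalar samples.
import random

def dcov2(x, y):
    """Squared sample distance covariance of paired scalar samples."""
    n = len(x)
    def centered(v):
        d = [[abs(v[i] - v[j]) for j in range(n)] for i in range(n)]
        rm = [sum(row) / n for row in d]          # row means
        gm = sum(rm) / n                          # grand mean
        return [[d[i][j] - rm[i] - rm[j] + gm for j in range(n)]
                for i in range(n)]
    A, B = centered(x), centered(y)
    return sum(A[i][j] * B[i][j] for i in range(n) for j in range(n)) / (n * n)

def perm_pvalue(x, y, n_perm=200, seed=0):
    """Calibrate dcov2 under the null by randomly permuting y."""
    gen = random.Random(seed)
    obs = dcov2(x, y)
    hits = 0
    for _ in range(n_perm):
        yp = y[:]
        gen.shuffle(yp)
        hits += dcov2(x, yp) >= obs
    return (hits + 1) / (n_perm + 1)

gen = random.Random(1)
x = [gen.gauss(0, 1) for _ in range(40)]
y = [xi * xi + 0.1 * gen.gauss(0, 1) for xi in x]  # nonlinear dependence
p = perm_pvalue(x, y)
```

Note that y here is an uncorrelated but strongly dependent function of x, precisely the kind of relationship distance covariance detects and plain correlation misses.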
文摘Abstract: Feature selection (FS) plays a crucial role in medical imaging by reducing dimensionality, improving computational efficiency, and enhancing diagnostic accuracy. Traditional FS techniques, including filter, wrapper, and embedded methods, have been widely used but often struggle with high-dimensional and heterogeneous medical imaging data. Deep learning-based FS methods, particularly Convolutional Neural Networks (CNNs) and autoencoders, have demonstrated superior performance but lack interpretability. Hybrid approaches that combine classical and deep learning techniques have emerged as a promising solution, offering improved accuracy and explainability. Furthermore, integrating multi-modal imaging data (e.g., Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), and Ultrasound (US)) poses additional challenges in FS, necessitating advanced feature fusion strategies. Multi-modal feature fusion combines information from different imaging modalities to improve diagnostic accuracy. Recently, quantum computing has gained attention as a revolutionary approach for FS, providing the potential to handle high-dimensional medical data more efficiently. This systematic literature review comprehensively examines classical, Deep Learning (DL), hybrid, and quantum-based FS techniques in medical imaging. Key outcomes include a structured taxonomy of FS methods, a critical evaluation of their performance across modalities, and identification of core challenges such as computational burden, interpretability, and ethical considerations. Future research directions, such as explainable AI (XAI), federated learning, and quantum-enhanced FS, are also emphasized to bridge the current gaps. This review provides actionable insights for developing scalable, interpretable, and clinically applicable FS methods in the evolving landscape of medical imaging.
基金Funding: RH and SS were supported in part or in full by the Companion Animal Parasite Council. SS and AM were supported in part by the Research Center for Child Well-Being [NIGMS P20GM130420].
文摘Abstract: Disease forecasting and surveillance often involve fitting models to a tremendous volume of historical testing data collected over space and time. Bayesian spatio-temporal regression models fit with Markov chain Monte Carlo (MCMC) methods are commonly used for such data. When the spatio-temporal support of the model is large, implementing an MCMC algorithm becomes a significant computational burden. This research proposes a computationally efficient gradient boosting algorithm for fitting a Bayesian spatio-temporal mixed effects binomial regression model. We demonstrate our method on a disease forecasting model and compare it to a computationally optimized MCMC approach. Both methods are used to produce monthly forecasts for Lyme disease, anaplasmosis, ehrlichiosis, and heartworm disease in domestic dogs for the contiguous United States. The data have a spatial support of 3108 counties and a temporal support of 108-138 months with 71-135 million test results. The proposed estimation approach is several orders of magnitude faster than the optimized MCMC algorithm, with a similar mean absolute prediction error.
文摘Abstract: Due to their resource constraints, Internet of Things (IoT) devices require authentication mechanisms that are both secure and efficient. Elliptic curve cryptography (ECC) meets these needs by providing strong security with shorter key lengths, which significantly reduces the computational overhead required for authentication algorithms. This paper introduces a novel ECC-based IoT authentication system utilizing our previously proposed efficient mapping and reverse mapping operations on elliptic curves over prime fields. By reducing reliance on costly point multiplication, the proposed algorithm significantly improves execution time, storage requirements, and communication cost across varying security levels. The proposed authentication protocol demonstrates superior performance when benchmarked against relevant ECC-based schemes, achieving reductions of up to 35.83% in communication overhead, 62.51% in device-side storage consumption, and 71.96% in computational cost. The security robustness of the scheme is substantiated through formal analysis using the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool and Burrows-Abadi-Needham (BAN) logic, complemented by a comprehensive informal analysis that confirms its resilience against various attack models, including impersonation, replay, and man-in-the-middle attacks. Empirical evaluation under simulated conditions demonstrates notable gains in efficiency and security. While these results indicate the protocol's strong potential for scalable IoT deployments, further validation on real-world embedded platforms is required to confirm its applicability and robustness at scale.
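For readers unfamiliar with why point multiplication is the "costly" operation this abstract avoids, here is a standard sketch of affine point arithmetic and double-and-add scalar multiplication on a short-Weierstrass curve over a prime field. The tiny textbook curve is for illustration only and has nothing to do with the paper's parameter sets.

```python
# Affine point addition and double-and-add scalar multiplication on
# y^2 = x^3 + a*x + b over F_p. None encodes the point at infinity.
def ec_add(P, Q, a, p):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                      # P + (-P) = infinity
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Double-and-add: the dominant cost in most ECC protocols."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

# Toy curve y^2 = x^3 + 2x + 2 over F_17, base point G = (5, 1) of order 19.
a, p, G = 2, 17, (5, 1)
```

Each scalar multiplication costs on the order of log2(k) doublings plus additions, each of which needs a modular inversion in this affine form; that per-operation cost is what makes reducing the number of point multiplications worthwhile on constrained devices.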
基金Funding: This work was supported by the National Natural Science Foundation of China (Nos. 11374272 and 11574284) and the National Basic Research Program of China (No. 2012CB215405); support from the Collaborative Innovation Center of Suzhou Nano Science and Technology is also gratefully acknowledged.
文摘Abstract: A multiphysics model for a production-scale planar solid oxide fuel cell (SOFC) stack is important for SOFC technology, but usually requires an impractical amount of computing resources. The major cause of the huge computing resource requirement is identified as the need to solve the cathode O2 transport and the associated electrochemistry. To overcome this technical obstacle, an analytical model for solving the O2 transport and its coupling with the electrochemistry is derived. The analytical model is used to greatly reduce the numerical mesh complexity of a multiphysics model. Numerical tests show that the analytical approximation is highly accurate and stable. A multiphysics numerical modeling tool taking advantage of the analytical solution is then developed in Fluent. The numerical efficiency and stability of this modeling tool are further demonstrated by simulating a 30-cell stack with a production-scale cell size. Detailed information about the stack performance is revealed and briefly discussed. The multiphysics modeling tool can be used to guide the stack design and select the operating parameters.
基金supported by the National Natural Science Foundation of China(Grant Nos.11902085 and 11832009)the Science and Technology Association Young Scientific and Technological Talents Support Project of Guangzhou City(Grant No.SKX20210304)the Natural Science Foundation of Guangdong Province(Grant No.2021Al515010320).
文摘Abstract: A huge calculation burden and difficulty in convergence are the two central conundrums of nonlinear topology optimization (NTO). To this end, a multi-resolution nonlinear topology optimization (MR-NTO) method is proposed based on the multi-resolution design strategy (MRDS) and the additive hyperelasticity technique (AHT), taking into account both geometric nonlinearity and material nonlinearity. The MR-NTO strategy is established in the framework of the solid isotropic material with penalization (SIMP) method, while the Neo-Hookean hyperelastic material model characterizes the material nonlinearity. A coarse analysis grid is employed for the finite element (FE) calculation, and a fine material grid is applied to describe the material configuration. To alleviate the convergence problem and reduce the sensitivity calculation complexity, the software ANSYS coupled with AHT is utilized to perform the nonlinear FE calculation. A strategy for redistributing strain energy is proposed during the sensitivity analysis, i.e., transforming the strain energy of the analysis element into that of the material element, including the Neo-Hookean and second-order Yeoh materials. Numerical examples highlight three distinct advantages of the proposed method: it can (1) significantly improve the computational efficiency, (2) make up for the shortcoming that NTO based on AHT may have difficulty converging, especially for 3D problems, and (3) successfully cope with high-resolution 3D complex NTO problems on a personal computer.
文摘Abstract: The main ideas in the development of the solvent extraction mixer settler have focused on achieving clean phase separation, minimizing the loss of the reagents, and decreasing the surface area of the settlers. The role of baffles in a mechanically agitated vessel is to ensure even distribution, reduce settler turbulence, promote the stability of the power drawn by the impeller, and prevent swirling and vortexing of the liquid, thus greatly improving the mixing of the liquid. The insertion of an appropriate number of baffles clearly improves the extent of liquid mixing. However, excessive baffling would interrupt liquid mixing and lengthen the mixing time. Computational fluid dynamics (CFD) provides a tool for determining detailed information on fluid flow (hydrodynamics), which is necessary for modeling subprocesses in the mixer settler. A total of 54 final CFD runs were carried out, representing different combinations of variables such as the number of baffles, density, and impeller speed. The CFD data show that the amount of separation increases with an increasing number of baffles and a decreasing impeller speed.
基金supported by the NSFC(61173141,U1536206,61232016, U1405254,61373133,61502242,61572258)BK20150925+3 种基金Fund of Jiangsu Engineering Center of Network Monitoring(KJR1402)Fund of MOE Internet Innovation Platform(KJRP1403)CICAEETthe PAPD fund
文摘Abstract: Attribute-based encryption (ABE) supports the fine-grained sharing of encrypted data. In some common designs, attributes are managed by an attribute authority that is supposed to be fully trustworthy. This concept implies that the attribute authority can access all encrypted data, which is known as the key escrow problem. In addition, because all access privileges are defined over a single attribute universe and attributes are shared among multiple data users, the revocation of users is inefficient in existing ABE schemes. In this paper, we propose a novel scheme that solves the key escrow problem and supports efficient user revocation. First, an access controller is introduced into the existing scheme, and secret keys are then generated jointly by the attribute authority and the access controller. Second, an efficient user revocation mechanism is achieved using a version key that supports forward and backward security. The analysis proves that our scheme is secure and efficient in user authorization and revocation.
基金the Natural Science Foundation of China(Grant Nos.61673169,11301127,11701176,11626101,and 11601485)The Natural Science Foundation of Huzhou City(Grant No.2018YZ07).
文摘Abstract: In this article, we construct the most powerful family of simultaneous iterative methods with global convergence behavior among all existing methods in the literature for finding all roots of nonlinear equations. Convergence analysis proves that the order of convergence of the family of derivative-free simultaneous iterative methods is nine. Our main aim is to assess the most regularly used simultaneous iterative methods for finding all roots of nonlinear equations by studying their dynamical planes, numerical experiments, and CPU time. Dynamical planes of the iterative methods are drawn using MATLAB to compare the global convergence properties of the simultaneous iterative methods. The convergence behavior of the higher-order simultaneous iterative methods is also illustrated by residual graphs obtained from some numerical test examples. Numerical test examples, dynamical behavior, and computational efficiency are provided to demonstrate the performance and dominant efficiency of the newly constructed derivative-free family of simultaneous iterative methods over existing higher-order simultaneous methods in the literature.
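As background for the class of methods this abstract studies, here is the classical second-order Weierstrass (Durand-Kerner) iteration, the basic derivative-free simultaneous scheme that higher-order families of this kind refine. It updates all root approximations at once; the polynomial and starting points below are our illustrative choices, not the paper's ninth-order method.

```python
# Weierstrass (Durand-Kerner) simultaneous root-finding sketch.
def durand_kerner(coeffs, iters=60):
    """All roots of a monic polynomial coeffs = [1, c1, ..., cn]."""
    n = len(coeffs) - 1
    def p(z):
        acc = 0j
        for c in coeffs:
            acc = acc * z + c              # Horner evaluation
        return acc
    # Distinct, non-real starting points to break symmetry.
    zs = [(0.4 + 0.9j) ** (k + 1) for k in range(n)]
    for _ in range(iters):
        nxt = []
        for i, zi in enumerate(zs):
            w = 1 + 0j
            for j, zj in enumerate(zs):
                if j != i:
                    w *= zi - zj           # product over the other approximations
            nxt.append(zi - p(zi) / w)     # Weierstrass correction
        zs = nxt
    return zs

# (z - 1)(z - 2)(z - 3) = z^3 - 6z^2 + 11z - 6
roots = durand_kerner([1, -6, 11, -6])
```

The correction term p(z_i) / prod(z_i - z_j) needs only polynomial evaluations, no derivatives, which is the "derivative-free" property the abstract's ninth-order family shares.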