Recent Super-Resolution (SR) algorithms often suffer from excessive model complexity, high computational costs, and limited flexibility across varying image scales. To address these challenges, we propose DDNet, a dynamic and lightweight SR framework designed for arbitrary scaling factors. DDNet integrates a residual learning structure with an Adaptive Fusion Feature Block (AFB) and a scale-aware upsampling module, effectively reducing parameter overhead while preserving reconstruction quality. Additionally, we introduce DDNetGAN, an enhanced variant that leverages a relativistic Generative Adversarial Network (GAN) to further improve texture realism. To validate the proposed models, we conduct extensive training on the DIV2K and Flickr2K datasets and evaluate performance across standard benchmarks including Set5, Set14, Urban100, Manga109, and BSD100. Our experiments cover both symmetric and asymmetric upscaling factors and incorporate ablation studies to assess key components. Results show that DDNet and DDNetGAN achieve competitive performance compared with mainstream SR algorithms, demonstrating a strong balance between accuracy, efficiency, and flexibility. These findings highlight the potential of our approach for practical real-world super-resolution applications.
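The arbitrary-scale, possibly asymmetric, upsampling this abstract refers to can be illustrated with a plain coordinate-mapping bilinear resizer. This is a generic sketch, not DDNet's learned scale-aware module; the function name and the half-pixel sampling convention are illustrative assumptions.

```python
def resize_bilinear(img, sy, sx):
    """Resize a 2-D grayscale image (list of lists) by arbitrary, possibly
    asymmetric, scale factors sy and sx using bilinear interpolation."""
    h, w = len(img), len(img[0])
    H, W = max(1, round(h * sy)), max(1, round(w * sx))
    out = []
    for i in range(H):
        # map the output pixel centre back into input coordinates (half-pixel convention)
        y = min(max((i + 0.5) / sy - 0.5, 0.0), h - 1.0)
        y0 = int(y)
        y1 = min(y0 + 1, h - 1)
        fy = y - y0
        row = []
        for j in range(W):
            x = min(max((j + 0.5) / sx - 0.5, 0.0), w - 1.0)
            x0 = int(x)
            x1 = min(x0 + 1, w - 1)
            fx = x - x0
            # weighted average of the four neighbouring input pixels
            row.append(img[y0][x0] * (1 - fy) * (1 - fx)
                       + img[y0][x1] * (1 - fy) * fx
                       + img[y1][x0] * fy * (1 - fx)
                       + img[y1][x1] * fy * fx)
        out.append(row)
    return out
```

A learned method such as DDNet would replace the fixed bilinear weights with predicted ones, but the coordinate mapping for non-integer and asymmetric scales is the same.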
A new limited memory symmetric rank one algorithm is proposed. It combines a modified self-scaled symmetric rank one (SSR1) update with limited memory and nonmonotone line search techniques. In this algorithm, the descent search direction is generated by the inverse limited memory SSR1 update, thus simplifying the computation. A numerical comparison of the algorithm with the well-known limited memory BFGS algorithm is given. The comparison results indicate that the new algorithm can solve a class of large-scale unconstrained optimization problems.
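For context, the plain symmetric rank one update on a Hessian approximation B can be sketched as below. This is the textbook SR1 formula, not the paper's self-scaled, limited-memory, inverse-update variant; the skip safeguard and its threshold are standard illustrative choices.

```python
def sr1_update(B, s, y, eps=1e-8):
    """Textbook SR1 update: B + (r r^T) / (r^T s) with r = y - B s,
    where s is the step and y the gradient difference.
    The update is skipped when the denominator is too small (standard safeguard)."""
    n = len(s)
    Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    r = [y[i] - Bs[i] for i in range(n)]
    denom = sum(r[i] * s[i] for i in range(n))
    norm_r = sum(ri * ri for ri in r) ** 0.5
    norm_s = sum(si * si for si in s) ** 0.5
    if abs(denom) <= eps * norm_r * norm_s:
        return B  # unstable or degenerate update: keep the old approximation
    return [[B[i][j] + r[i] * r[j] / denom for j in range(n)] for i in range(n)]
```

By construction the updated matrix satisfies the secant condition B_{k+1} s = y, which is what a quasi-Newton update is required to do.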
To overcome drawbacks such as irregular circuit construction and low system throughput that exist in conventional methods, a new factor correction scheme for the coordinate rotation digital computer (CORDIC) algorithm is proposed. Based on the relationship between the iteration formulae, a new iteration formula is introduced, which reduces the correction operation to several simple shifting and adding operations. As one key part, the effects caused by rounding error are analyzed mathematically, and it is concluded that these effects can be reduced by an appropriate selection of coefficients in the iteration formula. The model is then set up in Matlab and coded in Verilog HDL. The proposed algorithm is also synthesized and verified on a field-programmable gate array (FPGA). The results show that, for the same precision, this new scheme requires only one additional clock cycle and no change in the elementary iteration compared with the conventional algorithm. In addition, the circuit realization is regular and the change in system throughput is minimal.
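The conventional CORDIC baseline that the correction scheme improves on can be sketched as follows: rotation-mode iterations built only from shifts and adds, followed by a multiplication by the accumulated gain factor K. This is the standard algorithm, not the paper's shift-add correction scheme, and it is valid for angles within the CORDIC convergence range (about ±1.74 rad).

```python
import math

def cordic(theta, n=32):
    """Conventional rotation-mode CORDIC: returns (cos(theta), sin(theta)).
    Each iteration uses only shifts (2**-i) and adds; the scale factor K
    compensates the accumulated gain at the end."""
    # K = product of cos(atan(2^-i)); in hardware this is a precomputed constant
    K = 1.0
    for i in range(n):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0          # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
        z -= d * math.atan(2.0 ** (-i))
    return x * K, y * K
```

The final multiplication by K is exactly the "factor correction" step that the proposed scheme replaces with shift-and-add operations at the cost of one extra clock cycle.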
Based on results comparing the chaos characteristics of the one-dimensional iterative chaotic self-map x = sin(2/x), which has infinite collapses within the finite region [-1, 1], to some representative iterative chaotic maps with finite collapses (e.g., the Logistic map, Tent map, and Chebyshev map), a new adaptive mutative scale chaos optimization algorithm (AMSCOA) is proposed using the chaos model x = sin(2/x). In the optimization algorithm, to ensure its advantages of fast convergence and high precision during the search process, two measures are taken: 1) the search space of the optimized variables is reduced continuously by the adaptive mutative scale method, and the search precision is enhanced accordingly; 2) the maximum cycle count is used as the control criterion. Calculation examples on three test functions reveal that the adaptive mutative scale chaos optimization algorithm has both high search speed and high precision.
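The two measures above can be sketched as a loop that samples candidates through the chaotic map x_{k+1} = sin(2/x_k) and periodically contracts the search window around the incumbent best. This is a generic illustration of the mutative-scale idea, not the authors' exact AMSCOA; the parameter values and the guard against the collapse point at x = 0 are illustrative assumptions.

```python
import math

def chaos_optimize(f, lo, hi, iters=300, rounds=8, shrink=0.5):
    """Minimise f on [lo, hi] by chaotic sampling with an adaptively
    shrinking (mutative-scale) search window."""
    x = 0.7  # chaotic state in [-1, 1]
    best_x, best_f = lo, f(lo)
    for _ in range(rounds):
        for _ in range(iters):
            if abs(x) < 1e-12:       # avoid division blow-up at the collapse point
                x = 0.7
            x = math.sin(2.0 / x)    # chaotic self-map with infinite collapses
            cand = lo + (x + 1.0) * 0.5 * (hi - lo)   # map [-1, 1] onto [lo, hi]
            fc = f(cand)
            if fc < best_f:
                best_x, best_f = cand, fc
        # adaptive mutative scale: contract the window around the incumbent best
        half = (hi - lo) * shrink * 0.5
        lo, hi = max(lo, best_x - half), min(hi, best_x + half)
    return best_x, best_f
```

Each shrinking round raises the effective sampling resolution, which is how the method trades a fixed iteration budget for higher precision.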
In order to avoid such problems as slow convergence and local optimal solutions in simple genetic algorithms, a new hybrid genetic algorithm is proposed. In this algorithm, a mutative scale chaos optimization strategy is applied to the population after a genetic operation. According to the search process, the search space of the optimal variables is gradually diminished and the regulating coefficient of the secondary search process is gradually changed, which leads to quick evolution of the population. The algorithm has such advantages as fast search, precise results and convenient use. The simulation results show that the performance of the method is better than that of simple genetic algorithms.
The scaled boundary finite element method (SBFEM) is a recently developed numerical method combining advantages of both finite element methods (FEM) and boundary element methods (BEM), with its own special features as well. One of the most prominent advantages is its capability of calculating stress intensity factors (SIFs) directly from the stress solutions, whose singularities at crack tips are analytically represented. This advantage is taken in this study to model static and dynamic fracture problems. For static problems, a remeshing algorithm as simple as that used in the BEM is developed, while retaining the generality and flexibility of the FEM. Fully-automatic modelling of the mixed-mode crack propagation is then realised by combining the remeshing algorithm with a propagation criterion. For dynamic fracture problems, a newly developed series-increasing solution to the SBFEM governing equations in the frequency domain is applied to calculate dynamic SIFs. Three plane problems are modelled. The numerical results show that the SBFEM can accurately predict static and dynamic SIFs, cracking paths and load-displacement curves, using only a fraction of the degrees of freedom generally needed by the traditional finite element methods.
A simplified group search optimizer algorithm, denoted as "SGSO", for large-scale global optimization is presented in this paper, with the aim of obtaining a simple algorithm with superior performance on high-dimensional problems. The SGSO adopts an improved sharing strategy which shares information of not only the best member but also the other good members, and uses a simpler search method instead of searching by the head angle. Furthermore, the SGSO increases the percentage of scroungers to accelerate convergence speed. Compared with the genetic algorithm (GA), particle swarm optimizer (PSO) and group search optimizer (GSO), SGSO is tested on seven benchmark functions with dimensions 30, 100, 500 and 1000. It can be concluded that the SGSO has remarkably superior performance to GA, PSO and GSO for large-scale global optimization.
A new spectral matching algorithm is proposed by using the nonsubsampled contourlet transform and the scale-invariant feature transform. The nonsubsampled contourlet transform is used to decompose an image into a low-frequency image and several high-frequency images, and the scale-invariant feature transform is employed to extract feature points from the low-frequency image. A proximity matrix is constructed for the feature points of two related images. By singular value decomposition of the proximity matrix, a matching matrix (or matching result) reflecting the matching degree among feature points is obtained. Experimental results indicate that the proposed algorithm can reduce time complexity and achieve higher accuracy.
Many real-world networks are found to be scale-free. However, graph partition technology, as a technology capable of parallel computing, performs poorly when scale-free graphs are provided. The reason for this is that traditional partitioning algorithms are designed for random networks and regular networks, rather than for scale-free networks. Multilevel graph-partitioning algorithms are currently considered to be the state of the art and are used extensively. In this paper, we analyse the reasons why traditional multilevel graph-partitioning algorithms perform poorly and present a new multilevel graph-partitioning paradigm, top-down partitioning, which derives its name from the comparison with the traditional bottom-up partitioning. A new multilevel partitioning algorithm, named the betweenness-based partitioning algorithm, is also presented as an implementation of the top-down partitioning paradigm. An experimental evaluation on seven different real-world scale-free networks shows that the betweenness-based partitioning algorithm significantly outperforms the existing state-of-the-art approaches.
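The betweenness measure that gives the proposed algorithm its name is typically computed with Brandes' algorithm. The sketch below covers only that centrality computation for unweighted, undirected graphs given as adjacency dicts; it is not the partitioner itself, and the raw (unhalved, unnormalised) counts are an illustrative convention.

```python
from collections import deque

def betweenness(graph):
    """Brandes' algorithm: node betweenness centrality of an unweighted graph
    given as {node: [neighbours]}. Returns raw directed-pair counts."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        # BFS phase: shortest-path counts (sigma) and predecessor lists
        stack, preds = [], {v: [] for v in graph}
        sigma = {v: 0 for v in graph}; sigma[s] = 1
        dist = {v: -1 for v in graph}; dist[s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # accumulation phase: back-propagate dependencies in reverse BFS order
        delta = {v: 0.0 for v in graph}
        while stack:
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

On scale-free graphs, high-betweenness hub nodes are exactly the ones a bottom-up coarsening tends to mishandle, which motivates ranking nodes by this measure before partitioning.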
This Part II-C of our work completes the factorizational theory of asymptotic expansions in the real domain. Here we present two algorithms for constructing canonical factorizations of a disconjugate operator starting from a basis of its kernel which forms a Chebyshev asymptotic scale at an endpoint. These algorithms arise quite naturally in our asymptotic context and prove very simple in special cases and/or for scales with a small number of terms. All the results in the three Parts of this work are well illustrated by a class of asymptotic scales featuring interesting properties. Examples and counterexamples complete the exposition.
Due to the recent proliferation of cyber-attacks, highly robust wireless sensor networks (WSNs) have become a critical issue, as they must survive node failures. Scale-free WSNs are essential because they endure random attacks effectively. But they are susceptible to malicious attacks, which mainly target particular significant nodes. Therefore, the robustness of the network becomes important for ensuring network security. This paper presents a Robust Hybrid Artificial Fish Swarm Simulated Annealing Optimization (RHAFS-SA) algorithm. It is introduced for improving the robustness of scale-free networks against malicious attacks (MA) with no change in degree distribution. The proposed RHAFS-SA is an enhanced version of the Improved Artificial Fish Swarm Algorithm (IAFSA) obtained by incorporating the simulated annealing (SA) algorithm. The proposed RHAFS-SA algorithm frees the IAFSA from unforeseen oscillation and speeds up the convergence rate. For experimentation, scale-free networks are produced by the Barabási–Albert (BA) model, and real-world networks are employed for testing the outcome on both synthetic scale-free and real-world networks. The experimental results show that the RHAFS-SA model is superior to other models in terms of diverse aspects.
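The simulated annealing ingredient of such a hybrid contributes its Metropolis acceptance rule, sketched below. This shows only the SA acceptance step, not the fish-swarm moves or the degree-preserving edge swaps; the injectable `rng` parameter is an illustrative choice made for testability.

```python
import math
import random

def sa_accept(delta, temperature, rng=random.random):
    """Metropolis criterion used by simulated annealing: always accept a
    non-worsening move (delta <= 0); accept a worsening move with
    probability exp(-delta / temperature), which shrinks as T cools."""
    return delta <= 0 or rng() < math.exp(-delta / temperature)
```

Accepting occasional worsening swaps is what lets the hybrid escape the local optima that a pure greedy robustness-improvement loop would get stuck in.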
Considering that hardware implementation of the normalized min-sum (NMS) decoding algorithm for low-density parity-check (LDPC) codes is difficult due to the uncertainty of the scale factor, an NMS decoding algorithm with a variable scale factor is proposed for the near-earth space LDPC code (8177, 7154) in the Consultative Committee for Space Data Systems (CCSDS) standard. The shift characteristics of the field-programmable gate array (FPGA) are used to optimize the quantization data of the check nodes, and finally the LDPC decoder is realized. The simulation and experimental results show that the designed FPGA-based LDPC decoder, by adopting the variable scale factor in the NMS decoding algorithm, improves the decoding performance, simplifies the hardware structure, accelerates the convergence speed and improves the error-correction ability.
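The check-node half of normalized min-sum can be sketched as below: each outgoing message is the scaled minimum magnitude of the other incoming messages, carrying the product of their signs. The value alpha = 0.75 is a common illustrative choice (convenient in hardware as 1/2 + 1/4, i.e. two shifts and an add); the paper's point is to make this factor variable rather than fixed.

```python
def nms_check_update(msgs, alpha=0.75):
    """Normalized min-sum check-node update.
    msgs: incoming LLR messages on the edges of one check node.
    Returns, per edge, alpha * (product of signs of the OTHER messages)
    * (minimum magnitude of the OTHER messages)."""
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        out.append(alpha * sign * min(abs(m) for m in others))
    return out
```

Scaling by alpha < 1 compensates the min-sum approximation's systematic overestimate of the exact sum-product check-node magnitudes.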
In cognitive radio, the detection probability of the primary user significantly affects the signal receiving performance for both primary and secondary users. In this paper, a new Dempster-Shafer (D-S) algorithm with credit scale for decision fusion in spectrum sensing is proposed to improve the detection performance in cognitive radio. The validity of this method is established by simulation in an environment of multiple cognitive users, who know their signal-to-noise ratios (SNRs), and a central node. The channels between the cognitive users and the central node are considered to be additive white Gaussian noise (AWGN) channels. Compared with traditional data fusion rules, the proposed D-S algorithm with credit scale provides better detection performance.
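The D-S fusion core is Dempster's rule of combination, sketched below for mass functions over subsets of a frame of discernment (here, hypotheses such as "channel occupied" H1 vs "idle" H0). This is the standard rule only; the paper's credit-scale weighting of each user's evidence is not reproduced, and the example hypothesis names are illustrative.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose keys are
    frozensets of hypotheses: m(C) is proportional to the sum of
    m1(A) * m2(B) over all A, B with A ∩ B = C, renormalised by the
    total non-conflicting mass (1 - K)."""
    combined, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + a * b
            else:
                conflict += a * b   # A ∩ B = ∅: conflicting evidence mass K
    norm = 1.0 - conflict           # assumes the sources are not in total conflict
    return {C: v / norm for C, v in combined.items()}
```

In decision fusion each sensing user contributes one mass function, the central node combines them pairwise with this rule, and the fused masses drive the occupied/idle decision.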
A multiple-time-scale algorithm is developed to numerically simulate certain structural components in civil structures where local defects inevitably exist. Spatially, the size of local defects is relatively small compared to the structural scale, so different length scales should be adopted considering efficiency and computational cost. On physical principles, different length scales correspond to different time scales; this concept lays the foundation of the framework for the multiple-time-scale algorithm. A multiple-time-scale algorithm, which involves different time steps for different regions while enforcing the compatibility of displacement, force and stress fields across the interface, is proposed. Furthermore, a defected beam component is studied as a numerical example. The structural component is divided into two regions: a coarse one and a fine one; a micro-defect exists in the fine region, and the finite element sizes of the two regions are markedly different. Correspondingly, two different time steps are adopted. With dynamic load applied to the beam, the stress and displacement distribution of the defected beam is investigated from global and local perspectives. The numerical example shows that the proposed algorithm is physically rational and computationally efficient for potential damage simulation of civil structures.
As optimization problems continue to grow in complexity, the need for effective metaheuristic algorithms becomes increasingly evident. However, the challenge lies in identifying the right parameters and strategies for these algorithms. In this paper, we introduce the adaptive multi-strategy Rabbit Algorithm (RA). RA is inspired by the social interactions of rabbits, incorporating elements such as exploration, exploitation, and adaptation to address optimization challenges. It employs three distinct subgroups, comprising male, female, and child rabbits, to execute a multi-strategy search. Key parameters, including the distance factor, balance factor, and learning factor, strike a balance between precision and computational efficiency. We offer practical recommendations for fine-tuning five essential RA parameters, making them versatile and independent. RA is capable of autonomously selecting adaptive parameter settings and mutation strategies, enabling it to successfully tackle a range of 17 CEC05 benchmark functions with dimensions scaling up to 5000. The results underscore RA's superior performance in large-scale optimization tasks, surpassing other state-of-the-art metaheuristics in convergence speed, computational precision, and scalability. Finally, RA has demonstrated its proficiency in solving complicated optimization problems in real-world engineering by completing 10 problems in CEC2020.
This paper presents a modified frequency scaling algorithm for frequency modulated continuous wave synthetic aperture radar (FMCW SAR) data processing. The relative motion between radar and target in FMCW SAR, both during reception and between transmission and reception, introduces serious dilation in the received signal. The dilation can cause serious distortions in images reconstructed using conventional signal processing methods. The received signal is derived and its form in the range-Doppler domain is given. The relation between the phase resulting from antenna motion and the azimuth frequency is analyzed. The modified frequency scaling algorithm is proposed to process the received signal with serious dilation, and it can effectively eliminate the impact of the dilation. The algorithm's performance is shown by the simulation results.
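For background, conventional FMCW processing dechirps the echo and reads target range off the beat frequency f_b = 2 R K / c (chirp slope K). The sketch below recovers range from a dechirped beat signal via a DFT peak under the idealised stop-and-go assumption; the paper's point is precisely that intra-sweep motion (dilation) breaks this model, which the modified frequency scaling corrects. Function name and parameter choices are illustrative.

```python
import cmath
import math

def range_from_beat(samples, fs, slope, c=3e8):
    """Estimate target range from a dechirped FMCW beat signal.
    Finds the beat frequency as the peak of a brute-force DFT
    (positive-frequency half) and inverts f_b = 2 R K / c."""
    n = len(samples)
    half = n // 2
    spectrum = [abs(sum(samples[k] * cmath.exp(-2j * math.pi * m * k / n)
                        for k in range(n)))
                for m in range(half)]
    peak = max(range(half), key=spectrum.__getitem__)   # strongest beat bin
    f_beat = peak * fs / n
    return c * f_beat / (2.0 * slope)
```

With a 1 GHz/ms chirp (K = 1e12 Hz/s), a target at 150 m produces a 1 MHz beat tone; any uncompensated dilation smears this tone and biases the range estimate.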
We present a deterministic algorithm for large-scale VLSI module placement. Following the less flexibility first (LFF) principle, we simulate a manual packing process in which the concept of placement by stages is introduced to reduce the overall evaluation complexity. The complexity of the proposed algorithm is (N1 + N2) × O(n^2) + N3 × O(n^4 lg n), where N1, N2, and N3 denote the number of modules in each stage, N1 + N2 + N3 = n, and N3 << n. This complexity is much less than the original time complexity of O(n^5 lg n). Experimental results indicate that this approach is quite promising.
Funding (DDNet): supported by the Sichuan Science and Technology Program [2023YFSY0026, 2023YFH0004].
Funding (limited memory SR1): the National Natural Science Foundation of China (10471062) and the Natural Science Foundation of Jiangsu Province (BK2006184).
Funding (CORDIC): the National High Technology Research and Development Program of China (863 Program) (No. 2007AA01Z280).
Funding (AMSCOA): the Hunan Provincial Natural Science Foundation of China (No. 06JJ50103) and the National Natural Science Foundation of China (No. 60375001).
Funding (SBFEM): the project supported by the National Natural Science Foundation of China (50579081) and the Australian Research Council (DP0452681). The English text was polished by Keren Wang.
Funding (SGSO): the Science and Technology Planning Project of Hunan Province (No. 2011TP4016-3) and the Construct Program of the Key Discipline (Technology of Computer Application) in Xiangnan University.
Funding (spectral matching): supported by the National Natural Science Foundation of China (61172127, 11071002), the Specialized Research Fund for the Doctoral Program of Higher Education (20113401110006), and the Innovative Research Team of 211 Project in Anhui University (KJTD007A).
Funding (graph partitioning): supported by the National Science Foundation for Distinguished Young Scholars of China (Grant Nos. 61003082 and 60903059), the National Natural Science Foundation of China (Grant No. 60873014), and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 60921062).
Funding: Supported by the National High Technology Research and Development Programme of China (No. 2007AA01Z268), the National Natural Science Foundation of China (No. 60702028), and the Starting Fund for Science Research of NJUST (AIM1947).
Abstract: In cognitive radio, the detection probability of the primary user significantly affects the signal-reception performance of both primary and secondary users. In this paper, a new Dempster-Shafer (D-S) algorithm with credit scale for decision fusion in spectrum sensing is proposed to improve detection performance in cognitive radio. The validity of the method is established by simulation in an environment of multiple cognitive users, each knowing its own signal-to-noise ratio (SNR), and a central node. The channels between the cognitive users and the central node are modeled as additive white Gaussian noise (AWGN) channels. Compared with traditional data-fusion rules, the proposed D-S algorithm with credit scale provides better detection performance.
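The credit-scale weighting is specific to the paper, but the underlying fusion step is Dempster's rule of combination. Below is a minimal sketch for two basic probability assignments over a frame of discernment; the dict-of-frozensets representation is an assumption for illustration.

```python
from itertools import product

def ds_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments, each a dict mapping frozenset (subset of the frame
    of discernment) to mass. Conflicting mass (empty intersection)
    is discarded and the remaining mass renormalized."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    norm = 1.0 - conflict
    if norm <= 0.0:
        raise ValueError("total conflict: masses cannot be combined")
    return {s: v / norm for s, v in combined.items()}
```

In a spectrum-sensing setting, the two hypotheses would be "primary user present" and "primary user absent", with each cognitive user contributing one mass assignment to the central node.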
Funding: Supported by the NSFC (No. 11302078), the China Postdoctoral Science Foundation (No. 2013M531139), and the Shanghai Postdoctoral Sustentation Fund (No. 12R21412000).
Abstract: A multiple-time-scale algorithm is developed to numerically simulate structural components in civil structures where local defects inevitably exist. Spatially, the size of a local defect is small relative to the structural scale, so different length scales should be adopted for efficiency and computational cost. Physically, different length scales correspond to different time scales, and this correspondence lays the foundation of the framework for the multiple-time-scale algorithm. The proposed algorithm uses different time steps for different regions while enforcing the compatibility of the displacement, force, and stress fields across the interface. A defected beam component is studied as a numerical example: the component is divided into a coarse region and a fine region, a micro-defect exists in the fine region, the finite element sizes of the two regions differ greatly, and correspondingly two different time steps are adopted. With a dynamic load applied to the beam, the stress and displacement distributions of the defected beam are investigated from both global and local perspectives. The example shows that the proposed algorithm is physically rational and computationally efficient for potential damage simulation of civil structures.
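The paper's finite-element formulation is not reproduced in the abstract; the toy below sketches only the core subcycling idea on a 1D spring-mass chain. The fine region advances several small steps per coarse step, with the interface displacement interpolated in time to mirror the compatibility enforcement described above. All parameters and the symplectic-Euler integrator are illustrative assumptions.

```python
def subcycle_chain(n_coarse=8, n_fine=8, ratio=4, steps=200, dt=0.01):
    """Two-region subcycling on a 1D spring-mass chain: the coarse
    region takes one step of size dt while the fine region takes
    'ratio' substeps of dt/ratio, with the interface displacement
    linearly interpolated in time across the coarse step."""
    n = n_coarse + n_fine
    u = [0.0] * n
    v = [0.0] * n
    u[0] = 0.1  # initial disturbance at the left (coarse) end
    k, m = 1.0, 1.0

    def accel(u, i):
        left = u[i - 1] if i > 0 else 0.0    # fixed ends
        right = u[i + 1] if i < n - 1 else 0.0
        return k * (left - 2.0 * u[i] + right) / m

    for _ in range(steps):
        iface = n_coarse - 1
        u_iface_old = u[iface]
        # coarse region: one explicit (symplectic Euler) step of size dt
        a = [accel(u, i) for i in range(n_coarse)]
        for i in range(n_coarse):
            v[i] += dt * a[i]
            u[i] += dt * v[i]
        u_iface_new = u[iface]
        # fine region: 'ratio' substeps, interface interpolated in time
        for s in range(ratio):
            frac = (s + 1) / ratio
            u[iface] = u_iface_old + frac * (u_iface_new - u_iface_old)
            a = [accel(u, i) for i in range(n_coarse, n)]
            for j, i in enumerate(range(n_coarse, n)):
                v[i] += (dt / ratio) * a[j]
                u[i] += (dt / ratio) * v[i]
        u[iface] = u_iface_new
    return u, v
```

In the paper's setting the fine region would hold the micro-defect and a much finer mesh, which is what forces the smaller stable time step there.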
Abstract: As optimization problems continue to grow in complexity, the need for effective metaheuristic algorithms becomes increasingly evident; the challenge lies in identifying the right parameters and strategies for these algorithms. In this paper, we introduce the adaptive multi-strategy Rabbit Algorithm (RA). RA is inspired by the social interactions of rabbits, incorporating exploration, exploitation, and adaptation to address optimization challenges. It employs three distinct subgroups, comprising male, female, and child rabbits, to execute a multi-strategy search. Key parameters, including a distance factor, a balance factor, and a learning factor, strike a balance between precision and computational efficiency. We offer practical recommendations for fine-tuning five essential RA parameters, making them versatile and independent. RA can autonomously select adaptive parameter settings and mutation strategies, enabling it to successfully tackle 17 CEC05 benchmark functions with dimensions scaling up to 5000. The results underscore RA's superior performance in large-scale optimization tasks, surpassing other state-of-the-art metaheuristics in convergence speed, computational precision, and scalability. Finally, RA demonstrates its proficiency in solving complicated real-world engineering optimization problems by completing 10 problems from CEC2020.
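The abstract does not give RA's update equations, so the sketch below only illustrates the general shape of a three-subgroup population search on a test function. The subgroup roles (explorers, followers, local refiners) and every numeric choice are hypothetical stand-ins, not the paper's actual male/female/child rules.

```python
import random

def sphere(x):
    """Simple unimodal test function, minimum 0 at the origin."""
    return sum(v * v for v in x)

def three_group_search(f, dim=5, pop=30, iters=200, seed=0):
    """Minimal three-subgroup population search in the spirit of the
    Rabbit Algorithm described above: one subgroup explores globally,
    one moves toward the current best, one perturbs the best locally.
    A shrinking step size plays the role of a balance factor."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=f)
    for t in range(iters):
        shrink = 1.0 - t / iters          # explore early, exploit late
        for i, x in enumerate(X):
            if i < pop // 3:              # subgroup 1: global exploration
                cand = [v + rng.gauss(0, 1.0) * shrink for v in x]
            elif i < 2 * pop // 3:        # subgroup 2: move toward best
                cand = [v + rng.random() * (b - v) for v, b in zip(x, best)]
            else:                         # subgroup 3: refine around best
                cand = [b + rng.gauss(0, 0.1) * shrink for b in best]
            if f(cand) < f(x):            # greedy replacement
                X[i] = cand
        best = min(X + [best], key=f)
    return best, f(best)
```

Even this crude division of labor tends to beat a single uniform strategy on simple benchmarks, which is the intuition behind RA's multi-strategy design.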
Abstract: This paper presents a modified frequency-scaling algorithm for frequency-modulated continuous-wave synthetic aperture radar (FMCW SAR) data processing. In FMCW SAR, the relative motion between radar and target during reception, and between transmission and reception, introduces serious dilation in the received signal; this dilation causes serious distortions in images reconstructed with conventional signal-processing methods. The received signal is derived and its range-Doppler-domain form is given, and the relation between the phase resulting from antenna motion and the azimuth frequency is analyzed. The modified frequency-scaling algorithm is proposed to process received signals with serious dilation and can effectively eliminate the impact of the dilation. The algorithm's performance is demonstrated by simulation results.
Abstract: We present a deterministic algorithm for large-scale VLSI module placement. Following the less-flexibility-first (LFF) principle, we simulate a manual packing process in which the concept of placement by stages is introduced to reduce the overall evaluation complexity. The complexity of the proposed algorithm is (N1 + N2) × O(n^2) + N3 × O(n^4 lg n), where N1, N2, and N3 denote the numbers of modules in each stage, N1 + N2 + N3 = n, and N3 ≪ n. This complexity is much less than the original time complexity of O(n^5 lg n). Experimental results indicate that this approach is quite promising.