Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks at reduced training cost, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged to address these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on the specific values chosen than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, shrinking the trainable parameters of a larger layer preserves fine-tuning accuracy better than doing so in a smaller layer. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
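The rank/parameter trade-off in finding (1) can be made concrete with a small numerical sketch. This is illustrative only: the layer shapes are assumed LLaMA-7B-like values, not figures from the paper.

```python
import numpy as np

def lora_param_count(d_in, d_out, r):
    """Trainable parameters of a rank-r LoRA adapter: A is (r, d_in), B is (d_out, r)."""
    return r * d_in + d_out * r

def lora_forward(x, W, A, B, alpha):
    """Frozen base weight plus scaled low-rank update: x W^T + (alpha/r) x A^T B^T."""
    r = A.shape[0]
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

# Assumed LLaMA-7B-like shapes: MLP up-projection (large) vs. attention query projection (smaller).
d_model, d_ff, r = 4096, 11008, 8
mlp_frac = lora_param_count(d_model, d_ff, r) / (d_model * d_ff)
attn_frac = lora_param_count(d_model, d_model, r) / (d_model * d_model)
print(f"rank-{r}: adapter is {mlp_frac:.3%} of the MLP weight, {attn_frac:.3%} of the attention weight")
```

The same rank is a smaller relative adaptation budget for the larger layer, which is one way to read the finding that MLP layers tolerate rank reduction better than self-attention layers.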
Massive computational complexity and memory requirements of artificial intelligence models impede their deployability on edge computing devices of the Internet of Things (IoT). While Power-of-Two (PoT) quantization has been proposed to improve the efficiency of edge inference for Deep Neural Networks (DNNs), existing PoT schemes require a huge amount of bit-wise manipulation and have large memory overhead, and their efficiency is bounded by the bottleneck of computation latency and memory footprint. To tackle this challenge, we present an efficient inference approach based on PoT quantization and model compression. An integer-only scalar PoT quantization (IOS-PoT) is designed jointly with a distribution loss regularizer, wherein the regularizer minimizes quantization errors and training disturbances. Additionally, two-stage model compression is developed to effectively reduce memory requirements and alleviate bandwidth usage in communications of networked heterogeneous learning systems. The product look-up table (P-LUT) inference scheme is leveraged to replace bit-shifting with only indexing and addition operations, achieving low-latency computation and enabling efficient edge accelerators. Finally, comprehensive experiments on Residual Networks (ResNets) and efficient architectures with the Canadian Institute for Advanced Research (CIFAR), ImageNet, and Real-world Affective Faces Database (RAF-DB) datasets indicate that our approach achieves a 2× to 10× improvement in the reduction of both weight size and computation cost in comparison to state-of-the-art methods. A P-LUT accelerator prototype is implemented on the Xilinx KV260 Field Programmable Gate Array (FPGA) platform for accelerating convolution operations, with performance results showing that P-LUT reduces memory footprint by 1.45× and achieves more than 3× power efficiency and 2× resource efficiency compared with the conventional bit-shifting scheme.
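A minimal sketch of the two ingredients, PoT quantization and table-based products, may help; this is heavily simplified, and the paper's IOS-PoT and P-LUT designs are more elaborate.

```python
import math

def pot_quantize(w, n_levels=4):
    """Snap |w| to the nearest power of two (exponents 0 .. -(n_levels-1)),
    keeping the sign. Simplified scalar PoT quantizer, not the full IOS-PoT."""
    if w == 0.0:
        return 0.0
    e = round(math.log2(abs(w)))
    e = max(min(e, 0), -(n_levels - 1))  # clip the exponent range
    return math.copysign(2.0 ** e, w)

# Product look-up table: every pairwise product of quantized levels is
# precomputed, so a "multiplication" at inference time is one indexed read.
levels = [2.0 ** -i for i in range(4)]               # 1, 0.5, 0.25, 0.125
plut = [[a * b for b in levels] for a in levels]
assert plut[1][2] == levels[1] * levels[2] == 0.125  # indexing replaces shifting
```

With PoT weights, a multiply is classically a bit shift; the P-LUT idea trades even the shift for an indexed read plus accumulation, which is what makes the accelerator datapath cheap.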
Quantization noise caused by the analog-to-digital converter (ADC) gives rise to reliability performance degradation in communication systems. In this paper, a quantized non-Hermitian symmetry (NHS) orthogonal frequency-division multiplexing-based visible light communication (OFDM-VLC) system is presented. In order to analyze the effect of ADC resolution on NHS OFDM-VLC, a quantized mathematical model of NHS OFDM-VLC is established. Based on the proposed quantized model, a closed-form bit error rate (BER) expression is derived. The theoretical analysis and simulation results both confirm the effectiveness of the obtained BER formula for high-resolution ADCs. In addition, channel coding is helpful in compensating for the BER performance loss due to the use of a lower-resolution ADC.
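As a toy illustration of why ADC resolution sets the error floor, the following uses a generic mid-rise uniform quantizer, not the paper's NHS OFDM-VLC model.

```python
import math

def quantize(x, bits, full_scale=1.0):
    """Idealized mid-rise uniform ADC with 2**bits levels across +/- full_scale."""
    step = 2 * full_scale / (2 ** bits)
    level = step * (math.floor(x / step) + 0.5)
    return max(-full_scale + step / 2, min(full_scale - step / 2, level))

# SQNR of a full-scale sine vs. the 6.02*b + 1.76 dB rule of thumb.
N, bits = 4096, 8
sig = [math.sin(2 * math.pi * 7 * n / N) for n in range(N)]
err = [quantize(s, bits) - s for s in sig]
sqnr_db = 10 * math.log10(sum(s * s for s in sig) / sum(e * e for e in err))
print(f"{bits}-bit SQNR: {sqnr_db:.1f} dB (rule of thumb: {6.02 * bits + 1.76:.1f} dB)")
```

Each extra bit buys roughly 6 dB of signal-to-quantization-noise ratio, which is why channel coding is needed to recover the loss when a lower-resolution ADC is used.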
The rapid growth of digital data necessitates advanced natural language processing (NLP) models like BERT (Bidirectional Encoder Representations from Transformers), known for its superior performance in text classification. However, BERT's size and computational demands limit its practicality, especially in resource-constrained settings. This research compresses the BERT base model for Bengali emotion classification through knowledge distillation (KD), pruning, and quantization techniques. Despite Bengali being the sixth most spoken language globally, NLP research in this area is limited. Our approach addresses this gap by creating an efficient BERT-based model for Bengali text. We explored 20 combinations of KD, quantization, and pruning, resulting in improved speedup, fewer parameters, and reduced memory size. Our best results demonstrate significant improvements in both speed and efficiency. For instance, in the case of mBERT, we achieved a 3.87× speedup and a 4× compression ratio with a Distil+Prune+Quant combination that reduced parameters from 178M to 46M, while the memory size decreased from 711 MB to 178 MB. These results offer scalable solutions for NLP tasks in various languages and advance the field of model compression, making these models suitable for real-world applications in resource-limited environments.
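The mBERT figures quoted above are internally consistent, as a quick arithmetic check shows (numbers taken from the abstract):

```python
params_before, params_after = 178, 46  # millions of parameters
mem_before, mem_after = 711, 178       # model size in MB

print(f"parameter reduction: {params_before / params_after:.2f}x")  # ~3.87x
print(f"memory reduction:    {mem_before / mem_after:.2f}x")        # ~4x
```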
We consider a relativistic two-fluid model of superfluidity, in which the superfluid is described by an order parameter that is a complex scalar field satisfying the nonlinear Klein-Gordon equation (NLKG). The coupling to the normal fluid is introduced via a covariant current-current interaction, which results in the addition of an effective potential whose imaginary part describes particle transfer between superfluid and normal fluid. Quantized vorticity arises in a class of singular solutions, and the related vortex dynamics is incorporated in the modified NLKG, facilitating numerical analysis that is usually very complicated in the phenomenology of vortex filaments. The dual transformation to a string theory description (Kalb-Ramond) of quantum vorticity, the Magnus force, and the mutual friction between quantized vortices and the normal fluid are also studied.
In order to overcome data quantization, network-induced delay, network packet dropouts, and wrong sequences in nonlinear networked control systems, a novel nonlinear networked control system model is built using the T-S fuzzy method. Two time-varying quantizers are added to the model. The key analysis steps are to construct an improved interval-delay-dependent Lyapunov functional and to introduce the free-weighting matrix. By making use of the parallel distributed compensation technique and the convexity of the matrix function, improved stabilization and stability criteria are obtained. Simulation experiments show that the parameters of controllers and quantizers satisfying a given performance level can be obtained by solving a set of LMIs. An application to a nonlinear mass-spring system shows that the proposed method is effective.
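The abstract does not specify the quantizer structure; a common sector-bounded choice in quantized networked-control analysis is the logarithmic quantizer, sketched below for illustration only (the paper's two time-varying quantizers may differ).

```python
import math

def log_quantizer(v, u0=1.0, rho=0.5):
    """Logarithmic quantizer with levels u0 * rho^i and sector bound
    delta = (1 - rho)/(1 + rho): the output q(v) always satisfies
    |q(v) - v| <= delta * |v|, the property exploited in LMI-based
    stability analysis of quantized control loops."""
    if v == 0.0:
        return 0.0
    delta = (1 - rho) / (1 + rho)
    # pick the level index so that u_i lies in [|v|(1-delta), |v|(1+delta))
    i = math.floor(math.log(abs(v) * (1 - delta) / u0, rho))
    return math.copysign(u0 * rho ** i, v)

delta = (1 - 0.5) / (1 + 0.5)
for v in (0.03, -0.4, 2.7, 123.4):
    q = log_quantizer(v)
    assert abs(q - v) <= delta * abs(v) + 1e-12  # sector bound holds
```

The sector bound lets the quantization error be absorbed into the uncertainty terms of the Lyapunov/LMI analysis rather than modeled exactly.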
Formal state space models of quantum control systems are deduced, and a scheme for establishing such models via quantization is proposed. State evolution of quantum control systems must accord with the Schrödinger equation, so it is foremost to obtain the Hamiltonian operators of the systems. Since operators of quantum systems correspond to physical quantities of classical systems, such as momentum, energy, and the Hamiltonian, Schrödinger-equation models of the corresponding quantum control systems can be obtained from classical control systems via quantization; formal state space models are then established through a suitable transformation of the Schrödinger equations. This method provides a new path for modeling in quantum control.
This paper addresses a modified auxiliary model stochastic gradient recursive parameter identification algorithm (M-AM-SGRPIA) for a class of single input single output (SISO) linear output error models with multi-threshold quantized observations, and proves the convergence of the designed algorithm. A pattern-moving-based system dynamics description method with hybrid metrics is proposed for a kind of practical single input multiple output (SIMO) or SISO nonlinear system, and a SISO linear output error model with multi-threshold quantized observations is adopted to approximate the unknown system. The system input design is accomplished using the measurement technique of the random repeatability test, and the probabilistic characteristic of the explicit metric value is employed to estimate the implicit metric value of the pattern class variable. A modified auxiliary model stochastic gradient recursive algorithm (M-AM-SGRA) is designed to identify the model parameters, and the contraction mapping principle proves its convergence. Two numerical examples demonstrate the feasibility and effectiveness of the proposed identification algorithm.
A half-harmonic oscillator, which gets its name because the position coordinate is strictly positive, has been quantized and shown to be a physically correct quantization. This positive result was found using affine quantization (AQ). The main purpose of this paper is to compare results of this new quantization procedure with those of canonical quantization (CQ). Using Ashtekar-like classical variables and CQ, we quantize the same toy model. While these two quantizations lead to different results, both would reduce to the same classical Hamiltonian as ħ → 0. Since the two quantizations have differing results, only one of them can be physically correct. Two brief sections also illustrate how AQ can correctly help quantum gravity and the quantization of most field theory problems.
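For orientation, the operator-level difference between the two procedures can be summarized as follows; this uses standard affine-quantization identities in the style of Klauder's treatments, and the conventions are our assumption rather than an excerpt from the paper.

```latex
% Classical half-harmonic oscillator: H = (p^2 + q^2)/2 on q > 0.
% CQ promotes (q, p) to (Q, P); AQ instead uses the affine pair (q, d),
% d = pq, promoted to Q and D = (PQ + QP)/2, since P alone fails to be
% self-adjoint on the half-line. Writing p^2 = d\,q^{-2}\,d and quantizing:
\[
  \mathcal{H}_{\mathrm{AQ}}
  \;=\; \tfrac{1}{2}\Bigl(P^{2} + \tfrac{3}{4}\,\hbar^{2}\,Q^{-2} + Q^{2}\Bigr),
  \qquad Q > 0 .
\]
% AQ thus differs from CQ by an \hbar^2-dependent barrier term, which
% vanishes as \hbar \to 0, so both recover the same classical Hamiltonian.
```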
Using the Faddeev-Jackiw (FJ) quantization method, this paper treats the CP^1 nonlinear sigma model with a Chern-Simons term. The generalized FJ brackets are obtained in the framework of this quantization method and agree with the results obtained using Dirac's method.
At the present time, the Industrial Internet of Things (IIoT) has swiftly evolved and emerged, and image data collected by terminal devices or IoT nodes is tied to users' private data. The use of image sensors as an automation tool for the IIoT is increasingly common. Because such a deployment transfers an enormous number of images at any one time, one of its most significant challenges is reducing the total quantity of data sent, and hence the bandwidth consumed, without compromising image quality. Image compression in the sensor, on the other hand, expedites data transfer while reducing bandwidth use. Traditional methods of protecting sensitive data are rendered less effective in an IoT-dominated environment owing to the involvement of third parties. An image encryption model provides a safe and adaptable way to protect the confidentiality of image transformation and storage inside an IIoT system, helping to keep image datasets safe. The Linde–Buzo–Gray (LBG) methodology is a widely used vector quantization (VQ) algorithm, VQ being a relatively recent form of image compression. The purpose of this research is therefore to create an artificial hummingbird optimization approach that combines LBG-enabled codebook creation and encryption (AHBO-LBGCCE) for use in an IIoT setting. First, the AHBO-LBGCCE method uses the LBG model in conjunction with the AHBO algorithm to construct the VQ. The Burrows-Wheeler Transform (BWT) model is used to accomplish codebook compression. In addition, the Blowfish algorithm carries out the encryption procedure so that security may be attained. A comprehensive experimental investigation verifies the effectiveness of the proposed algorithm in comparison with other algorithms, and the experimental values are examined from a variety of perspectives.
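Stripped to its core, LBG codebook construction is codebook splitting plus Lloyd refinement. The 1-D sketch below is illustrative only; the paper couples LBG with AHBO search and operates on image vectors rather than scalars.

```python
import random

def lbg_codebook(data, size, eps=1e-3, rounds=20):
    """Generalized Lloyd / LBG: grow the codebook by splitting each codeword,
    then refine with nearest-neighbour assignment and centroid updates.
    Plain 1-D LBG; `size` is assumed to be a power of two here."""
    code = [sum(data) / len(data)]  # start from the global mean
    while len(code) < size:
        code = [c * (1 + s) for c in code for s in (eps, -eps)]  # split step
        for _ in range(rounds):
            cells = {i: [] for i in range(len(code))}
            for x in data:
                i = min(range(len(code)), key=lambda i: (x - code[i]) ** 2)
                cells[i].append(x)
            # move each codeword to the centroid of its cell (keep it if empty)
            code = [sum(c) / len(c) if c else code[i] for i, c in cells.items()]
    return sorted(code)

random.seed(0)
data = [random.gauss(m, 0.1) for m in (0.0, 1.0, 4.0, 9.0) for _ in range(50)]
code = lbg_codebook(data, 4)
print([round(c, 2) for c in code])  # roughly the four cluster means
```

The trained codebook is what gets compressed (via BWT in the paper) and transmitted; each image block is then represented by a codeword index.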
The so-called "global polytropic model" is based on the assumption of hydrostatic equilibrium for the solar system, or for a planet's system of satellites (like the Jovian system), described by the Lane-Emden differential equation. A polytropic sphere of polytropic index n and radius R1 represents the central component S1 (Sun or planet) of a polytropic configuration whose further components are the polytropic spherical shells S2, S3, ..., defined by the pairs of radii (R1, R2), (R2, R3), ..., respectively. R1, R2, R3, ... are the roots of the real part Re(θ) of the complex Lane-Emden function θ. Each polytropic shell is assumed to be an appropriate place for a planet, or a planet's satellite, to be "born" and "live". This scenario has been studied numerically for the cases of the solar and the Jovian systems. In the present paper, the Lane-Emden differential equation is solved numerically in the complex plane by using the Fortran code DCRKF54 (a modified Runge-Kutta-Fehlberg code of fourth and fifth order for solving initial value problems in the complex plane along complex paths). We include some trans-Neptunian objects in our numerical study.
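A real-axis analogue of the integration can be sketched in a few lines; the paper's DCRKF54 runs follow complex paths, which this toy RK4 does not attempt.

```python
import math

def lane_emden(n, xi_end, h=1e-3):
    """RK4 integration of the Lane-Emden equation
        theta'' + (2/xi) theta' + theta^n = 0,  theta(0) = 1, theta'(0) = 0,
    started from the series theta ~ 1 - xi^2/6 to step over the xi = 0
    singularity. Real-axis toy version, not the complex-plane DCRKF54."""
    def f(xi, th, dth):
        # clamp theta at 0 so fractional n never produces a complex power
        return dth, -(max(th, 0.0) ** n) - (2.0 / xi) * dth

    xi, th, dth = h, 1 - h * h / 6, -h / 3
    while xi < xi_end - 1e-12:
        k1 = f(xi, th, dth)
        k2 = f(xi + h / 2, th + h / 2 * k1[0], dth + h / 2 * k1[1])
        k3 = f(xi + h / 2, th + h / 2 * k2[0], dth + h / 2 * k2[1])
        k4 = f(xi + h, th + h * k3[0], dth + h * k3[1])
        th += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        dth += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        xi += h
    return th

# n = 1 has the closed form theta = sin(xi)/xi, whose first root is xi = pi.
print(f"theta(pi) for n=1: {lane_emden(1, math.pi):+.4f}  (exact: 0)")
```

In the global polytropic model the roots of Re(θ) play the role this first zero plays on the real axis: they delimit the shells assigned to planets or satellites.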
The quantization algorithm compresses the original network by reducing the numerical bit width of the model, which improves computation speed. Because different layers have different redundancy and different sensitivity to data bit width, reducing the bit width results in a loss of accuracy, and it is difficult to determine the optimal bit width for each part of the network while guaranteeing accuracy. Mixed precision quantization can effectively reduce the amount of computation while keeping model accuracy essentially unchanged. In this paper, a hardware-aware mixed precision quantization strategy optimal assignment algorithm adapted to low bit width is proposed, and reinforcement learning is used to automatically predict the mixed precision that meets the constraints of hardware resources. In the state-space design, the standard deviation of the weights is used to measure the distribution difference of the data, the execution-speed feedback of simulated neural network accelerator inference is used as the environment to limit the action space of the agent, and the accuracy of the quantized model after retraining is used as the reward function to guide the agent through deep reinforcement learning training. The experimental results show that the proposed method obtains a suitable layer-by-layer quantization strategy under the given computational resource constraints, and the model accuracy is effectively improved. The proposed method is highly automated, fairly general, and has strong application potential in mixed precision quantization and embedded neural network model deployment.
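A greedy stand-in for the paper's RL agent illustrates the core intuition, that weight spread should steer per-layer bit width under a global budget. Everything here, layer names and data included, is illustrative, not the paper's method.

```python
import random
import statistics

def quantize_error(ws, bits):
    """Mean squared error of symmetric uniform quantization at a given bit width."""
    scale = max(abs(w) for w in ws) / (2 ** (bits - 1) - 1)
    return statistics.fmean((w - scale * round(w / scale)) ** 2 for w in ws)

def assign_bits(layers, budget_bits, choices=(2, 4, 8)):
    """Greedy stand-in for the RL agent: start every layer at the lowest
    bit width, then spend the remaining budget where it cuts error most."""
    bits = {name: min(choices) for name in layers}
    while sum(bits.values()) < budget_bits:
        name = max((n for n in layers if bits[n] < max(choices)),
                   key=lambda n: quantize_error(layers[n], bits[n])
                             - quantize_error(layers[n], bits[n] * 2))
        bits[name] *= 2
    return bits

random.seed(1)
layers = {"conv1": [random.gauss(0, 1.0) for _ in range(256)],   # wide distribution
          "conv2": [random.gauss(0, 0.05) for _ in range(256)]}  # narrow distribution
print(assign_bits(layers, budget_bits=10))  # the wide-distribution layer gets the extra bits
```

The RL formulation in the paper replaces this greedy rule with a learned policy whose state includes the weight standard deviation and whose reward is post-retraining accuracy, but the budget-versus-sensitivity trade-off it navigates is the same.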
Funding (quantized LLM fine-tuning study): Supported by the National Key R&D Program of China (No. 2021YFB0301200) and the National Natural Science Foundation of China (No. 62025208).
Funding (PoT quantization and model compression study): Supported by the Open Fund Project of the State Key Laboratory of Intelligent Vehicle Safety Technology (Grant No. IVSTSKL-202311), Key Projects of the Science and Technology Research Programme of the Chongqing Municipal Education Commission (Grant No. KJZD-K202301505), the 2021 Cooperation Project between Chongqing Municipal Undergraduate Universities and Institutes Affiliated to the Chinese Academy of Sciences (Grant No. HZ2021015), and the Chongqing Graduate Student Research Innovation Program (Grant No. CYS240801).
Funding (quantized NHS OFDM-VLC study): Supported by the National Natural Science Foundation of China (No. 62201508), the Zhejiang Provincial Natural Science Foundation of China (Nos. LZ21F010001 and LQ23F010004), and the State Key Laboratory of Millimeter Waves, Southeast University, China (No. K202212).
Funding (quantized nonlinear networked control study): Supported by the National Natural Science Foundation of China (Nos. 60474049 and 60835001) and the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20090092120027).
Funding (multi-threshold quantized identification study): Supported by the National Natural Science Foundation of China (No. 62076025).