In previous works, the theoretical and experimental deterministic scalar kinematic, vector kinematic, scalar dynamic, and vector dynamic structures have been developed to compute the exact solution for deterministic chaos of the exponential pulsons and oscillons governed by the nonstationary three-dimensional Navier-Stokes equations. To explore properties of the kinetic energy, the current paper uses rectangular, diagonal, and triangular summations of a matrix of the kinetic energy, together with the general terms of various sums, to develop a quantization of the kinetic energy of deterministic chaos. Nested structures of a cumulative energy pulson, an energy pulson of propagation, an internal energy oscillon, a diagonal energy oscillon, and an external energy oscillon have been established. In turn, the energy pulsons and oscillons include group pulsons of propagation, internal group oscillons, diagonal group oscillons, and external group oscillons. Sequentially, the group pulsons and oscillons contain wave pulsons of propagation, internal wave oscillons, diagonal wave oscillons, and external wave oscillons. Consecutively, the wave pulsons and oscillons are composed of elementary pulsons of propagation, internal elementary oscillons, diagonal elementary oscillons, and external elementary oscillons. Topology, periodicity, and integral properties of the exponential pulsons and oscillons have been studied using the novel method of inhomogeneous Fourier expansions via eigenfunctions in coordinates and time.
Symbolic computations of the exact expansions have been performed using the experimental and theoretical programming in Maple. Results of the symbolic computations have been justified by probe visualizations.
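The rectangular, diagonal, and triangular summations mentioned above can be illustrated with a small sketch (not taken from the paper; the function names and the sample matrix are illustrative assumptions) showing three ways to accumulate the entries of a kinetic-energy matrix K = [k_{m,n}]:

```python
def rectangular_sum(K):
    """Row-by-row (rectangular) accumulation of all entries."""
    return sum(sum(row) for row in K)

def diagonal_sum(K):
    """Accumulate along anti-diagonals m + n = const."""
    M, N = len(K), len(K[0])
    total = 0.0
    for s in range(M + N - 1):          # each anti-diagonal
        for m in range(max(0, s - N + 1), min(M, s + 1)):
            total += K[m][s - m]
    return total

def triangular_sums(K):
    """Split a square matrix into diagonal, strict lower, and strict upper parts."""
    n = len(K)
    diag  = sum(K[i][i] for i in range(n))
    lower = sum(K[i][j] for i in range(n) for j in range(i))
    upper = sum(K[i][j] for i in range(n) for j in range(i + 1, n))
    return diag, lower, upper

K = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0]]
assert rectangular_sum(K) == diagonal_sum(K) == 45.0
d, l, u = triangular_sums(K)
assert d + l + u == 45.0   # the three partial sums recover the total
```

All three orderings reproduce the same total; what differs is the grouping of partial sums, which is what allows the nested pulson/oscillon decomposition of the energy.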
An aperture design technique using multi-step amplitude quantization for two-dimensional solid-state active phased arrays to achieve low sidelobes is described. It can be applied to antennas with arbitrarily complex apertures. Also, the gain drop and sidelobe degradation due to random amplitude and phase errors and element (or T/R module) failures are investigated.
The quantization thermal excitation isotherms based on the maximum triad spin number (G) of each energy level of a metal cluster were derived as functions of temperature by expanding the binomial theorem according to energy levels. From them, the quantized geometric mean heat capacity equations are expressed in sequence. Among them, five quantized geometric heat capacity equations fit best to the experimental constant-pressure heat capacity data of metal atoms. In the derivation we assume that the triad spin, composed of an electron, its proton, and its neutron in a metal cluster, becomes the basic unit of thermal excitation. The Boltzmann constant (kB) is found to be the average specific heat of an energy level in a metal cluster, and the constant (kK) is found to be the average specific heat of a photon in a metal cluster. The core triad spin made of free neutrons may exist as one additional energy level. The energy levels are grouped according to the forms of four spins about two axes. The Planck constant is obtained theoretically from the ratio of the internal energy of the metal (U) to the total isotherm number (N) through the equipartition theorem.
We compare two action integrals and identify the Lagrangian multiplier as setting up a constraint equation (on cosmological expansion). This is a direct result of the fourth equation of our manuscript, which unconventionally compares the action integral of general relativity with the second derived action integral, which then permits Equation (5), a bound on the cosmological constant. What we have done is to replace the Hamber quantum-gravity reference-based action integral with a result from John Klauder’s “Enhanced Quantization”. In doing so, with Padmanabhan’s treatment of the inflaton, we initiate an explicit bound upon the cosmological constant. The other approximation is to use the inflaton results and conflate them with John Klauder’s action principle: given the idea of a potential well, generalized by Klauder, with a wall of space-time in the pre-Planckian regime, we ask what bounds the cosmological constant prior to inflation and obtain an upper bound on the mass of a graviton. We conclude with a redo of a multiverse version of the Penrose cyclic conformal cosmology. Our objective is to show how a value of the rest mass of the heavy graviton is invariant from cycle to cycle. All this is possible due to Equation (4). We compare all these with the results of Reference [1] in the conclusion, while showing their relevance to early-universe production of black holes, with a volume of space producing 100 black holes of mass about 10^2 times the Planck mass. Initially evaluated in a space-time of about 10^3 Planck lengths in spherical extent, we assume a starting entropy of about 1000.
We justify and extend the standard model of elementary particle physics by generalizing the theory of relativity and quantum mechanics. The usual assumption that space and time are continuous implies, indeed, that it should be possible to measure arbitrarily small intervals of space and time, but we do not know whether that is true. It is thus more realistic to consider an extremely small “quantum of length” of yet unknown value <em>a</em>. It is only required to be a universal constant for all inertial frames, like <em>c</em> and <em>h</em>. This yields a logically consistent theory and accounts for elementary particles by means of four new quantum numbers. They define “particle states” in terms of modulations of wave functions at the smallest possible scale in space-time. The resulting classification of elementary particles also accounts for dark matter. Antiparticles are redefined without needing negative-energy states, and recently observed “anomalies” can be explained.
Quantization noise caused by the analog-to-digital converter (ADC) degrades the reliability performance of communication systems. In this paper, a quantized non-Hermitian symmetry (NHS) orthogonal frequency-division multiplexing-based visible light communication (OFDM-VLC) system is presented. To analyze the effect of ADC resolution on NHS OFDM-VLC, a quantized mathematical model of NHS OFDM-VLC is established. Based on the proposed quantized model, a closed-form bit error rate (BER) expression is derived. Theoretical analysis and simulation results both confirm the accuracy of the obtained BER formula for high-resolution ADCs. In addition, channel coding helps compensate for the BER performance loss due to the use of a lower-resolution ADC.
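Why a lower-resolution ADC costs reliability can be seen with a generic sketch (this is not the paper’s OFDM-VLC model; the quantizer, signal statistics, and sample counts are illustrative assumptions): a uniform b-bit quantizer whose empirical signal-to-quantization-noise ratio (SQNR) grows by roughly 6 dB per bit.

```python
import math, random

def quantize(x, bits, full_scale=1.0):
    """Uniform mid-rise quantizer clipped to [-full_scale, full_scale]."""
    levels = 2 ** bits
    step = 2.0 * full_scale / levels
    x = max(-full_scale, min(full_scale - step, x))
    return (math.floor(x / step) + 0.5) * step

def empirical_sqnr_db(bits, n=20000, seed=1):
    """Estimate SQNR for a uniformly distributed test signal."""
    rng = random.Random(seed)
    sig = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    noise = [(s - quantize(s, bits)) ** 2 for s in sig]
    return 10.0 * math.log10(sum(s * s for s in sig) / sum(noise))

low, high = empirical_sqnr_db(4), empirical_sqnr_db(8)
assert high > low            # more ADC bits -> less quantization noise
assert 20.0 < low < 28.0     # close to the 6.02*b dB rule of thumb
```

In the system above, this SQNR floor propagates into the BER, which is why the closed-form BER expression holds best in the high-resolution regime.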
This paper proposes a novel method for the automatic diagnosis of keratitis using feature vector quantization and self-attention mechanisms (ADK_FVQSAM). First, high-level features are extracted using the DenseNet121 backbone network, followed by adaptive average pooling to scale the features to a fixed length. Subsequently, product quantization with residuals (PQR) is applied to convert continuous feature vectors into discrete feature representations, preserving essential information insensitive to image-quality variations. The quantized and original features are concatenated and fed into a self-attention mechanism to capture keratitis-related features. Finally, these enhanced features are classified through a fully connected layer. Experiments on clinical low-quality (LQ) images show that ADK_FVQSAM achieves accuracies of 87.7%, 81.9%, and 89.3% for keratitis, other corneal abnormalities, and normal corneas, respectively. Compared to DenseNet121, Swin Transformer, and InceptionResNet, ADK_FVQSAM improves average accuracy by 3.1%, 11.3%, and 15.3%, respectively. These results demonstrate that ADK_FVQSAM significantly enhances the recognition performance of keratitis on LQ slit-lamp images, offering a practical approach for clinical application.
In the field of image and data compression, new approaches are always being tried and tested to improve the quality of the reconstructed image and to reduce the computational complexity of the algorithm employed. However, no single technique offers both the maximum possible compression and the best reconstruction quality for every type of image. Depending on the level of compression desired and the characteristics of the input image, a suitable choice must be made from the options available. For example, in the field of video compression, the integer adaptation of the discrete cosine transform (DCT) with fixed quantization is widely used in view of its ease of computation and adequate performance. There exist other suitable transforms, such as the discrete Tchebichef transform (DTT), that remain largely unexploited. This work aims to bridge this gap and examine cases where the DTT could be an alternative compression transform to the DCT, based on various image-quality parameters. A multiplier-free fast implementation of the integer DTT (ITT) of size 8 × 8 is also studied for its low computational complexity. Because data are spread unevenly across an image, some areas may have intricate detail while others are rather plain. This prompts the use of a compression method that can be adapted according to the amount of detail. So, instead of fixed quantization, this paper employs quantization that varies with the characteristics of each image block. This implementation incurs no additional computational or transmission overhead.
The image compression performance of ITT and ICT, using both variable and fixed quantization, is compared for a variety of images, and the cases suitable for ITT-based image compression with variable quantization are identified.
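The block-adaptive idea can be sketched as follows (a minimal illustration, not the paper’s ITT/ICT implementation or its actual step-selection rule; the DCT here is the naive 2-D DCT-II, and `block_step` is a hypothetical activity measure): a quantization step chosen per 8×8 block, so detailed blocks are quantized more finely than flat ones.

```python
import math

N = 8

def dct2(block):
    """Naive 2-D DCT-II of an NxN block (clarity over speed)."""
    def c(k):
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = c(u) * c(v) * s
    return out

def block_step(block, base=16.0):
    """Variable quantization: scale the step by the block's variance."""
    n = N * N
    mean = sum(map(sum, block)) / n
    var = sum((p - mean) ** 2 for row in block for p in row) / n
    return base / (1.0 + math.log1p(var))   # busy block -> smaller step

def quantize(coeffs, step):
    """Uniform scalar quantization of the transform coefficients."""
    return [[round(c / step) for c in row] for row in coeffs]

flat   = [[128] * N for _ in range(N)]
detail = [[(x * 37 + y * 91) % 256 for y in range(N)] for x in range(N)]
assert block_step(detail) < block_step(flat)   # more detail, finer step
q = quantize(dct2(detail), block_step(detail))
assert isinstance(q[0][0], int)
```

Because the step is derived from the block itself, the decoder can recompute it from decoded data, which is the sense in which such schemes can avoid transmission overhead.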
According to the formula for translational motion of a vector along an infinitesimal closed curve in gravitational space, this article shows that space and time are both quantized; that the so-called central singularity of the Schwarzschild metric does not exist physically, and Einstein’s theory of gravity is compatible in essence with traditional quantum theory; that the quantized gravitational space is just the spin network consisting of infinitely many quantized loops linking and intersecting each other; and that whether a particle is in a spin eigenstate depends on the translational track of its spin vector in gravitational space.
A low-sidelobe aperture design method of multi-step amplitude quantization with a pedestal is proposed, and a general analysis and formulas are described. Computational results are compared with our previous method, multi-step amplitude quantization (MSAQ), on peak sidelobe level, aperture efficiency, normalized input power, and sidelobe degradation with tolerance. It is shown that, under the same conditions, the method presented in this paper outperforms MSAQ.
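The problem both MSAQ papers address can be illustrated with a generic linear-array sketch (this is not the pedestal-MSAQ synthesis; the element count, spacing, taper, and main-lobe window are illustrative assumptions): collapsing a smooth aperture taper to very few amplitude steps raises the peak sidelobe level, which is why the number and placement of quantization steps must be designed carefully.

```python
import math, cmath

N, D = 32, 0.5                        # elements and spacing in wavelengths

def taper(n, pedestal=0.2):
    """Cosine-on-a-pedestal amplitude distribution across the aperture."""
    return pedestal + (1 - pedestal) * math.sin(math.pi * (n + 0.5) / N)

def quantize_amp(a, steps):
    """Round an amplitude in [0, 1] to one of `steps` uniform levels."""
    return round(a * (steps - 1)) / (steps - 1)

def peak_sidelobe_db(amps):
    """Peak sidelobe level relative to the main-beam peak at u = 0."""
    peak = sum(amps)                  # array-factor maximum (all amps >= 0)
    psl = 0.0
    for i in range(-1000, 1001):
        u = i * 0.001                 # u = sin(theta)
        if abs(u) < 0.12:             # skip the main-lobe region
            continue
        af = abs(sum(a * cmath.exp(2j * math.pi * D * n * u)
                     for n, a in enumerate(amps)))
        psl = max(psl, af)
    return 20.0 * math.log10(psl / peak)

smooth = [taper(n) for n in range(N)]
coarse = [quantize_amp(a, 2) for a in smooth]   # deliberately crude: 2 steps
assert peak_sidelobe_db(coarse) > peak_sidelobe_db(smooth)
```

With only two amplitude steps, the taper degenerates toward a uniform sub-array and its sidelobes rise toward the uniform-illumination level; multi-step designs with a pedestal aim to keep this degradation small while remaining realizable with discrete T/R module settings.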
A fast encoding algorithm is presented which makes full use of two characteristics of a vector: its sum and its variance. In this paper, a vector is separated into two subvectors: one is the first half of the coordinates, and the other contains the remaining coordinates. Three inequalities based on the sums and variances of a vector and its two subvectors are introduced to reject codewords that cannot be the nearest codeword. Simulation results show that the proposed algorithm is faster than the improved equal-average equal-variance nearest neighbor search (EENNS) algorithm.
The aim of this work is to study the Berezin quantization of a Gaussian state. The result is another Gaussian state that depends on a quantum parameter α, which describes the relationship between the classical and quantum visions. The compression parameter λ>0 is associated with the harmonic oscillator semigroup.
The rapid growth of digital data necessitates advanced natural language processing (NLP) models like BERT (Bidirectional Encoder Representations from Transformers), known for its superior performance in text classification. However, BERT’s size and computational demands limit its practicality, especially in resource-constrained settings. This research compresses the BERT base model for Bengali emotion classification through knowledge distillation (KD), pruning, and quantization techniques. Despite Bengali being the sixth most spoken language globally, NLP research in this area is limited. Our approach addresses this gap by creating an efficient BERT-based model for Bengali text. We explored 20 combinations of KD, quantization, and pruning, resulting in improved speedup, fewer parameters, and reduced memory size. Our best results demonstrate significant improvements in both speed and efficiency. For instance, in the case of mBERT, we achieved a 3.87× speedup and a 4× compression ratio with a Distil+Prune+Quant combination that reduced parameters from 178 M to 46 M, while the memory size decreased from 711 MB to 178 MB. These results offer scalable solutions for NLP tasks in various languages and advance the field of model compression, making these models suitable for real-world applications in resource-limited environments.
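Of the three techniques in the pipeline above, the quantization leg can be illustrated in isolation with a small post-training sketch (not the paper’s actual pipeline; the weights and helper names are illustrative): float32 weights are mapped to 8-bit integers with a per-tensor scale, which is where the roughly 4× memory reduction comes from, at the cost of a bounded rounding error.

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization of floats to the int8 range."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # guard all-zero input
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [qi * scale for qi in q]

w = [0.31, -1.24, 0.07, 0.93, -0.52]
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
assert all(-127 <= qi <= 127 for qi in q)                  # fits in int8
assert max(abs(a - b) for a, b in zip(w, w_hat)) <= scale / 2 + 1e-12
```

Storing `q` as int8 instead of `w` as float32 is the 4× compression; distillation and pruning then recover or remove capacity so the rounding error costs little accuracy.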
It is well known that the representations of the Heisenberg algebra over an arbitrary configuration space related to a physical system allow one to distinguish simply and non-simply-connected manifolds [arXiv:quant-ph/9908.014, arXiv:hep-th/0608.023]. In the light of this classification, the dynamics of a quantum particle on the line is studied in the framework of the conventional quantization scheme as well as that of the enhanced quantization recently introduced by J. R. Klauder [arXiv:quant-ph/1204.2870]. The quantum action functional restricted to the phase-space coherent states is obtained from the enhanced quantization procedure, showing the coexistence of classical and quantum theories, a fundamental advantage offered by this new approach. The example of the one-dimensional harmonic oscillator is given. Next, the spectrum of a free particle on the two-sphere is recovered from the covariant diffeomorphic representations of the momentum operator in the configuration space. Our results, based on simple models, also point out the already-known link between interaction and topology at the quantum level.
In this paper, we suggest an adaptive watermarking method to improve both the transparency and robustness of the quantization index modulation (QIM) scheme. Instead of a fixed quantization step-size, we apply a step-size adapted to the image content in each 8×8 block to balance robust extraction and transparent embedding. The modified step-size is determined by the contrast-masking thresholds of Watson’s perceptual model. From the normalized cross-correlation between the original and detected watermarks, we observe that our method is more robust than the original QIM to additive white Gaussian noise (AWGN), salt-and-pepper noise, and Joint Photographic Experts Group (JPEG) compression. By taking into account the contrast insensitivity and visibility thresholds of the human visual system, the suggested improvement achieves a maximum embedding strength and an appropriate quantization step-size consistent with local values of the host signal.
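The underlying QIM mechanism can be sketched in a few lines (a stripped-down illustration, not the paper’s scheme; the Watson-model adaptation is represented only by the `step` argument, and the sample values are invented): each bit selects one of two interleaved quantizer lattices, and the decoder recovers the bit by checking which lattice the received sample lies nearer to.

```python
def qim_embed(sample, bit, step):
    """Quantize onto the lattice offset by 0 (bit=0) or step/2 (bit=1)."""
    offset = 0.0 if bit == 0 else step / 2.0
    return round((sample - offset) / step) * step + offset

def qim_extract(sample, step):
    """Decide which of the two lattices the sample is nearest to."""
    d0 = abs(sample - qim_embed(sample, 0, step))
    d1 = abs(sample - qim_embed(sample, 1, step))
    return 0 if d0 <= d1 else 1

step = 8.0
bits = [1, 0, 1, 1, 0]
host = [123.4, 56.7, 89.1, 200.2, 15.9]
marked = [qim_embed(s, b, step) for s, b in zip(host, bits)]
noisy = [m + 1.5 for m in marked]     # perturbation smaller than step/4
assert [qim_extract(m, step) for m in noisy] == bits
```

A larger step survives stronger attacks but distorts the host more; adapting the step to perceptual masking thresholds, as above, is what reconciles the two goals.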
This paper presents a new wavelet-transform image coding method. On the basis of a hierarchical wavelet decomposition of images, entropy-constrained vector quantization is employed to encode the wavelet coefficients at all the high-frequency bands with …
Learning Vector Quantization (LVQ), originally proposed by Kohonen (1989), is a neurally-inspired classifier which aims to approximate the optimal Bayes decision boundaries associated with a classification task. With respect to several defects of the LVQ2 algorithm studied in this paper, some ‘soft’ competition schemes, such as a ‘majority voting’ scheme and credibility calculation, are proposed to improve classification ability as well as learning speed. Meanwhile, the probabilities of winning are introduced into the corrections of the reference vectors in the ‘soft’ competition. In contrast with the conventional sequential learning technique, a novel parallel learning technique is developed to perform the LVQ2 procedure. Experimental results on speech recognition show that these new approaches lead to better performance compared with the conventional …
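The baseline being improved here can be sketched as the classic LVQ2.1 update (a compact illustration, not the paper’s ‘soft’ voting or parallel variants; the window definition and sample data are conventional assumptions): when the two nearest reference vectors belong to different classes and the input falls inside a window around the decision boundary, the correct-class reference is pulled toward the input and the wrong one pushed away.

```python
def lvq2_step(x, label, refs, labels, lr=0.1, window=0.3):
    """One LVQ2.1 update; mutates and returns the reference vectors."""
    d = [sum((xi - ri) ** 2 for xi, ri in zip(x, r)) ** 0.5 for r in refs]
    order = sorted(range(len(refs)), key=d.__getitem__)
    i, j = order[0], order[1]                    # two nearest references
    in_window = min(d[i] / d[j], d[j] / d[i]) > (1 - window) / (1 + window)
    if in_window and labels[i] != labels[j] and label in (labels[i], labels[j]):
        win, lose = (i, j) if labels[i] == label else (j, i)
        refs[win]  = [r + lr * (xi - r) for xi, r in zip(x, refs[win])]
        refs[lose] = [r - lr * (xi - r) for xi, r in zip(x, refs[lose])]
    return refs

refs = [[0.0, 0.0], [1.0, 0.0]]
labels = ["a", "b"]
x, y = [0.6, 0.0], "a"          # nearest reference is class "b": a mistake
before = sum((xi - ri) ** 2 for xi, ri in zip(x, refs[0]))
lvq2_step(x, y, refs, labels)
after = sum((xi - ri) ** 2 for xi, ri in zip(x, refs[0]))
assert after < before            # correct-class reference moved closer
```

Because only the two winners are ever updated, a hard mistake near the boundary can destabilize training, which is the defect the ‘soft’ competition and winning probabilities above are designed to soften.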
Improved gray-scale (IGS) quantization is a known method for re-quantizing digital gray-scale images for data compression while producing halftones, adding a level of randomness to improve the visual quality of the resulting images. In this paper, first, an analysis of the IGS quantizing operations reveals the capability of conserving the DC signal level of a source image through the quantization. Then, a complete procedure for producing a multi-level halftone image by IGS quantization that achieves this DC conservation is presented. The procedure also scans the source pixels in an order such that geometric patterns are prevented from occurring in the resulting halftone image. Next, the performance of multi-level IGS halftoning is evaluated in experiments on 8-bit gray-scale test images, in comparison with halftoning by error diffusion. The experimental results demonstrate that a signal level to be quantized in IGS halftoning varies more randomly than in error-diffusion halftoning, but not entirely randomly. Visual quality of the resulting halftone images was also measured by subjective evaluation. The results indicate that for 3-or-more-bit (that is, 8-or-more-level) halftones, IGS halftoning achieves image quality comparable to that of error diffusion.
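The core IGS operation can be sketched for the classic 8-bit-to-4-bit case (a minimal illustration of textbook IGS, not the paper’s DC-conserving multi-level procedure or its scan ordering): the low-order bits left over from the previous pixel are carried into the current one before truncation, which supplies the pseudo-random dither that breaks up contouring.

```python
def igs_quantize(pixels, bits=4):
    """Textbook IGS quantization of 8-bit pixels to `bits`-bit output levels."""
    keep = 8 - bits                 # low-order bits carried forward
    mask = (1 << keep) - 1
    carry, out = 0, []
    for p in pixels:
        # skip the carry when the high bits are already all ones (no overflow)
        s = p if (p >> keep) == (1 << bits) - 1 else p + carry
        carry = s & mask            # remainder passed to the next pixel
        out.append(s >> keep)       # the quantized output level
    return out

levels = igs_quantize([127, 128, 129, 130, 131, 132])
assert all(0 <= v < 16 for v in levels)
assert len(set(levels)) > 1   # carried remainders dither a near-flat ramp
```

Plain truncation would map this near-flat ramp to a single level (a visible contour); the carried remainder makes the output oscillate around the true gray value instead, which is also why the average level, and hence the DC component, is approximately conserved.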
The quantum object is generally considered to display both wave and particle nature. By “particle” is understood an item localized in a very small volume of space, which cannot be simultaneously in two disjoint regions of space. By “wave”, on the contrary, is understood a distributed item, occupying in some cases two or more disjoint regions of space. The quantum formalism has not explained to this day the so-called “collapse” of the wave-function, i.e. the shrinking of the wave-function to one small region of space when a macroscopic object is encountered. This seems to happen in “which-way” experiments. A very appealing explanation for this behavior is the idea of a particle localized in some limited part of the wave-function. The present article challenges the concept of particle. It proves, on the basis of a variant of the Tan, Walls and Collett experiment, that this concept leads to a situation in which the particle has to be simultaneously in two places distant from one another, a situation that contradicts the very definition of a particle. Another argument, based on a modified version of the Afshar experiment, shows that the concept of particle is problematic. The concept of particle raises additional difficulties when the wave-function passes through fields. An unexpected possibility for solving these difficulties seems to arise from the cavity quantum electrodynamics studies done recently by S. Savasta and his collaborators; it involves virtual particles. One of these studies is briefly described here. However, experimental results are needed, so it is too soon to conclude whether it speaks for or against the concept of particle.
The high-efficiency video coder (HEVC) is one of the most advanced techniques used in today’s growing real-time multimedia applications. However, these applications require large bandwidth for transmission, and the bandwidth varies with different video sequences/formats. This paper proposes an adaptive information-based variable quantization matrix (AI-VQM) developed for different video formats having variable energy levels. The quantization method is adapted to the video sequence using statistical analysis, improving bit budget, quality, and complexity reduction. Further, to have precise control over bit rate and quality, a multi-constraint prune algorithm is proposed in the second stage of the AI-VQM technique to pre-calculate K paths, so that one of the K paths can be chosen automatically under dynamically changing bandwidth availability. After extensive testing of the proposed algorithm in the multi-constraint environment for multiple paths, and evaluating performance in terms of peak signal-to-noise ratio (PSNR), bit budget, and time complexity for different videos, a noticeable improvement in rate-distortion (RD) performance is achieved. Using the proposed AI-VQM technique, video sequences are encoded more feasibly and efficiently, with less PSNR loss than the variable quantization method (VQM) algorithm, a gain of approximately 10%–20% depending on the video sequence/format.
文摘In previous works, the theoretical and experimental deterministic scalar kinematic structures, the theoretical and experimental deterministic vector kinematic structures, the theoretical and experimental deterministic scalar dynamic structures, and the theoretical and experimental deterministic vector dynamic structures have been developed to compute the exact solution for deterministic chaos of the exponential pulsons and oscillons that is governed by the nonstationary three-dimensional Navier-Stokes equations. To explore properties of the kinetic energy, rectangular, diagonal, and triangular summations of a matrix of the kinetic energy and general terms of various sums have been used in the current paper to develop quantization of the kinetic energy of deterministic chaos. Nested structures of a cumulative energy pulson, an energy pulson of propagation, an internal energy oscillon, a diagonal energy oscillon, and an external energy oscillon have been established. In turn, the energy pulsons and oscillons include group pulsons of propagation, internal group oscillons, diagonal group oscillons, and external group oscillons. Sequentially, the group pulsons and oscillons contain wave pulsons of propagation, internal wave oscillons, diagonal wave oscillons, and external wave oscillons. Consecutively, the wave pulsons and oscillons are composed of elementary pulsons of propagation, internal elementary oscillons, diagonal elementary oscillons, and external elementary oscillons. Topology, periodicity, and integral properties of the exponential pulsons and oscillons have been studied using the novel method of the inhomogeneous Fourier expansions via eigenfunctions in coordinates and time. Symbolic computations of the exact expansions have been performed using the experimental and theoretical programming in Maple. Results of the symbolic computations have been justified by probe visualizations.
文摘An aperture design technique using multi-step amplitude quantization for two-dimensional solid-state active phased arrays to achieve low sidelobe is described. It can be applied to antennas with arbitrary complex aperture. Also, the gain drop and sidelobe degradation due to random amplitude and phase errors and element (or T/R module) failures are investigated.
文摘The quantization thermal excitation isotherms based on the maximum triad spin number (G) of each energy level for metal cluster were derived as a function of temperature by expanding the binomial theorems according to energy levels. From them the quantized geometric mean heat capacity equations are expressed in sequence. Among them the five quantized geometric heat capacity equations, fit the best to the experimental heat capacity data of metal atoms at constant pressure. In the derivations we assume that the triad spin composed of an electron, its proton and its neutron in a metal cluster become a basic unit of thermal excitation. Boltzmann constant (kB) is found to be an average specific heat of an energy level in a metal cluster. And then the constant (kK) is found to be an average specific heat of a photon in a metal cluster. The core triad spin made of free neutrons may exist as the second one additional energy level. The energy levels are grouped according to the forms of four spins throughout two axes. Planck constant is theoretically obtained with the ratio of the internal energy of metal (U) to total isotherm number (N) through Equipartition theorem.
文摘We are looking at comparison of two action integrals and we identify the Lagrangian multiplier as setting up a constraint equation (on cosmological expansion). This is a direct result of the fourth equation of our manuscript which unconventionally compares the action integral of General relativity with the second derived action integral, which then permits Equation (5), which is a bound on the Cosmological constant. What we have done is to replace the Hamber Quantum gravity reference-based action integral with a result from John Klauder’s “Enhanced Quantization”. In doing so, with Padamabhan’s treatment of the inflaton, we then initiate an explicit bound upon the cosmological constant. The other approximation is to use the inflaton results and conflate them with John Klauder’s Action principle for a way, if we have the idea of a potential well, generalized by Klauder, with a wall of space time in the Pre Planckian-regime to ask what bounds the Cosmological constant prior to inflation, and to get an upper bound on the mass of a graviton. We conclude with a redo of a multiverse version of the Penrose cyclic conformal cosmology. Our objective is to show how a value of the rest mass of the heavy graviton is invariant from cycle to cycle. All this is possible due to Equation (4). And we compare all these with results of Reference [1] in the conclusion, while showing its relevance to early universe production of black holes, and the volume of space producing 100 black holes of value about 10^2 times Planck Mass. Initially evaluated in a space-time of about 10^3 Planck length, in spherical length, we assume a starting entropy of about 1000 initially.
文摘We justify and extend the standard model of elementary particle physics by generalizing the theory of relativity and quantum mechanics. The usual assumption that space and time are continuous implies, indeed, that it should be possible to measure arbitrarily small intervals of space and time, but we ignore if that is true or not. It is thus more realistic to consider an extremely small “quantum of length” of yet unknown value <em>a</em>. It is only required to be a universal constant for all inertial frames, like<em> c</em> and <em>h</em>. This yields a logically consistent theory and accounts for elementary particles by means of four new quantum numbers. They define “particle states” in terms of modulations of wave functions at the smallest possible scale in space-time. The resulting classification of elementary particles accounts also for dark matter. Antiparticles are redefined, without needing negative energy states and recently observed “anomalies” can be explained.
基金supported by the National Natural Science Foundation of China(No.62201508)the Zhejiang Provincial Natural Science Foundation of China(Nos.LZ21F010001 and LQ23F010004)the State Key Laboratory of Millimeter Waves of Southeast University,China(No.K202212).
文摘Quantization noise caused by analog-to-digital converter(ADC)gives rise to the reliability performance degradation of communication systems.In this paper,a quantized non-Hermitian symmetry(NHS)orthogonal frequency-division multiplexing-based visible light communication(OFDM-VLC)system is presented.In order to analyze the effect of the resolution of ADC on NHS OFDM-VLC,a quantized mathematical model of NHS OFDM-VLC is established.Based on the proposed quantized model,a closed-form bit error rate(BER)expression is derived.The theoretical analysis and simulation results both confirm the effectiveness of the obtained BER formula in high-resolution ADC.In addition,channel coding is helpful in compensating for the BER performance loss due to the utilization of lower resolution ADC.
Funding: Supported by the National Natural Science Foundation of China (Nos. 62276210, 82201148 and 62376215), the Key Research and Development Project of Shaanxi Province (No. 2025CY-YBXM-044), the Natural Science Foundation of Zhejiang Province (No. LQ22H120002), the Medical Health Science and Technology Project of Zhejiang Province (Nos. 2022RC069 and 2023KY1140), the Natural Science Foundation of Ningbo (No. 2023J390), and the Ningbo Top Medical and Health Research Program (No. 2023030716).
Abstract: This paper proposes a novel method for the automatic diagnosis of keratitis using feature vector quantization and self-attention mechanisms (ADK_FVQSAM). First, high-level features are extracted using the DenseNet121 backbone network, followed by adaptive average pooling to scale the features to a fixed length. Subsequently, product quantization with residuals (PQR) is applied to convert continuous feature vectors into discrete feature representations, preserving essential information insensitive to image quality variations. The quantized and original features are concatenated and fed into a self-attention mechanism to capture keratitis-related features. Finally, these enhanced features are classified through a fully connected layer. Experiments on clinical low-quality (LQ) images show that ADK_FVQSAM achieves accuracies of 87.7%, 81.9%, and 89.3% for keratitis, other corneal abnormalities, and normal corneas, respectively. Compared to DenseNet121, Swin transformer, and InceptionResNet, ADK_FVQSAM improves average accuracy by 3.1%, 11.3%, and 15.3%, respectively. These results demonstrate that ADK_FVQSAM significantly enhances the recognition of keratitis in LQ slit-lamp images, offering a practical approach for clinical application.
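Product quantization, which PQR builds on, splits a feature vector into subvectors and encodes each against its own small codebook; the paper's residual stage is omitted here. A minimal pure-Python sketch with illustrative codebooks, not the authors' trained ones:

```python
def pq_encode(vec, codebooks):
    """Encode a vector as one centroid index per subvector (product quantization).
    codebooks[m] is a list of centroids (lists) for the m-th subvector."""
    m = len(codebooks)
    sub_len = len(vec) // m
    codes = []
    for i, cb in enumerate(codebooks):
        sub = vec[i * sub_len:(i + 1) * sub_len]
        # nearest centroid by squared Euclidean distance
        dists = [sum((a - b) ** 2 for a, b in zip(sub, c)) for c in cb]
        codes.append(dists.index(min(dists)))
    return codes

def pq_decode(codes, codebooks):
    """Reconstruct the vector by concatenating the chosen centroids."""
    out = []
    for code, cb in zip(codes, codebooks):
        out.extend(cb[code])
    return out
```

The discrete codes change only when a subvector crosses into another centroid's cell, which is the source of the insensitivity to small image-quality variations that the abstract mentions.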
Abstract: In the field of image and data compression, new approaches are constantly being tried and tested to improve the quality of the reconstructed image and to reduce the computational complexity of the algorithm employed. However, no single technique offers both the maximum possible compression and the best reconstruction quality for every type of image. Depending on the level of compression desired and the characteristics of the input image, a suitable choice must be made from the options available. For example, in video compression, the integer adaptation of the discrete cosine transform (DCT) with fixed quantization is widely used in view of its ease of computation and adequate performance. Transforms such as the discrete Tchebichef transform (DTT) are suitable too, but remain largely unexploited. This work aims to bridge that gap and identify cases where the DTT could be an alternative compression transform to the DCT, judged on various image quality parameters. A multiplier-free fast implementation of the 8 × 8 integer DTT (ITT) is also studied for its low computational complexity. Because data are spread unevenly across an image, some areas may contain intricate detail while others are rather plain. This prompts a compression method that adapts to the amount of detail: instead of fixed quantization, this paper employs quantization that varies with the characteristics of each image block, free from additional computational or transmission overhead. The image compression performance of the ITT and the integer DCT (ICT), using both variable and fixed quantization, is compared over a variety of images, and the cases suited to ITT-based image compression with variable quantization are identified.
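The abstract does not specify how the per-block quantization is derived. As a rough illustration of the general idea only (not the authors' scheme), one can scale the quantization step with a measure of block activity such as variance, on the usual grounds that busy blocks visually mask more quantization noise; the `base_step` and `gain` values are arbitrary assumptions:

```python
import math

def block_variance(block):
    """Sample variance of a flattened image block."""
    n = len(block)
    mean = sum(block) / n
    return sum((x - mean) ** 2 for x in block) / n

def adaptive_step(block, base_step=8.0, gain=0.5):
    """Pick a per-block quantization step: higher-variance (busier)
    blocks get a coarser step, flat blocks keep the base step."""
    return base_step * (1.0 + gain * math.log1p(block_variance(block)))

def quantize_block(coeffs, step):
    """Uniform scalar quantization of transform coefficients."""
    return [round(c / step) for c in coeffs]
```

Since the step is recomputed from the decoded block statistics on both sides in schemes of this kind, no extra side information needs to be transmitted, matching the "no transmission overhead" claim.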
Abstract: According to the formula for the translational motion of a vector along an infinitesimal closed curve in gravitational space, this article shows that space and time are both quantized; that the so-called central singularity of the Schwarzschild metric does not exist physically, so that Einstein’s theory of gravity is in essence compatible with traditional quantum theory; that the quantized gravitational space is just the spin network consisting of infinitely many quantized loops linking and intersecting each other; and that whether a particle is in a spin eigenstate depends on the translational track of its spin vector in gravitational space.
Abstract: A low-sidelobe aperture design method of multi-step amplitude quantization with a pedestal is proposed, and a general analysis and formulas are presented. The computed results are compared with our previous Multi-Step Amplitude Quantization (MSAQ) method on peak sidelobe level, aperture efficiency, normalized input power, and sidelobe degradation with tolerance. It is shown that, under the same conditions, the method presented in this paper outperforms MSAQ.
Abstract: A fast encoding algorithm is presented that makes full use of two characteristics of a vector: its sum and its variance. In this paper, a vector is separated into two subvectors, one containing the first half of the coordinates and the other the remaining coordinates. Three inequalities based on the sums and variances of a vector and its two subvectors are introduced to reject codewords that cannot be the nearest codeword. Simulation results show that the proposed algorithm is faster than the improved equal-average equal-variance nearest neighbor search (EENNS) algorithm.
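The paper's three subvector inequalities are not given in the abstract, but the EENNS-style rejection they refine can be sketched. Writing S for a vector's sum and V for the Euclidean norm of its mean-removed part, the identity ||x − c||² = (Sx − Sc)²/n + ||x′ − c′||² (primes denote mean-removed vectors) yields the lower bound (Sx − Sc)²/n + (Vx − Vc)² ≤ ||x − c||², so a codeword whose bound already exceeds the current best distance is skipped without a full distance computation. A minimal sketch of the whole-vector case, not the paper's subvector variant:

```python
import math

def features(v):
    """Sum and 'variance' (Euclidean norm of the mean-removed vector)."""
    n = len(v)
    s = sum(v)
    mean = s / n
    var = math.sqrt(sum((x - mean) ** 2 for x in v))
    return s, var

def nearest_codeword(x, codebook):
    """Full-search VQ accelerated by sum/variance rejection."""
    n = len(x)
    sx, vx = features(x)
    best_i, best_d = -1, float("inf")
    for i, c in enumerate(codebook):
        sc, vc = features(c)
        bound = (sx - sc) ** 2 / n + (vx - vc) ** 2
        if bound >= best_d:
            continue  # rejected without computing the full distance
        d = sum((a - b) ** 2 for a, b in zip(x, c))
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d
```

Splitting the vector into two subvectors, as the paper does, tightens the bound further because each half contributes its own sum/variance terms.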
Abstract: The aim of this work is to study the Berezin quantization of a Gaussian state. The result is another Gaussian state that depends on a quantum parameter α, which describes the relationship between the classical and quantum pictures. The compression parameter λ>0 is associated with the harmonic oscillator semigroup.
Abstract: The rapid growth of digital data necessitates advanced natural language processing (NLP) models like BERT (Bidirectional Encoder Representations from Transformers), known for its superior performance in text classification. However, BERT’s size and computational demands limit its practicality, especially in resource-constrained settings. This research compresses the BERT base model for Bengali emotion classification through knowledge distillation (KD), pruning, and quantization. Although Bengali is the sixth most spoken language globally, NLP research in this area is limited; our approach addresses this gap by creating an efficient BERT-based model for Bengali text. We explored 20 combinations of KD, quantization, and pruning, achieving higher speedup, fewer parameters, and reduced memory size. Our best results demonstrate significant improvements in both speed and efficiency. For instance, for mBERT we achieved a 3.87× speedup and a 4× compression ratio with a Distil+Prune+Quant combination that reduced the parameter count from 178 M to 46 M and the memory size from 711 MB to 178 MB. These results offer scalable solutions for NLP tasks in various languages and advance the field of model compression, making such models suitable for real-world applications in resource-limited environments.
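Of the three compression techniques, post-training quantization is the simplest to sketch: each float32 weight tensor is mapped to int8, cutting its memory by roughly 4×. A minimal symmetric per-tensor quantizer, illustrative only, since the abstract does not specify the paper's quantization tooling or settings:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: scale so the largest
    magnitude maps to 127, then round each weight to [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # 1.0 guards all-zero input
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximate recovery of the original float weights."""
    return [v * scale for v in q]
```

The rounding error introduced here is what the distillation and fine-tuning stages in pipelines like the paper's must absorb to keep accuracy.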
Abstract: It is well known that the representations of the Heisenberg algebra over an arbitrary configuration space related to a physical system allow one to distinguish simply connected from non-simply-connected manifolds [arXiv:quant-ph/9908.014, arXiv:hep-th/0608.023]. In the light of this classification, the dynamics of a quantum particle on the line is studied in the framework of the conventional quantization scheme as well as that of the enhanced quantization recently introduced by J. R. Klauder [arXiv:quant-ph/1204.2870]. The quantum action functional restricted to the phase-space coherent states is obtained from the enhanced quantization procedure, showing the coexistence of classical and quantum theories, a fundamental advantage offered by this new approach. The example of the one-dimensional harmonic oscillator is given. Next, the spectrum of a free particle on the two-sphere is recovered from the covariant diffeomorphic representations of the momentum operator in the configuration space. Our results, based on simple models, also point out the already-known link between interaction and topology at the quantum level.
基金supports of China NNSF(Grant No.60472063. 60325310)GDNSF/GDCNLF(04020074/ CN200402)
Abstract: In this paper, we suggest an adaptive watermarking method to improve both the transparency and the robustness of the quantization index modulation (QIM) scheme. Instead of a fixed quantization step-size, we apply a step-size adapted to the image content in each 8×8 block to balance robust extraction against transparent embedding. The modified step-size is determined by the contrast masking thresholds of Watson’s perceptual model. From the normalized cross-correlation between the original and the detected watermark, we observe that our method is more robust than the original QIM to additive white Gaussian noise (AWGN), salt-and-pepper noise, and Joint Photographic Experts Group (JPEG) compression. By taking into account the contrast insensitivity and visibility thresholds of the human visual system, the suggested improvement achieves a maximum embedding strength and a quantization step-size consistent with the local values of the host signal.
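The baseline QIM that the paper adapts can be sketched with dither modulation: the bit selects one of two quantization lattices offset by half a step, and the decoder picks whichever lattice lies closer to the received value. In the paper the step would come from Watson's masking thresholds per 8×8 block; here it is a fixed illustrative constant:

```python
def qim_embed(x, bit, step):
    """Embed one bit by quantizing x onto one of two interleaved
    lattices offset by step/2 (basic quantization index modulation)."""
    offset = bit * step / 2.0
    return round((x - offset) / step) * step + offset

def qim_extract(y, step):
    """Decode by choosing the lattice whose nearest point is closer to y."""
    d0 = abs(qim_embed(y, 0, step) - y)
    d1 = abs(qim_embed(y, 1, step) - y)
    return 0 if d0 <= d1 else 1
```

Any distortion smaller than a quarter step cannot flip the decoded bit, which is why a larger, perceptually chosen step buys robustness at the cost of visibility.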
Abstract: This paper presents a new wavelet transform image coding method. On the basis of a hierarchical wavelet decomposition of images, entropy-constrained vector quantization is employed to encode the wavelet coefficients at all the high-frequency bands with
Abstract: Learning Vector Quantization (LVQ), originally proposed by Kohonen (1989), is a neurally-inspired classifier that approximates the optimal Bayes decision boundaries associated with a classification task. To address several defects of the LVQ2 algorithm studied in this paper, some ‘soft’ competition schemes, such as a ‘majority voting’ scheme and credibility calculation, are proposed to improve both classification ability and learning speed. Meanwhile, the probabilities of winning are introduced into the corrections for reference vectors in the ‘soft’ competition. In contrast with the conventional sequential learning technique, a novel parallel learning technique is developed to perform the LVQ2 procedure. Experimental results on speech recognition show that these new approaches lead to better performance as compared with the conventional
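The classic LVQ2 step that these schemes soften works as follows: when the two nearest prototypes carry different classes and the sample falls inside a window around their decision boundary, the correct prototype is pulled toward the sample and the wrong one pushed away. A minimal sketch; the paper's 'soft' voting, credibility, and parallel variants are not shown:

```python
def lvq2_update(x, label, protos, labels, lr=0.1, window=0.3):
    """One LVQ2 step on prototype list `protos` with class `labels`."""
    d = [sum((a - b) ** 2 for a, b in zip(x, p)) for p in protos]
    order = sorted(range(len(protos)), key=d.__getitem__)
    i, j = order[0], order[1]          # two nearest prototypes
    di, dj = d[i] ** 0.5, d[j] ** 0.5
    in_window = False
    if di > 0 and dj > 0:
        # Kohonen's window test around the midplane between the two prototypes
        in_window = min(di / dj, dj / di) > (1 - window) / (1 + window)
    if labels[i] != labels[j] and in_window and label in (labels[i], labels[j]):
        corr, wrong = (i, j) if labels[i] == label else (j, i)
        protos[corr] = [p + lr * (a - p) for p, a in zip(protos[corr], x)]
        protos[wrong] = [p - lr * (a - p) for p, a in zip(protos[wrong], x)]
    return protos
```

Because only samples near the boundary trigger updates, LVQ2 refines the decision surface rather than the class means, which is also where its sensitivity to the window and learning rate (the defects the paper targets) comes from.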
Abstract: Improved gray-scale (IGS) quantization is a known method for re-quantizing digital gray-scale images for data compression, producing halftones by adding a level of randomness to improve the visual quality of the resulting images. In this paper, first, analysis of the IGS quantizing operations reveals their capability to conserve the DC signal level of a source image through quantization. Then, a complete procedure for producing a multi-level halftone image by IGS quantization that achieves this DC conservation is presented. The procedure also scans source pixels in an order that prevents geometric patterns from occurring in the resulting halftone image. Next, the performance of multi-level IGS halftoning is evaluated in experiments on 8-bit gray-scale test images, in comparison with error-diffusion halftoning. The experimental results demonstrate that a signal level to be quantized in IGS halftoning varies more randomly than in error-diffusion halftoning, though not entirely randomly. Visual quality of the resulting halftone images was also measured by subjective viewer evaluations. The results indicate that for 3-bit or more, in other words 8-level or more, halftones, IGS halftoning achieves image quality comparable to error diffusion.
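The IGS quantizer itself, in its classic textbook form, carries the low-order bits discarded at each pixel into the next pixel's sum; this is what injects the level-dependent randomness and, on average, conserves the DC level. A minimal sketch for 8-bit input (the bit depth is a parameter; the paper's scanning order and multi-level procedure are omitted):

```python
def igs_quantize(pixels, bits=4):
    """Improved gray-scale quantization of a sequence of 8-bit pixels,
    returning `bits`-bit output values."""
    shift = 8 - bits
    mask = (1 << shift) - 1           # low-order bits carried to the next pixel
    carry = 0
    out = []
    for p in pixels:
        # avoid overflow: if the high bits are already all ones, add no carry
        s = p if p >= 256 - (1 << shift) else p + carry
        carry = s & mask
        out.append(s >> shift)
    return out
```

For a flat mid-gray run the carried remainder accumulates until it occasionally bumps a pixel up one output level, so the quantized values dither around the true mean instead of posterizing, which is the DC-conservation property the paper analyzes.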
Abstract: The quantum object is in general considered as displaying both wave and particle nature. By particle is understood an item localized in a very small volume of space, which cannot be simultaneously in two disjoint regions of space. By wave, on the contrary, is understood a distributed item, occupying in some cases two or more disjoint regions of space. The quantum formalism has not explained to this day the so-called “collapse” of the wave function, i.e. the shrinking of the wave function to one small region of space when a macroscopic object is encountered; this seems to happen in “which-way” experiments. A very appealing explanation for this behavior is the idea of a particle localized in some limited part of the wave function. The present article challenges the concept of particle. It proves, on the basis of a variant of the Tan, Walls and Collett experiment, that this concept leads to a situation in which the particle has to be simultaneously in two places distant from one another, a situation that contradicts the very definition of a particle. Another argument, based on a modified version of the Afshar experiment, shows that the concept of particle is problematic. The concept also creates additional difficulties when the wave function passes through fields. An unexpected possibility for resolving these difficulties seems to arise from the cavity quantum electrodynamics studies done recently by S. Savasta and his collaborators, which involve virtual particles; one of these studies is briefly described here. However, experimental results are still needed, so it is too soon to conclude whether they speak for or against the concept of particle.
Abstract: The high-efficiency video coder (HEVC) is one of the most advanced techniques used in today’s growing real-time multimedia applications. However, it requires large bandwidth for transmission, and the required bandwidth varies with different video sequences/formats. This paper proposes an adaptive information-based variable quantization matrix (AIVQM) developed for different video formats with variable energy levels. The quantization method is adapted to the video sequence using statistical analysis, improving the bit budget, quality, and complexity reduction. Further, to gain precise control over bit rate and quality, a multi-constraint prune algorithm is proposed in the second stage of the AIVQM technique to pre-calculate K paths, so that one of the K paths can be chosen automatically, as required, under dynamically changing bandwidth availability. After extensive testing of the proposed algorithm in a multi-constraint environment for multiple paths, evaluating performance by peak signal-to-noise ratio (PSNR), bit budget, and time complexity for different videos, a noticeable improvement in rate-distortion (RD) performance is achieved. Using the proposed AIVQM technique, video sequences are encoded more feasibly and efficiently, with less loss in PSNR than the variable quantization method (VQM) algorithm, an improvement of approximately 10%–20% depending on the video sequence/format.