In this paper, the performance of wavelet transform domain (WTD) adaptive equalizers based on the least mean-square (LMS) algorithm is analyzed. The optimum Wiener solution, the condition of convergence, the minimum mean square error (MSE), and the steady-state excess MSE of the WTD adaptive equalizer are obtained. Adaptive algorithms with constant and with time-varying convergence factors are studied. The computational complexities of WTD LMS equalizers are given. The WTD equalizer shows much better convergence performance than the conventional time-domain equalizer.
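For reference, the tap update that such equalizers build on can be sketched as follows (an illustrative NumPy sketch of a plain time-domain LMS equalizer driven by a training sequence; the WTD variant analyzed in the paper would additionally transform the tap-input vector into the wavelet domain before adaptation, which is not shown here):

```python
import numpy as np

def lms_equalize(x, d, n_taps=11, mu=0.01):
    """Illustrative time-domain LMS adaptive equalizer.

    x : received (channel-distorted) complex samples
    d : desired (training) symbols, same length as x
    mu: step size (convergence factor)
    """
    w = np.zeros(n_taps, dtype=complex)        # equalizer taps
    y = np.zeros(len(d), dtype=complex)        # equalizer output
    e = np.zeros(len(d), dtype=complex)        # error signal
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]      # tap-input vector, most recent sample first
        y[n] = np.vdot(w, u)                   # output y = w^H u
        e[n] = d[n] - y[n]                     # error against the training symbol
        w = w + mu * np.conj(e[n]) * u         # complex LMS update
    return w, y, e
```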
A novel wavelet-network-based adaptive equalizer (WNBAE) is presented, and its structure and stochastic gradient learning algorithm are given. The proposed WNBAE performs better than the conventional linear transversal equalizer based on the LMS and RLS algorithms, as well as the decision feedback equalizer based on the RLS algorithm, especially for MQAM digital communication receivers over nonlinear channels. It also slightly outperforms the BP-neural-network-based adaptive equalizer. However, it has a slow convergence rate and a high computational complexity. Several simulations are performed to evaluate the behavior of the WNBAE.
A variable step-size parameter is usually used to accelerate the convergence speed of a blind adaptive equalizer with N1 + N2 - 1 coefficients, where N1 and N2 are odd values. In this paper we show that improved equalization performance is achieved by using two blind adaptive equalizers connected in series, where the first and second equalizers have N1 and N2 coefficients respectively, compared with a single blind adaptive equalizer with N1 + N2 - 1 coefficients. The same algorithm (cost function) is used for updating the filter taps of the different equalizers, and a fixed step-size parameter is used. Simulation results show that, in a low signal-to-noise ratio (SNR) environment and in cases where convergence is slow due to the channel characteristics, the new method converges approximately twice as fast while leaving the system with approximately the same or lower residual intersymbol interference (ISI).
In this paper, a closed-form approximated expression is proposed for the intersymbol interference (ISI) as a function of time, valid during all stages of the non-blind adaptive deconvolution process and suitable for the noisy, real, and two-independent-quadrature-carrier input cases. The obtained expression is applicable to channels for which the resulting ISI as a function of time can be described by an exponential model with a single time constant. Based on this new expression, the convergence time (i.e., the number of iterations required for convergence) of the non-blind adaptive equalizer can be calculated. Up to now, the equalizer's performance (convergence time and ISI as a function of time) could be obtained only via simulation when the channel coefficients were known. The new expression is based on knowledge of the initial ISI and the channel power (which is measurable) and eliminates the need for such simulations. Simulation results indicate a high correlation between the simulated and calculated ISI (based on our proposed expression) during the whole deconvolution process, for both high and low signal-to-noise ratio (SNR) conditions.
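The paper's exact closed-form expression is not reproduced in this abstract; the sketch below only illustrates the single-time-constant exponential behaviour it assumes, with hypothetical parameters isi0_db, isi_inf_db and tau standing in for quantities that the paper derives from the initial ISI and the measurable channel power:

```python
import numpy as np

def isi_exponential_model(n, isi0_db, isi_inf_db, tau):
    """Illustrative single-time-constant ISI decay model.

    n          : iteration index (scalar or array)
    isi0_db    : initial ISI in dB
    isi_inf_db : residual (steady-state) ISI in dB
    tau        : time constant in iterations
    """
    isi0, isi_inf = 10 ** (isi0_db / 10), 10 ** (isi_inf_db / 10)
    isi = isi_inf + (isi0 - isi_inf) * np.exp(-np.asarray(n) / tau)
    return 10 * np.log10(isi)

# Rough convergence time: first iteration at which the ISI is within 1 dB of its floor.
n = np.arange(20000)
curve = isi_exponential_model(n, isi0_db=0.0, isi_inf_db=-25.0, tau=3000.0)
n_conv = int(np.argmax(curve <= -24.0))
```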
Up to now, the Mean Square Error (MSE) criterion, the residual Inter-Symbol Interference (ISI), and the Bit-Error-Rate (BER) have been used to analyze the equalization performance of a blind adaptive equalizer in its convergence state. In this paper, we propose an additional tool (complementing the ISI, MSE, and BER) for analyzing the equalization performance in the convergence region, based on the Maximum Time Interval Error (MTIE) criterion used to specify clock stability requirements in telecommunications standards. This new tool preserves short-term statistical information, unlike the existing tools (BER, ISI, MSE), which lack it. Simulation results show that the equalization performance of a blind adaptive equalizer in the convergence region for two different channels appears approximately the same from the residual ISI and MSE points of view, while this is not the case with our proposed tool. Thus, the proposed tool may be considered more sensitive than the ISI and MSE methods.
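As a rough illustration of the criterion being borrowed from clock-stability specifications, MTIE over a sliding observation window can be estimated as below (a minimal NumPy sketch; which equalizer-related sequence the paper feeds into it is not stated in the abstract):

```python
import numpy as np

def mtie(tie, window):
    """Illustrative MTIE estimate: the largest peak-to-peak excursion of the
    time-interval-error sequence `tie` over any window of `window` samples."""
    tie = np.asarray(tie, dtype=float)
    ptp = [np.ptp(tie[i:i + window]) for i in range(len(tie) - window + 1)]
    return float(max(ptp))
```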
Oversampling is commonly used in orthogonal frequency division multiplexing (OFDM) systems to improve various performance characteristics. In this paper, we investigate the performance and complexity of one-tap zero-forcing (ZF) and minimum mean-square error (MMSE) equalizers in oversampled OFDM systems. Theoretical analysis and simulation results show that oversampling not only reduces the noise at the equalizer output but also helps mitigate the ill effects of spectral nulls. One-tap equalizers therefore yield improved symbol-error-rate (SER) performance as the oversampling rate increases, but at the expense of increased system bandwidth and modest additional complexity.
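The two one-tap equalizers compared in the paper reduce, per subcarrier, to the familiar scalar corrections sketched below (illustrative NumPy code; the oversampling itself, i.e., the handling of the extra spectral copies, is not modelled here):

```python
import numpy as np

def one_tap_equalize(Y, H, noise_var, method="mmse"):
    """Per-subcarrier one-tap equalization of OFDM frequency-domain symbols.

    Y         : received symbols after the FFT (one value per subcarrier)
    H         : channel frequency response on the same subcarriers
    noise_var : noise variance at the equalizer input
    """
    if method == "zf":
        return Y / H                                        # zero-forcing: amplifies noise at spectral nulls
    return np.conj(H) * Y / (np.abs(H) ** 2 + noise_var)    # MMSE: regularized by the noise variance
```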
Single-Carrier (SC) transmission with the same bandwidth as Multi-Carrier (MC) transmission (such as OFDM) can have a far shorter symbol duration and is considered more robust against time-selective fading. In this paper, we propose novel time-domain equalization and signal separation schemes for short-block-length transmission, namely Block Linear Equalization (BLE) and Block Nonlinear Equalization (BNLE), on MIMO frequency-selective fading channels. The proposed BLE uses an MMSE-based inverse matrix in the time domain, while the BNLE utilizes QRD-M (QR decomposition with the M-algorithm) with appropriate receiver complexity. We compare the computational complexity of the conventional SC-FDE (frequency domain equalization) scheme and the proposed equalizers. We also concatenate a Low-Density Parity Check (LDPC) decoder with the proposed BLE and BNLE.
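The MMSE-based time-domain inverse used by the proposed BLE corresponds, in its simplest regularized form, to the sketch below (illustrative NumPy code; the BNLE's QRD-M tree search is not shown):

```python
import numpy as np

def block_mmse_equalize(y, H, noise_var):
    """Illustrative block linear MMSE equalizer in the time domain.

    y : received block (possibly stacked over receive antennas)
    H : block channel/convolution matrix mapping transmitted symbols to y
    """
    G = H.conj().T @ H + noise_var * np.eye(H.shape[1])   # regularized Gram matrix
    return np.linalg.solve(G, H.conj().T @ y)             # MMSE estimate of the transmitted block
```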
This paper proposes two nonlinear blind equalizers, the nonlinear constant modulus algorithm (NCMA) and the nonlinear modified constant modulus algorithm (NMCMA), obtained by applying a nonlinear transfer function (NTF) to the constant modulus algorithm (CMA) and the modified constant modulus algorithm (MCMA), respectively. The effect of the NTF on CMA and MCMA is theoretically analyzed, showing that the NTF makes their decision regions much sharper, so that the two proposed nonlinear blind equalizers are more robust against convergence error than their linear counterparts. The embedded single layer in NCMA and NMCMA also guarantees comparably fast convergence. For 16-quadrature amplitude modulation (QAM) symbols, computer simulations show that NCMA achieves an 8 dB lower convergence mean square error (MSE) than CMA, and NMCMA achieves a 15 dB lower convergence MSE than MCMA.
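A single CMA tap update, optionally preceded by a nonlinear transfer function on the equalizer output, can be sketched as follows (illustrative NumPy code; the tanh nonlinearity is an assumed stand-in, since the paper's specific NTF is not given in the abstract):

```python
import numpy as np

def cma_step(w, x, R2=1.0, mu=1e-3, nonlinear=False):
    """One update of the constant modulus algorithm on tap vector `w`.

    x  : current tap-input vector (most recent received samples)
    R2 : dispersion constant of the source constellation
    When `nonlinear` is True, an illustrative tanh transfer function is applied
    to the output before the modulus error is formed (assumed NTF, not the paper's).
    """
    y = np.vdot(w, x)                                       # equalizer output y = w^H x
    z = (np.tanh(y.real) + 1j * np.tanh(y.imag)) if nonlinear else y
    e = z * (np.abs(z) ** 2 - R2)                           # constant-modulus error term
    return w - mu * x * np.conj(e), y                       # stochastic-gradient tap update
```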
This paper develops an efficient pseudo-random number generator for the validation of digital communication channels and secure disc drives. Simulation results validate the effectiveness of the random number generator.
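The abstract gives no algorithmic details, so the following is only a generic example of the kind of pseudo-random test-sequence generator used for channel validation, here a maximal-length Fibonacci LFSR; it should not be read as the paper's construction:

```python
def lfsr_bits(seed=0b1011011, taps=(7, 6), n=32):
    """Illustrative Fibonacci LFSR producing a pseudo-random bit stream.
    With a 7-bit state and taps (7, 6) this yields a maximal-length
    (2**7 - 1) PN sequence, a classical test pattern for channel validation."""
    state, width = seed, max(taps)
    for _ in range(n):
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1                  # XOR the tapped stages
        state = ((state << 1) | bit) & ((1 << width) - 1)  # shift in the feedback bit
        yield bit

# Example: first 16 PN bits -> list(lfsr_bits(n=16))
```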
Ultrafine-grained (UFG) pure titanium was produced by equal channel angular pressing for 4 passes, followed by rotary swaging at room temperature. Strain-controlled low-cycle fatigue tests of UFG and coarse-grained (CG) pure titanium were conducted on an Instron electro-hydraulic servo fatigue testing machine in the strain amplitude range of 0.5% to 1.1% at room temperature. Transmission electron microscopy (TEM) and scanning electron microscopy were used to investigate the microstructure and fracture surface of UFG pure titanium after the fatigue tests. The results show that UFG pure titanium exhibits a longer low-cycle fatigue life than CG pure titanium; for example, at a total strain amplitude of 0.5%, UFG and CG pure titanium have fatigue lives of 10850 and 4820 cycles, respectively. Significant cyclic softening occurs in UFG pure titanium, except at a total strain amplitude of 0.5%. The hysteresis loop area increases rapidly with increasing strain amplitude. The fracture surface shows that the fatigue crack initiates at the specimen surface. A series of fatigue striations and many microcracks exist in the propagation region. With increasing strain amplitude, the predominant failure mode changes from ductile failure to quasi-cleavage failure. Dislocation slip is the main plastic deformation mechanism of UFG pure titanium during low-cycle fatigue deformation. (Funding: Natural Science Foundation of Shaanxi Province, 2023-JC-YB-312.)
Cloud services, favored by many enterprises for their high flexibility and easy operation, are widely used for data storage and processing. However, the high latency and transmission overheads of the cloud architecture make it difficult to respond quickly to the demands of IoT applications and local computation. To make up for these deficiencies of the cloud, fog computing has emerged as a critical component of IoT applications. It decentralizes computing power to lower-level nodes close to the data sources, so as to achieve low latency and distributed processing. With data being frequently exchanged and shared between multiple nodes, it becomes a challenge to authorize data securely and efficiently while protecting user privacy. To address this challenge, proxy re-encryption (PRE) schemes provide a feasible way to allow an intermediary proxy node to re-encrypt ciphertext designated for different authorized data requesters without compromising any plaintext information. Since the proxy is viewed as a semi-trusted party, measures should be taken to prevent malicious behavior and reduce the risk of data leakage when implementing PRE schemes. This paper proposes a new fog-assisted identity-based PRE scheme supporting anonymous key generation, equality test, and user revocation to fulfill various IoT application requirements. Specifically, in a traditional identity-based public key architecture, the key escrow problem and the need for a secure channel are major security concerns; we utilize an anonymous key generation technique to solve these problems. The equality test functionality further enables a cloud server to inspect whether two candidate trapdoors contain an identical keyword. In particular, the proposed scheme realizes fine-grained user-level authorization while maintaining strong key confidentiality. To revoke an invalid user identity, we add a revocation list to the system flows to restrict access privileges without additional computation cost. For security, it is shown that our system meets the security notions of IND-PrID-CCA and OW-ID-CCA under the Decisional Bilinear Diffie-Hellman (DBDH) assumption. (Funding: supported in part by the National Science and Technology Council of Taiwan under contract numbers NSTC 114-2221-E-019-055-MY2 and NSTC 114-2221-E-019-069.)
In view of the high computational complexity of traditional linear equalization algorithms in Orthogonal Time Frequency Space (OTFS) systems, a minimum mean square error (MMSE) channel equalization algorithm based on matrix Chunking Lower and Upper triangular decomposition (CLU) is proposed. The proposed algorithm derives the structural properties of the chunked MMSE equalization matrix by leveraging the block-diagonal structure of the Cyclic Prefix OTFS (CP-OTFS) time-domain channel matrix and the quasi-band structure of its constituent block matrices. On this basis, triangular decomposition combined with forward and backward substitution is used to avoid matrix inversion. This approach significantly reduces the complexity of the MMSE algorithm without sacrificing its performance. (Funding: ZTE Industry-University-Institute Cooperation Funds under Grant No. KY10800230005.)
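The general idea of replacing an explicit MMSE matrix inverse by a factorization plus forward/backward substitution can be sketched as follows (illustrative NumPy/SciPy code using a dense LU factorization; the paper's CLU method additionally exploits the block-diagonal and quasi-band structure of the CP-OTFS channel matrix, which is not modelled here):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mmse_equalize_lu(y, H, noise_var):
    """Solve the MMSE system (H^H H + sigma^2 I) x = H^H y with an LU
    factorization and forward/backward substitution instead of forming an
    explicit matrix inverse."""
    G = H.conj().T @ H + noise_var * np.eye(H.shape[1])   # regularized Gram matrix
    return lu_solve(lu_factor(G), H.conj().T @ y)         # triangular solves, no inversion
```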
Medical imaging is essential in modern health care, allowing accurate diagnosis and effective treatment planning. These images, however, often exhibit low contrast, noise, and brightness distortion that reduce their diagnostic reliability. This review presents a structured and comprehensive analysis of advanced histogram equalization (HE)-based techniques for medical image enhancement. Our review methodology encompasses: (1) classical HE approaches and their limitations in medical domains; (2) adaptive schemes such as Adaptive Histogram Equalization (AHE) and Contrast Limited Adaptive Histogram Equalization (CLAHE) and their advanced variants; (3) brightness-preserving schemes such as BBHE and MMBEBHE and related algorithms; (4) dynamic and recursive histogram equalization methods, including DHE and RMSHE; (5) fuzzy-logic-based enhancement methodologies addressing uncertainty and noise in medical images; and (6) hybrid optimization methodologies applying metaheuristic algorithms (World Cup Optimization, Particle Swarm Optimization, and Genetic Algorithms) together with histogram-based methods. A comparative discussion is also given based on contrast improvement, brightness preservation, noise management, and computational efficiency. These advancements improve image quality, which is important for better diagnosis and image analysis. (Funding: Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, under grant No. IFPDP-261-22.)
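As a concrete example of one of the reviewed adaptive methods, CLAHE is readily available in OpenCV; the clip limit and tile grid below are illustrative defaults rather than values recommended by the review:

```python
import cv2

def enhance_with_clahe(gray_image, clip_limit=2.0, tiles=(8, 8)):
    """Contrast-limited adaptive histogram equalization on an 8-bit grayscale image."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tiles)
    return clahe.apply(gray_image)
```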
Agile lithology identification can assist mining by providing important information in the exploration and production of mineral resources. This study proposes a new lithology recognition procedure using video logging of boreholes with an endoscope, applied to six production blocks in a limestone quarry. Images are automatically extracted from the videos and the lithology is classified into three classes based on clay content, i.e., massive limestone, brecciated limestone, and high clay content. The image quality is evaluated with a gray pixel intensity threshold and three no-reference image quality metrics, i.e., the perception-based image quality evaluator, the natural image quality evaluator, and the blind/referenceless image spatial quality evaluator. After removing low-quality images, 7583 images are retained and used for developing lithology classification models with six optimized classification techniques. The contrast-limited adaptive histogram equalization (CLAHE) technique is used to improve image quality. Ten color characteristics, comprising three percentiles of the red, green, and blue pixel intensities together with color counting, and five texture characteristics (correlation, entropy, homogeneity, contrast, and energy) are used as inputs. The Bayesian-optimized light gradient boosting machine model performs best, with an overall accuracy of 88.04% and precisions on the classes of massive limestone, brecciated limestone, and high clay content of 90.72%, 83.52%, and 85.29%, respectively, for the testing set. The feature importance scores show that color counting is the most significant parameter for the development of the classification model. Compared with previous image-based methodologies, this study provides a more flexible and cheaper procedure to identify lithology. (Funding: the DigiEcoQuarry project, funded by the European Union's Horizon 2020 research and innovation program under Grant Agreement No. 101003750, and the China Scholarship Council, Grant No. 202006370006.)
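The texture descriptors listed as model inputs can be computed from a grey-level co-occurrence matrix, for example with scikit-image as sketched below (an illustrative sketch; the distances, angles, and preprocessing actually used in the study are not specified in the abstract):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(gray, distance=1, angle=0.0):
    """Co-occurrence texture descriptors of the kind used as model inputs:
    contrast, correlation, energy, homogeneity, and entropy.
    `gray` is an 8-bit (uint8) grayscale image."""
    glcm = graycomatrix(gray, [distance], [angle], levels=256,
                        symmetric=True, normed=True)
    feats = {p: float(graycoprops(glcm, p)[0, 0])
             for p in ("contrast", "correlation", "energy", "homogeneity")}
    p = glcm[:, :, 0, 0]                                   # normalized co-occurrence matrix
    feats["entropy"] = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    return feats
```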
Women's rights and gender equality, as essential pillars of social progress and human development, have become a shared pursuit of the international community. Chinese President Xi Jinping on 13 October 2025 delivered a keynote address at the opening ceremony of the Global Leaders' Meeting on Women in Beijing.
The 200 Gbit/s passive optical network (PON) is the most likely next-generation scheme following 50G PON. The cost-effective direct detection (DD) system is the economical choice. However, larger-capacity DD systems face much more serious power fading caused by chromatic dispersion (CD) combined with square-law DD, which significantly increases the complexity of the equalization algorithms. In this paper, a 200 Gbit/s Nyquist 4-level pulse amplitude modulation (PAM4) single-sideband (SSB) modulation DD downlink scheme is designed, and a low-complexity quadratic nonlinear equalizer is proposed for this system. The computational complexity of the quadratic nonlinear equalizer is about 28% of that of the conventional Volterra nonlinear equalizer, while it still exhibits excellent nonlinear equalization ability. Simulation results for the 200 Gbit/s system with 20 km fiber transmission show that it can achieve a power budget of 29 dB, while a 30.4 dB power budget is obtained in the 50 Gbit/s experimental transmission. (Funding: ZTE Industry-University-Institute Cooperation Funds under Grant No. HC-CN-20230105001 and the National Natural Science Foundation of China under Grant No. 62001045.)
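For context, a generic second-order (quadratic) Volterra feed-forward equalizer forms its output from a linear kernel plus pairwise products of the delay-line samples, as sketched below; the paper's low-complexity design prunes most of these quadratic terms, which is not reflected in this illustration:

```python
import numpy as np

def volterra2_output(x, w1, w2):
    """Output of a generic second-order Volterra feed-forward equalizer.

    x  : real-valued received samples (e.g., detected PAM4 waveform)
    w1 : linear kernel, shape (M,)
    w2 : quadratic kernel, shape (M, M); only the upper triangle is used
    """
    M = len(w1)
    y = np.zeros(len(x) - M + 1)
    for n in range(M - 1, len(x)):
        u = x[n - M + 1:n + 1][::-1]                       # delay-line contents
        linear = w1 @ u
        quad = np.sum(np.triu(w2 * np.outer(u, u)))        # sum over i <= j of w2[i, j] * u[i] * u[j]
        y[n - M + 1] = linear + quad
    return y
```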
Alzheimer's Disease (AD) is a progressive neurodegenerative disorder that significantly affects cognitive function, making early and accurate diagnosis essential. Traditional Deep Learning (DL)-based approaches often struggle with low-contrast MRI images, class imbalance, and suboptimal feature extraction. This paper develops a hybrid DL system that unites MobileNetV2 with adaptive classification methods to improve Alzheimer's diagnosis from MRI scans. Image enhancement is performed using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN). A classification robustness enhancement scheme integrates class weighting techniques and a Matthews Correlation Coefficient (MCC)-based evaluation method into the design. The trained and validated model achieves a 98.88% accuracy rate and a 0.9614 MCC score. We also performed a 10-fold cross-validation experiment, with an average accuracy of 96.52% (±1.51), a loss of 0.1671, and an MCC score of 0.9429 across folds. The proposed framework outperforms state-of-the-art models with a 98% weighted F1-score while reducing misdiagnosis for every AD stage. The confusion matrix analysis shows that the model clearly separates the AD progression stages. These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer's diagnosis, contributing to improved computer-aided diagnosis (CAD) systems in clinical practice. (Funding: Deanship of Graduate Studies and Scientific Research at Jouf University under grant No. DGSSR-2025-02-01295.)
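The MCC-based evaluation alongside the weighted F1-score can be reproduced with standard scikit-learn metrics, as in this minimal sketch:

```python
from sklearn.metrics import matthews_corrcoef, f1_score

def evaluate(y_true, y_pred):
    """Class-imbalance-aware evaluation used alongside plain accuracy:
    the Matthews correlation coefficient and the weighted F1-score."""
    return {
        "mcc": matthews_corrcoef(y_true, y_pred),
        "weighted_f1": f1_score(y_true, y_pred, average="weighted"),
    }
```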
China's healthcare system faces increasing challenges, including surging medical costs, resource allocation imbalances favoring large hospitals, and ineffective referral mechanisms. The lack of a unified strategy integrating standardized coverage with personalized payment compounds these issues. To this end, this study proposes a risk-sharing reform strategy that combines equal coverage for the same disease (ECSD) with an individualized out-of-pocket (I-OOP) model. Specifically, the study employs a Markov model to capture patient transitions across health states and care levels. The findings show that ECSD and I-OOP enhance equity by standardizing disease coverage while tailoring costs to patient income and facility type. This approach alleviates demand on high-tier hospitals, promoting primary care utilization and enabling balanced resource distribution. The findings provide a reference for policymakers and healthcare administrators by presenting a scalable framework aligned with China's development goals, with the aim of fostering an efficient, sustainable healthcare system adaptable to regional needs. (Funding: The National Natural Science Foundation of China, No. 72071042.)
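A Markov model of patient transitions amounts to propagating a probability vector over health and care states through a row-stochastic transition matrix; the sketch below uses purely hypothetical states and numbers, not the study's calibrated model:

```python
import numpy as np

def propagate(p0, P, steps):
    """Evolve a distribution over health/care states through a Markov chain.

    p0 : initial probability vector over the states
    P  : row-stochastic transition matrix, P[i, j] = Pr(next state j | current state i)
    """
    p = np.asarray(p0, dtype=float)
    P = np.asarray(P, dtype=float)
    for _ in range(steps):
        p = p @ P                     # one transition per period
    return p

# Hypothetical three-state example (primary care, secondary hospital, tertiary hospital):
P = np.array([[0.80, 0.15, 0.05],
              [0.30, 0.60, 0.10],
              [0.20, 0.30, 0.50]])
share_after_12_periods = propagate([0.5, 0.3, 0.2], P, steps=12)
```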
To improve image quality under low-illumination conditions, a novel low-light image enhancement method based on multi-illumination estimation and multi-scale fusion (MIMS) is proposed in this paper. Firstly, the illumination is processed by contrast-limited adaptive histogram equalization (CLAHE), an adaptive complementary gamma function (ACG), and an adaptive detail-preserving S-curve (ADPS), respectively, to obtain three components. Then, the fusion-relevant features, exposure, and color contrast are selected as the weight maps. Subsequently, these components and weight maps are fused across multiple scales to generate the enhanced illumination. Finally, the enhanced images are obtained by multiplying the enhanced illumination and the reflectance. Compared with existing approaches, the proposed method achieves average increases of 0.81% and 2.89% in the structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR), and decreases of 6.17% and 32.61% in the natural image quality evaluator (NIQE) and gradient magnitude similarity deviation (GMSD), respectively. (Funding: supported by the National Key R&D Program of China, No. 2022YFB3205101, and NSAF, No. U2230116.)
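The weight-map-driven blending at the heart of the fusion step can be sketched as follows (a single-scale illustration in NumPy; the paper performs the fusion across multiple scales, e.g. with image pyramids, which is omitted here):

```python
import numpy as np

def fuse_components(components, weight_maps):
    """Naive single-scale weighted fusion of illumination components.

    components  : list of same-sized 2D arrays (e.g., CLAHE, gamma, S-curve outputs)
    weight_maps : list of per-pixel weight maps, one per component
    """
    W = np.stack([w.astype(np.float32) for w in weight_maps])
    W /= W.sum(axis=0, keepdims=True) + 1e-6               # normalize weights per pixel
    C = np.stack([c.astype(np.float32) for c in components])
    return np.sum(W * C, axis=0)                           # weighted blend of the components
```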