Detecting the coupling pattern between elements in a complex system is a basic task in data-driven analysis. The trajectory of each element is the cooperative result of its intrinsic dynamics, its couplings with other elements, and the environment. It is consequently composed of many components, only some of which take part in the couplings. In this paper we present a framework to detect the component correlation pattern. Firstly, the trajectories of interest are decomposed into components using methods such as the Fourier expansion and the wavelet transformation. Secondly, the cross-correlations between the components are calculated, resulting in a component cross-correlation matrix (network). Finally, the dominant structure in the network is identified to characterize the coupling pattern in the system. Several deterministic dynamical models turn out to be characterized by rich structures, such as clustering of the components. The pattern of correlation between respiratory (RESP) and ECG signals is composed of five sub-clusters that are mainly formed by components of the ECG signal. Interestingly, only 7 components from RESP (scattered in four sub-clusters) take part in realizing the coupling between the two signals.
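The decompose-correlate pipeline described in this abstract can be sketched numerically. The sketch below is an illustration, not the authors' code: two synthetic signals sharing one component are split into band-limited Fourier components, and the component cross-correlation matrix is computed. The signal frequencies, band count, and noise level are all assumptions.

```python
import numpy as np

def fourier_components(x, n_bands):
    """Split a signal into n_bands band-limited components via FFT masking."""
    X = np.fft.rfft(x)
    edges = np.linspace(0, len(X), n_bands + 1, dtype=int)
    comps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = np.zeros_like(X)          # complex spectrum, zeroed outside the band
        mask[lo:hi] = X[lo:hi]
        comps.append(np.fft.irfft(mask, n=len(x)))
    return np.array(comps)

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
shared = np.sin(2 * np.pi * 1.0 * t)     # the component coupling the two signals
sig_a = shared + np.sin(2 * np.pi * 7.0 * t) + 0.1 * rng.standard_normal(t.size)
sig_b = shared + np.sin(2 * np.pi * 13.0 * t) + 0.1 * rng.standard_normal(t.size)

# Stack the components of both signals and correlate them pairwise: the
# resulting matrix is the "component cross-correlation network".
comps = np.vstack([fourier_components(sig_a, 20), fourier_components(sig_b, 20)])
C = np.corrcoef(comps)
```

Only the component pair carrying the shared 1 Hz tone (band 0 of each signal) correlates strongly, which is exactly the kind of dominant structure the framework looks for.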
To address the issue of low measurement accuracy caused by noise interference in the acquisition of low fluid flow rate signals with ultrasonic Doppler flow meters, a novel signal processing algorithm that combines ensemble empirical mode decomposition (EEMD) and a cross-correlation algorithm was proposed. Firstly, a fast Fourier transform (FFT) spectrum analysis was utilized to ascertain the frequency range of the signal. Secondly, data acquisition was conducted at an appropriate sampling frequency, and the acquired Doppler flow rate signal was decomposed into a series of intrinsic mode functions (IMFs) by EEMD. Subsequently, these IMFs were recombined based on their energy entropy, and the noise of the recombined Doppler flow rate signal was removed by cross-correlation filtering. Finally, an ideal ultrasonic Doppler flow rate signal was extracted. Simulation and experimental verification show that the proposed Doppler flow signal processing method can effectively enhance the signal-to-noise ratio (SNR) and extend the lower limit of measurement of the ultrasonic Doppler flow meter.
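The recombination step can be illustrated in isolation. The sketch below is a generic stand-in, not the paper's algorithm: it takes hypothetical IMFs (synthesized directly rather than produced by EEMD), computes the energy entropy of their energy distribution, keeps the energy-dominant IMFs, and applies a simple cross-correlation step; the energy-fraction threshold and the tone/noise amplitudes are assumptions.

```python
import numpy as np

def energy_entropy(imfs):
    """Shannon entropy of the per-IMF energy distribution."""
    e = np.sum(imfs**2, axis=1)
    p = e / e.sum()
    return -np.sum(p * np.log(p + 1e-12)), p

# Synthetic stand-in for EEMD output: a 50 Hz "Doppler" tone split across two
# hypothetical IMFs, plus two noise-dominated IMFs.
rng = np.random.default_rng(1)
t = np.arange(0, 1.0, 1 / 1000.0)
tone = np.sin(2 * np.pi * 50 * t)
imfs = np.vstack([
    0.1 * rng.standard_normal(t.size),               # noise-dominated IMF
    0.7 * tone + 0.05 * rng.standard_normal(t.size),
    0.3 * tone + 0.05 * rng.standard_normal(t.size),
    0.05 * rng.standard_normal(t.size),              # residual-like IMF
])

H, p = energy_entropy(imfs)
keep = p > 0.1                       # retain energy-dominant IMFs (threshold assumed)
recombined = imfs[keep].sum(axis=0)

# Cross-correlation step: white noise decorrelates across lags, so correlating
# the recombined signal with a delayed copy of itself retains the periodic
# Doppler component while suppressing the noise floor.
lag = 100
xcorr = np.correlate(recombined[lag:], recombined[:-lag], mode="same")
```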
The Galileo E1 open service (OS) and the global positioning system (GPS) L1C intend to use the multiplexed binary offset carrier (MBOC) modulation in the E1/L1 band, including both pilot and data components. The impact of the cross-correlation between data and pilot codes on the distortion of the discriminator function (i.e., the S-curve) is investigated when only the pilot (or data) components of MBOC signals are tracked. It is shown that the modulation schemes and the receiver configuration (e.g., the correlator spacing) strongly affect the S-curve bias. In this paper, two methods are proposed to optimize the data/pilot code pairs of Galileo E1 OS and GPS L1C. The optimization goal is to obtain the minimum average S-curve bias when tracking only the pilot components at a specific correlator spacing. Figures of merit, such as the S-curve bias, correlation loss, and code tracking variance, have been adopted for analyzing and comparing the un-optimized and optimized code pairs. Simulation results show that the optimized data/pilot code pairs can significantly mitigate the intra-channel code cross-correlation and thus improve the code tracking performance of MBOC signals.
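The figure of merit being minimized can be sketched in simplified form. In the sketch below, random binary codes stand in for the actual Galileo/GPS memory codes, brute-force random search stands in for the paper's two optimization methods, and the BOC subcarrier and correlator model are omitted entirely; only the code length (4092 chips, as used for Galileo E1 OS primary codes) is taken from the real system.

```python
import numpy as np

def max_cross_corr(a, b):
    """Maximum absolute normalized circular cross-correlation over all lags."""
    n = len(a)
    cc = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real / n
    return np.max(np.abs(cc))

rng = np.random.default_rng(2)
n, trials = 4092, 200
best_pair, best_cc = None, np.inf
for _ in range(trials):
    data = rng.choice([-1.0, 1.0], n)    # random stand-in for a data code
    pilot = rng.choice([-1.0, 1.0], n)   # random stand-in for a pilot code
    cc = max_cross_corr(data, pilot)
    if cc < best_cc:                     # keep the pair with the smallest worst-case
        best_cc, best_pair = cc, (data, pilot)
```

A low worst-case data/pilot cross-correlation is the property that, in the paper, translates into a smaller average S-curve bias when only one component is tracked.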
The elastic thickness parameter was estimated using a moving correlation technique between the observed isostatic disturbance and the gravity disturbance calculated through direct gravimetric modeling. We computed the vertical flexure of the crust for a specific elastic thickness using a given topographic dataset. The gravity disturbance due to the topography was then determined, and a grid of candidate values for the elastic thickness parameter was generated. A moving correlation was then performed between the observed gravity data (representing actual surface data) and the data calculated from the forward modeling. The optimum elastic thickness at a particular point corresponded to the highest correlation coefficient. The methodology was tested on synthetic data and showed that the recovered depth closely matched the original depth, including the elastic thickness value. To validate the results, the described procedure was applied to a real dataset from the Barreirinhas Basin, situated in the northeastern region of Brazil. The results show that the obtained crustal depth is highly correlated with the depth from known models. Additionally, we noted that the elastic thickness behaves as expected, decreasing from the continent towards the ocean. Based on these results, the method has the potential to be employed as a direct estimate of crustal depth and elastic thickness for any region.
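The grid-search-by-correlation idea can be sketched with a toy flexural forward model. The sketch below is a single-window version of the moving correlation (the paper applies it in a sliding window to map Te spatially), and the elastic constants, density contrast, and profile are assumed values, not the paper's.

```python
import numpy as np

def forward_gravity(topo, te_km, dx_km=5.0):
    """Toy flexural forward model: filter the topography with a flexural
    response for plate rigidity D(Te); returns a predicted gravity
    disturbance in arbitrary units."""
    E, nu, g, drho = 1e11, 0.25, 9.81, 600.0            # assumed constants
    D = E * (te_km * 1e3) ** 3 / (12 * (1 - nu ** 2))   # flexural rigidity
    k = 2 * np.pi * np.fft.rfftfreq(len(topo), d=dx_km * 1e3)
    phi = 1.0 / (1.0 + D * k ** 4 / (drho * g))         # flexural response filter
    return np.fft.irfft(np.fft.rfft(topo) * phi, n=len(topo))

rng = np.random.default_rng(3)
topo = np.cumsum(rng.standard_normal(512))              # random topographic profile
obs = forward_gravity(topo, te_km=20.0) + 0.1 * rng.standard_normal(512)

grid = np.arange(5.0, 41.0, 5.0)                        # candidate Te values, km
r = [np.corrcoef(obs, forward_gravity(topo, te))[0, 1] for te in grid]
best_te = grid[int(np.argmax(r))]                       # Te with highest correlation
```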
The temperature change and the rate of CO2 change are correlated with a time lag, as reported in a previous paper. In this study, the correlation was investigated by calculating the correlation coefficient r of these changes for selected ENSO events. Annual periodic increases and decreases in the CO2 concentration were considered, with a regular pattern of minimum values in August and maximum values in May each year. An increased deviation in CO2 and temperature was found in response to the occurrence of El Niño, but the increase in CO2 lagged behind the change in temperature by 5 months. This pattern was not observed for La Niña events. An increase in global CO2 emissions followed by an increase in global temperature, as proposed by the IPCC, was not observed; instead, an increase in global temperature, an increase in soil respiration, and a subsequent increase in global CO2 emissions were noticed. This natural process can be clearly detected during periods of increasing temperature, specifically during El Niño events. The results cast strong doubt on the claim that anthropogenic CO2 is the cause of global warming.
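The lag-detection step is a standard lagged-correlation scan. The sketch below uses synthetic monthly series in which the "CO2 rate" follows "temperature" with a built-in 5-month delay; the series lengths, noise level, and pseudo-periodicity are assumptions, not the study's data.

```python
import numpy as np

def best_lag(x, y, max_lag):
    """Lag (in samples) at which corr(x[t], y[t+lag]) is maximal."""
    lags = list(range(-max_lag, max_lag + 1))
    rs = []
    for lag in lags:
        if lag >= 0:
            r = np.corrcoef(x[: len(x) - lag], y[lag:])[0, 1]
        else:
            r = np.corrcoef(x[-lag:], y[: len(y) + lag])[0, 1]
        rs.append(r)
    i = int(np.argmax(rs))
    return lags[i], rs[i]

# Synthetic monthly series: the CO2 rate responds to temperature 5 months later.
rng = np.random.default_rng(4)
n = 360
temp = np.sin(2 * np.pi * np.arange(n) / 48) + 0.2 * rng.standard_normal(n)
co2_rate = np.roll(temp, 5) + 0.2 * rng.standard_normal(n)

lag, r = best_lag(temp, co2_rate, max_lag=12)   # recovers the 5-month lead of temperature
```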
This document presents a framework for recognizing people by palm vein distribution analysis, using cross-correlation based signatures to obtain descriptors. Haar wavelets are useful in reducing the number of features while maintaining high recognition rates. This experiment achieved 97.5% of individuals classified correctly with two levels of Haar wavelets. The study used twelve versions of RGB and NIR (near-infrared) wavelength images per individual. One hundred people were studied; therefore, 4,800 instances compose the complete database. A Multilayer Perceptron (MLP) was trained to improve the recognition rate in a k-fold cross-validation test with k = 10. Classification results using the MLP neural network were obtained using Weka (open-source machine learning software).
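The two-level Haar reduction can be sketched with the approximation coefficients alone, which halve each image dimension per level (a 16x feature reduction at level 2). In the sketch below, random textures stand in for palm-vein images and a nearest-template classifier stands in for the paper's MLP; image size, noise level, and subject count are assumptions.

```python
import numpy as np

def haar_approx(img, levels=2):
    """Keep only the Haar approximation coefficients: each level averages
    2x2 blocks, halving both dimensions."""
    out = img.astype(float)
    for _ in range(levels):
        h, w = out.shape
        out = out[: h - h % 2, : w - w % 2]
        out = 0.25 * (out[0::2, 0::2] + out[1::2, 0::2]
                      + out[0::2, 1::2] + out[1::2, 1::2])
    return out.ravel()

# Synthetic stand-in: each "person" is a fixed random pattern, each probe a
# noisy copy; classify a probe by its nearest reduced template.
rng = np.random.default_rng(5)
people = [rng.random((64, 64)) for _ in range(10)]
templates = np.array([haar_approx(p) for p in people])

correct = 0
for label, p in enumerate(people):
    for _ in range(5):                                   # five noisy probes each
        probe = haar_approx(p + 0.3 * rng.standard_normal((64, 64)))
        pred = int(np.argmin(((templates - probe) ** 2).sum(axis=1)))
        correct += pred == label
accuracy = correct / 50
```

Even this crude classifier holds up under noise once features are reduced, which is the property the paper exploits before handing the descriptors to an MLP.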
In the article “Deep Learning-Enhanced Brain Tumor Prediction via Entropy-Coded BPSO in CIELAB Color Space” by Mudassir Khalil, Muhammad Imran Sharif, Ahmed Naeem, Muhammad Umar Chaudhry, Hafiz Tayyab Rauf, and Adham E. Ragab, Computers, Materials & Continua, 2023, Vol. 77, No. 2, pp. 2031–2047, DOI: 10.32604/cmc.2023.043687, URL: https://www.techscience.com/cmc/v77n2/54831, there was an error regarding the affiliation of the author Hafiz Tayyab Rauf. Instead of “Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent, ST42DE, UK”, the affiliation should be “Independent Researcher, Bradford, BD80HS, UK”.
A two-stage deep learning algorithm for the detection and recognition of can bottom spray codes and numbers is proposed to address the problems of small character areas and fast production line speeds in can bottom spray code number recognition. In the coding number detection stage, Differentiable Binarization Network is used as the backbone network, combined with the Attention and Dilation Convolutions Path Aggregation Network feature fusion structure to enhance detection. For text recognition, end-to-end training with the Scene Visual Text Recognition coding number recognition network alleviates coding recognition errors caused by image color distortion due to variations in lighting and background noise. In addition, model pruning and quantization are used to reduce the number of model parameters to meet deployment requirements in resource-constrained environments. A comparative experiment was conducted using a dataset of can bottom spray code numbers collected on-site, and a transfer experiment was conducted using a dataset of packaging box production dates. The experimental results show that the proposed algorithm can effectively locate the coding of cans at different positions on the roller conveyor and can accurately identify the coding numbers at high production line speeds. The Hmean value of the coding number detection is 97.32%, and the accuracy of the coding number recognition is 98.21%. This verifies that the proposed algorithm achieves high accuracy in coding number detection and recognition.
The care of a patient involved in major trauma with exsanguinating haemorrhage is time-critical to achieve definitive haemorrhage control, and it requires coordinated multidisciplinary care. During initial resuscitation of a patient in the emergency department (ED), Code Crimson activation facilitates rapid decision-making by multi-disciplinary specialists for definitive haemorrhage control in the operating theatre (OT) and/or interventional radiology (IR) suite. Once this decision has been made, various factors may still delay transporting the patient from the ED to the OT/IR suite. The Red Blanket protocol identifies and addresses these factors and processes which cause delay, and aims to facilitate rapid and safe transport of the haemodynamically unstable patient from the ED to the OT while minimizing delay in resuscitation during the transfer. The two processes, Code Crimson and Red Blanket, complement each other, and it would be ideal to merge them into a single protocol rather than having two separate workflows. Introducing these quality improvement strategies and coordinated processes within the trauma framework of hospitals and healthcare systems will help further improve multi-disciplinary care for complex trauma patients requiring rapid and definitive haemorrhage control.
Fractional repetition (FR) codes are integral to distributed storage systems (DSS) with exact repair-by-transfer, while pliable fractional repetition codes are vital for DSSs in which both the per-node storage and the repetition degree can easily be adjusted simultaneously. This paper introduces a new type of pliable FR code, called absolute balanced pliable FR (ABPFR) codes, which takes access balancing in the DSS into account. Additionally, the equivalence between pliable FR codes and resolvable transversal packings in combinatorial design theory is established. Constructions of pliable FR codes and ABPFR codes based on resolvable transversal packings are then presented.
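A minimal FR code makes the repair-by-transfer property concrete. The sketch below builds a toy instance from a 3x3 grid (the rows and columns form two parallel classes of a resolvable design), with per-node storage 3 and repetition degree 2; it is only an illustration of what an FR code is, not the paper's pliable or ABPFR construction.

```python
from itertools import combinations

import numpy as np

# Nine symbols on a 3x3 grid; the six storage nodes are the rows and columns.
grid = np.arange(9).reshape(3, 3)
nodes = [set(row) for row in grid.tolist()] + [set(col) for col in grid.T.tolist()]

storage = {len(n) for n in nodes}                                 # per-node storage
repetition = {s: sum(s in n for n in nodes) for s in range(9)}    # repetition degree

# Exact repair-by-transfer: every symbol of a failed node lives verbatim on
# another node, so repair is pure copying with no decoding.
failed = 0
repaired = {s for s in nodes[failed]
            if any(s in n for i, n in enumerate(nodes) if i != failed)}
```

Adjusting the grid shape changes the per-node storage and repetition degree together, which hints at why "pliable" variants, where both can be tuned, are of interest.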
Multilevel coding (MLC) is a commonly used polar coded modulation scheme, but it is challenging to implement in engineering because of its high complexity and long decoding delay for high-order modulations. To address these limitations, a novel two-level serially concatenated MLC scheme is proposed, in which bit-levels with similar reliability are bundled and transmitted together. The proposed scheme hierarchically protects the two bit-level sets: the bit-level sets at the higher level are sufficiently reliable and do not require excessive resources for protection, whereas only the bit-level sets at the lower level are encoded by polar codes. The proposed scheme has the advantages of low power consumption, low delay, and high reliability. Moreover, an optimized constellation signal labeling rule that can enhance the performance is proposed. Finally, the superiority of the proposed scheme is validated through theoretical analysis and simulation results. Compared with the bit-interleaved coded modulation (BICM) scheme under 256-quadrature amplitude modulation (QAM), the proposed scheme attains a performance gain of 1.0 dB while reducing the decoding complexity by 54.55%.
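The reliability gap between bit-levels that this scheme exploits can be shown by simulation. The sketch below measures per-bit-level error rates of Gray-mapped 16-QAM over AWGN (a toy stand-in; the paper works with 256-QAM and polar codes) and bundles the levels into a reliable and an unreliable set; the noise level and mapping are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
bits = rng.integers(0, 2, (n, 4))           # columns: [sign_I, amp_I, sign_Q, amp_Q]

def pam4(sign, amp):
    """Gray-mapped 4-PAM: (0,0)->-3, (0,1)->-1, (1,1)->+1, (1,0)->+3."""
    return (2 * sign - 1) * (3 - 2 * amp)

y_i = pam4(bits[:, 0], bits[:, 1]) + rng.standard_normal(n)   # unit-variance AWGN
y_q = pam4(bits[:, 2], bits[:, 3]) + rng.standard_normal(n)

def demap(y):
    """Hard-decision demapping back to (sign, amplitude) bits."""
    return (y > 0).astype(int), (np.abs(y) < 2).astype(int)

hat = np.column_stack(demap(y_i) + demap(y_q))
ber = (hat != bits).mean(axis=0)            # per-bit-level error rate

order = np.argsort(ber)
high_level = set(order[:2].tolist())        # reliable set: light protection suffices
low_level = set(order[2:].tolist())         # unreliable set: polar-encode these
```

The sign bits come out roughly twice as reliable as the amplitude bits, which is the kind of gap that justifies protecting only the lower set with polar codes.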
In this paper, we first generalize constant dimension and orbit codes over finite fields to constant rank and orbit codes over finite chain rings. We then provide a relationship between constant rank codes over finite chain rings and constant dimension codes over the residue fields. In particular, we prove that an orbit submodule code over a finite chain ring is a constant rank code. Finally, for the special finite chain ring F_q + γF_q, we define a Gray map φ from (F_q + γF_q)^n to F_q^(2n), and, by using cyclic codes over F_q + γF_q, we obtain a method of constructing an optimum distance constant dimension code over F_q.
Neuroscience (also known as neurobiology) is the science that studies the structure, function, development, pharmacology, and pathology of the nervous system. In recent years, C. Cotardo has introduced coding theory into neuroscience, proposing the concept of combinatorial neural codes, which was further studied in depth using algebraic methods by C. Curto. In this paper, we construct a class of combinatorial neural codes with special properties based on classical combinatorial structures such as orthogonal Latin rectangles, disjoint Steiner systems, group divisible designs, and transversal designs. These neural codes have significant weight distribution properties and large minimum distances, and are thus valuable for potential applications in information representation and neuroscience. This study provides new ideas for the construction and analysis of combinatorial neural codes, and enriches the study of algebraic coding theory.
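A toy version of "codes from combinatorial structures" can be built from a single Latin square. In the sketch below (an illustration of the idea only; the paper's constructions use richer structures such as Steiner systems and transversal designs), each row of a cyclic Latin square of order n becomes a constant-weight binary word of length n^2, and since distinct rows disagree in every column, the minimum distance is 2n.

```python
import numpy as np
from itertools import combinations

n = 5
# Cyclic Latin square of order n: L[i][j] = (i + j) mod n.
square = [[(i + j) % n for j in range(n)] for i in range(n)]

def codeword(row):
    """Binary word of length n*n with one 1 per column block, marking the
    symbol that appears in that column."""
    w = np.zeros(n * n, dtype=int)
    for j, s in enumerate(row):
        w[j * n + s] = 1
    return w

code = np.array([codeword(r) for r in square])
weights = code.sum(axis=1)                    # constant weight n
min_dist = min(int((a != b).sum()) for a, b in combinations(code, 2))   # 2n
```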
National fire codes, mandated by government authorities to tackle technical challenges in fire prevention and control, establish fundamental standards for construction practices. International collaboration in fire protection technologies has opened avenues for China to access a wealth of documents and codes, which are crucial for crafting regulations and developing a robust, scientific framework for fire code formulation. However, the translation of these codes into Chinese has been inadequate, diminishing the benefits of technological exchange and collaborative learning. This underscores the necessity of comprehensive research into code translation, striving for higher-quality translations guided by established translation theories. In this study, we translated the initial segment of the NFPA 1 Fire Code into Chinese and examined both the source text and the target text through the lens of Translation Shift Theory, a concept introduced by Catford. The study culminated in identifying four key shifts at various linguistic levels (lexis, sentences, and groups) to ensure an accurate and precise translation of fire codes. This study offers a thorough and lucid explanation of how the translator applies Catford's theory to solve technical challenges in translating the NFPA 1 Fire Code, and establishes essential standards for construction translation practices.
The syndrome a posteriori probability of the log-likelihood ratio of intercepted codewords is used to develop an algorithm that recognizes the code length and generator matrix of the underlying polar code. Based on the encoding structure, three theorems are proved: two relate the length and rate of the polar code, and one relates the frozen-bit positions, information-bit positions, and codewords. With these three theorems, polar codes can be quickly reconstructed. In addition, to detect the dual vectors of codewords, the statistical characteristics of the log-likelihood ratio are analyzed, and the information- and frozen-bit positions are then distinguished based on the minimum-error decision criterion, from which the bit rate is obtained. The correctness of the theorems and the effectiveness of the proposed algorithm are validated through simulations. The proposed algorithm exhibits robustness to noise and a reasonable computational complexity.
To address the problem that the bit error rate (BER) of an asymmetrically clipped optical orthogonal frequency division multiplexing (ACO-OFDM) space optical communication system is significantly affected by different turbulence intensities, deep learning is applied to polar code decoding in the ACO-OFDM space optical communication system. The system realizes polar code decoding and signal demodulation without frequency conduction, with superior performance and robustness compared with a traditional decoder. Simulations under different turbulence intensities as well as different mapping orders show that the convolutional neural network (CNN) decoder trained under weak-medium-strong turbulence atmospheric channels achieves a performance improvement of about 10^2 over the conventional decoder at 4-quadrature amplitude modulation (4QAM), and the BERs for both 16QAM and 64QAM lie between those of the conventional decoder.
Landslides significantly threaten lives and infrastructure, especially in seismically active regions. This study conducts a probabilistic analysis of seismic landslide runout behavior, leveraging a large-deformation finite-element (LDFE) model that accounts for the three-dimensional (3D) spatial variability and cross-correlation in soil strength, a reflection of natural soils' inherent properties. The LDFE model results are validated by comparing them against previous studies, followed by an examination of the effects of univariable, uncorrelated bivariable, and cross-correlated bivariable random fields on landslide runout behavior. The findings reveal that integrating variability in both friction angle and cohesion within uncorrelated bivariable random fields markedly influences runout distances compared with univariable random fields. Moreover, the cross-correlation of soil cohesion and friction angle dramatically affects runout behavior, with positive correlations enlarging and negative correlations reducing runout distances. Transitioning from two-dimensional (2D) to 3D analyses achieves a more realistic representation of the sliding surface, landslide velocity, runout distance, and final deposit morphology. The study highlights that 2D random analyses substantially underestimate the mean value and overestimate the variability of runout distance, underscoring the importance of 3D modeling in accurately predicting landslide behavior. Overall, this work emphasizes the essential role of understanding 3D cross-correlation in soil strength for landslide hazard assessment and mitigation strategies.
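The key input of such analyses, a cross-correlated bivariable random field, can be sketched in one dimension. In the sketch below, two Gaussian fields standing in for cohesion and friction angle share an exponential spatial covariance, and a point-wise cross-correlation rho is imposed between them; the field length, correlation length, and rho = -0.5 are assumed toy values, not the paper's 3D soil parameters.

```python
import numpy as np

def correlated_fields(n, corr_len, rho, seed=7):
    """Two spatially correlated Gaussian fields with point-wise
    cross-correlation rho between them."""
    rng = np.random.default_rng(seed)
    x = np.arange(n, dtype=float)
    cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # spatial covariance
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))            # imposes spatial structure
    z1, z2 = L @ rng.standard_normal(n), L @ rng.standard_normal(n)
    cohesion = z1
    friction = rho * z1 + np.sqrt(1.0 - rho**2) * z2           # imposes cross-correlation
    return cohesion, friction

cohesion, friction = correlated_fields(n=1000, corr_len=10.0, rho=-0.5)
r = float(np.corrcoef(cohesion, friction)[0, 1])               # sample cross-correlation
```

With a negative rho, locations of high cohesion tend to have a low friction angle, which is the mechanism by which the sign of the cross-correlation shrinks or enlarges simulated runout distances.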
Funding (component correlation pattern study): Project supported by the National Natural Science Foundation of China (Grant Nos. 11875042 and 11505114) and the Shanghai Project for Construction of Top Disciplines (Grant No. USST-SYS-01).
Funding (ultrasonic Doppler flow meter study): Supported by the National Natural Science Foundation of China (No. 61973234) and the Tianjin Science and Technology Plan Project (No. 22YDTPJC00090).
Funding (MBOC code optimization study): National Basic Research Program of China (No. 2010CB731805).
Funding: Supported in part by the National Key R&D Program of China (No. 2020YFA0712300) and the NSFC (No. 61872353).
Abstract: Fractional repetition (FR) codes are integral to distributed storage systems (DSSs) with exact repair-by-transfer, while pliable fractional repetition codes are vital for DSSs in which both the per-node storage and the repetition degree can easily be adjusted simultaneously. This paper introduces a new type of pliable FR code, called absolute balanced pliable FR (ABPFR) codes, which takes access balancing in the DSS into account. Additionally, the equivalence between pliable FR codes and resolvable transversal packings in combinatorial design theory is presented. Constructions of pliable FR codes and ABPFR codes based on resolvable transversal packings are then given.
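The exact repair-by-transfer idea behind FR codes can be sketched with a classical toy example (an illustration only, not the paper's transversal-packing construction): take the complete graph K4, let the symbols be its 6 edges, and let each node store the edges incident to one vertex, so every symbol is replicated ρ = 2 times and each node stores α = 3 symbols.

```python
from itertools import combinations

# Toy fractional repetition code built from the complete graph K4
# (illustrative only): symbols are the 6 edges, node v stores the
# edges incident to vertex v.
nodes = list(range(4))
symbols = list(combinations(nodes, 2))        # 6 symbols, replication rho = 2

storage = {v: [e for e in symbols if v in e] for v in nodes}  # alpha = 3 per node

# Exact repair-by-transfer: rebuild a failed node by downloading exactly
# one symbol (the shared edge) from each surviving node.
failed = 0
repaired = sorted(e for v in nodes if v != failed
                    for e in storage[v] if failed in e)

print(repaired == sorted(storage[failed]))    # → True
```

Repair is table-driven: no decoding is performed, each survivor simply transfers one stored symbol, which is the defining feature of repair-by-transfer.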
Funding: Supported by the External Cooperation Program of Science and Technology of Fujian Province, China (2024I0016) and the Fundamental Research Funds for the Central Universities (ZQN-1005).
Abstract: Multilevel coding (MLC) is a commonly used polar coded modulation scheme, but it is challenging to implement in engineering because of its high complexity and long decoding delay for high-order modulations. To address these limitations, a novel two-level serially concatenated MLC scheme is proposed, in which bit-levels with similar reliability are bundled and transmitted together. The scheme protects the two bit-level sets hierarchically: the bit-level sets at the higher level are sufficiently reliable and do not require excessive resources for protection, whereas only the bit-level sets at the lower level are encoded by polar codes. The proposed scheme offers low power consumption, low delay, and high reliability. Moreover, an optimized constellation signal-labeling rule that further enhances performance is proposed. Finally, the superiority of the scheme is validated through theoretical analysis and simulation results: compared with the bit-interleaved coded modulation (BICM) scheme under 256-quadrature amplitude modulation (QAM), the proposed scheme attains a performance gain of 1.0 dB while reducing decoding complexity by 54.55%.
Funding: Supported by the Research Funds of Hubei Province (D20144401, Q20174503).
Abstract: In this paper, we first generalize constant dimension and orbit codes over finite fields to constant rank and orbit codes over finite chain rings. We then provide a relationship between constant rank codes over finite chain rings and constant dimension codes over their residue fields. In particular, we prove that an orbit submodule code over a finite chain ring is a constant rank code. Finally, for the special finite chain ring F_q + γF_q, we define a Gray map φ from (F_q + γF_q)^n to F_q^(2n), and, by using cyclic codes over F_q + γF_q, we obtain a method for constructing an optimum-distance constant dimension code over F_q.
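The abstract does not spell out the Gray map; for chain rings of this shape (with γ² = 0), the standard coordinatewise choice, shown here only as a plausible reading and not as the paper's definition, is:

```latex
\[
\varphi \colon (\mathbb{F}_q + \gamma\mathbb{F}_q)^{n} \longrightarrow \mathbb{F}_q^{2n},
\qquad
\varphi(a + \gamma b) = (b,\; a + b),
\]
```

applied in each coordinate, so a length-n vector over the chain ring maps to a length-2n vector over the residue field, which is consistent with the stated domain and codomain.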
Abstract: Neuroscience (also known as neurobiology) studies the structure, function, development, pharmacology, and pathology of the nervous system. In recent years, C. Cotardo introduced coding theory into neuroscience, proposing the concept of combinatorial neural codes, which was further studied in depth with algebraic methods by C. Curto. In this paper, we construct a class of combinatorial neural codes with special properties based on classical combinatorial structures such as orthogonal Latin rectangles, disjoint Steiner systems, group divisible designs, and transversal designs. These neural codes have significant weight-distribution properties and large minimum distances, and are thus valuable for potential applications in information representation and neuroscience. This study provides new ideas for the construction and property analysis of combinatorial neural codes and enriches the study of algebraic coding theory.
Funding: Hangzhou Philosophy and Social Science Planning Program (24JD15).
Abstract: National fire codes, mandated by government authorities to tackle technical challenges in fire prevention and control, establish fundamental standards for construction practices. International collaboration in fire-protection technologies has opened avenues for China to access a wealth of documents and codes, which are crucial for crafting regulations and developing a robust, scientific framework for fire-code formulation. However, the translation of these codes into Chinese has been inadequate, diminishing the benefits of technological exchange and collaborative learning. This underscores the need for comprehensive research into code translation, striving for higher-quality translations guided by established translation theories. In this study, we translated the initial segment of the NFPA 1 Fire Code into Chinese and examined both the source and target texts through the lens of Translation Shift Theory, a concept introduced by Catford. The analysis culminated in identifying four key shifts across linguistic levels (lexis, sentences, and groups) to ensure an accurate and precise translation of fire codes. This study offers a thorough and lucid explanation of how the translator applies Catford's theory to solve technical challenges in translating the NFPA 1 Fire Code, and it establishes essential standards for construction-translation practices.
Funding: Supported by the National Natural Science Foundation of China (62371465), the Taishan Scholar Project of Shandong Province (ts201511020), and the Chinese National Key Laboratory of Science and Technology on Information System Security (6142111190404).
Abstract: The syndrome a posteriori probability of the log-likelihood ratio (LLR) of intercepted codewords is used to develop an algorithm that recognizes the code length and generator matrix of the underlying polar code. Based on the encoding structure, three theorems are proved: two concern the relationship between the length and rate of the polar code, and one concerns the relationship between frozen-bit positions, information-bit positions, and codewords. With these three theorems, polar codes can be quickly reconstructed. In addition, to detect the dual vectors of codewords, the statistical characteristics of the LLR are analyzed, and the information- and frozen-bit positions are then distinguished based on the minimum-error decision criterion, yielding the code rate. The correctness of the theorems and the effectiveness of the proposed algorithm are validated through simulations. The proposed algorithm exhibits robustness to noise and reasonable computational complexity.
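Why frozen positions are detectable at all can be sketched with a noiseless toy experiment (a simplified stand-in for the paper's LLR-statistics method; the code length N = 8 and the frozen set below are hypothetical choices). It relies on the fact that the Arikan transform F^⊗n squares to the identity over GF(2), so re-applying it to clean codewords recovers the input vector u, where frozen positions are always zero:

```python
import numpy as np

def polar_transform(u):
    """Apply the Arikan kernel transform F^(tensor n) over GF(2)."""
    x = u.copy()
    n = len(x)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            x[i:i + step] ^= x[i + step:i + 2 * step]
        step *= 2
    return x

N = 8
frozen = {0, 1, 2, 4}                       # hypothetical frozen set
info = [i for i in range(N) if i not in frozen]

rng = np.random.default_rng(1)
codewords = []
for _ in range(200):
    u = np.zeros(N, dtype=np.uint8)
    u[info] = rng.integers(0, 2, size=len(info), dtype=np.uint8)
    codewords.append(polar_transform(u))    # encode: x = u * F^(tensor n)

# F^(tensor n) is self-inverse over GF(2), so re-applying it to a noiseless
# codeword recovers u; positions that are always 0 must be frozen.
always_zero = np.ones(N, dtype=bool)
for c in codewords:
    always_zero &= (polar_transform(c) == 0)

print(np.flatnonzero(always_zero))          # → [0 1 2 4]
```

With noisy intercepts this always-zero test breaks down, which is why the paper instead builds statistics on the LLRs and applies a minimum-error decision threshold; the toy above only shows the structural fingerprint those statistics exploit.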
Funding: Supported by the National Natural Science Foundation of China (No. 12104141).
Abstract: To address the problem that the bit error rate (BER) of an asymmetrically clipped optical orthogonal frequency division multiplexing (ACO-OFDM) space optical communication system is significantly affected by different turbulence intensities, a deep learning technique is applied to polar code decoding in the ACO-OFDM space optical communication system. The system realizes polar code decoding and signal demodulation without frequency conduction, with superior performance and robustness compared with a traditional decoder. Simulations under different turbulence intensities as well as different mapping orders show that a convolutional neural network (CNN) decoder trained over weak, medium, and strong turbulence atmospheric channels achieves a BER improvement of about two orders of magnitude (10^2) over the conventional decoder at 4-quadrature amplitude modulation (4QAM), and the BERs for both 16QAM and 64QAM are in between those of the conventional decoder.
Funding: Supported by the National Natural Science Foundation of China (Grant No. U22A20596), the Shenzhen Science and Technology Program (Grant No. GJHZ20220913142605010), and the Jinan Lead Researcher Project (Grant No. 202333051).
Abstract: Landslides significantly threaten lives and infrastructure, especially in seismically active regions. This study conducts a probabilistic analysis of seismic landslide runout behavior, leveraging a large-deformation finite-element (LDFE) model that accounts for the three-dimensional (3D) spatial variability and cross-correlation in soil strength, a reflection of natural soils' inherent properties. LDFE model results are validated by comparing them against previous studies, followed by an examination of the effects of univariable, uncorrelated bivariable, and cross-correlated bivariable random fields on landslide runout behavior. The study's findings reveal that integrating variability in both friction angle and cohesion within uncorrelated bivariable random fields markedly influences runout distances when compared with univariable random fields. Moreover, the cross-correlation of soil cohesion and friction angle dramatically affects runout behavior, with positive correlations enlarging and negative correlations reducing runout distances. Transitioning from two-dimensional (2D) to 3D analyses, a more realistic representation of sliding surface, landslide velocity, runout distance and final deposit morphology is achieved. The study highlights that 2D random analyses substantially underestimate the mean value and overestimate the variability of runout distance, underscoring the importance of 3D modeling in accurately predicting landslide behavior. Overall, this work emphasizes the essential role of understanding 3D cross-correlation in soil strength for landslide hazard assessment and mitigation strategies.
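How a cross-correlated bivariable field is generated can be sketched with a minimal example (an illustration of the general technique only, not the paper's LDFE workflow; the means, COVs, and correlation value are hypothetical): two independent standard-normal fields are coupled through the Cholesky factor of the 2×2 correlation matrix and then mapped to lognormal cohesion and friction-angle fields.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                    # sample points (1D illustration, no spatial correlation)
rho = -0.5                     # hypothetical negative c-phi cross-correlation

# Independent standard normals, coupled via the Cholesky factor of [[1, rho], [rho, 1]]
z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)
g1 = z1
g2 = rho * z1 + np.sqrt(1.0 - rho**2) * z2

def to_lognormal(g, mean, cov):
    """Map a standard-normal field to a lognormal one with the given mean and COV."""
    s = np.sqrt(np.log(1.0 + cov**2))
    m = np.log(mean) - 0.5 * s**2
    return np.exp(m + s * g)

cohesion = to_lognormal(g1, mean=10.0, cov=0.3)   # kPa, hypothetical statistics
friction = to_lognormal(g2, mean=30.0, cov=0.2)   # degrees, hypothetical statistics

print(round(float(np.corrcoef(g1, g2)[0, 1]), 2))  # ≈ -0.5
```

Flipping the sign of `rho` switches between the positively and negatively correlated scenarios compared in the study; a full field realization would additionally impose spatial autocorrelation (e.g. via Karhunen-Loeve expansion), which is omitted here for brevity.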