Funding: National Natural Science Foundation of China (No. 61272432); Qingdao Science and Technology Development Plan (No. 12-1-4-6-(10)-jch)
Abstract: Watermarking systems based on quantization index modulation (QIM) are increasingly popular in high-payload applications, but they are inherently fragile against amplitude scaling attacks. To resist desynchronization attacks on QIM digital watermarking, a low-density parity-check (LDPC) code-aided QIM watermarking algorithm is proposed; the performance of the QIM watermarking system is improved by incorporating an LDPC code into a message-passing estimation/detection framework. Drawing on the theory of iterative estimation and decoding, the proposed algorithm recovers the watermark by iteratively estimating the amplitude scaling parameter and decoding the watermark. Under additive white Gaussian noise attacks, the proposed algorithm performs closer to the dirty-paper Shannon limit than the repetition-code-aided algorithm. For constant amplitude scaling attacks, the proposed algorithm obtains an accurate estimate of the amplitude scaling parameter. Simulation results show that the algorithm achieves performance similar to that of the undesynchronized case.
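The basic QIM embed/decode cycle underlying such systems can be sketched as follows. This is a minimal binary QIM in Python, not the paper's LDPC-aided scheme; the step size `delta` and the Gaussian host signal are illustrative assumptions:

```python
import numpy as np

def qim_embed(x, bits, delta=1.0):
    """Embed one bit per host sample: bit b selects the quantizer
    lattice shifted by b * delta / 2."""
    d = np.asarray(bits) * delta / 2.0
    return np.round((np.asarray(x) - d) / delta) * delta + d

def qim_decode(y, delta=1.0):
    """Minimum-distance decoding: pick whichever lattice (offset 0 or
    delta/2) lies closer to each received sample."""
    y = np.asarray(y)
    d0 = np.abs(y - np.round(y / delta) * delta)
    d1 = np.abs(y - (np.round((y - delta / 2) / delta) * delta + delta / 2))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(0)
host = rng.normal(0.0, 10.0, size=16)
msg = rng.integers(0, 2, size=16)
marked = qim_embed(host, msg)
recovered = qim_decode(marked)   # equals msg in the attack-free case
```

Note that scaling `marked` by an unknown gain misaligns the received samples with the quantizer lattices, which is exactly the fragility the abstract addresses.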
Funding: National Natural Science Foundation of China (61273290, 61373147); Xiamen Scientific Plan Project (2014S0048, 3502Z20123037); Fujian Scientific Plan Project (2013HZ0004-1); Fujian Provincial Education Office A-class Project (JA13238)
Abstract: Many websites use verification codes to prevent machines from automatically registering, logging in, voting maliciously, or flooding forums, but entering verification codes manually places a heavy burden on enterprises engaged in internet marketing. Improving verification code security requires an identification method to serve as the corresponding testing system. We propose an anisotropic heat kernel equation group that can generate a heat-source scale space during kernel evolution, based on the infinite heat source axiom, and design a multi-step anisotropic verification code identification algorithm. Its core procedures build the anisotropic heat kernel, set the wave energy information parameters, and comb out the verification code characters; peripheral procedures perform gray scaling, binarizing, denoising, normalizing, segmenting, and identifying. Detailed criteria and parameter sets are given. Actual tests show that the anisotropic heat kernel identification algorithm can handle many kinds of verification codes, including text characters, mathematical, Chinese, voice, 3D, programming, video, and advertising codes. Its recognition rate is 25% and 50% higher than neural network and context matching algorithms, respectively, on the Yahoo site; 49% and 60% higher on the Captcha site; 20% and 52% higher on the Baidu site; 60% and 65% higher on the 3DTakers site; and 40% and 51% higher on the MDP site.
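The heat-kernel evolution the authors build on belongs to the family of anisotropic diffusion. As a rough illustration of the general idea (not the authors' equation group), a single Perona-Malik diffusion step smooths flat regions while its edge-stopping conduction preserves character strokes; the image, `kappa`, and `lam` below are illustrative:

```python
import numpy as np

def perona_malik_step(img, kappa=0.1, lam=0.2):
    """One explicit anisotropic diffusion step: conduction g() falls off
    on strong gradients, so edges (character strokes) are preserved."""
    # Neighbour differences with replicated borders.
    n = np.vstack([img[:1], img[:-1]]) - img      # north
    s = np.vstack([img[1:], img[-1:]]) - img      # south
    e = np.hstack([img[:, 1:], img[:, -1:]]) - img
    w = np.hstack([img[:, :1], img[:, :-1]]) - img
    g = lambda d: np.exp(-(d / kappa) ** 2)       # edge-stopping conduction
    return img + lam * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)

img = np.zeros((8, 8))
img[:, 4:] = 1.0                                  # a sharp vertical edge
out = perona_malik_step(img)
# Flat regions are untouched, and the unit step is so strong that
# conduction across it is essentially zero: the edge survives.
```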
Funding: Supported by the National Natural Science Foundation of China (90204012, 60573036) and the Natural Science Foundation of Hebei Province (F2006000177)
Abstract: We study the detailed malicious code propagation process in scale-free networks with link weights that denote the traffic between two nodes. The propagation velocity is found to reach a peak rapidly and then decay in a power-law form, unlike the well-known result for the unweighted case. Simulation results show that nodes with larger strength are preferentially infected, but no clear hierarchical dynamics are found. The simulations also show that larger dispersion of the network's weights leads to slower propagation, indicating that malicious code propagates more quickly in unweighted scale-free networks than in weighted ones under the same conditions. These results show that not only the topology of a network but also its link weights affect the propagation of malicious code.
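A toy discrete-time SI simulation conveys the weighted mechanism. This is a generic sketch, not the paper's model; the three-node network, `beta`, and the strength-normalised transmission rule are assumptions for illustration:

```python
import random

def si_step(adj, infected, beta=0.5, rng=random):
    """One discrete SI step on a weighted graph: an infected node u passes
    malicious code to a susceptible neighbour v with probability
    proportional to the link weight w(u, v), normalised by u's strength
    (the sum of u's link weights)."""
    strength = {u: sum(w for _, w in adj[u]) for u in adj}
    new = set(infected)
    for u in infected:
        for v, w in adj[u]:
            if v not in infected and rng.random() < beta * w / strength[u]:
                new.add(v)
    return new

# Toy weighted network: the heavy 0-1 link carries most of the traffic,
# so node 1 tends to be infected before node 2.
adj = {0: [(1, 5.0), (2, 1.0)],
       1: [(0, 5.0), (2, 1.0)],
       2: [(0, 1.0), (1, 1.0)]}
infected = {0}
rng = random.Random(42)
for _ in range(10):
    infected = si_step(adj, infected, rng=rng)
```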
Abstract: AIM: To determine the association of unstable pelvic ring injuries with trauma code status. METHODS: A retrospective review of all pelvic ring injuries at a single academic center from July 2010 to June 2013 was performed. The trauma registry was used to identify level 1 and level 2 trauma codes for each injury. The computed tomography scans of all patients were classified as stable or unstable using the Abbreviated Injury Scale. Pelvic injury classifications in the level 1 and level 2 groups were compared, as was patient disposition at discharge. RESULTS: There were 108 level 1 and 130 level 2 blunt trauma admissions. In the level 1 group, 67% of pelvic injuries were classified as stable fracture patterns and 33% as unstable; in the level 2 group, 62% were classified as stable and 38% as unstable. A level 1 trauma code was not associated with the odds of having an unstable fracture pattern (OR = 0.83, 95%CI: 0.48-1.41, P = 0.485). Among level 1 patients with unstable pelvic injuries, 33% were discharged to home, 36% to a rehabilitation facility, and 32% died; among level 2 patients with unstable pelvic injuries, the corresponding figures were 65%, 31%, and 4%. For those with unstable pelvic fractures (n = 85), assignment of a level 2 trauma code was associated with reduced odds of death (OR = 0.07, 95%CI: 0.01-0.35, P = 0.001) relative to discharge to home. CONCLUSION: Trauma code level assignment is not correlated with the severity of pelvic injury. Because an unstable pelvis can lead to hemodynamic instability, these injuries may be undertriaged.
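The first odds ratio reported above can be reproduced from the group sizes; the counts below are reconstructed from the reported percentages (about 36/72 unstable/stable in level 1 and 49/81 in level 2), so they are approximate:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with a Wald 95% CI."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Rows: level 1, level 2; columns: unstable, stable.
or_, lo, hi = odds_ratio_ci(36, 72, 49, 81)
print(f"OR = {or_:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
# prints "OR = 0.83, 95% CI: 0.48-1.41", matching the abstract
```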
Abstract: Numerical experiments carried out systematically using the Lee Model code unveil insightful and practical wide-ranging scaling laws for plasma focus machines, for nuclear fusion energy as well as other applications. An essential feature of the numerical experiments is fitting a measured current waveform to the computed waveform to calibrate the model for the particular machine, providing a reliable and rigorous determination of the all-important pinch current. The thermodynamic and radiation properties of the resulting plasma are then reliably determined. This paper provides an overview of the recently published scaling laws for neutron (Yn) and neon soft x-ray, SXR (Ysxr), yields: Yn = 3.2×10^11 Ipinch^4.5 and Yn = 1.8×10^10 Ipeak^3.8, for Ipeak from 0.3 to 5.7 MA and Ipinch from 0.2 to 2.4 MA; Yn ~ E0^2.0 at tens of kJ, falling to Yn ~ E0^0.84 at the MJ level (up to 25 MJ). Likewise Ysxr = 8.3×10^3 Ipinch^3.6 and Ysxr = 6×10^2 Ipeak^3.2, for Ipeak from 0.1 to 2.4 MA and Ipinch from 0.07 to 1.3 MA; Ysxr ~ E0^1.6 in the kJ range, falling to Ysxr ~ E0^0.8 towards the MJ level.
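With the exponents above, the yield formulas are straightforward to evaluate. The functions below are a direct transcription of the quoted pinch-current scaling laws, with currents in MA and yields in the paper's units:

```python
def neutron_yield(ipinch_ma):
    """Yn = 3.2e11 * Ipinch^4.5, Ipinch in MA (quoted range 0.2-2.4 MA)."""
    return 3.2e11 * ipinch_ma ** 4.5

def sxr_yield(ipinch_ma):
    """Ysxr = 8.3e3 * Ipinch^3.6, Ipinch in MA (quoted range 0.07-1.3 MA)."""
    return 8.3e3 * ipinch_ma ** 3.6

# The steep exponent is the practical message: doubling the pinch
# current multiplies the neutron yield by 2**4.5, about 22.6x.
gain = neutron_yield(2.0) / neutron_yield(1.0)
```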
Abstract: Because the hardware implementation of the normalized min-sum (NMS) decoding algorithm for low-density parity-check (LDPC) codes is difficult owing to the uncertainty of the scale factor, an NMS decoding algorithm with a variable scale factor is proposed for the near-earth space LDPC code (8177, 7154) in the Consultative Committee for Space Data Systems (CCSDS) standard. The shift characteristics of the field programmable gate array (FPGA) are used to optimize the quantized check-node data, and the LDPC decoder is then realized. Simulation and experimental results show that the designed FPGA-based LDPC decoder, by adopting the variable scale factor in the NMS decoding algorithm, improves decoding performance, simplifies the hardware structure, accelerates convergence, and improves error correction ability.
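The check-node update at the heart of NMS decoding is compact. Below is a minimal floating-point version; the fixed scale factor 0.75 (a power-of-two-friendly value, hence the FPGA shift trick) is a common choice, not the paper's variable factor:

```python
import numpy as np

def nms_check_update(llrs, alpha=0.75):
    """Normalized min-sum update for one check node: each outgoing
    message takes the sign product and alpha times the minimum magnitude
    of the *other* incoming LLRs. The scale factor alpha < 1 compensates
    for min-sum's overestimate relative to sum-product decoding."""
    llrs = np.asarray(llrs, dtype=float)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        others = np.delete(llrs, i)
        sign = np.prod(np.sign(others))
        out[i] = alpha * sign * np.min(np.abs(others))
    return out

msgs = nms_check_update([2.0, -1.0, 4.0], alpha=0.75)
# message to edge 0: sign(-1 * 4) * min(1, 4) * 0.75 = -0.75
```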
Funding: Supported by the National Natural Science Foundation of China (No. 61671080 and No. 61771066) and Nokia Beijing Bell Lab
Abstract: In this paper, we investigate weighted iterative decoding to improve the performance of turbo-polar codes. First, a minimum weighted mean square error criterion is proposed to optimize the scaling factors (SFs). Second, for two typical iterative algorithms, soft cancellation (SCAN) and belief propagation (BP) decoding, genie-aided decoders are proposed as the ideal reference for practical decoding. Guided by this optimization framework, the optimal SFs of the SCAN and BP decoders are obtained. The bit error rate of turbo-polar codes with the optimal SFs achieves gains of 0.3 dB and 0.7 dB over standard SCAN and BP decoding, respectively.
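The closed form behind a minimum-MSE scaling factor is a one-line least squares fit of the practical LLRs against the genie-aided reference. This is a simplified, unweighted sketch of that idea (the paper's criterion is weighted, and the synthetic signal model below is an assumption):

```python
import numpy as np

def optimal_scale_factor(llr, llr_genie):
    """Scalar alpha minimizing the mean square error
    E[(alpha * L - L_genie)^2]; closed form: <L, L_genie> / <L, L>."""
    llr = np.asarray(llr, dtype=float)
    llr_genie = np.asarray(llr_genie, dtype=float)
    return float(np.dot(llr, llr_genie) / np.dot(llr, llr))

rng = np.random.default_rng(1)
ideal = rng.normal(0.0, 2.0, 1000)              # genie-aided reference LLRs
observed = 1.6 * ideal + rng.normal(0.0, 0.3, 1000)  # over-confident LLRs
alpha = optimal_scale_factor(observed, ideal)
# alpha comes out close to 1/1.6, shrinking the LLRs back toward the reference
```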
Abstract: This paper uncovers the secret codes of the scale distribution from the Planck scale to the Sun scale by using the Planck length, derived from three fundamental physical constants (the gravitational constant G, the speed of light c, and the Planck constant), and by treating the whole universe as a unified entity across all scales. According to this symmetric scale distribution law for the different scale regions: (a) it naturally suggests a possibility of overcoming the difficulty of the desert between the grand unification scale and the electroweak unification scale relevant to quarks and leptons, and it is surprising to find that the scales of quarks and electrons, protons and neutrons, and atoms all sequentially fall at the predicted points of the scale space; (b) it closely matches the scale of cells, the basic units of the human body; (c) even the average height of human beings is naturally deduced; (d) further, the scales of the celestial bodies most closely tied to human beings, including the Earth and the Sun, also fall exactly at the predicted points of the scale space. All these scales, of order 10^(5n) cm (n = 0, 1, 2, …, 9), are argued to support a key anthropic principle for mankind (reducing the anthropic principle to an anthropic theorem): matter strata (a) and (b) are the inorganic and organic bases of human beings, respectively; stratum (c) is the human being itself; and strata (d) are the living environments of human beings. That is, everything is for, or relevant to, the existence of human beings. Consequently, the experimentally checked scale ladder of well-known matter levels coincides with the scale ladder predicted by the deduced distribution law. From the Planck scale to the Sun scale, one may systematically build exact scientific theories corresponding to the ten matter strata, set up the sciences bridging the different strata, and thereby understand the different sciences and their relations in a systematic way.
Abstract: This study proposes a simple scaling factor approach to improve the performance of parallel concatenated convolutional code (PCCC) and serially concatenated convolutional code (SCCC) systems based on suboptimal soft-input soft-output (SISO) decoders. Fixed and adaptive scaling factors are estimated to mitigate both the optimistic nature of the a posteriori information and the correlation between the intrinsic and extrinsic information produced by soft-output Viterbi algorithm (SOVA) decoders. The scaling factors can be computed off-line to reduce processing time and implementation complexity. Simulation results show a significant improvement in bit error rate (BER) over both additive white Gaussian noise and Rayleigh fading channels. The convergence properties of the proposed iterative scheme are assessed with the extrinsic information transfer (EXIT) chart analysis technique.
Abstract: Polar codes defined by a kernel matrix are a class of codes with low encoding-decoding complexity that can achieve the Shannon limit. In this paper, a novel method to construct 2<sup>n</sup>-dimensional kernel matrices is proposed, based on primitive BCH codes and making use of interception, the direct sum, and the addition of a row and a column. To ensure polarization of the kernel matrix, a remedy is also given for the case where the partial distances of the constructed kernel matrix exceed their upper bound, and a lower bound on the exponent of the 2<sup>n</sup>-dimensional kernel matrix is obtained. This lower bound is tighter than the Gilbert-Varshamov (G-V) type bound, and the scaling exponent is better in the 16-dimensional case.
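For small kernels, the partial distances and the exponent can be checked by brute force over GF(2). This is a generic sketch of the standard definitions (partial distance of a row to the span of the rows below it), not the paper's BCH-based construction:

```python
from itertools import product
from math import log

def partial_distances(kernel):
    """D_i = Hamming distance from row g_i to the GF(2) span of the
    rows below it (the span of no rows is just the zero word)."""
    l = len(kernel)
    dists = []
    for i in range(l):
        below = kernel[i + 1:]
        best = None
        for coeffs in product([0, 1], repeat=len(below)):
            comb = [0] * l
            for c, row in zip(coeffs, below):
                if c:
                    comb = [a ^ b for a, b in zip(comb, row)]
            d = sum(a ^ b for a, b in zip(kernel[i], comb))
            best = d if best is None else min(best, d)
        dists.append(best)
    return dists

def exponent(kernel):
    """E(G) = (1/l) * sum_i log_l(D_i) for an l x l kernel G."""
    l = len(kernel)
    return sum(log(d, l) for d in partial_distances(kernel)) / l

F2 = [[1, 0], [1, 1]]
# Partial distances (1, 2) give Arikan's kernel exponent E(F2) = 0.5.
```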
Abstract: It is common to assume that structures are designed for a 50-year life cycle, as per Eurocode 2 and other codes. In special cases structures are designed for a longer life cycle, such as bridges, important infrastructure facilities, important religious structures, or cases with an extended return period for seismic events or floods. Beyond durability and maintenance aspects, this also requires covering the probability of exceeding the characteristic design live loads during the extended period, while keeping the same level of accepted risk that the various codes deemed adequate for the standard 50-year life cycle. Since design procedures, formulations, characteristic material strengths, and partial safety factors for such structures are taken from the existing codes, a scaling of the partial safety factors, or alternatively an additional "compensating" factor, is required. A simplified approach and procedure, based on structural reliability considerations, is proposed to calibrate the 50-year code safety factors to compensate for an extended life cycle.
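One common way to derive such a compensating factor is through an extreme-value model of the annual maximum load. The sketch below assumes i.i.d. Gumbel-distributed annual maxima with hypothetical parameters `mu` and `beta`; it illustrates the reliability argument only and is not the paper's calibration procedure:

```python
import math

def gumbel_char_load(mu, beta, years, p_exceed=0.10):
    """Load with exceedance probability p_exceed over the given life
    cycle, assuming i.i.d. annual maxima ~ Gumbel(mu, beta):
    P(max over n years <= q) = F(q)**n, with F the Gumbel CDF."""
    # Solve F(q)**years = 1 - p_exceed for q via the Gumbel quantile.
    f_annual = (1.0 - p_exceed) ** (1.0 / years)
    return mu - beta * math.log(-math.log(f_annual))

mu, beta = 1.0, 0.1                 # hypothetical annual-maximum load model
q50 = gumbel_char_load(mu, beta, 50)
q100 = gumbel_char_load(mu, beta, 100)
factor = q100 / q50                 # "compensating" scaling on the live load
```

Under these assumptions the factor is only a few percent above 1, which is why a modest scaling of the partial safety factors can preserve the 50-year risk level over a doubled life cycle.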