3D medical image reconstruction has significantly enhanced diagnostic accuracy, yet the reliance on densely sampled projection data remains a major limitation in clinical practice. Sparse-angle X-ray imaging, though safer and faster, poses challenges for accurate volumetric reconstruction due to limited spatial information. This study proposes a 3D reconstruction neural network based on adaptive weight fusion (AdapFusionNet) to achieve high-quality 3D medical image reconstruction from sparse-angle X-ray images. To address the issue of spatial inconsistency in multi-angle image reconstruction, an innovative adaptive fusion module was designed to score initial reconstruction results during the inference stage and perform weighted fusion, thereby improving the final reconstruction quality. The reconstruction network is built on an autoencoder (AE) framework and uses orthogonal-angle X-ray images (frontal and lateral projections) as inputs. The encoder extracts 2D features, which the decoder maps into 3D space. This study uses a lung CT dataset to obtain complete three-dimensional volumetric data, from which digitally reconstructed radiographs (DRRs) are generated at various angles to simulate X-ray images. Since real-world clinical X-ray images rarely come with perfectly corresponding 3D "ground truth," using CT scans as the three-dimensional reference effectively supports the training and evaluation of deep networks for sparse-angle X-ray 3D reconstruction. Experiments conducted on the LIDC-IDRI dataset with simulated X-ray (DRR) images as training data demonstrate the superior performance of AdapFusionNet compared with other fusion methods. Quantitative results show that AdapFusionNet achieves SSIM, PSNR, and MAE values of 0.332, 13.404, and 0.163, respectively, outperforming other methods (SingleViewNet: 0.289, 12.363, 0.182; AvgFusionNet: 0.306, 13.384, 0.159). Qualitative analysis further confirms that AdapFusionNet significantly enhances the reconstruction of lung and chest contours while effectively reducing noise during the reconstruction process. The findings demonstrate that AdapFusionNet offers significant advantages in the 3D reconstruction of sparse-angle X-ray images.
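The abstract does not disclose how AdapFusionNet's scoring module is built, but its score-then-weight fusion step can be sketched as follows. The softmax weighting and the `adaptive_fusion` helper are illustrative assumptions, not the authors' implementation; in the actual network the scores would come from a learned module at inference time.

```python
import numpy as np

def adaptive_fusion(volumes, scores):
    """Weighted fusion of candidate 3D reconstructions.

    volumes: list of equally shaped candidate volumes (e.g. one per view)
    scores:  one quality score per candidate (higher = better); here the
             scores are simply given, whereas a learned scorer is assumed
             in the paper's adaptive fusion module
    """
    s = np.asarray(scores, dtype=float)
    w = np.exp(s - s.max())          # softmax, shifted for numerical stability
    w /= w.sum()
    fused = sum(wi * v for wi, v in zip(w, volumes))
    return fused, w
```

With equal scores this degenerates to plain averaging (the AvgFusionNet baseline); a strongly dominant score effectively selects a single view.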
Computer-aided surgical navigation technology helps guide doctors through an operation: it simulates the whole surgical environment with computer technology and visualizes each step of the procedure in three dimensions. At present, common image-guided surgical techniques such as computed tomography (CT) and X-ray imaging expose the human body to radiation during imaging. To address this, we propose a novel Extended Kalman filter-based model that tracks the puncture needle-point using an ultrasound probe. To overcome the limitations of Kalman filtering methods based on position and velocity, our Kalman filter uses the position and relative velocity of the puncture needle-point instead, and the ultrasonic probe is controlled by a Proportional-Integral (PI) controller in the X-axis direction and a Proportional-Derivative (PD) controller in the Y-axis direction. The motion of the ultrasonic probe is servo-controlled according to whether the needle-point can be detected in the ultrasound image, so that the probe can track the puncture needle-point in real time. Experimental results show that this method achieves better tracking performance.
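As a rough illustration of the predict/update cycle behind such a tracker, here is a minimal linear Kalman filter over a [position, velocity] state with position-only measurements. The noise values `q` and `r`, the `dt`, and the function name are illustrative assumptions; the paper's actual extended (non-linear) formulation, its relative-velocity state, and the PI/PD probe control are not reproduced here.

```python
import numpy as np

def kalman_step(x, P, z, dt=0.04, q=1e-3, r=1e-2):
    """One predict/update cycle for a state [position, velocity].

    x: state estimate (2,), P: covariance (2,2), z: measured position
    (e.g. the needle-point location detected in the ultrasound image).
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # only position is observed
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - (H @ x)[0]                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T / S[0, 0]                   # Kalman gain (2,1)
    x = x + K[:, 0] * y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Fed a target moving at constant speed, the estimate converges to both the true position and the true velocity, which is what makes the velocity component usable for servo control of the probe.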
Statistics of languages are usually calculated by counting characters, words, sentences, and word rankings. Some of these random variables are also the main "ingredients" of classical readability formulae. Revisiting the readability formula of Italian, known as GULPEASE, shows that of the two terms that determine the readability index G—the semantic index GC, proportional to the number of characters per word, and the syntactic index GF, proportional to the reciprocal of the number of words per sentence—GF is dominant, because GC is, in practice, constant for any author throughout seven centuries of Italian Literature. Each author can modulate the length of sentences more freely than the length of words, and in ways that differ from author to author. For any author, any couple of text variables can be modelled by a linear relationship y = mx, but with a different slope m from author to author, except for the relationship between characters and words, which is unique for all. The most important relationship found in the paper is that between short-term memory capacity, described by Miller's "7 ± 2 law" (i.e., the number of "chunks" that an average person can hold in short-term memory ranges from 5 to 9), and the word interval, a new random variable defined as the average number of words between two successive punctuation marks. The word interval can be converted into a time interval through the average reading speed. The word interval spreads over the same range as Miller's law, and the time interval spreads over the same range as short-term memory response times. The connection between the word interval (and time interval) and short-term memory appears, at least empirically, justified and natural, although it remains to be investigated further. Technical and scientific writings (papers, essays, etc.) demand more of their readers because words are on average longer, the readability index G is lower, and word and time intervals are longer. Future work on ancient languages, such as the classical Greek and Latin Literatures (or modern-language Literatures), could give us insight into the short-term memory required of their well-educated ancient readers.
This paper reports on the progress made during the Dragon-4 project Three and Four-Dimensional Topographic Measurement and Validation (ID: 32278), sub-project Multi-baseline SAR Processing for 3D/4D Reconstruction (ID: 322782). The work reported here focuses on two important aspects of SAR remote sensing of tropical forests, namely the retrieval of forest biomass and the assessment of effects due to changing weather conditions. Recent studies have shown that, using SAR tomography, the backscattered power in the layer 30 m above the ground is linearly correlated with forest Above Ground Biomass (AGB). However, the two parameters that determine this linear relationship may vary across tropical forest sites. To address this problem, we investigate the possibility of using LiDAR-derived AGB to help train the two parameters. Experimental results obtained by processing data from the TropiSAR campaign support the feasibility of the proposed concept. This analysis is complemented by an assessment of the impact of changing weather conditions on tomographic imaging, for which we simulate BIOMASS repeat-pass tomography using ground-based TropiSCAT data with a revisit time of 3 days, rainy days included. The resulting backscattered-power variation at 30 m is within 1.5 dB. For this forest site, this error translates into an AGB error of about 50-80 t/hm^2, which is 20% or less of the forest AGB.
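The tomography-to-biomass step described above amounts to fitting a two-parameter linear model per site. A least-squares sketch using LiDAR-derived AGB as the training target might look like this; the function name and the plain ordinary-least-squares choice are assumptions, not the authors' exact estimator.

```python
import numpy as np

def fit_agb_model(p30_db, agb_lidar):
    """Fit AGB = a * P30 + b, where P30 is the tomographic backscattered
    power (dB) of the layer 30 m above ground and LiDAR-derived AGB serves
    as the training target for the two site-dependent parameters (a, b)."""
    A = np.column_stack([np.asarray(p30_db, dtype=float),
                         np.ones(len(p30_db))])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(agb_lidar, dtype=float),
                                 rcond=None)
    return a, b
```

Because (a, b) are site-dependent, the fit would be repeated per site wherever LiDAR reference data are available.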
Several chronic disorders, including type 2 diabetes (T2D), obesity, heart disease, and cancer, are preceded by a state of chronic low-grade inflammation. Biomarkers for the early assessment of chronic disorders encompass acute phase proteins (APP), cytokines and chemokines, pro-inflammatory enzymes, lipids, and oxidative stress mediators. These substances enter saliva through the blood flow and, in some cases…
In Europe, computation of displacement demand for seismic assessment of existing buildings is essentially based on a simplified formulation of the N2 method as prescribed by Eurocode 8 (EC8). However, a lack of accuracy of the N2 method in certain conditions has been pointed out by several studies. This paper assesses the effectiveness of the N2 method in determining seismic displacement demand in the non-linear domain. The objective of this work is to investigate the accuracy of the N2 method through comparison with displacement demands computed using non-linear time-history analysis (NLTHA). Results show that the original N2 method may lead to overestimation or underestimation of displacement demand predictions. This may affect results of mechanical model-based assessment of seismic vulnerability at an urban scale. Hence, the second part of this paper addresses an improvement of the N2 method formula by empirical evaluation of NLTHA results based on EC8 ground classes. This task is formulated as a mathematical programming problem in which coefficients are obtained by minimizing the overall discrepancy between NLTHA and modified-formula results. Various settings of the mathematical programming problem have been solved using a global optimization metaheuristic. An extensive comparison between the original N2 method formulation and the optimized formulae highlights the benefits of the strategy.
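The coefficient-fitting idea can be illustrated with a toy random-search metaheuristic: choose coefficients so that a corrected demand formula minimizes the squared discrepancy with NLTHA results. The linear correction c0 + c1 * d_N2, the search ranges, and the function name are illustrative stand-ins; the study's actual modified N2 formula and its global-optimization metaheuristic are more elaborate.

```python
import random

def fit_correction(d_n2, d_nltha, iters=20000, seed=1):
    """Random-search sketch: find (c0, c1) so that the corrected demand
    c0 + c1 * d_N2 minimizes the total squared discrepancy with the
    NLTHA displacement demands."""
    rng = random.Random(seed)
    best, best_err = (0.0, 1.0), float("inf")
    for _ in range(iters):
        c0 = rng.uniform(-1.0, 1.0)
        c1 = rng.uniform(0.0, 2.0)
        err = sum((c0 + c1 * x - y) ** 2 for x, y in zip(d_n2, d_nltha))
        if err < best_err:
            best, best_err = (c0, c1), err
    return best
```

In the paper this minimization is repeated per EC8 ground class, so each class gets its own empirically calibrated coefficients.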
In this paper, a data-driven prognostic model capable of dealing with different sources of uncertainty is proposed. The main novelty is the application of a mathematical framework, namely a Random Fuzzy Variable (RFV) approach, for the representation and propagation of the different uncertainty sources affecting Prognostic Health Management (PHM) applications: measurement, future, and model uncertainty. In this way, it is possible to deal not only with measurement noise and model-parameter uncertainty due to the stochastic nature of the degradation process, but also with systematic effects, such as systematic errors in the measurement process, incomplete knowledge of the degradation process, and subjective belief about model parameters. Furthermore, the low analytical complexity of the employed prognostic model allows the measurement and parameter uncertainty to be easily propagated into the Remaining Useful Life (RUL) forecast, with no need for extensive Monte Carlo loops, so that little computation power is required. The model has been applied to two real application cases, showing high-accuracy output and resulting in a potentially effective tool for predictive maintenance in different industrial sectors.
We study the short-term memory capacity of ancient readers of the original New Testament written in Greek and of its translations into Latin and into modern languages. To model it, we consider the number of words between any two contiguous interpunctions, I<sub>P</sub>, because this parameter can model how the human mind memorizes "chunks" of information. Since I<sub>P</sub> can be calculated for any alphabetical text, we can perform experiments—otherwise impossible—with ancient readers by studying the literary works they used to read. The "experiments" compare the I<sub>P</sub> of texts of one language/translation to those of another language/translation by measuring the minimum average probability of finding joint readers (those who can read both texts because of similar short-term memory capacity) and by defining an "overlap index". We also define the population of universal readers, people who can read any New Testament text in any language. Future work is vast, with many research tracks, because alphabetical literatures are very large and allow many experiments, such as comparing authors, translations, or even texts written by artificial intelligence tools.
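The word-interval parameter I<sub>P</sub> used above is simple to compute for any alphabetical text. The sketch below assumes the plain convention of splitting on common punctuation marks and averaging the word counts of the non-empty segments; the exact punctuation set the authors count as interpunctions may differ.

```python
import re

def word_interval(text):
    """I_P: average number of words between two successive punctuation
    marks. Segments are delimited by . , ; : ! ? and empty segments
    (e.g. from "...") are ignored."""
    segments = re.split(r"[.,;:!?]", text)
    counts = [len(s.split()) for s in segments if s.split()]
    return sum(counts) / len(counts)
```

Dividing I<sub>P</sub> by an average reading speed (words per second) converts the word interval into the time interval discussed in the related readability study.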
The statistical theory of language translation is used to compare how a literary character speaks to different audiences by diversifying two important linguistic communication channels: the "sentences channel" and the "interpunctions channel". The theory can "measure" how the author shapes a character speaking to different audiences by modulating deep-language parameters. To show its power, we have applied the theory to the literary corpus of Maria Valtorta, an Italian mystic of the 20th century. The likeness index I<sub>L</sub>, ranging from 0 to 1, makes it possible to "measure" how similar two linguistic channels are, and therefore whether a character speaks to different audiences in the same way. A 6-dB difference between the signal-to-noise ratios of two channels already gives I<sub>L</sub> ≈ 0.5, a threshold below which the two channels depend very little on each other, implying that the character addresses different audiences differently. In conclusion, multiple linguistic channels can describe the "fine tuning" that a literary author uses to diversify characters or to distinguish the behavior of the same character in different situations. The theory can be applied to literary corpora written in any alphabetical language.
A multi-dimensional mathematical theory applied to texts of classical Greek Literature spanning eight centuries reveals interesting connections between them. By studying words, sentences, and interpunctions in texts, the theory defines deep-language variables and linguistic channels. These mathematical entities arise from the writer's unconscious design and can reveal connections between texts far beyond the writer's awareness. The analysis, based on 3,225,839 words contained in 118,952 sentences, shows that ancient Greek writers, and their readers, were not significantly different from modern writers and readers. Their sentences were processed by a short-term memory modelled with two independent processing units in series, just as in modern readers. In a society in which people memorized information more often than modern people do, the ancient writers wrote almost exactly, mathematically speaking, as modern writers do, and for readers of similar characteristics. Since meaning is not considered by the theory, any text of any alphabetical language can be studied with exactly the same mathematical/statistical tools, and comparisons are possible regardless of language or epoch of writing.
The adipose tissue is a crucial energy reservoir that can undergo significant changes during aging, impacting the pathogenesis of metabolic disorders, including obesity.1 Obesity affects individuals of all ages, with different implications for each stage of life. It is thought to be a state of accelerated aging, prompting the introduction of the term "adipaging", reflecting that obesity and aging share key biological hallmarks strictly related to a dysfunctional adipose tissue.
Funding (AdapFusionNet study): Supported by the Sichuan Science and Technology Program (2023YFSY0026, 2023YFH0004).
Funding (ultrasound needle-tracking study): Supported by the Sichuan Science and Technology Program (2023YFSY0026, 2023YFH0004).
Funding (salivary biomarkers study): Partially supported by the Italian Ministry of Health (Ricerca Corrente 2022, Fondazione IRCCS Cà Granda Ospedale Maggiore Policlinico) and by the Italian Ministry of Health (Ricerca Finalizzata GR-2019-12370172).