Hyper- and multi-spectral image fusion is an important technique for producing hyperspectral images with high spatial resolution, and it generally depends on the spectral response function and the point spread function. However, few works have addressed the estimation of these two degradation functions. To learn the two functions from the image pairs to be fused, we propose a Dirichlet network in which both functions are properly constrained. Specifically, the spectral response function is constrained to be positive, while a Dirichlet distribution together with a total variation regularizer is imposed on the point spread function. To the best of our knowledge, this is the first time that a neural network and Dirichlet regularization have been investigated for estimating the degradation functions. Both image degradation and fusion experiments demonstrate the effectiveness and superiority of the proposed Dirichlet network.
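As a minimal illustration of how such constraints can be imposed (a sketch only, assuming a softmax/softplus parameterization rather than the authors' actual network; all array sizes are arbitrary):

```python
# Minimal sketch (not the authors' code): keep a PSF on the probability simplex
# via softmax (non-negative, sums to one, a Dirichlet-style constraint), keep an
# SRF strictly positive via softplus, and compute an anisotropic TV penalty that
# would be added to the fitting loss.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def softplus(z):
    return np.log1p(np.exp(z))

def total_variation(k):
    # anisotropic TV of a 2-D kernel
    return np.abs(np.diff(k, axis=0)).sum() + np.abs(np.diff(k, axis=1)).sum()

rng = np.random.default_rng(0)
psf_logits = rng.normal(size=(7, 7))   # free parameters for the PSF
srf_raw = rng.normal(size=(31, 4))     # free parameters for the SRF (bands x MSI channels)

psf = softmax(psf_logits.ravel()).reshape(7, 7)   # non-negative, sums to 1
srf = softplus(srf_raw)                           # strictly positive
tv_penalty = total_variation(psf)
print(psf.sum(), srf.min() > 0, tv_penalty)
```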
Non-line-of-sight (NLOS) imaging has emerged as a prominent technique for reconstructing obscured objects from images that undergo multiple diffuse reflections. This imaging method has garnered significant attention in diverse domains, including remote sensing, rescue operations, and intelligent driving, due to its wide-ranging potential applications. Nevertheless, accurately modeling the incident light direction, which carries energy and is captured by the detector amidst random diffuse reflection directions, poses a considerable challenge. This challenge hinders the acquisition of precise forward and inverse physical models for NLOS imaging, which are crucial for achieving high-quality reconstructions. In this study, we propose a point spread function (PSF) model for the NLOS imaging system based on ray tracing with random angles. Furthermore, we introduce a reconstruction method, termed the physics-constrained inverse network (PCIN), which establishes an accurate PSF model and inverse physical model by leveraging the interplay between the PSF constraints and the optimization of a convolutional neural network. PCIN initializes its parameters randomly, guided by the constraints of the forward PSF model, thereby obviating the need for the extensive training data sets required by traditional deep-learning methods. Through alternating iteration and gradient descent, we iteratively optimize the diffuse reflection angles in the PSF model and the neural network parameters. The results demonstrate that PCIN achieves efficient data utilization, as it does not require a large number of measured ground-truth data sets. Moreover, the experimental findings confirm that the proposed method restores the hidden object features with high accuracy.
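The alternating idea can be illustrated on a toy 1-D deblurring problem (this is not PCIN itself: a Gaussian width stands in for the physical PSF parameters, and a plain latent signal stands in for the network output; all values are assumptions):

```python
# Toy alternating optimization: jointly estimate a physical blur parameter and a
# latent signal by alternating gradient steps on a data-fidelity loss.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
true_sigma = 2.0
x_true = np.zeros(128); x_true[40] = 1.0; x_true[90] = 0.6        # hidden "scene"
y = gaussian_filter(x_true, true_sigma) + 0.001 * rng.normal(size=128)  # measurement

x, sigma, step_x, step_s = np.zeros(128), 1.0, 0.5, 0.05
for it in range(300):
    # (a) update the latent signal: gradient of ||G_sigma * x - y||^2
    r = gaussian_filter(x, sigma) - y
    x -= step_x * gaussian_filter(r, sigma)      # Gaussian blur is self-adjoint
    # (b) update the physical parameter by a finite-difference gradient
    eps = 1e-3
    loss = lambda s: np.sum((gaussian_filter(x, s) - y) ** 2)
    sigma -= step_s * (loss(sigma + eps) - loss(sigma - eps)) / (2 * eps)
print(round(sigma, 2))  # sigma is expected to drift toward the true width (2.0) in this toy setup
```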
Based on point spread function (PSF) theory, the side-lobe extension direction of the impulse response in bistatic synthetic aperture radar (BSAR) is analyzed in detail. It is further shown that autofocus in BSAR should be performed along the iso-range direction rather than the traditional azimuth-resolution (AR) direction. The conclusion is verified by computer simulation.
This paper presents progress on document-image point spread function (PSF) estimation. It begins with an overview of PSF estimation methods and explains why the knife-edge-input method was chosen. The knife-edge method is then described in detail, and a simulation experiment is performed to verify the implementation. Building on this experiment, we propose a procedure that makes automatic PSF estimation possible. A real document image is used to illustrate the procedure: it is restored with the estimated PSF and Lucy-Richardson deconvolution, and its OCR accuracy before and after deconvolution is compared. The paper concludes with an outlook on future work.
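For orientation, a hedged sketch of the knife-edge principle (not the paper's implementation): the edge spread function (ESF) measured across a sharp edge is differentiated to obtain the line spread function (LSF), a 1-D slice of the PSF. The synthetic edge and blur below are assumptions.

```python
# Knife-edge sketch: differentiate a blurred edge profile to recover the LSF.
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(-10, 10, 201)
edge = (x > 0).astype(float)
esf = gaussian_filter1d(edge, sigma=8)        # stands in for a measured edge profile

lsf = np.gradient(esf, x)                     # LSF = d(ESF)/dx
lsf /= lsf.sum() * (x[1] - x[0])              # normalize to unit area
above_half = x[lsf >= lsf.max() / 2]
print(f"estimated LSF FWHM ≈ {above_half[-1] - above_half[0]:.2f}")
```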
AIM: To describe the characteristics of the modulation transfer function (MTF) of the anterior corneal surface and to obtain the normal reference range of MTF at different spatial frequencies and optical zones of the anterior corneal surface in myopes. METHODS: Four hundred eyes from 200 patients were examined with the SIRIUS corneal topography system. Phoenis analysis software was used to simulate the MTF curves of the anterior corneal surface along the vertical and horizontal meridians at the 3, 4, 5, 6 and 7 mm optical zones of the cornea. MTF values at spatial frequencies of 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55 and 60 cycles per degree (cpd) were selected. RESULTS: The MTF curve of the anterior corneal surface decreased rapidly from low to intermediate frequencies (0-15 cpd) at all optical zones and declined slowly toward zero at higher frequencies (>15 cpd). As the optical zone diameter increased, the MTF curve decreased gradually. Within the 3-6 mm optical zones, the MTF values measured at the horizontal meridian were greater than the corresponding values at the vertical meridian at each spatial frequency, and the difference was statistically significant (P<0.05). At the 7 mm optical zone, the MTF values at the horizontal meridian were less than the corresponding values at the vertical meridian at 10-60 cpd, and the difference was statistically significant at 25, 30, 35, 40, 45 and 50 cpd (P<0.05). CONCLUSION: MTF can be used to describe the imaging quality of the anterior corneal surface objectively and in detail.
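For reference, the MTF at spatial frequency ν is conventionally the normalized modulus of the Fourier transform of the line spread function (the general definition, not a formula specific to the SIRIUS software):

```latex
\mathrm{MTF}(\nu) \;=\;
\frac{\left|\int_{-\infty}^{\infty} \mathrm{LSF}(x)\, e^{-2\pi i \nu x}\, dx\right|}
     {\left|\int_{-\infty}^{\infty} \mathrm{LSF}(x)\, dx\right|}
```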
A point spread function (PSF) for the blurring component in positron emission tomography (PET) is studied. The PSF matrix is derived from the single-photon incidence response function, and a statistical iterative reconstruction (IR) method based on a system matrix containing the PSF is developed. More specifically, gamma-photon incidence upon a crystal array is simulated by Monte Carlo (MC) simulation, and the single-photon incidence response functions are calculated. These response functions are then used to compute the coincidence blurring factors according to the physical process of PET coincidence detection. By weighting the ordinary system-matrix response with the coincidence blurring factors, the IR system matrix containing the PSF is established. Using this system matrix, the image is reconstructed with an ordered-subset expectation maximization (OSEM) algorithm. The experimental results show that the proposed system matrix substantially improves the radial resolution, contrast, and noise properties of the images. Furthermore, because the simulated single gamma-ray incidence response function depends only on the crystal configuration, the method can be extended to any PET scanner with the same detector crystal configuration.
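For context, with A = (a_ij) denoting the PSF-weighted system matrix described above and y the measured coincidence data, the standard OSEM update over subset S_b takes the familiar form:

```latex
x_j^{(k,b+1)} \;=\; \frac{x_j^{(k,b)}}{\sum_{i \in S_b} a_{ij}}
\sum_{i \in S_b} a_{ij}\, \frac{y_i}{\sum_{j'} a_{ij'}\, x_{j'}^{(k,b)}}
```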
The International Software Benchmarking Standards Group (ISBSG) provides researchers and practitioners with a repository of software project data that has been used to date mostly for benchmarking and project estimation, but rarely for software defect analysis. In statistics, sigma measures how far a process deviates from its goal. Six Sigma focuses on reducing variation within processes, because such variation may lead to inconsistency in meeting project specifications, that is, to "defects" that fail to satisfy customers. Six Sigma provides two methodologies for solving organizational problems: the Define-Measure-Analyze-Improve-Control (DMAIC) process cycle and Design for Six Sigma (DFSS). DMAIC focuses on improving existing processes, while DFSS focuses on redesigning existing processes and developing new ones. This paper presents an approach to analyzing the ISBSG repository based on Six Sigma measurements. It investigates the use of the ISBSG data repository with several related Six Sigma measurement aspects, including sigma defect measurement and software defect estimation. The study describes a two-level dataset preparation, analyzes the quality-related data fields in the ISBSG MS-Excel data extract (Release 12, 2013), and then analyzes the extracted dataset of software projects. The study finds that the ISBSG MS-Excel data extract has a high ratio of missing data in the "Total Number of Defects" field, which represents a serious challenge when the ISBSG dataset is used for software defect estimation.
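As a reminder of the sigma defect measurement the paper builds on, the conversion from defect counts to a sigma level commonly used in Six Sigma practice (with the conventional 1.5-sigma shift; not necessarily the exact computation used in the study) is:

```python
# Common Six Sigma conversion: defects per million opportunities (DPMO) to sigma level.
from scipy.stats import norm

def sigma_level(defects, opportunities, shift=1.5):
    dpmo = defects / opportunities * 1_000_000
    return norm.ppf(1 - dpmo / 1_000_000) + shift

print(round(sigma_level(defects=3.4, opportunities=1_000_000), 2))  # ≈ 6.0 by construction
```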
A set of point spread functions (PSFs) has been obtained by means of Monte Carlo simulation for a small gamma camera with a pinhole collimator of various hole diameters. The field of view (FOV) of the camera is expanded from 45 mm to 70 mm in diameter. The position dependence of the PSF variances is presented, and the acceptance for 140 keV gamma rays is explored. A phantom 70 mm in diameter was experimentally imaged with the camera, whose effective FOV is only 45 mm in diameter.
In this paper, a semiparametric two-sample density ratio model is considered, and the empirical likelihood method is applied to estimate its parameters. A common computational problem is that the empirical likelihood function may be a concave-convex function. A simple Lagrange saddle-point algorithm is presented for computing the saddle point of the empirical likelihood function when the Lagrange multiplier has no explicit solution, yielding the maximum empirical likelihood estimate (MELE) of the parameters. Monte Carlo simulations illustrate the Lagrange saddle-point algorithm.
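A hedged, simpler illustration of the same computational issue (one-sample empirical likelihood for a mean, not the paper's two-sample density-ratio model): the Lagrange multiplier has no closed form and is found numerically, here by a bracketed scalar root solve.

```python
# One-sample empirical likelihood: solve for the Lagrange multiplier numerically.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
x = rng.normal(loc=0.3, scale=1.0, size=50)
mu0 = 0.0                              # hypothesized mean
d = x - mu0

def score(lam):                        # derivative of the log-EL in lambda
    return np.sum(d / (1.0 + lam * d))

# lambda must keep all implied weights positive: 1 + lam*d_i > 0
lo = -1.0 / d.max() + 1e-6
hi = -1.0 / d.min() - 1e-6
lam = brentq(score, lo, hi)
log_el_ratio = -np.sum(np.log(1.0 + lam * d))   # -2*log_el_ratio is asymptotically chi-square(1)
print(lam, log_el_ratio)
```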
The International Software Benchmarking Standards Group (ISBSG) database was used to build models for estimating software functional test effort. Analysis of the data revealed three test productivity patterns representing economies or diseconomies of scale, and these patterns served as a basis for investigating the characteristics of the corresponding projects. Three groups of projects related to the three productivity patterns, characterized by domain, team size, elapsed time, and the rigor of verification and validation carried out during development, were found to be statistically significant. Within each project group, the variations in test effort can be explained, in addition to functional size, by 1) the processes executed during development and 2) the processes adopted for testing. Portfolios of estimation models were built using combinations of the three independent variables. The performance of estimation models built with the function point method developed by the Common Software Measurement International Consortium (COSMIC Function Points) and with the method advocated by the International Function Point Users Group (IFPUG Function Points) was compared to evaluate the impact of these sizing methods on test effort estimation.
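As an illustration of the kind of size-effort model involved (synthetic data and a simple power-law form assumed here; the paper's actual portfolios combine additional variables):

```python
# Illustrative only: a power-law effort model, effort = a * size^b,
# fitted by ordinary least squares in log-log space (synthetic data, not ISBSG).
import numpy as np

rng = np.random.default_rng(3)
size = rng.uniform(50, 2000, size=40)                       # functional size (e.g., COSMIC FP)
effort = 5.0 * size ** 0.9 * rng.lognormal(0.0, 0.3, 40)    # synthetic test effort (hours)

b, log_a = np.polyfit(np.log(size), np.log(effort), 1)
print(f"effort ≈ {np.exp(log_a):.2f} * size^{b:.2f}")
```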
This study quantitatively assessed the accuracy of 18F-FDG PET/CT images reconstructed with TOF+PSF and with TOF only, using the noise-matching concept to minimize bias in evaluating algorithm performance caused by noise; PET images of similar noise level were compared. Measurements were made on an in-house phantom with hot inserts of Φ10-37 mm, and oncological images of 14 patients were analyzed. The PET images were reconstructed using the OSEM, OSEM+TOF and OSEM+TOF+PSF algorithms. Optimal reconstruction parameters, including iterations, subsets, and the FWHM of the post-smoothing filter, were chosen for both the phantom and patient data. For quantitative accuracy, the recovery coefficient (RC) was calculated from the phantom PET images, and the signal-to-noise ratio (SNR), lesion-to-background ratio (LBR), and SUVmax were evaluated from the phantom and clinical data. The smallest hot insert (Φ10 mm) with a 2:1 activity concentration ratio could be detected in the PET images reconstructed with the TOF and TOF+PSF algorithms, but not with the OSEM algorithm. The relative difference in SNR between TOF+PSF and OSEM was significantly higher for the smaller insert sizes, while the SNR change was smaller for the Φ22-37 mm inserts at both 2:1 and 4:1 activity concentration ratios. In the clinical study, the SNR gains were 1.6 ± 0.53 and 2.7 ± 0.74 for TOF and TOF+PSF, while the relative differences in contrast were 17 ± 1.05% and 41.5 ± 1.85% for TOF only and TOF+PSF, respectively. The impact of TOF+PSF is more significant than that of TOF reconstruction for smaller inserts with a low activity concentration ratio. In the clinical PET/CT images, the TOF+PSF algorithm yielded better SNR and contrast for lesions, and the highest SUVmax was also seen for images reconstructed with TOF+PSF.
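The phantom metrics named above are commonly computed as follows (one widespread convention; the exact ROI definitions used in the study may differ):

```python
# Common phantom image-quality metrics (illustrative conventions, not necessarily
# the exact formulas used in the paper).
def recovery_coefficient(mean_in_insert, true_activity_conc):
    return mean_in_insert / true_activity_conc

def snr(mean_hot, mean_bkg, std_bkg):
    return (mean_hot - mean_bkg) / std_bkg

def lbr(mean_lesion, mean_bkg):
    return mean_lesion / mean_bkg

print(recovery_coefficient(7.2, 8.0), snr(7.2, 2.0, 0.4), lbr(7.2, 2.0))
```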
Detection of small pulmonary nodules is the goal of lung cancer screening. Computer-aided detection (CAD) systems are recommended for use in lung cancer computed tomography (CT) screening to increase the accuracy of nodule detection. The size and density of lung nodules are primary factors in determining the risk of malignancy. The purpose of this study is therefore to apply computer-simulated virtual nodules, based on the point spread function (PSF) measured on the same scanner (maintaining the spatial resolution condition), to assess how CAD system performance depends on nodule size and density. Virtual nodules with density differences between the lung background and the nodule (ΔCT values of 200, 300 and 400 HU) and different sizes (4 to 8 mm) were generated and fused onto clinical images. CAD detection was performed and free-response receiver operating characteristic (FROC) curves were obtained. The results show that both the density and size of virtual nodules affect detection efficiency, and the detailed results can be used for quantitative analysis of CAD system performance. This study suggests that PSF-based virtual nodules can be used effectively to assess how the performance of a lung cancer CT screening CAD system depends on nodule size and density.
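A hedged sketch of PSF-based virtual nodule generation (the Gaussian blur below is only a stand-in for the measured scanner PSF, and all geometry values are assumptions):

```python
# Build an ideal disk of chosen diameter and density contrast (ΔCT), blur it with
# a PSF stand-in, and add it to a lung-like background patch.
import numpy as np
from scipy.ndimage import gaussian_filter

def virtual_nodule(shape=(64, 64), diameter_mm=6, delta_ct=300,
                   pixel_mm=0.7, psf_sigma_mm=0.8):
    yy, xx = np.indices(shape)
    cy, cx = shape[0] / 2, shape[1] / 2
    r_mm = np.hypot(yy - cy, xx - cx) * pixel_mm
    ideal = (r_mm <= diameter_mm / 2) * float(delta_ct)    # HU above background
    return gaussian_filter(ideal, psf_sigma_mm / pixel_mm) # blur with PSF stand-in

background = np.full((64, 64), -800.0)      # lung-like background (HU)
patch_with_nodule = background + virtual_nodule()
print(patch_with_nodule.max())
```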
Point spread function (PSF) engineering has been pivotal in the remarkable progress made in high-resolution imaging over the last decades. However, the diversity of PSF structures attainable through existing engineering methods is limited. Here, we report universal PSF engineering, demonstrating a method to synthesize an arbitrary set of spatially varying 3D PSFs between the input and output volumes of a spatially incoherent diffractive processor composed of cascaded transmissive surfaces. We rigorously analyze the PSF engineering capabilities of such diffractive processors within the diffraction limit of light and provide numerical demonstrations of unique imaging capabilities, such as snapshot 3D multispectral imaging without any spectral filters, axial scanning, or digital reconstruction steps, enabled by the spatial and spectral engineering of 3D PSFs. Our framework and analysis will be important for future advances in computational imaging, sensing, and diffractive processing of 3D optical information.
Subpixel localization techniques for estimating the positions of point-like images captured by pixelated image sensors have been widely used in diverse optical measurement fields. With unavoidable imaging noise, there is a precision limit (PL) in estimating target positions on image sensors, which depends on the detected photon count, the noise, the point spread function (PSF) radius, and the PSF's intra-pixel position. Previous studies have clearly reported the effects of the first three parameters on the PL but have neglected the intra-pixel position information. Here, we develop a PL analysis framework for localization that reveals the effect of the intra-pixel position of small PSFs. To accurately estimate the PL in practical applications, we provide effective PSF (ePSF) modeling approaches and apply the Cramér–Rao lower bound. Based on the characteristics of small PSFs, we first derive simplified equations for finding the best PL and the best intra-pixel region for an arbitrary small PSF, and we verify these equations on real PSFs. Next, we use the typical Gaussian PSF for further analysis and find that the final optimum of the PL is achieved at the pixel boundaries when the Gaussian radius is as small as possible, indicating that the optimum is ultimately limited by light diffraction. Finally, we apply the maximum likelihood method; its combination with ePSF modeling allows us to reach the PL in experiments, validating the theoretical analysis. This work provides a new perspective on combining image sensor position control with PSF engineering to make full use of information theory, paving the way toward thoroughly understanding and achieving the final optimum of the PL in optical localization.
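A small numerical illustration of the intra-pixel effect discussed above (an assumption-laden sketch, not the paper's framework): the Cramér–Rao lower bound for 1-D localization of a pixelated Gaussian PSF under Poisson noise, evaluated at different offsets of the PSF center from a pixel center.

```python
# CRLB from the Fisher information of Poisson pixel counts for a pixel-integrated
# Gaussian PSF; the PSF is small relative to the pixel, and all values are assumed.
import numpy as np
from scipy.stats import norm

def crlb(x0, n_photons=1000, bkg=2.0, sigma=0.3, n_pix=11):
    edges = np.arange(n_pix + 1) - n_pix / 2            # pixel edges in pixel units
    p = norm.cdf(edges[1:], x0, sigma) - norm.cdf(edges[:-1], x0, sigma)
    mu = n_photons * p + bkg                             # expected counts per pixel
    dmu = n_photons * (norm.pdf(edges[:-1], x0, sigma) - norm.pdf(edges[1:], x0, sigma))
    fisher = np.sum(dmu ** 2 / mu)
    return 1.0 / np.sqrt(fisher)                         # precision limit (pixels)

for frac in (0.0, 0.25, 0.5):                            # 0.5 = PSF center on a pixel boundary
    print(f"offset {frac:+.2f} px from pixel center -> CRLB {crlb(frac):.4f} px")
```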
An X-ray pinhole camera has been used to determine the transverse beam size and emittance on the diagnostic beamline of the SSRF storage ring since 2009. The performance of the beam size measurement is determined by the width of the point spread function of the X-ray pinhole camera. Beam-based calibration was carried out in 2012 by varying the beam size at the source point and measuring the image size; however, this calibration method requires special beam conditions. To overcome this limitation, the pinhole camera was upgraded and an X-ray quasi-monochromator was installed. A novel experimental method was introduced that combines the pinhole camera with the monochromator to calibrate the point spread function. The point spread function can be accurately resolved by adjusting the angle of the monochromator and measuring the image size, and the X-ray spectrum can also be obtained. In this work, the X-ray quasi-monochromator and the novel beam-based calibration method are presented in detail.
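Under the usual Gaussian approximation, a standard relation for pinhole-camera diagnostics (not specific to the SSRF calibration) is that the measured image size, the source size scaled by the magnification M, and the PSF width add in quadrature:

```latex
\sigma_{\mathrm{image}}^2 \;=\; M^2\,\sigma_{\mathrm{beam}}^2 \;+\; \sigma_{\mathrm{PSF}}^2
\qquad\Longrightarrow\qquad
\sigma_{\mathrm{beam}} \;=\; \frac{1}{M}\sqrt{\sigma_{\mathrm{image}}^2 - \sigma_{\mathrm{PSF}}^2}
```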
The effects of many medical procedures appear after a time lag, when a significant change occurs in the subjects' failure rate. This paper focuses on the detection and estimation of such changes, which is important for the evaluation and comparison of treatments and the prediction of their effects. Unlike the classical change-point model, the measurements may still be identically distributed, and the change point is a parameter of their common survival function. Some classical change-point detection techniques can still be used, but the results differ. Contrary to the classical model, the maximum likelihood estimator of the change point is consistent even in the presence of nuisance parameters. However, a more efficient procedure can be derived from Kaplan-Meier estimation of the survival function followed by least-squares estimation of the change point. Strong consistency of these estimation schemes is proved, and the finite-sample properties are examined in a Monte Carlo study. The proposed methods are applied to a recent clinical trial of a treatment program for strong drug dependence.
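A hedged sketch of the two-step idea (not the paper's exact estimator or its asymptotic analysis): estimate the survival function by Kaplan-Meier, convert it to a cumulative hazard, and choose the change point giving the best least-squares fit of a two-slope (piecewise-exponential) model.

```python
# Kaplan-Meier followed by least-squares change-point search on a toy data set.
import numpy as np

def kaplan_meier(times, events):
    order = np.argsort(times)
    t, d = times[order], events[order]
    at_risk = len(t) - np.arange(len(t))
    return t, np.cumprod(1.0 - d / at_risk)

def ls_change_point(times, events, grid):
    t, surv = kaplan_meier(times, events)
    keep = surv > 0
    t, h = t[keep], -np.log(surv[keep])              # cumulative hazard estimate
    best_tau, best_sse = None, np.inf
    for tau in grid:
        x1, x2 = np.minimum(t, tau), np.maximum(t - tau, 0.0)
        beta, *_ = np.linalg.lstsq(np.column_stack([x1, x2]), h, rcond=None)
        sse = np.sum((h - x1 * beta[0] - x2 * beta[1]) ** 2)
        if sse < best_sse:
            best_tau, best_sse = tau, sse
    return best_tau

rng = np.random.default_rng(4)
n, tau_true = 400, 2.0                               # hazard drops from 1.0 to 0.2 at tau
e = -np.log(rng.uniform(size=n))
times = np.where(e < tau_true, e, tau_true + (e - tau_true) / 0.2)
events = np.ones(n)                                  # no censoring in this toy example
print(ls_change_point(times, events, grid=np.linspace(0.5, 5.0, 90)))  # expected near 2.0
```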
Changchun is a national comprehensive pilot city for new-type urbanization. Identifying the functions of its central urban area and proposing countermeasures for current problems is of great significance for optimizing and coordinating urban space. Based on Point of Interest (POI) data and OpenStreetMap (OSM) data, and combining kernel density analysis with field-survey verification, the urban functional types of Changchun's central urban area were identified. The results show that, among single-function zones, commercial zones are the most numerous and residential zones the fewest; within the dominant-mixed functional zones, commerce-dominated and transport-dominated zones form important supplements to the commercial agglomerations and the rail transit system; and the characteristics of the subdivided-mixed functional zones show that the degree of functional mixing increases gradually from the city center toward the periphery. Verification confirms that the identification results match the actual situation in Changchun. Accordingly, it is recommended that the central urban area of Changchun emphasize a polycentric development pattern and strengthen the construction of green space and public service facilities.
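The kernel-density step of such a workflow can be sketched as follows (synthetic coordinates, not the actual Changchun POI data; scipy's gaussian_kde is used as a generic stand-in for the GIS tool's kernel density analysis):

```python
# Estimate the density surface of one POI category and rasterize it onto a grid,
# as commonly done before classifying urban functional zones.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
poi_xy = rng.normal(loc=[500.0, 800.0], scale=[120.0, 90.0], size=(300, 2))  # projected coords (m)

kde = gaussian_kde(poi_xy.T, bw_method=0.3)
gx, gy = np.meshgrid(np.linspace(0, 1000, 100), np.linspace(0, 1600, 100))
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print(density.max())
```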