Neural organoids and confocal microscopy have the potential to play an important role in microconnectome research to understand neural patterns. We present PLayer, a plug-and-play embedded neural system, which demonstrates the utilization of sparse confocal microscopy layers to interpolate continuous axial resolution. With an embedded system focused on neural network pruning, image scaling, and post-processing, PLayer achieves high-performance metrics with an average structural similarity index of 0.9217 and a peak signal-to-noise ratio of 27.75 dB, all within 20 s. This represents a significant time saving of 85.71% with simplified image processing. By harnessing statistical map estimation in interpolation and incorporating the Vision Transformer–based Restorer, PLayer ensures 2D layer consistency while mitigating heavy computational dependence. As such, PLayer can reconstruct 3D neural organoid confocal data continuously under limited computational power for the wide acceptance of fundamental connectomics and pattern-related research with embedded devices.
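Not part of PLayer itself, but as a point of reference, the two quality metrics quoted above (SSIM and PSNR) can be computed for a single interpolated layer against a reference confocal layer with scikit-image; the function and array names below are illustrative.

```python
# Hypothetical sketch: computing SSIM and PSNR of an interpolated layer against
# a reference confocal layer with scikit-image; this is not PLayer's own code.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_layer(reference: np.ndarray, interpolated: np.ndarray) -> tuple[float, float]:
    """Return (SSIM, PSNR in dB) for one 2D layer, assuming float images in [0, 1]."""
    ssim = structural_similarity(reference, interpolated, data_range=1.0)
    psnr = peak_signal_noise_ratio(reference, interpolated, data_range=1.0)
    return ssim, psnr

# Toy usage with random data standing in for real confocal layers.
ref = np.random.rand(256, 256)
est = np.clip(ref + 0.02 * np.random.randn(256, 256), 0, 1)
print(evaluate_layer(ref, est))
```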
The impact of heavy reduction on dendritic morphology was explored by combining experimental research and numerical simulation in metallurgy, including a detailed three-dimensional (3D) analysis and reconstruction of dendritic solidification structures. Combining scanning electron microscopy, energy-dispersive scanning analysis, and ANSYS simulation, the high-precision image processing software Mimics Research was utilized to extract dendritic morphologies. The reverse engineering software NX Imageware was employed for the 3D reconstruction of two-dimensional dendritic morphologies, restoring the dendritic characteristics in three-dimensional space. The results demonstrate that in a two-dimensional plane, dendrites connect with each other to form irregularly shaped “ring-like” structures. These dendrites have a thickness greater than 0.1 mm along the Z-axis direction, leading to the envelopment of molten steel by dendrites over at least 0.1 mm in three-dimensional space. This obstructs flow, confirming the “bridging” of dendrites in three-dimensional space and resulting in a tendency toward central segregation. Dense and dispersed tiny dendrites, under the influence of the heat flow direction, interconnect and continuously grow, gradually forming primary and secondary dendrites in three-dimensional space. After dendritic solidification and growth are complete, these microdendrites appear dense and dispersed on the two-dimensional plane, providing nuclei for the formation of new dendrites. When reduction occurs at a solid fraction of 0.46, there is a noticeable decrease in dendritic spacing, resulting in improved central segregation.
3D reconstruction using deep learning-based intelligent systems can provide great help for measuring an individual’s height and shape quickly and accurately from 2D motion-blurred images. Generally, during real-time image acquisition, motion blur caused by camera shake or human motion appears. Deep learning-based intelligent control applied to vision can help solve this problem. To this end, we propose a 3D reconstruction method for motion-blurred images using deep learning. First, we develop a BF-WGAN algorithm that combines bilateral filtering (BF) denoising theory with a Wasserstein generative adversarial network (WGAN) to remove motion blur. The bilateral filter denoising algorithm removes noise while retaining the details of the blurred image. Then, the blurred image and the corresponding sharp image are input into the WGAN. This algorithm distinguishes the motion-blurred image from the corresponding sharp image according to the WGAN loss and perceptual loss functions. Next, we use the deblurred images generated by the BF-WGAN algorithm for 3D reconstruction. We propose a threshold optimization random sample consensus (TO-RANSAC) algorithm that can remove incorrect relationships between two views in the 3D reconstructed model relatively accurately. Compared with the traditional RANSAC algorithm, the TO-RANSAC algorithm can adjust the threshold adaptively, which improves the accuracy of the 3D reconstruction results. The experimental results show that our BF-WGAN algorithm achieves a better deblurring effect and higher efficiency than other representative algorithms. In addition, the TO-RANSAC algorithm yields considerably higher calculation accuracy than the traditional RANSAC algorithm.
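As a rough illustration of the bilateral-filtering step described above (smooth sensor noise while keeping edges before the image enters the deblurring network), a minimal OpenCV sketch follows; the filter parameters are illustrative and are not taken from the paper.

```python
# Hypothetical sketch of edge-preserving denoising before deblurring,
# using OpenCV's bilateral filter; parameter values are only examples.
import cv2

def bilateral_denoise(image_path: str):
    """Edge-preserving smoothing before the image is handed to the deblurring network."""
    img = cv2.imread(image_path)  # BGR uint8 image
    # d: pixel neighborhood diameter; sigmaColor / sigmaSpace control how far
    # intensities and coordinates may differ while still being averaged together.
    return cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
```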
This article proposes a three-dimensional light field reconstruction method based on neural radiance fields (NeRF), called Infrared NeRF, for low-resolution thermal infrared scenes. Based on the characteristics of low-resolution thermal infrared imaging, various optimizations are carried out to improve the speed and accuracy of thermal infrared 3D reconstruction. Firstly, inspired by Boltzmann's law of thermal radiation, distance is incorporated into the NeRF model for the first time, resulting in a nonlinear propagation of a single ray and a more accurate description of the physical property that infrared radiation intensity decreases with increasing distance. Secondly, to improve inference speed, and based on the observation that high and low frequencies are distributed between the foreground and background of infrared images, a multi-ray non-uniform light synthesis strategy is proposed to make the model pay more attention to foreground objects in the scene, reduce the distribution of light in the background, and significantly reduce training time without reducing accuracy. In addition, compared with visible-light scenes, infrared images have only a single channel, so fewer network parameters are required. Experiments using the same training data and data filtering method show that, compared with the original NeRF, the improved network achieves average improvements of 13.8% and 4.62% in PSNR and SSIM, respectively, and an average decrease of 46% in LPIPS. Thanks to the optimization of the network layers and the data filtering method, training takes only about 25% of the original method's time to reach convergence. Finally, for scenes with weak backgrounds, the inference speed of the model is improved by 4-6 times compared with the original NeRF by limiting the model's query interval.
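To make the distance idea concrete, the toy sketch below performs standard per-ray alpha compositing and optionally attenuates the predicted (single-channel) radiance with an assumed inverse-square factor of the sample depth; the paper's actual distance term is not specified in the abstract, so this only illustrates where such a factor would enter.

```python
# Toy illustration of alpha compositing along one ray with an optional
# distance-dependent attenuation of the predicted radiance. The 1/d^2 factor
# is an assumption for illustration, not the paper's exact formulation.
import numpy as np

def composite_ray(sigma, radiance, t_vals, attenuate_with_distance=True):
    """sigma, radiance, t_vals: per-sample density, emitted value, depth along the ray."""
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)              # sample spacing
    alpha = 1.0 - np.exp(-sigma * deltas)                           # opacity per sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]   # accumulated transmittance
    weights = trans * alpha
    if attenuate_with_distance:
        radiance = radiance / np.maximum(t_vals, 1.0) ** 2          # assumed 1/d^2 falloff
    return np.sum(weights * radiance)                               # rendered pixel value
```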
3D medical image reconstruction has significantly enhanced diagnostic accuracy, yet the reliance on densely sampled projection data remains a major limitation in clinical practice. Sparse-angle X-ray imaging, though safer and faster, poses challenges for accurate volumetric reconstruction due to limited spatial information. This study proposes a 3D reconstruction neural network based on adaptive weight fusion (AdapFusionNet) to achieve high-quality 3D medical image reconstruction from sparse-angle X-ray images. To address the issue of spatial inconsistency in multi-angle image reconstruction, an innovative adaptive fusion module was designed to score initial reconstruction results during the inference stage and perform weighted fusion, thereby improving the final reconstruction quality. The reconstruction network is built on an autoencoder (AE) framework and uses orthogonal-angle X-ray images (frontal and lateral projections) as inputs. The encoder extracts 2D features, which the decoder maps into 3D space. This study utilizes a lung CT dataset to obtain complete three-dimensional volumetric data, from which digitally reconstructed radiographs (DRR) are generated at various angles to simulate X-ray images. Since real-world clinical X-ray images rarely come with perfectly corresponding 3D “ground truth,” using CT scans as the three-dimensional reference effectively supports the training and evaluation of deep networks for sparse-angle X-ray 3D reconstruction. Experiments conducted on the LIDC-IDRI dataset with simulated X-ray images (DRR images) as training data demonstrate the superior performance of AdapFusionNet compared with other fusion methods. Quantitative results show that AdapFusionNet achieves SSIM, PSNR, and MAE values of 0.332, 13.404, and 0.163, respectively, outperforming the other methods (SingleViewNet: 0.289, 12.363, 0.182; AvgFusionNet: 0.306, 13.384, 0.159). Qualitative analysis further confirms that AdapFusionNet significantly enhances the reconstruction of lung and chest contours while effectively reducing noise during reconstruction. The findings demonstrate that AdapFusionNet offers significant advantages in the 3D reconstruction of sparse-angle X-ray images.
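A minimal sketch of score-weighted fusion follows, assuming per-candidate quality scores are already available (in AdapFusionNet they come from a learned scoring module evaluated at inference time); the softmax weighting here is only one plausible choice.

```python
# Toy sketch of score-based weighted fusion of candidate volumes. The scores
# are assumed to come from some scoring network; the real AdapFusionNet module
# is learned end-to-end and is not reproduced here.
import numpy as np

def fuse_volumes(volumes: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """volumes: (N, D, H, W) candidate reconstructions; scores: (N,) raw quality scores."""
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax over candidates
    return np.tensordot(weights, volumes, axes=1)  # weighted average volume, shape (D, H, W)
```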
The Unmanned Aerial Vehicle (UAV)-assisted sensing-transmission-computing integrated system plays a vital role in emergency rescue scenarios involving damaged infrastructure. To tackle the challenges of data transmission and enable timely rescue decision-making, we propose DWT-3DRec, an efficient wireless transmission model for 3D scene reconstruction. This model leverages MobileNetV2 to extract image and pose features, which are transmitted through a Dual-path Adaptive Noise Modulation network (DANM). Moreover, we introduce the Gumbel Channel Masking Module (GCMM), which enhances feature extraction and improves reconstruction reliability by mitigating the effects of dynamic noise. At the ground receiver, the Multi-scale Deep Source-Channel Coding for 3D Reconstruction (MDS-3DRecon) framework integrates Deep Joint Source-Channel Coding (DeepJSCC) with City-scale Neural Radiance Fields (CityNeRF). It adopts a progressive close-view training strategy and incorporates an Adaptive Fusion Module (AFM) to achieve high-precision scene reconstruction. Experimental results demonstrate that DWT-3DRec significantly outperforms the Joint Photographic Experts Group (JPEG) standard in transmitting image and pose data, achieving an average loss as low as 0.0323 and exhibiting strong robustness across a Signal-to-Noise Ratio (SNR) range of 5-20 dB. In large-scale 3D scene reconstruction tasks, MDS-3DRecon surpasses Multum in Parvo Neural Radiance Fields (Mip-NeRF) and Bungee Neural Radiance Field (BungeeNeRF), achieving a Peak Signal-to-Noise Ratio (PSNR) of 24.921 dB and a reconstruction loss of 0.188. Ablation studies further confirm the essential roles of GCMM, DANM, and AFM in enabling high-fidelity 3D reconstruction.
BACKGROUND: Gastric cancer (GC) remains a significant global health challenge, with high incidence and mortality rates. Neoadjuvant chemotherapy is increasingly used to improve surgical outcomes and long-term survival in advanced cases. However, individual responses to treatment vary widely, and current imaging methods often fall short in accurately predicting efficacy. Advanced imaging techniques, such as computed tomography (CT) 3D reconstruction and texture analysis, offer potential for more precise assessment of therapeutic response. AIM: To explore the application value of the CT 3D reconstruction volume change rate, texture feature analysis, and visual features in assessing the efficacy of neoadjuvant chemotherapy for advanced GC. METHODS: A retrospective analysis was conducted on the clinical and imaging data of 97 patients with advanced GC who received neoadjuvant chemotherapy with an S-1 plus oxaliplatin combined regimen from January 2022 to March 2024. CT texture feature analysis was performed using MaZda software, and ITK-SNAP software was used to measure the tumor volume change rate before and after chemotherapy. CT visual features were also evaluated. Using the postoperative pathological tumor regression grade (TRG) as the gold standard, the correlation between the various indicators and chemotherapy efficacy was analyzed, and a predictive model was constructed and internally validated. RESULTS: The minimum misclassification rate of texture features in venous-phase CT images (7.85%) was lower than that in the arterial phase (13.92%). The volume change rate in the effective chemotherapy group (75.20%) was significantly higher than in the ineffective group (41.75%). There was a strong correlation between the volume change rate and TRG grade (r = -0.886, P < 0.001). Multivariate analysis showed that gastric wall peristalsis (OR = 0.286) and a thickness change rate ≥ 40% (OR = 0.265) were independent predictive factors. Receiver operating characteristic curve analysis indicated that the volume change rate [area under the curve (AUC) = 0.885] was superior to the CT visual feature model (AUC = 0.795). At a cutoff value of 82.56%, the sensitivity and specificity were 85.62% and 96.45%, respectively. CONCLUSION: The CT 3D reconstruction volume change rate can serve as a preferred quantitative indicator for evaluating the efficacy of neoadjuvant chemotherapy in GC. Combining it with a CT visual feature predictive model can further improve the accuracy of efficacy evaluation.
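For concreteness, a small sketch of how a volume change rate and the reported cutoff could be applied is given below; the exact formula and the decision rule are assumptions, since the abstract only reports the cutoff, sensitivity, and specificity.

```python
# Minimal sketch, assuming the volume change rate is the fractional tumor
# shrinkage relative to the pre-chemotherapy volume (the abstract does not
# spell out the formula); volumes would come from ITK-SNAP segmentations.
def volume_change_rate(v_pre_mm3: float, v_post_mm3: float) -> float:
    """Percent reduction of tumor volume after neoadjuvant chemotherapy."""
    return 100.0 * (v_pre_mm3 - v_post_mm3) / v_pre_mm3

def predicted_effective(v_pre_mm3: float, v_post_mm3: float, cutoff: float = 82.56) -> bool:
    # Cutoff taken from the reported ROC analysis; it is a threshold on the
    # change rate, not a universally validated clinical criterion.
    return volume_change_rate(v_pre_mm3, v_post_mm3) >= cutoff
```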
This paper presents a novel method for reconstructing a highly accurate 3D model of the human nose from 2D images and pre-marked landmarks using algorithmic methods. The study focuses on the reconstruction of a 3D nose model tailored for applications in healthcare and cosmetic surgery. The approach leverages advanced image processing techniques, 3D Morphable Models (3DMM), and deformation techniques to overcome the limitations of deep learning models, particularly addressing the interpretability issues commonly encountered in medical applications. The proposed method estimates the 3D coordinates of landmark points using a 3D structure estimation algorithm. Sub-landmarks are extracted through image processing techniques and interpolation. The initial surface is generated using a 3DMM, though its accuracy remains limited. To enhance precision, deformation techniques are applied, utilizing the coordinates of 76 identified landmarks and sub-landmarks. The resulting 3D nose model is constructed based on algorithmic methods and pre-marked landmarks. The 3D model is evaluated by comparing landmark distances and shape similarity with expert-determined ground truth on 30 Vietnamese volunteers aged 18 to 47, all of whom were either preparing for or required nasal surgery. Experimental results demonstrate strong agreement between the reconstructed 3D model and the ground truth. The method achieved a mean landmark distance error of 0.631 mm and a shape error of 1.738 mm, demonstrating its potential for medical applications.
Efficient three-dimensional (3D) building reconstruction from drone imagery often faces data acquisition, storage, and computational challenges because of its reliance on dense point clouds. In this study, we introduce a novel method for efficient and lightweight 3D building reconstruction from drone imagery using line clouds and sparse point clouds. Our approach eliminates the need to generate dense point clouds and thus significantly reduces the computational burden by reconstructing 3D models directly from sparse data. We address the limitations of line clouds for plane detection and reconstruction with a new algorithm. This algorithm projects 3D line clouds onto a 2D plane, clusters the projections to identify potential planes, and refines them using sparse point clouds to ensure accurate and efficient model reconstruction. Extensive qualitative and quantitative experiments demonstrate the effectiveness of our method and its superiority over existing techniques in terms of simplicity and efficiency.
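The projection-and-clustering idea can be illustrated with a short sketch: project each 3D line segment to the ground plane, describe its supporting 2D line by orientation and offset, and cluster those parameters to propose candidate planes. This is a simplified stand-in, not the paper's algorithm, and the DBSCAN parameters are illustrative.

```python
# Toy illustration (not the paper's algorithm): project 3D line segments onto
# the ground plane and cluster their 2D line parameters to propose candidate
# vertical planes. Real line clouds would come from a line-based SfM pipeline.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_projected_lines(segments: np.ndarray, eps: float = 0.1) -> np.ndarray:
    """segments: (N, 2, 3) 3D endpoints. Returns a cluster label per segment."""
    p0, p1 = segments[:, 0, :2], segments[:, 1, :2]       # drop z: project to XY plane
    d = p1 - p0
    theta = np.arctan2(d[:, 1], d[:, 0]) % np.pi           # undirected line orientation
    n = np.stack([-np.sin(theta), np.cos(theta)], axis=1)  # unit normal of each 2D line
    rho = np.sum(n * p0, axis=1)                            # signed offset of line from origin
    feats = np.column_stack([theta, rho])                   # note: features are not rescaled here
    return DBSCAN(eps=eps, min_samples=3).fit_predict(feats)
```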
Photomechanics is a crucial branch of solid mechanics. The localization of point targets constitutes a fundamental problem in optical experimental mechanics, with extensive applications in various missions of unmanned aerial vehicles. Localizing moving targets is crucial for analyzing their motion characteristics and dynamic properties. Reconstructing the trajectories of points from asynchronous cameras is a significant challenge. It encompasses two coupled sub-problems: trajectory reconstruction and camera synchronization. Present methods typically address only one of these sub-problems. This paper proposes a 3D trajectory reconstruction method for point targets based on asynchronous cameras that solves both sub-problems simultaneously. Firstly, we extend the trajectory intersection method to asynchronous cameras to resolve the limitation of traditional triangulation, which requires camera synchronization. Secondly, we develop models for the camera temporal information and the target motion, based on imaging mechanisms and target dynamics characteristics. The parameters are optimized simultaneously to achieve trajectory reconstruction without accurate time parameters. Thirdly, we optimize the camera rotations alongside the camera time information and target motion parameters, using tighter and more continuous constraints on the moving points. The reconstruction accuracy is significantly improved, especially when the camera rotations are inaccurate. Finally, simulated and real-world experimental results demonstrate the feasibility and accuracy of the proposed method. The real-world results indicate that the proposed algorithm achieves a localization error of 112.95 m at observation distances of 15-20 km.
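A highly simplified sketch of the coupled formulation is shown below: the target trajectory is modeled as a per-axis polynomial in a common time base, each camera contributes an unknown clock offset, and both are estimated together by minimizing reprojection error with a nonlinear least-squares solver. Camera poses are assumed known here, whereas the paper additionally refines camera rotations.

```python
# Simplified sketch of jointly estimating per-camera clock offsets and a
# polynomial target trajectory from asynchronous observations; not the paper's
# implementation, and camera projection matrices are assumed given.
import numpy as np
from scipy.optimize import least_squares

def project(P, X):                                   # P: (3, 4) projection matrix, X: (3,) point
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def residuals(params, obs, Ps, rates, deg=3):
    n_cam = len(Ps)
    offsets = params[:n_cam]                          # unknown per-camera clock offsets [s]
    coeffs = params[n_cam:].reshape(3, deg + 1)       # polynomial coefficients per axis
    res = []
    for cam, frame, uv in obs:                        # obs: (camera id, frame index, 2D pixel)
        t = frame / rates[cam] + offsets[cam]         # frame time in the common time base
        X = np.array([np.polyval(coeffs[a], t) for a in range(3)])
        res.append(project(Ps[cam], X) - uv)          # reprojection residual
    return np.concatenate(res)

# Usage sketch: sol = least_squares(residuals, x0, args=(obs, Ps, rates))
```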
This study introduces a novel method for reconstructing a 3D model of aluminum foam from cross-sectional sequence images. By combining precision milling and image acquisition, high-quality cross-sectional images are obtained. Pore structures are segmented by a U-shaped network (U-Net) neural network integrated with the Canny edge detection operator, ensuring accurate pore delineation and edge extraction. The trained U-Net achieves 98.55% accuracy. The 2D data are superimposed and processed into 3D point clouds, enabling reconstruction of the pore structure and the aluminum skeleton. Analysis of pore 01 shows that its cross-sectional area initially increases and then decreases with milling depth, with a uniform point distribution of 40 per layer. The reconstructed model exhibits a porosity of 77.5%, and section overlap rates between the 2D pore segmentation and the reconstructed model exceed 96%, confirming high fidelity. Equivalent sphere diameters decrease with size, averaging 1.95 mm. Compression simulations reveal that the stress-strain curve of the 3D reconstruction model of aluminum foam exhibits fluctuations, and that stresses in the reconstructed model concentrate on thin cell walls, leading to localized deformations. This method accurately restores the complex internal structure of aluminum foam, improving reconstruction precision and simulation reliability. The approach offers a cost-efficient, high-precision technique for optimizing material performance in engineering applications.
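Once the stacked segmentations form a boolean voxel volume, quantities such as porosity and equivalent sphere diameters follow from simple voxel counting; the sketch below assumes an isotropic voxel size and is not tied to the paper's specific pipeline.

```python
# Minimal sketch, assuming a boolean voxel stack where True marks pore voxels
# (e.g., the stacked U-Net segmentations); voxel_mm is the isotropic voxel size.
import numpy as np
from scipy import ndimage

def porosity(pores: np.ndarray) -> float:
    return pores.mean()                                # pore voxels / all voxels

def equivalent_sphere_diameters(pores: np.ndarray, voxel_mm: float) -> np.ndarray:
    labels, n = ndimage.label(pores)                   # connected pores
    volumes = np.bincount(labels.ravel())[1:] * voxel_mm**3
    return (6.0 * volumes / np.pi) ** (1.0 / 3.0)      # diameter of the equal-volume sphere
```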
Research on reconstructing imperfect faces is a challenging task. In this study, we explore a data-driven approach using a pre-trained MICA (MetrIC fAce) model combined with 3D printing to address this challenge. We propose a training strategy that utilizes the pre-trained MICA model and self-supervised learning techniques to improve accuracy and reduce the time needed for 3D facial structure reconstruction. Our results demonstrate high accuracy, evaluated by the geometric loss function and various statistical measures. To showcase the effectiveness of the approach, we used 3D printing to create a model that covers facial wounds. The findings indicate that our method produces a model that fits well and achieves comprehensive 3D facial reconstruction. This technique has the potential to aid doctors in treating patients with facial injuries.
We present a grid-growth method to reconstruct 3D rock joints with arbitrary joint roughness and persistence. In the first step of this workflow, the joint model is divided into uniform grids. Then, by adjusting the positions of the grids, the joint morphology can be modified to construct models with the desired joint roughness and persistence. Accordingly, numerous joint models with different joint roughness and persistence were built. The effects of relevant parameters (such as the number, height, and slope of asperities, and the number and area of rock bridges) on the joint roughness coefficient (JRC) and joint persistence were investigated. Finally, an artificially split joint was reconstructed using the method, and the method's accuracy was evaluated by comparing the JRC of the models with that of the artificially split joint. The results showed that the proposed method can effectively control the JRC of joint models by adjusting the number, height, and slope of asperities. The method can also modify the joint persistence of joint models by adjusting the number and area of rock bridges. Additionally, the JRC of models obtained by our method agrees with that of the artificially split surface. Overall, the method demonstrates high accuracy for 3D rock joint reconstruction.
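As one way to check the roughness of a generated model, the sketch below estimates JRC for a sampled profile from the root-mean-square slope Z2 using Tse and Cruden's empirical correlation; this is a common estimator, not necessarily the one used in the paper.

```python
# Minimal sketch of estimating JRC from a sampled 2D profile via the
# root-mean-square slope Z2 and Tse & Cruden's empirical correlation.
# The paper controls JRC via asperity number/height/slope; this only offers
# a way to verify the resulting roughness, not the grid-growth method itself.
import numpy as np

def jrc_from_profile(heights: np.ndarray, dx: float) -> float:
    """heights: surface heights sampled at constant spacing dx along the profile."""
    dz = np.diff(heights)
    z2 = np.sqrt(np.mean((dz / dx) ** 2))   # RMS of the local slope
    return 32.2 + 32.47 * np.log10(z2)      # Tse & Cruden (1979) correlation
```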
The development of digital intelligent diagnostic and treatment technology has opened countless new opportunities for liver surgery, from the era of digital anatomy to a new era of digital diagnostics, virtual surgery simulation, and the use of created scenarios in real-time surgery via mixed reality. In this article, we describe our experience in developing a dedicated three-dimensional visualization and reconstruction software for surgeons to be used in advanced liver surgery and living donor liver transplantation. Furthermore, we share recent developments in the field by explaining how the software has extended from virtual reality to augmented reality and mixed reality.
The biomechanical relationship between articular cartilage defects and knee osteoarthritis (OA) has not been clearly defined. This study presents a 3D knee finite element model (FEM) to determine the effect of cartilage defects on the stress distribution around the defect rim. The complete knee FEM, which includes bones, articular cartilages, menisci, and ligaments, is developed from computed tomography and magnetic resonance images. This FEM is then validated and used to simulate femoral cartilage defects. Based on the obtained results, it is confirmed that the 3D knee FEM is reconstructed with a high level of fidelity and can faithfully predict knee contact behavior. Cartilage defects drastically affect the stress distribution on articular cartilages. When the defect size was smaller than 1.00 cm², the stress elevation and redistribution were indistinguishable. However, significant stress elevation and redistribution were detected for large defect sizes (>1.00 cm²). This alteration of the stress distribution has important implications for the progression of a cartilage defect to OA in the human knee joint.
Structure reconstruction of 3D anatomy from biplanar X-ray images is a challenging topic. Traditionally, the elastic-model-based method was used to reconstruct 3D shapes by deforming the control points on an elastic mesh. However, the reconstructed shape is not smooth because the limited control points are only distributed on the edge of the elastic mesh. Alternatively, statistical-model-based methods, which include shape-model-based and intensity-model-based methods, were introduced because of their smooth reconstructions. However, both suffer from limitations. With the shape-model-based method, only the boundary profile is considered, leading to the loss of valid intensity information. With the intensity-model-based method, the computation speed is slow because the intensity distribution must be calculated in each iteration. To address these issues, we propose a new reconstruction method using X-ray images and a specimen's CT data. Specifically, the CT data provide both the shape mesh and the intensity model of the vertebra. The intensity model is used to generate the deformation field from the X-ray images, while the shape model is used to generate the patient-specific model by applying the calculated deformation field. Experiments on a public synthetic dataset and a clinical dataset show that the average reconstruction errors are 1.1 mm and 1.2 mm, respectively. The average reconstruction time is 3 minutes.
3D reconstruction of worn parts is the foundation of a remanufacturing system based on robotic arc welding, because it can provide 3D geometric information for robot task planning. In this investigation, a novel 3D reconstruction system based on linear structured-light vision sensing is developed. The system hardware consists of an MTC368-CB CCD camera, an MLH-645 laser projector, and a DH-CG300 image grabbing card. The system software is developed to control image data capture. In order to reconstruct the 3D geometric information from the captured images, a two-step rapid calibration algorithm is proposed. The 3D reconstruction experiment shows a satisfactory result.
BACKGROUND: Hernia is a common condition requiring abdominal surgery. The current standard treatment for hernia is tension-free repair using meshes. Globally, more than 200 new types of meshes are licensed each year. However, their clinical application is associated with a series of complications, such as recurrence (10%-24%) and infection (0.5%-9.0%). In contrast, 3D-printed meshes have significantly reduced postoperative complications in patients. They have also shortened operating time and minimized the loss of mesh materials. In this study, we used myopectineal orifice (MPO) data obtained from preoperative computed tomography (CT)-based 3D reconstruction for the production of 3D-printed biologic meshes. AIM: To investigate the application of a multislice spiral CT-based 3D reconstruction technique to 3D-printed biologic meshes for hernia repair surgery. METHODS: We retrospectively analyzed 60 patients who underwent laparoscopic tension-free repair for inguinal hernia in the Department of General Surgery of the First Hospital of Shanxi Medical University from September 2019 to December 2019. The study included 30 males and 30 females, with a mean age of 40 ± 5.6 years. Data on the MPO were obtained from preoperative CT-based 3D reconstruction as well as from real-world intraoperative measurements for all patients. Anatomic points were set for measurement based on the definition of the MPO: A, the pubic tubercle; B, the intersection of the horizontal line extending from the summit of the inferior edge of the internal oblique and transversus abdominis with the outer edge of the rectus abdominis; C, the intersection of the same horizontal line with the inguinal ligament; D, the intersection of the iliopsoas muscle and the inguinal ligament; and E, the intersection of the iliopsoas muscle and the superior pubic ramus. The distances between these points were measured. All preoperative and intraoperative data were analyzed using the t test. Differences with P < 0.05 were considered significant in the comparative analysis. RESULTS: The distances AB, AC, BC, DE, and AE based on preoperative versus intraoperative data were 7.576 ± 0.212 cm vs 7.573 ± 0.266 cm, 7.627 ± 0.212 cm vs 7.627 ± 0.212 cm, 7.677 ± 0.229 cm vs 7.567 ± 0.786 cm, 7.589 ± 0.204 cm vs 7.512 ± 0.21 cm, and 7.617 ± 0.231 cm vs 7.582 ± 0.189 cm, respectively. None of the differences were statistically significant (P > 0.05). CONCLUSION: The use of a multislice spiral CT-based 3D reconstruction technique before hernia repair surgery allows accurate measurement of the data and relationships of different anatomic sites in the MPO region. This technique can provide precise data for the production of 3D-printed biologic meshes.
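For illustration, the comparison described in METHODS can be reproduced with a paired t test in SciPy (one plausible reading of "the t test"); the numbers below are made up and only show the mechanics.

```python
# Illustrative sketch of comparing preoperative CT-derived distances with
# intraoperative measurements of the same distance using a paired t test.
# The data here are fabricated placeholders, not values from the study.
import numpy as np
from scipy.stats import ttest_rel

preop_ab   = np.array([7.55, 7.60, 7.58, 7.61, 7.57])   # cm, from CT 3D reconstruction
intraop_ab = np.array([7.54, 7.62, 7.57, 7.60, 7.58])   # cm, measured during surgery

t_stat, p_value = ttest_rel(preop_ab, intraop_ab)
print(f"AB distance: t = {t_stat:.3f}, p = {p_value:.3f}")  # p > 0.05 means no significant difference
```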
With increasingly more smart cameras deployed in infrastructure and commercial buildings, 3D reconstruction can quickly obtain city information and improve the efficiency of government services. Images collected in outdoor hazy environments are prone to color distortion and low contrast; thus, the desired visual effect cannot be achieved and the difficulty of target detection is increased. Artificial intelligence (AI) solutions provide great help for dehazing images, as they can automatically identify patterns or monitor the environment. Therefore, we propose a 3D reconstruction method for dehazed images for smart cities based on deep learning. First, we propose a fine transmission image deep convolutional regression network (FT-DCRN) dehazing algorithm that uses a fine transmission image and the atmospheric light value to compute the dehazed image. The DCRN is used to obtain the coarse transmission image, which not only expands the receptive field of the network but also retains the features that maintain the nonlinearity of the overall network. The fine transmission image is obtained by refining the coarse transmission image with a guided filter. The atmospheric light value is estimated according to the position and brightness of pixels in the original hazy image. Second, we use the dehazed images generated by the FT-DCRN dehazing algorithm for 3D reconstruction. An advanced relaxed iterative fine matching algorithm based on structure from motion (ARI-SFM) is proposed. The ARI-SFM algorithm, which obtains fine matching corner pairs and reduces the number of iterations, establishes an accurate one-to-one matching corner relationship. The experimental results show that our FT-DCRN dehazing algorithm improves accuracy compared with other representative algorithms. In addition, the ARI-SFM algorithm guarantees precision and improves efficiency.
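The recovery step that turns the estimated transmission map and atmospheric light into a dehazed image follows the standard atmospheric scattering model, J = (I - A)/max(t, t0) + A; a minimal sketch is shown below, assuming the transmission map and atmospheric light have already been produced by the FT-DCRN stages.

```python
# Minimal sketch of the final recovery step from the atmospheric scattering
# model, assuming the transmission map t and atmospheric light A are given
# (here they would come from the FT-DCRN stages described above).
import numpy as np

def recover_scene(hazy: np.ndarray, transmission: np.ndarray, airlight: np.ndarray,
                  t_min: float = 0.1) -> np.ndarray:
    """hazy: (H, W, 3) float image in [0, 1]; transmission: (H, W); airlight: (3,)."""
    t = np.clip(transmission, t_min, 1.0)[..., None]   # lower-bound t to avoid amplifying noise
    dehazed = (hazy - airlight) / t + airlight
    return np.clip(dehazed, 0.0, 1.0)
```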
This paper describes a multiple-camera-based method to reconstruct the 3D shape of a human foot. From a foot database, an initial 3D model of the foot, represented by a cloud of points, is built. The shape parameters, which can characterize more than 92% of a foot, are defined using the principal component analysis method. Then, using "active shape models", the initial 3D model is adapted to the real foot captured in multiple images by applying constraints (edge point distance and color variance). We focus here on the experimental part, where we demonstrate the efficiency of the proposed method on a plastic foot model and also on real human feet with various shapes. We propose and compare different ways of texturing the foot, which is needed for reconstruction. We present an experiment performed on the plastic foot model and on human feet and propose two different ways to improve the accuracy of the final 3D shape according to the previous experiments' results. The first proposed improvement is densification of the cloud of points used to represent the initial model and the foot database. The second improvement concerns the projected patterns used to texture the foot. We conclude by showing the results obtained for a human foot, with an average computed shape error of only 1.06 mm.
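A minimal sketch of the statistical shape model step follows: stack the database feet as flattened point clouds and keep enough principal components to explain about 92% of the variance, matching the figure quoted above; array shapes and names are illustrative.

```python
# Minimal sketch of building a PCA shape model from a foot database and
# synthesizing a foot from shape parameters; not the authors' implementation.
import numpy as np
from sklearn.decomposition import PCA

def build_shape_model(feet: np.ndarray) -> PCA:
    """feet: (n_feet, n_points, 3) point clouds in correspondence; returns a fitted PCA model."""
    flat = feet.reshape(len(feet), -1)
    return PCA(n_components=0.92).fit(flat)   # keep components explaining ~92% of the variance

def synthesize(model: PCA, shape_params: np.ndarray) -> np.ndarray:
    """Reconstruct a foot point cloud from a vector of shape parameters."""
    return model.inverse_transform(shape_params[None])[0].reshape(-1, 3)
```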
基金supported by the National Key R&D Program of China(Grant No.2021YFA1001000)the National Natural Science Foundation of China(Grant Nos.82111530212,U23A20282,and 61971255)+2 种基金the Natural Science Founda-tion of Guangdong Province(Grant No.2021B1515020092)the Shenzhen Bay Laboratory Fund(Grant No.SZBL2020090501014)the Shenzhen Science,Technology and Innovation Commission(Grant Nos.KJZD20231023094659002,JCYJ20220530142809022,and WDZC20220811170401001).
文摘Neural organoids and confocal microscopy have the potential to play an important role in microconnectome research to understand neural patterns.We present PLayer,a plug-and-play embedded neural system,which demonstrates the utilization of sparse confocal microscopy layers to interpolate continuous axial resolution.With an embedded system focused on neural network pruning,image scaling,and post-processing,PLayer achieves high-performance metrics with an average structural similarity index of 0.9217 and a peak signal-to-noise ratio of 27.75 dB,all within 20 s.This represents a significant time saving of 85.71%with simplified image processing.By harnessing statistical map estimation in interpolation and incorporating the Vision Transformer–based Restorer,PLayer ensures 2D layer consistency while mitigating heavy computational dependence.As such,PLayer can reconstruct 3D neural organoid confocal data continuously under limited computational power for the wide acceptance of fundamental connectomics and pattern-related research with embedded devices.
基金supported by Open Foundation of the State Key Laboratory of Refractories and Metallurgy(No.G201711)the National Natural Science Foundation of China(Nos.52104317 and 51874001).
文摘The impact of heavy reduction on dendritic morphology was explored by combining experimental research and numerical simulation in metallurgy,including a detailed three-dimensional(3D)analysis and reconstruction of dendritic solidification structures.Combining scanning electron microscopy and energy-dispersive scanning analysis and ANSYS simulation,the high-precision image processing software Mimics Research was utilized to conduct the extraction of dendritic morphologies.Reverse engineering software NX Imageware was employed for the 3D reconstruction of two-dimensional dendritic morphologies,restoring the dendritic characteristics in three-dimensional space.The results demonstrate that in a two-dimensional plane,dendrites connect with each other to form irregularly shaped“ring-like”structures.These dendrites have a thickness greater than 0.1 mm along the Z-axis direction,leading to the envelopment of molten steel by dendrites in a 3D space of at least 0.1 mm.This results in obstructed flow,confirming the“bridging”of dendrites in three-dimensional space,resulting in a tendency for central segregation.Dense and dispersed tiny dendrites,under the influence of heat flow direction,interconnect and continuously grow,gradually forming primary and secondary dendrites in three-dimensional space.After the completion of dendritic solidification and growth,these microdendrites appear dense and dispersed on the two-dimensional plane,providing the nuclei for the formation of new dendrites.When reduction occurs at a solid fraction of 0.46,there is a noticeable decrease in dendritic spacing,resulting in improved central segregation.
基金the National Natural Science Foundation of China under Grant 61902311in part by the Japan Society for the Promotion of Science(JSPS)Grants-in-Aid for Scientific Research(KAKENHI)under Grant JP18K18044.
文摘The 3D reconstruction using deep learning-based intelligent systems can provide great help for measuring an individual’s height and shape quickly and accurately through 2D motion-blurred images.Generally,during the acquisition of images in real-time,motion blur,caused by camera shaking or human motion,appears.Deep learning-based intelligent control applied in vision can help us solve the problem.To this end,we propose a 3D reconstruction method for motion-blurred images using deep learning.First,we develop a BF-WGAN algorithm that combines the bilateral filtering(BF)denoising theory with a Wasserstein generative adversarial network(WGAN)to remove motion blur.The bilateral filter denoising algorithm is used to remove the noise and to retain the details of the blurred image.Then,the blurred image and the corresponding sharp image are input into the WGAN.This algorithm distinguishes the motion-blurred image from the corresponding sharp image according to the WGAN loss and perceptual loss functions.Next,we use the deblurred images generated by the BFWGAN algorithm for 3D reconstruction.We propose a threshold optimization random sample consensus(TO-RANSAC)algorithm that can remove the wrong relationship between two views in the 3D reconstructed model relatively accurately.Compared with the traditional RANSAC algorithm,the TO-RANSAC algorithm can adjust the threshold adaptively,which improves the accuracy of the 3D reconstruction results.The experimental results show that our BF-WGAN algorithm has a better deblurring effect and higher efficiency than do other representative algorithms.In addition,the TO-RANSAC algorithm yields a calculation accuracy considerably higher than that of the traditional RANSAC algorithm.
基金Support by the Fundamental Research Funds for the Central Universities(2024300443)the National Natural Science Foundation of China(NSFC)Young Scientists Fund(62405131)。
文摘This article proposes a three-dimensional light field reconstruction method based on neural radiation field(NeRF)called Infrared NeRF for low resolution thermal infrared scenes.Based on the characteristics of the low resolution thermal infrared imaging,various optimizations have been carried out to improve the speed and accuracy of thermal infrared 3D reconstruction.Firstly,inspired by Boltzmann's law of thermal radiation,distance is incorporated into the NeRF model for the first time,resulting in a nonlinear propagation of a single ray and a more accurate description of the physical property that infrared radiation intensity decreases with increasing distance.Secondly,in terms of improving inference speed,based on the phenomenon of high and low frequency distribution of foreground and background in infrared images,a multi ray non-uniform light synthesis strategy is proposed to make the model pay more attention to foreground objects in the scene,reduce the distribution of light in the background,and significantly reduce training time without reducing accuracy.In addition,compared to visible light scenes,infrared images only have a single channel,so fewer network parameters are required.Experiments using the same training data and data filtering method showed that,compared to the original NeRF,the improved network achieved an average improvement of 13.8%and 4.62%in PSNR and SSIM,respectively,while an average decreases of 46%in LPIPS.And thanks to the optimization of network layers and data filtering methods,training only takes about 25%of the original method's time to achieve convergence.Finally,for scenes with weak backgrounds,this article improves the inference speed of the model by 4-6 times compared to the original NeRF by limiting the query interval of the model.
基金Supported by Sichuan Science and Technology Program(2023YFSY0026,2023YFH0004).
文摘3D medical image reconstruction has significantly enhanced diagnostic accuracy,yet the reliance on densely sampled projection data remains a major limitation in clinical practice.Sparse-angle X-ray imaging,though safer and faster,poses challenges for accurate volumetric reconstruction due to limited spatial information.This study proposes a 3D reconstruction neural network based on adaptive weight fusion(AdapFusionNet)to achieve high-quality 3D medical image reconstruction from sparse-angle X-ray images.To address the issue of spatial inconsistency in multi-angle image reconstruction,an innovative adaptive fusion module was designed to score initial reconstruction results during the inference stage and perform weighted fusion,thereby improving the final reconstruction quality.The reconstruction network is built on an autoencoder(AE)framework and uses orthogonal-angle X-ray images(frontal and lateral projections)as inputs.The encoder extracts 2D features,which the decoder maps into 3D space.This study utilizes a lung CT dataset to obtain complete three-dimensional volumetric data,from which digitally reconstructed radiographs(DRR)are generated at various angles to simulate X-ray images.Since real-world clinical X-ray images rarely come with perfectly corresponding 3D“ground truth,”using CT scans as the three-dimensional reference effectively supports the training and evaluation of deep networks for sparse-angle X-ray 3D reconstruction.Experiments conducted on the LIDC-IDRI dataset with simulated X-ray images(DRR images)as training data demonstrate the superior performance of AdapFusionNet compared to other fusion methods.Quantitative results show that AdapFusionNet achieves SSIM,PSNR,and MAE values of 0.332,13.404,and 0.163,respectively,outperforming other methods(SingleViewNet:0.289,12.363,0.182;AvgFusionNet:0.306,13.384,0.159).Qualitative analysis further confirms that AdapFusionNet significantly enhances the reconstruction of lung and chest contours while effectively reducing noise during the reconstruction process.The findings demonstrate that AdapFusionNet offers significant advantages in 3D reconstruction of sparse-angle X-ray images.
基金supported by the National Key Research and Development Program of China(2022YFB4500800)the Applied Basic Research Program Project of Liaoning Province(2023JH2/101300192)+2 种基金the National Natural Science Foundation of China(62032013,62072094)the Fundamental Research Funds for the Central Universities(N2416006,N2416016)Shenyang Science and Technology Plan Project(ZX20250050).
文摘The Unmanned Aerial Vehicle(UAV)-assisted sensing-transmission--computing integrated system plays a vital role in emergency rescue scenarios involving damaged infrastructure.To tackle the challenges of data transmission and enable timely rescue decision-making,we propose DWT-3DRec-an efficient wireless transmission model for 3D scene reconstruction.This model leverages MobileNetV2 to extract image and pose features,which are transmitted through a Dual-path Adaptive Noise Modulation network(DANM).Moreover,we introduce the Gumbel Channel Masking Module(GCMM),which enhances feature extraction and improves reconstruction reliability by mitigating the effects of dynamic noise.At the ground receiver,the Multi-scale Deep Source-Channel Coding for 3D Reconstruction(MDS-3DRecon)framework integrates Deep Joint Source-Channel Coding(DeepJSCC)with Cityscale Neural Radiance Fields(CityNeRF).It adopts a progressive close-view training strategy and incorporates an Adaptive Fusion Module(AFM)to achieve high-precision scene reconstruction.Experimental results demonstrate that DWT-3DRec significantly outperforms the Joint Photographic Experts Group(JPEG)standard in transmitting image and pose data,achieving an average loss as low as 0.0323 and exhibiting strong robustness across a Signal-to-Noise Ratio(SNR)range of 5--20 dB.In large-scale 3D scene reconstruction tasks,MDS-3DRecon surpasses Multum in Parvo Neural Radiance Fields(Mip-NeRF)and Bungee Neural Radiance Field(BungeeNeRF),achieving a Peak Signal-to-Noise Ratio(PSNR)of 24.921 dB and a reconstruction loss of 0.188.Ablation studies further confirm the essential roles of GCMM,DANM,and AFM in enabling highfidelity 3D reconstruction.
文摘BACKGROUND Gastric cancer(GC)remains a significant global health challenge,with high incidence and mortality rates.Neoadjuvant chemotherapy is increasingly used to improve surgical outcomes and long-term survival in advanced cases.However,individual responses to treatment vary widely,and current imaging methods often fall short in accurately predicting efficacy.Advanced imaging techniques,such as computed tomography(CT)3D reconstruction and texture analysis,offer potential for more precise assessment of therapeutic response.AIM To explore the application value of CT 3D reconstruction volume change rate,texture feature analysis,and visual features in assessing the efficacy of neoadjuvant chemotherapy for advanced GC.METHODS A retrospective analysis was conducted on the clinical and imaging data of 97 patients with advanced GC who received S-1 plus Oxaliplatin combined chemotherapy regimen neoadjuvant chemotherapy from January 2022 to March 2024.CT texture feature analysis was performed using MaZda software,and ITK-snap software was used to measure the tumor volume change rate before and after chemotherapy.CT visual features were also evaluated.Using postoperative pathological tumor regression grade(TRG)as the gold standard,the correlation between various indicators and chemotherapy efficacy was analyzed,and a predictive model was constructed and internally validated.RESULTS The minimum misclassification rate of texture features in venous phase CT images(7.85%)was lower than in the arterial phase(13.92%).The volume change rate in the effective chemotherapy group(75.20%)was significantly higher than in the ineffective group(41.75%).There was a strong correlation between volume change rate and TRG grade(r=-0.886,P<0.001).Multivariate analysis showed that gastric wall peristalsis(OR=0.286)and thickness change rate≥40%(OR=0.265)were independent predictive factors.Receiver operating characteristic curve analysis indicated that the volume change rate[area under the curve(AUC)=0.885]was superior to the CT visual feature model(AUC=0.795).When the cutoff value was 82.56%,the sensitivity and specificity were 85.62%and 96.45%,respectively.CONCLUSION The CT 3D reconstruction volume change rate can serve as a preferred quantitative indicator for evaluating the efficacy of neoadjuvant chemotherapy in GC.Combining it with a CT visual feature predictive model can further improve the accuracy of efficacy evaluation.
文摘This paper presents a novel method for reconstructing a highly accurate 3D nose model of the human from 2D images and pre-marked landmarks based on algorithmic methods.The study focuses on the reconstruction of a 3D nose model tailored for applications in healthcare and cosmetic surgery.The approach leverages advanced image processing techniques,3D Morphable Models(3DMM),and deformation techniques to overcome the limita-tions of deep learning models,particularly addressing the interpretability issues commonly encountered in medical applications.The proposed method estimates the 3D coordinates of landmark points using a 3D structure estimation algorithm.Sub-landmarks are extracted through image processing techniques and interpolation.The initial surface is generated using a 3DMM,though its accuracy remains limited.To enhance precision,deformation techniques are applied,utilizing the coordinates of 76 identified landmarks and sub-landmarks.The resulting 3D nose model is constructed based on algorithmic methods and pre-marked landmarks.Evaluation of the 3D model is conducted by comparing landmark distances and shape similarity with expert-determined ground truth on 30 Vietnamese volunteers aged 18 to 47,all of whom were either preparing for or required nasal surgery.Experimental results demonstrate a strong agreement between the reconstructed 3D model and the ground truth.The method achieved a mean landmark distance error of 0.631 mm and a shape error of 1.738 mm,demonstrating its potential for medical applications.
基金Supported by the Guangdong Major Project of Basic and Applied Basic Research (2023B0303000016)the National Natural Science Foundation of China (U21A20515)。
文摘Efficient three-dimensional(3D)building reconstruction from drone imagery often faces data acquisition,storage,and computational challenges because of its reliance on dense point clouds.In this study,we introduced a novel method for efficient and lightweight 3D building reconstruction from drone imagery using line clouds and sparse point clouds.Our approach eliminates the need to generate dense point clouds,and thus significantly reduces the computational burden by reconstructing 3D models directly from sparse data.We addressed the limitations of line clouds for plane detection and reconstruction by using a new algorithm.This algorithm projects 3D line clouds onto a 2D plane,clusters the projections to identify potential planes,and refines them using sparse point clouds to ensure an accurate and efficient model reconstruction.Extensive qualitative and quantitative experiments demonstrated the effectiveness of our method,demonstrating its superiority over existing techniques in terms of simplicity and efficiency.
基金supported by the Hunan Provin〓〓cial Natural Science Foundation for Excellent Young Scholars(Grant No.2023JJ20045)the National Natural Science Foundation of China(Grant No.12372189)。
文摘Photomechanics is a crucial branch of solid mechanics.The localization of point targets constitutes a fundamental problem in optical experimental mechanics,with extensive applications in various missions of unmanned aerial vehicles.Localizing moving targets is crucial for analyzing their motion characteristics and dynamic properties.Reconstructing the trajectories of points from asynchronous cameras is a significant challenge.It encompasses two coupled sub-problems:Trajectory reconstruction and camera synchronization.Present methods typically address only one of these sub-problems individually.This paper proposes a 3D trajectory reconstruction method for point targets based on asynchronous cameras,simultaneously solving both sub-problems.Firstly,we extend the trajectory intersection method to asynchronous cameras to resolve the limitation of traditional triangulation that requires camera synchronization.Secondly,we develop models for camera temporal information and target motion,based on imaging mechanisms and target dynamics characteristics.The parameters are optimized simultaneously to achieve trajectory reconstruction without accurate time parameters.Thirdly,we optimize the camera rotations alongside the camera time information and target motion parameters,using tighter and more continuous constraints on moving points.The reconstruction accuracy is significantly improved,especially when the camera rotations are inaccurate.Finally,the simulated and real-world experimental results demonstrate the feasibility and accuracy of the proposed method.The real-world results indicate that the proposed algorithm achieved a localization error of 112.95 m at an observation distance range of 15-20 km.
基金supported by the Key Research and DevelopmentPlan in Shanxi Province of China(No.201803D421045)the Natural Science Foundation of Shanxi Province(No.2021-0302-123104)。
文摘This study introduces a novel method for reconstructing the 3D model of aluminum foam using cross-sectional sequence images.Combining precision milling and image acquisition,high-qual-ity cross-sectional images are obtained.Pore structures are segmented by the U-shaped network(U-Net)neural network integrated with the Canny edge detection operator,ensuring accurate pore delineation and edge extraction.The trained U-Net achieves 98.55%accuracy.The 2D data are superimposed and processed into 3D point clouds,enabling reconstruction of the pore structure and aluminum skeleton.Analysis of pore 01 shows the cross-sectional area initially increases,and then decreases with milling depth,with a uniform point distribution of 40 per layer.The reconstructed model exhibits a porosity of 77.5%,with section overlap rates between the 2D pore segmentation and the reconstructed model exceeding 96%,confirming high fidelity.Equivalent sphere diameters decrease with size,averaging 1.95 mm.Compression simulations reveal that the stress-strain curve of the 3D reconstruction model of aluminum foam exhibits fluctuations,and the stresses in the reconstruction model concentrate on thin cell walls,leading to localized deformations.This method accurately restores the aluminum foam’s complex internal structure,improving reconstruction preci-sion and simulation reliability.The approach offers a cost-efficient,high-precision technique for optimizing material performance in engineering applications.
文摘Research on reconstructing imperfect faces is a challenging task.In this study,we explore a data-driven approach using a pre-trained MICA(MetrIC fAce)model combined with 3D printing to address this challenge.We propose a training strategy that utilizes the pre-trained MICA model and self-supervised learning techniques to improve accuracy and reduce the time needed for 3D facial structure reconstruction.Our results demonstrate high accuracy,evaluated by the geometric loss function and various statistical measures.To showcase the effectiveness of the approach,we used 3D printing to create a model that covers facial wounds.The findings indicate that our method produces a model that fits well and achieves comprehensive 3D facial reconstruction.This technique has the potential to aid doctors in treating patients with facial injuries.
基金supported by the National Natural Science Foundation of China(Nos.12172019 and 42477210).
文摘We present a grid-growth method to reconstruct 3D rock joints with arbitrary joint roughness and persistence.In the first step of this workflow,the joint model is divided into uniform grids.Then by adjusting the positions of the grids,the joint morphology can be modified to construct models with desired joint roughness and persistence.Accordingly,numerous joint models with different joint roughness and persistence were built.The effects of relevant parameters(such as the number,height,slope of asperities,and the number,area of rock bridges)on the joint roughness coefficient(JRC)and joint persistence were investigated.Finally,an artificially split joint was reconstructed using the method,and the method's accuracy was evaluated by comparing the JRC of the models with that of the artificially split joint.The results showed that the proposed method can effectively control the JRC of joint models by adjusting the number,height,and slope of asperities.The method can also modify the joint persistence of joint models by adjusting the number and area of rock bridges.Additionally,the JRC of models obtained by our method agrees with that of the artificially split surface.Overall,the method demonstrated high accuracy for 3D rock joint reconstruction.
Abstract: The development of digital intelligent diagnostic and treatment technology has opened countless new opportunities for liver surgery, moving from the era of digital anatomy to a new era of digital diagnostics, virtual surgery simulation, and the use of the created scenarios in real-time surgery through mixed reality. In this article, we describe our experience in developing a dedicated 3-dimensional visualization and reconstruction software for surgeons, intended for advanced liver surgery and living donor liver transplantation. Furthermore, we share recent developments in the field by explaining how the software has reached beyond virtual reality to augmented reality and mixed reality.
Funding: supported by the National Natural Science Foundation of China (No. 81071235) and the Medicine and Engineering Interdisciplinary Fund of Shanghai Jiaotong University (No. YG2010MS26).
Abstract: The biomechanical relationship between articular cartilage defects and knee osteoarthritis (OA) has not been clearly defined. This study presents a 3D knee finite element model (FEM) to determine the effect of cartilage defects on the stress distribution around the defect rim. The complete knee FEM, which includes bones, articular cartilages, menisci, and ligaments, is developed from computed tomography and magnetic resonance images. The FEM is then validated and used to simulate femoral cartilage defects. Based on the results, the 3D knee FEM is confirmed to be reconstructed with high fidelity and to faithfully predict knee contact behavior. Cartilage defects drastically affect the stress distribution on the articular cartilages. When the defect size was smaller than 1.00 cm², the stress elevation and redistribution were indistinguishable; however, significant stress elevation and redistribution were detected for larger defect sizes (≥1.00 cm²). This alteration of the stress distribution has important implications for the progression of a cartilage defect to OA in the human knee joint.
Funding: supported in part by the National Key Research and Development Program of China (2018YFC2001302), the National Natural Science Foundation of China (61976209), the CAS International Collaboration Key Project (173211KYSB20190024), and the Strategic Priority Research Program of CAS (XDB32040000).
Abstract: Reconstructing 3D anatomical structure from biplanar X-ray images is a challenging topic. Traditionally, elastic-model-based methods reconstruct 3D shapes by deforming control points on an elastic mesh; however, the reconstructed shape is not smooth because the limited control points are distributed only on the edges of the elastic mesh. Alternatively, statistical-model-based methods, which include shape-model-based and intensity-model-based methods, produce smooth reconstructions, but both have limitations. Shape-model-based methods consider only the boundary profile, losing valid intensity information, while intensity-based methods are slow because the intensity distribution must be recalculated in each iteration. To address these issues, we propose a new reconstruction method that uses X-ray images and a specimen's CT data. Specifically, the CT data provide both the shape mesh and the intensity model of the vertebra. The intensity model is used to generate a deformation field from the X-ray images, while the shape model is used to generate the patient-specific model by applying the calculated deformation field. Experiments on a public synthetic dataset and a clinical dataset show average reconstruction errors of 1.1 mm and 1.2 mm, respectively, with an average reconstruction time of 3 minutes.
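The final step described above, warping the template shape mesh with a computed deformation field, might look like the following sketch; the dense displacement field, grid spacing, and trilinear interpolation are assumptions made for illustration rather than the paper's implementation.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def warp_mesh(vertices, disp_field, spacing=(1.0, 1.0, 1.0)):
    """Apply a dense deformation field to mesh vertices.

    vertices: (N, 3) positions in mm; disp_field: (X, Y, Z, 3) displacements in mm.
    """
    grids = [np.arange(s) * sp for s, sp in zip(disp_field.shape[:3], spacing)]
    warped = vertices.copy()
    for axis in range(3):
        # Trilinear interpolation of each displacement component at the vertex positions.
        interp = RegularGridInterpolator(grids, disp_field[..., axis],
                                         bounds_error=False, fill_value=0.0)
        warped[:, axis] += interp(vertices)
    return warped

# Toy example: a constant field shifts the mesh by 1.2 mm along x.
field = np.zeros((32, 32, 32, 3), dtype=np.float64)
field[..., 0] = 1.2
verts = np.random.rand(100, 3) * 30.0
print(np.allclose(warp_mesh(verts, field)[:, 0] - verts[:, 0], 1.2))
```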
Abstract: 3D reconstruction of worn parts is the foundation of a remanufacturing system based on robotic arc welding, because it provides the 3D geometric information needed for robot task planning. In this investigation, a novel 3D reconstruction system based on linear structured-light vision sensing is developed. The system hardware consists of an MTC368-CB CCD camera, an MLH-645 laser projector, and a DH-CG300 image-grabbing card; the system software controls image data capture. To reconstruct the 3D geometric information from the captured images, a two-step rapid calibration algorithm is proposed. The 3D reconstruction experiment shows a satisfactory result.
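The abstract does not give the reconstruction equations; as a hedged sketch, the code below shows the standard laser-plane triangulation that linear structured-light sensors of this kind typically rely on, with an illustrative camera intrinsic matrix and laser plane rather than the calibrated values from the two-step procedure.

```python
import numpy as np

def triangulate_laser_pixels(pixels, K, plane_n, plane_d):
    """Map detected laser pixels to 3D points in the camera frame.

    pixels: (N, 2) image coordinates; the laser plane satisfies n · X = d.
    """
    K_inv = np.linalg.inv(K)
    pts_h = np.hstack([pixels, np.ones((len(pixels), 1))])
    rays = pts_h @ K_inv.T                     # back-projected ray directions
    # Intersect each ray X = s * r with the plane: n · (s * r) = d  =>  s = d / (n · r).
    s = plane_d / (rays @ plane_n)
    return rays * s[:, None]

# Illustrative intrinsics and laser plane (not the system's calibration data).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
plane_n, plane_d = np.array([0.0, -0.5, 1.0]), 300.0
pix = np.array([[320.0, 240.0], [400.0, 260.0]])
print(triangulate_laser_pixels(pix, K, plane_n, plane_d))
```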
Funding: supported by the Shanxi Provincial Key Research and Development Program (No. 201903D321175).
Abstract: BACKGROUND: Hernia is a common condition requiring abdominal surgery. The current standard treatment for hernia is tension-free repair using meshes, and more than 200 new types of mesh are licensed globally each year. However, their clinical application is associated with a series of complications, such as recurrence (10%-24%) and infection (0.5%-9.0%). In contrast, 3D-printed meshes have significantly reduced postoperative complications, shortened operating time, and minimized the loss of mesh material. In this study, we used myopectineal orifice (MPO) data obtained from preoperative computed tomography (CT)-based 3D reconstruction for the production of 3D-printed biologic meshes. AIM: To investigate the application of a multislice spiral CT-based 3D reconstruction technique to 3D-printed biologic meshes for hernia repair surgery. METHODS: We retrospectively analyzed 60 patients who underwent laparoscopic tension-free repair for inguinal hernia in the Department of General Surgery of the First Hospital of Shanxi Medical University from September 2019 to December 2019. The study included 30 males and 30 females, with a mean age of 40 ± 5.6 years. MPO data were obtained from preoperative CT-based 3D reconstruction as well as from real-world intraoperative measurements for all patients. Anatomic points were defined for measurement based on the definition of the MPO: A, the pubic tubercle; B, the intersection of the horizontal line extending from the summit of the inferior edge of the internal oblique and transversus abdominis with the outer edge of the rectus abdominis; C, the intersection of the same horizontal line with the inguinal ligament; D, the intersection of the iliopsoas muscle and the inguinal ligament; and E, the intersection of the iliopsoas muscle and the superior pubic ramus. The distances between these points were measured, and all preoperative and intraoperative data were analyzed using the t test, with P < 0.05 considered statistically significant. RESULTS: The preoperative vs intraoperative distances AB, AC, BC, DE, and AE were 7.576 ± 0.212 cm vs 7.573 ± 0.266 cm, 7.627 ± 0.212 cm vs 7.627 ± 0.212 cm, 7.677 ± 0.229 cm vs 7.567 ± 0.786 cm, 7.589 ± 0.204 cm vs 7.512 ± 0.21 cm, and 7.617 ± 0.231 cm vs 7.582 ± 0.189 cm, respectively; none of the differences was statistically significant (P > 0.05). CONCLUSION: Multislice spiral CT-based 3D reconstruction before hernia repair surgery allows accurate measurement of the distances and relationships among the anatomic sites of the MPO region and can provide precise data for the production of 3D-printed biologic meshes.
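A minimal sketch of the statistical comparison described in METHODS, assuming a paired t test between per-patient preoperative and intraoperative distances; the values generated below are synthetic placeholders, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
landmark_pairs = ["AB", "AC", "BC", "DE", "AE"]
for pair in landmark_pairs:
    # Hypothetical per-patient distances (cm) for 60 patients; illustrative only.
    preop = rng.normal(7.6, 0.2, 60)
    intraop = preop + rng.normal(0.0, 0.05, 60)
    # Paired t test; p > 0.05 is read as "no significant difference".
    t_stat, p_val = stats.ttest_rel(preop, intraop)
    print(f"{pair}: t = {t_stat:.3f}, p = {p_val:.3f}")
```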
Funding: supported in part by the National Natural Science Foundation of China under Grant 61902311, and in part by the Japan Society for the Promotion of Science (JSPS) Grants-in-Aid for Scientific Research (KAKENHI) under Grant JP18K18044.
Abstract: With increasingly many smart cameras deployed in infrastructure and commercial buildings, 3D reconstruction can quickly capture city information and improve the efficiency of government services. Images collected in outdoor hazy environments are prone to color distortion and low contrast, so the desired visual effect cannot be achieved and target detection becomes more difficult. Artificial intelligence (AI) solutions are of great help for dehazing images, since they can automatically identify patterns or monitor the environment. We therefore propose a deep learning-based 3D reconstruction method for dehazed images in smart cities. First, we propose a fine transmission image deep convolutional regression network (FT-DCRN) dehazing algorithm that uses a fine transmission image and an atmospheric light value to compute the dehazed image. The DCRN is used to obtain the coarse transmission image, which not only expands the receptive field of the network but also retains features that maintain the nonlinearity of the overall network. The fine transmission image is obtained by refining the coarse transmission image with a guided filter, and the atmospheric light value is estimated from the position and brightness of pixels in the original hazy image. Second, we use the dehazed images generated by the FT-DCRN algorithm for 3D reconstruction, proposing an advanced relaxed iterative fine matching algorithm based on structure from motion (ARI-SFM). The ARI-SFM algorithm obtains fine matching corner pairs, reduces the number of iterations, and establishes an accurate one-to-one corner matching relationship. The experimental results show that our FT-DCRN dehazing algorithm improves accuracy compared with other representative algorithms, and the ARI-SFM algorithm guarantees precision while improving efficiency.
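For orientation, the sketch below inverts the atmospheric scattering model I = J·t + A·(1 − t) once a transmission map and atmospheric light value are available. It is not the FT-DCRN: a crude per-pixel dark-channel estimate stands in for the network's coarse transmission, and the guided-filter refinement uses OpenCV's contrib module when present, with a box blur as a fallback.

```python
import numpy as np
import cv2

def estimate_atmospheric_light(hazy, top_frac=0.001):
    """Pick A from the brightest pixels of the dark channel (per-channel max)."""
    dark = hazy.min(axis=2)
    n = max(1, int(top_frac * dark.size))
    idx = np.unravel_index(np.argsort(dark.ravel())[-n:], dark.shape)
    return hazy[idx].max(axis=0)

def dehaze(hazy_bgr, omega=0.95, t_min=0.1):
    hazy = hazy_bgr.astype(np.float32) / 255.0
    A = estimate_atmospheric_light(hazy)
    # Coarse transmission from a pointwise dark channel (placeholder for a CNN output).
    coarse_t = 1.0 - omega * (hazy / A).min(axis=2)
    guide = cv2.cvtColor(hazy, cv2.COLOR_BGR2GRAY)
    try:
        # Guided-filter refinement (requires opencv-contrib); radius/eps are illustrative.
        t = cv2.ximgproc.guidedFilter(guide, coarse_t.astype(np.float32), 40, 1e-3)
    except AttributeError:
        t = cv2.blur(coarse_t, (15, 15))
    t = np.clip(t, t_min, 1.0)[..., None]
    # Invert the scattering model: J = (I - A) / t + A.
    J = (hazy - A) / t + A
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)

hazy_img = cv2.imread("hazy.jpg")  # hypothetical input path
if hazy_img is not None:
    cv2.imwrite("dehazed.jpg", dehaze(hazy_img))
```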
Funding: supported by a Grant-in-Aid for Scientific Research (C) (No. 17500119).
Abstract: This paper describes a multiple-camera-based method to reconstruct the 3D shape of a human foot. From a foot database, an initial 3D model of the foot, represented by a cloud of points, is built. The shape parameters, which can characterize more than 92% of the variation in foot shape, are defined using the principal component analysis method. Then, using active shape models, the initial 3D model is adapted to the real foot captured in multiple images by applying constraints on edge-point distance and color variance. We focus here on the experiments, which demonstrate the efficiency of the proposed method on a plastic foot model as well as on real human feet with various shapes, and we propose and compare different ways of texturing the foot, which is needed for reconstruction. Based on the results of these experiments, we propose two ways to improve the accuracy of the final 3D shape: densifying the cloud of points used to represent the initial model and the foot database, and refining the projected patterns used to texture the foot. For a human foot, the average computed shape error is only 1.06 mm.
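A hedged sketch of the statistical shape model step: PCA over a flattened, point-to-point corresponded foot database, retaining enough modes to explain roughly 92% of the variance, then generating a shape from a small parameter vector. The database here is random placeholder data, and the function names are illustrative, not the authors' code.

```python
import numpy as np

def build_shape_model(shapes, var_target=0.92):
    """shapes: (n_feet, n_points * 3) flattened, point-corresponded point clouds."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD-based PCA; rows of vt are the shape modes.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2
    # Keep the smallest number of modes whose cumulative variance reaches var_target.
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_target)) + 1
    stdevs = s[:k] / np.sqrt(len(shapes) - 1)
    return mean, vt[:k], stdevs

def synthesize(mean, modes, stdevs, b):
    """Reconstruct a shape from parameters b, given in units of standard deviations."""
    return mean + (b * stdevs) @ modes

# Toy database: 50 feet, 500 points each (placeholder values only).
db = np.random.rand(50, 500 * 3)
mean, modes, stdevs = build_shape_model(db)
new_foot = synthesize(mean, modes, stdevs, np.zeros(len(stdevs)))
print(modes.shape, new_foot.shape)
```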