3D sparse convolution has emerged as a pivotal technique for efficient voxel-based perception in autonomous systems, enabling selective feature extraction from non-empty voxels while suppressing computational waste. Despite its theoretical efficiency advantages, practical implementations face under-explored limitations: the fixed geometric patterns of conventional sparse convolutional kernels inevitably process non-contributory positions during sliding-window operations, particularly in regions with uneven point cloud density. To address this, we propose Hierarchical Shape Pruning for 3D Sparse Convolution (HSP-S), which dynamically eliminates redundant kernel stripes through layer-adaptive thresholding. Unlike static soft pruning methods, HSP-S maintains trainable sparsity patterns by progressively adjusting pruning thresholds during optimization, enlarging the original parameter search space while removing redundant operations. Extensive experiments validate the effectiveness of HSP-S across major autonomous driving benchmarks. On KITTI's 3D object detection task, our method removes 93.47% of redundant kernel computations while maintaining comparable accuracy (1.56% mAP drop). Remarkably, on the more complex nuScenes benchmark, HSP-S achieves simultaneous computation reduction (21.94% sparsity) and accuracy gains (1.02% mAP (mean Average Precision) and 0.47% NDS (nuScenes detection score) improvement), demonstrating its scalability to diverse perception scenarios. This work establishes the first learnable shape pruning framework that simultaneously enhances computational efficiency and preserves detection accuracy in 3D perception systems.
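The layer-adaptive thresholding at the heart of HSP-S can be sketched in a few lines. This is an illustrative toy, not the paper's procedure: stripes are modeled as kernel offsets carrying weight lists, scored by L1 norm, and pruned against a per-layer threshold taken as a quantile of the norm distribution (the quantile choice is an assumption).

```python
def stripe_norms(kernel):
    """L1 norm of each kernel 'stripe' (offset -> list of weights)."""
    return {off: sum(abs(w) for w in ws) for off, ws in kernel.items()}

def prune_stripes(kernel, ratio):
    """Drop stripes whose norm is at or below a per-layer threshold,
    taken here as a quantile of the norm distribution (assumption)."""
    norms = stripe_norms(kernel)
    ordered = sorted(norms.values())
    threshold = ordered[int(ratio * (len(ordered) - 1))]
    return {off: ws for off, ws in kernel.items() if norms[off] > threshold}
```

In the full method the threshold is adjusted progressively during training, so stripes pruned early can in principle recover before the sparsity pattern settles.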
In this paper, the complete process of constructing a 3D digital core with a fully convolutional neural network is described in detail. A large number of sandstone computed tomography (CT) images are used as training input for a fully convolutional neural network model. This model is used to reconstruct the three-dimensional (3D) digital core of Berea sandstone based on a small number of CT images. The Hamming distance together with the Minkowski functionals for porosity, average volume specific surface area, average curvature, and connectivity of both the real core and the digital reconstruction are used to evaluate the accuracy of the proposed method. The results show that the reconstruction achieved relative errors of 6.26%, 1.40%, 6.06%, and 4.91% for the four Minkowski functionals and a Hamming distance of 0.04479. This demonstrates that the proposed method can not only reconstruct the physical properties of real sandstone but can also restore the real characteristics of pore distribution in sandstone, which provides a new way to characterize the internal microstructure of rocks.
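Two of the evaluation measures are easy to state in code. A minimal sketch over flat 0/1 voxel lists (the study applies them to full 3D volumes):

```python
def porosity(volume):
    """Fraction of pore voxels (value 1) in a binary volume."""
    return sum(volume) / len(volume)

def hamming_distance(a, b):
    """Normalized Hamming distance between two equal-size binary volumes."""
    return sum(x != y for x, y in zip(a, b)) / len(a)
```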
In this work, a three-dimensional (3D) convolutional neural network (CNN) model based on image slices of various normal and pathological vocal folds is proposed for accurate and efficient prediction of glottal flows. The 3D CNN model is composed of a feature extraction block and a regression block. The feature extraction block learns low-dimensional features from the high-dimensional image data of the glottal shape, and the regression block flattens the output of the feature extraction block and produces the desired glottal flow data. The input image data is the condensed set of 2D image slices captured in the axial plane of the 3D vocal folds, where the glottal shapes are synthesized based on the equations of normal vibration modes. The output flow data is the corresponding flow rate, averaged glottal pressure, and nodal pressure distribution over the glottal surface. The 3D CNN model is built to establish the mapping between the input image data and the output flow data. The ground-truth flow variables of each glottal shape in the training and test datasets are obtained by a high-fidelity sharp-interface immersed-boundary solver. The proposed model is trained to predict the concerned flow variables for glottal shapes in the test set. The 3D CNN model is more efficient than traditional Computational Fluid Dynamics (CFD) models while retaining accuracy, and more powerful than previous data-driven prediction models because it provides more details of the glottal flow. The prediction performance of the trained 3D CNN model in accuracy and efficiency indicates that it could be promising for future clinical applications.
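The two-block layout (feature extraction, then regression) can be caricatured as follows. Everything here is a stand-in: a per-slice mean replaces the learned convolutional features, and a fixed linear map replaces the trained regression block.

```python
def extract_features(slices):
    """Stand-in feature-extraction block: one scalar per image slice."""
    return [sum(s) / len(s) for s in slices]

def regress(features, weights, bias):
    """Stand-in regression block: flatten, then apply a linear map."""
    return sum(f * w for f, w in zip(features, weights)) + bias
```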
Cerenkov Luminescence Tomography (CLT) is a novel and promising imaging modality which can display the three-dimensional distribution of radioactive probes. However, due to the severely ill-posed inverse problem, obtaining accurate reconstruction results is still a challenge for traditional model-based methods. The recently emerged deep learning-based methods can directly learn the mapping between the surface photon intensity and the distribution of the radioactive source, which effectively improves the performance of CLT reconstruction. However, the previously proposed deep learning-based methods cannot work well when the order of the input is disarranged. In this paper, a novel 3D graph convolution-based residual network, GCR-Net, is proposed, which can obtain a robust and accurate reconstruction result from the photon intensity of the surface. Additionally, it is proved that the network is insensitive to the order of the input. The performance of this method was evaluated with numerical simulations and in vivo experiments. The results demonstrate that, compared with existing methods, the proposed method achieves efficient and accurate reconstruction in localization and shape recovery by utilizing three-dimensional information.
In recent years, semantic segmentation of 3D point cloud data has attracted much attention. Unlike 2D images, where pixels are distributed regularly in the image domain, 3D point clouds in non-Euclidean space are irregular and inherently sparse. It is therefore very difficult to extract long-range contexts and effectively aggregate local features for semantic segmentation in 3D point cloud space. Most current methods focus either on local feature aggregation or on long-range context dependency, but fail to directly establish a global-local feature extractor for point cloud semantic segmentation. In this paper, we propose a Transformer-based stratified graph convolutional network (SGT-Net), which enlarges the effective receptive field and builds direct long-range dependency. Specifically, we first propose a novel dense-sparse sampling strategy that provides dense local vertices and sparse long-distance vertices for the subsequent graph convolutional network (GCN). Secondly, we propose a multi-key self-attention mechanism based on the Transformer to further weight crucial neighboring relationships and enlarge the effective receptive field. In addition, to further improve the efficiency of the network, we propose a similarity measurement module to determine whether the neighborhood near the center point is effective. We demonstrate the validity and superiority of our method on the S3DIS and ShapeNet datasets. Through ablation experiments and segmentation visualization, we verify that the SGT model improves the performance of point cloud semantic segmentation.
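The dense-sparse sampling strategy can be sketched as follows; the stride-based selection of far vertices is an illustrative assumption, since the abstract does not fix the exact far-sampling rule.

```python
import math

def dense_sparse_sample(center, points, k_dense, k_sparse, stride):
    """Return k_dense nearest vertices plus k_sparse far vertices taken
    at a fixed stride from the distance-sorted remainder (assumption)."""
    ranked = sorted(points, key=lambda p: math.dist(center, p))
    dense = ranked[:k_dense]
    sparse = ranked[k_dense::stride][:k_sparse]
    return dense, sparse
```

The dense set feeds local aggregation in the GCN, while the sparse far vertices give each center point direct long-range edges.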
Deep learning, especially through convolutional neural networks (CNN) such as the U-Net 3D model, has revolutionized fault identification from seismic data, representing a significant leap over traditional methods. Our review traces the evolution of CNN, emphasizing the adaptation and capabilities of the U-Net 3D model in automating seismic fault delineation with unprecedented accuracy. We find: 1) The transition from basic neural networks to sophisticated CNN has enabled remarkable advancements in image recognition, which are directly applicable to analyzing seismic data. The U-Net 3D model, with its innovative architecture, exemplifies this progress by providing a method for detailed and accurate fault detection with reduced manual interpretation bias. 2) The U-Net 3D model has demonstrated its superiority over traditional fault identification methods in several key areas: it has enhanced interpretation accuracy, increased operational efficiency, and reduced the subjectivity of manual methods. 3) Despite these achievements, challenges remain, such as the need for effective data preprocessing, acquisition of high-quality annotated datasets, and achieving model generalization across different geological conditions. Future research should therefore focus on developing more complex network architectures and innovative training strategies to further refine fault identification performance. Our findings confirm the transformative potential of deep learning, particularly CNN like the U-Net 3D model, in the geosciences, advocating for its broader integration to revolutionize geological exploration and seismic analysis.
In the display, manipulation, and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through interpolation, and cubic convolution interpolation is widely used due to its good tradeoff between computational cost and accuracy. In this paper, we present a unified framework for 3D medical image interpolation based on cubic convolution, and formulate in detail six methods with different sharpness control parameters. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with a recommendation for 3D medical image interpolation under different situations.
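The kernel underlying cubic convolution interpolation, with sharpness control parameter a (a = -0.5 is the classical choice), is standard and reproduces linear signals exactly. A 1D sketch; 3D interpolation applies the same kernel separably along each axis, and the six compared methods vary the sharpness parameter:

```python
import math

def cubic_kernel(s, a=-0.5):
    """Keys-style cubic convolution kernel with sharpness parameter a."""
    s = abs(s)
    if s <= 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a
    return 0.0

def interp1d(samples, x, a=-0.5):
    """Interpolate uniformly spaced samples at fractional position x."""
    i = int(math.floor(x))
    out = 0.0
    for k in range(i - 1, i + 3):  # four-sample support
        if 0 <= k < len(samples):
            out += samples[k] * cubic_kernel(x - k, a)
    return out
```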
AIM: To explore a segmentation algorithm based on deep learning to achieve accurate diagnosis and treatment of patients with retinal fluid. METHODS: A two-dimensional (2D) fully convolutional network for retinal segmentation was employed. In order to address the category imbalance in retinal optical coherence tomography (OCT) images, the network parameters and loss function of the 2D fully convolutional network were modified. This network, however, ignores the correlations of corresponding positions among adjacent images in space. Thus, we proposed a three-dimensional (3D) fully convolutional network for segmentation of retinal OCT images. RESULTS: The algorithm was evaluated according to segmentation accuracy, Kappa coefficient, and F1 score. For the 3D fully convolutional network proposed in this paper, the overall segmentation accuracy is 99.56%, the Kappa coefficient is 98.47%, and the F1 score for retinal fluid is 95.50%. CONCLUSION: OCT image segmentation algorithms based on deep learning have primarily been founded on 2D convolutional networks. The 3D network architecture proposed in this paper reduces the influence of category imbalance, realizes end-to-end segmentation of volume images, and achieves optimal segmentation results. The segmentation maps are practically the same as the manual annotations of doctors and can provide doctors with more accurate diagnostic data.
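A common way to modify a segmentation loss for category imbalance, of the kind described above, is inverse-frequency class weighting. A hedged per-voxel sketch (the exact modification used in the paper is not detailed in this abstract):

```python
import math

def inverse_freq_weights(counts):
    """Weight each class inversely to its voxel count, so rare classes
    (e.g. fluid) contribute more to the loss."""
    total = sum(counts)
    return [total / (len(counts) * c) for c in counts]

def weighted_ce(probs, label, weights):
    """Weighted cross-entropy for one voxel, given predicted probabilities."""
    return -weights[label] * math.log(probs[label])
```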
Previous multi-view 3D human pose estimation methods neither correlate different human joints in each view nor explicitly model learnable correlations between the same joints in different views, meaning that skeleton structure information is not utilized and multi-view pose information is not completely fused. Moreover, existing graph convolutional operations do not consider the specificity of different joints and different views of pose information when processing skeleton graphs, so the correlation weights between nodes in the graph and their neighborhood nodes are shared. Existing Graph Convolutional Networks (GCNs) cannot efficiently extract global and deep-level skeleton structure information and view correlations. To solve these problems, pre-estimated multi-view 2D poses are organized as a multi-view skeleton graph to fuse skeleton priors and view correlations explicitly and handle occlusion, with skeleton edges and symmetry edges representing the structural correlations between adjacent joints in each view of the skeleton graph, and view edges representing the correlations between the same joints in different views. To make the graph convolution operation mine elaborate and sufficient skeleton structure information and view correlations, different correlation weights are assigned to different categories of neighborhood nodes and further to each node in the graph. Based on this graph convolution operation, a Residual Graph Convolution (RGC) module is designed as the basic module and combined with a simplified Hourglass architecture to construct Hourglass-GCN as our 3D pose estimation network. Hourglass-GCN, with a symmetrical and concise architecture, processes three scales of multi-view skeleton graphs to efficiently extract local-to-global-scale and shallow-to-deep-level skeleton features. Experimental results on the common large 3D pose datasets Human3.6M and MPI-INF-3DHP show that Hourglass-GCN outperforms several excellent methods in 3D pose estimation accuracy.
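The category-specific correlation weights can be illustrated with a toy message-passing step over scalar node features; the learned per-node weights and residual structure of the real RGC module are omitted, and the category names are only examples.

```python
def rgc_step(features, edges, w_cat):
    """One aggregation step: each neighbor's feature is scaled by the
    weight of its edge category (skeleton / symmetry / view) and added
    to the target node."""
    out = dict(features)
    for src, dst, cat in edges:
        out[dst] += w_cat[cat] * features[src]
    return out
```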
Deep convolutional neural networks (CNNs) have demonstrated remarkable performance in video super-resolution (VSR). However, the ability of most existing methods to recover fine details in complex scenes is often hindered by the loss of shallow texture information during feature extraction. To address this limitation, we propose a 3D Convolutional Enhanced Residual Video Super-Resolution Network (3D-ERVSNet). This network employs a forward and backward bidirectional propagation module (FBBPM) that aligns features across frames using explicit optical flow from a lightweight SPyNet. By incorporating an enhanced residual structure (ERS) with skip connections, shallow and deep features are effectively integrated, enhancing texture restoration capabilities. Furthermore, a 3D convolution module (3DCM) is applied after the backward propagation module to implicitly capture spatio-temporal dependencies. The architecture synergizes these components: FBBPM extracts aligned features, ERS fuses hierarchical representations, and 3DCM refines temporal coherence. Finally, a deep feature aggregation module (DFAM) fuses the processed features, and a pixel-upsampling module (PUM) reconstructs the high-resolution (HR) video frames. Comprehensive evaluations on the REDS, Vid4, UDM10, and Vim4 benchmarks demonstrate strong performance, including 30.95 dB PSNR / 0.8822 SSIM on REDS and 32.78 dB / 0.8987 on Vim4. 3D-ERVSNet achieves significant gains over baselines while maintaining high efficiency, with only 6.3M parameters and a 77 ms/frame runtime (i.e., 20× faster than RBPN). The network's effectiveness stems from its task-specific asymmetric design that balances explicit alignment and implicit fusion.
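The PSNR figures above follow the standard definition; a quick sketch over flat 8-bit pixel lists:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(peak * peak / mse)
```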
As a kind of flexible three-dimensional geometric data, point clouds can support many challenging tasks, provided that the rich information in their geometric topology can be deeply analyzed. Because point cloud data is sparse, disordered, and rotation-invariant, the success of convolutional neural networks on 2D images cannot be directly reproduced on point clouds. In this paper, we propose WECNN, a Weight-Edge Convolution Neural Network, which has an excellent ability to utilize local structural features. As the core of WECNN, a novel convolution operator called WEConv captures structural features by constructing a fixed number of directed graphs and extracting the edge information of each graph to further analyze local regions of the point cloud. Moreover, a weight function is designed for different tasks to assign weights to the edges, so that feature extraction on the edges is more fine-grained and robust. WECNN achieves an overall accuracy of 93.8% and a mean class accuracy of 91.6% on the ModelNet40 dataset, and a mean IoU of 85.5% on the ShapeNet Part dataset. Results of extensive experiments show that WECNN outperforms other classification and segmentation approaches on challenging benchmarks.
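The edge-weighting idea in WEConv can be sketched with scalar features: form directed edges from a center point to its neighbors, weight each edge with a task-specific function, and aggregate. The real operator works on feature vectors over a fixed number of directed graphs; this is a one-dimensional caricature.

```python
def weconv(center, neighbors, weight_fn):
    """Toy weight-edge convolution over scalar point features:
    sum of edge vectors (neighbor - center), each scaled by weight_fn."""
    return sum(weight_fn(n - center) * (n - center) for n in neighbors)
```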
Lithofacies classification is essential for oil and gas reservoir exploration and development. The traditional method of lithofacies classification is based on "core calibration logging" and the experience of geologists. This approach has strong subjectivity, low efficiency, and high uncertainty, and this uncertainty may be one of the key factors affecting the results of 3D modeling of tight sandstone reservoirs. In recent years, deep learning, a cutting-edge artificial intelligence technology, has attracted attention from various fields, but its study in lithofacies classification has not been sufficient. Therefore, this paper proposes a novel hybrid deep-learning model that combines the efficient feature-extraction ability of convolutional neural networks (CNN) with the excellent ability of long short-term memory networks (LSTM) to describe time-dependent features, and uses it to conduct lithofacies-classification experiments. A series of experiments shows that the hybrid CNN-LSTM model had an average accuracy of 87.3% and the best classification performance compared with CNN, LSTM, and three commonly used machine learning models (support vector machine, random forest, and gradient boosting decision tree). In addition, the borderline synthetic minority oversampling technique (BSMOTE) is introduced to address the class imbalance of the raw data. The results show that balancing the data can significantly improve the accuracy of lithofacies classification. Furthermore, based on the fine lithofacies constraints, the sequential indicator simulation method is used to establish a three-dimensional lithofacies model, which completes the fine description of the spatial distribution of tight sandstone reservoirs in the study area. According to this comprehensive analysis, the proposed CNN-LSTM model, which eliminates class imbalance, can be effectively applied to lithofacies classification and is expected to improve the fidelity of the geological model for tight sandstone reservoirs.
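BSMOTE, like plain SMOTE, generates synthetic minority samples by interpolating between a (borderline) minority sample and one of its minority-class neighbors; the interpolation step itself is simply:

```python
def synth_sample(x, neighbor, alpha):
    """Synthetic point on the segment from x to a minority neighbor;
    alpha in [0, 1] is drawn at random in the full method."""
    return [a + alpha * (b - a) for a, b in zip(x, neighbor)]
```

The "borderline" part of BSMOTE restricts this step to minority samples whose neighborhoods are dominated by the majority class, which is where the decision boundary is hardest to learn.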
In computer vision, 3D object recognition is one of the most important tasks for many real-world applications. Three-dimensional convolutional neural networks (CNNs) have demonstrated their advantages in 3D object recognition. In this paper, we propose to use the principal curvature directions of 3D objects (from a CAD model) to represent the geometric features as inputs for a 3D CNN. Our framework, CurveNet, learns perceptually relevant salient features and predicts object class labels. Curvature directions incorporate complex surface information of a 3D object, which helps our framework produce more precise and discriminative features for object recognition. Multitask learning is inspired by sharing features between two related tasks; we consider pose classification as an auxiliary task to enable CurveNet to better generalize object label classification. Experimental results show that the proposed framework using curvature vectors performs better than one using voxels as input for 3D object classification. We further improved the performance of CurveNet by combining two networks that take both curvature directions and voxels of a 3D object as inputs, adopting a Cross-Stitch module to learn effective shared features across the multiple representations. We evaluated our methods on three publicly available datasets and achieved competitive performance in the 3D object recognition task.
Self-occlusions are common in rice canopy images and strongly influence the calculation accuracy of panicle traits. Such interference can be largely eliminated if panicles are phenotyped at the 3D level. Research on 3D panicle phenotyping has been limited, and given that existing 3D modeling techniques do not focus on specified parts of a target object, an efficient method for panicle modeling of large numbers of rice plants is lacking. This paper presents an automatic and nondestructive method for 3D panicle modeling. The proposed method integrates shoot rice reconstruction with shape from silhouette, 2D panicle segmentation with a deep convolutional neural network, and 3D panicle segmentation with ray tracing and supervoxel clustering. A multiview imaging system was built to acquire image sequences of rice canopies with an efficiency of approximately 4 min per rice plant. The execution time of panicle modeling per rice plant using 90 images was approximately 26 min. The outputs of the algorithm for a single rice plant are a shoot rice model, a surface shoot rice model, a panicle model, and a surface panicle model, all represented by lists of spatial coordinates. The efficiency and performance were evaluated and compared with the classical structure-from-motion algorithm. The results demonstrate that the proposed method is well qualified to recover the 3D shapes of rice panicles from multiview images and is readily adaptable to rice plants of diverse accessions and growth stages. The proposed algorithm is superior to the structure-from-motion method in terms of texture preservation and computational efficiency. The sample images and an implementation of the algorithm are available online. This automatic, cost-efficient, and nondestructive method of 3D panicle modeling may be applied to high-throughput 3D phenotyping of large rice populations.
Vision-based technologies have been extensively applied to on-street parking space sensing, aiming to provide timely and accurate information for drivers and improve daily travel convenience. However, such sensing faces great challenges, as partial visualization regularly occurs owing to occlusion from static or dynamic objects or the limited perspective of the camera. This paper presents an imagery-based framework that infers parking space status by generating the 3D bounding box of each vehicle. A specially designed convolutional neural network based on ResNet and a feature pyramid network is proposed to overcome the challenges of partial visualization and occlusion. It predicts 3D box candidates on multi-scale feature maps with five different 3D anchors, which are generated by clustering diverse scales of ground-truth boxes according to different vehicle templates in the source data set. Subsequently, a vehicle distribution map is constructed jointly from the coordinates of the vehicle boxes and manually segmented parking spaces, where the normative degree of a parked vehicle is calculated as the intersection over union between the vehicle's box and the parking space edge. In space status inference, to further eliminate mutual vehicle interference, three adjacent spaces are combined into one unit and a multinomial logistic regression model is trained to refine the status of the unit. Experiments on the KITTI benchmark and Shanghai roads show that the proposed method outperforms most monocular approaches in 3D box regression and achieves satisfactory accuracy in space status inference.
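The normative degree described above is an intersection over union; for axis-aligned rectangles given as (x1, y1, x2, y2) it reads as below (treating the vehicle box and parking space as ground-plane rectangles is an assumption for illustration):

```python
def iou_2d(box_a, box_b):
    """Intersection over union of two axis-aligned rectangles."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0
```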
Because behavior recognition is based on video frame sequences, this paper proposes a behavior recognition algorithm that combines a 3D residual convolutional neural network (R3D) and long short-term memory (LSTM). First, the residual module is extended to three dimensions, which can extract features in the time and space domains at the same time. Second, the size of the pooling layer window is changed to preserve the integrity of the time-domain features; at the same time, to overcome the difficulty of network training and the over-fitting problem, batch normalization (BN) and dropout layers are added. After that, because the global average pooling layer (GAP) is affected by the size of the feature map and the network cannot be further deepened, a convolution layer and a max-pooling layer are added to the R3D network. Finally, because LSTM has the ability to memorize information and can extract more abstract temporal features, an LSTM network is introduced into the R3D network. Experimental results show that the R3D+LSTM network achieves a 91% recognition rate on the UCF-101 dataset.
Tumour segmentation in medical images (especially 3D tumour segmentation) is highly challenging due to the possible similarity between tumours and adjacent tissues, the occurrence of multiple tumours, and variable tumour shapes and sizes. Popular deep learning-based segmentation algorithms generally rely on the convolutional neural network (CNN) and the Transformer. The former cannot extract global image features effectively, while the latter lacks inductive bias and involves complicated computation for 3D volume data. Existing hybrid CNN-Transformer networks provide only limited performance improvement, or even poorer segmentation performance than a pure CNN. To address these issues, a short-term and long-term memory self-attention network is proposed. Firstly, a distinctive self-attention block uses the Transformer to explore the correlations among region features at different levels extracted by the CNN. Then, the memory structure filters and combines this information to exclude similar regions and detect multiple tumours. Finally, multi-layer reconstruction blocks predict the tumour boundaries. Experimental results demonstrate that our method outperforms other methods in both subjective visual and quantitative evaluation. Compared with the most competitive method, the proposed method provides Dice (82.4% vs. 76.6%) and 95% Hausdorff distance (HD95) (10.66 vs. 11.54 mm) on KiTS19, as well as Dice (80.2% vs. 78.4%) and HD95 (9.632 vs. 12.17 mm) on LiTS.
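The Dice scores reported above measure mask overlap; over flat binary lists the coefficient is:

```python
def dice(pred, gt):
    """Dice coefficient between two equal-size binary masks:
    2 * |intersection| / (|pred| + |gt|)."""
    inter = sum(p * g for p, g in zip(pred, gt))
    denom = sum(pred) + sum(gt)
    return 2 * inter / denom if denom else 1.0
```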
Background: Deep convolutional neural networks have garnered considerable attention in numerous machine learning applications, particularly in visual recognition tasks such as image and video analysis, and there is growing interest in applying this technology to medical image analysis. Automated three-dimensional breast ultrasound is a vital tool for detecting breast cancer, and computer-assisted diagnosis software developed on deep learning can effectively assist radiologists in diagnosis. However, the network model is prone to overfitting during training owing to challenges such as insufficient training data. This study attempts to solve the problem caused by small datasets and to improve model detection performance. Methods: We propose a breast cancer detection framework based on deep learning (a transfer learning method based on cross-organ cancer detection) and a contrastive learning method based on the Breast Imaging Reporting and Data System (BI-RADS). Results: When using cross-organ transfer learning and BI-RADS-based contrastive learning, the average sensitivity of the model increased by up to 16.05%. Conclusion: Our experiments demonstrate that the parameters and experience of cross-organ cancer detection can be mutually referenced, and that the BI-RADS-based contrastive learning method can improve the detection performance of the model.
Lip-reading technology, based on visual speech decoding and automatic speech recognition, offers a promising solution for overcoming communication barriers, particularly for individuals with temporary or permanent speech impairments. However, most Visual Speech Recognition (VSR) research has focused on the English language and general-purpose applications, limiting its practical applicability in medical and rehabilitative settings. This study introduces the first deep learning (DL) based lip-reading system for the Italian language designed to assist individuals with vocal cord pathologies in daily interactions, facilitating communication for patients recovering from vocal cord surgeries, whether temporarily or permanently impaired. To ensure relevance and effectiveness in real-world scenarios, a carefully curated vocabulary of twenty-five Italian words was selected, encompassing critical semantic fields such as Needs, Questions, Answers, Emergencies, Greetings, Requests, and Body Parts. These words were chosen to address both essential daily communication and urgent medical assistance requests. Our approach combines a spatiotemporal Convolutional Neural Network (CNN) with a bidirectional Long Short-Term Memory (BiLSTM) recurrent network and a Connectionist Temporal Classification (CTC) loss function to recognize individual words without requiring predefined word boundaries. The experimental results demonstrate the system's robust performance, reaching an average accuracy of 96.4% in individual word recognition, suggesting that it is particularly well suited to constrained clinical and caregiving environments, where quick and reliable communication is critical. In conclusion, the study highlights the importance of developing language-specific, application-driven VSR solutions, particularly for non-English languages with limited linguistic resources. By bridging the gap between deep learning-based lip-reading and real-world clinical needs, this research advances assistive communication technologies, paving the way for more inclusive and medically relevant applications of VSR in rehabilitation and healthcare.
Abstract: 3D sparse convolution has emerged as a pivotal technique for efficient voxel-based perception in autonomous systems, enabling selective feature extraction from non-empty voxels while suppressing computational waste. Despite its theoretical efficiency advantages, practical implementations face under-explored limitations: the fixed geometric patterns of conventional sparse convolutional kernels inevitably process non-contributory positions during sliding-window operations, particularly in regions with uneven point cloud density. To address this, we propose Hierarchical Shape Pruning for 3D Sparse Convolution (HSP-S), which dynamically eliminates redundant kernel stripes through layer-adaptive thresholding. Unlike static soft pruning methods, HSP-S maintains trainable sparsity patterns by progressively adjusting pruning thresholds during optimization, enlarging the original parameter search space while removing redundant operations. Extensive experiments validate the effectiveness of HSP-S across major autonomous driving benchmarks. On KITTI's 3D object detection task, our method removes 93.47% of redundant kernel computations while maintaining comparable accuracy (1.56% mAP drop). Remarkably, on the more complex nuScenes benchmark, HSP-S achieves simultaneous computation reduction (21.94% sparsity) and accuracy gains (1.02% mAP (mean Average Precision) and 0.47% NDS (nuScenes detection score) improvement), demonstrating its scalability to diverse perception scenarios. This work establishes the first learnable shape pruning framework that simultaneously enhances computational efficiency and preserves detection accuracy in 3D perception systems.
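The layer-adaptive thresholding idea described above can be sketched as a toy magnitude-based pruning pass over kernel positions. The L1 scoring rule and the single `ratio` threshold below are illustrative assumptions, not the exact HSP-S formulation:

```python
# Illustrative sketch of shape pruning for a sparse-conv kernel: keep only
# kernel positions whose aggregate weight magnitude clears a threshold that
# adapts to the layer's strongest position. 2D offsets are used for brevity.

def prune_kernel_positions(weights, ratio):
    """weights: {offset: [per-channel weights]}.
    Keep a position only if its L1 norm reaches ratio * the max L1 norm."""
    scores = {off: sum(abs(w) for w in ws) for off, ws in weights.items()}
    threshold = ratio * max(scores.values())
    return {off for off, s in scores.items() if s >= threshold}

kernel = {
    (0, 0): [0.9, 1.1],
    (1, 0): [0.4, 0.5],
    (0, 1): [0.02, 0.01],    # near-zero stripe: candidate for removal
    (-1, -1): [0.03, 0.02],  # likewise
}
kept = prune_kernel_positions(kernel, ratio=0.1)
```

In HSP-S the threshold is trainable and adjusted progressively during optimization rather than fixed as here.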
Funding: the National Natural Science Foundation of China (No. 41274129); the Chuan Qing Drilling Engineering Company scientific research project "Seismic detection technology and application of complex carbonate reservoir in Sulige Majiagou Formation"; the 2018 Central Supporting Local Co-construction Fund (No. 80000-18Z0140504); and the Construction and Development of Universities in 2019-Joint Support for Geophysics (Double First-Class center, 80000-19Z0204).
Abstract: In this paper, the complete process of constructing a 3D digital core with a fully convolutional neural network is described in detail. A large number of sandstone computed tomography (CT) images are used as training input for a fully convolutional neural network model. This model is used to reconstruct the three-dimensional (3D) digital core of Berea sandstone based on a small number of CT images. The Hamming distance, together with the Minkowski functions for porosity, average volume specific surface area, average curvature, and connectivity of both the real core and the digital reconstruction, is used to evaluate the accuracy of the proposed method. The results show that the reconstruction achieved relative errors of 6.26%, 1.40%, 6.06%, and 4.91% for the four Minkowski functions and a Hamming distance of 0.04479. This demonstrates that the proposed method can not only reconstruct the physical properties of real sandstone but also restore the real characteristics of pore distribution in sandstone, providing a new way to characterize the internal microstructure of rocks.
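Two of the evaluation measures named above, porosity (the first Minkowski functional) and the Hamming distance, are straightforward to compute on binary pore/grain segmentations. The flattened toy volumes below stand in for 3D arrays; the normalization choices are common conventions, not necessarily the paper's exact definitions:

```python
# 1 = pore voxel, 0 = grain voxel.

def porosity(vol):
    """Fraction of voxels that are pore space."""
    return sum(vol) / len(vol)

def hamming_distance(a, b):
    """Fraction of voxels where two segmentations disagree."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

real  = [1, 0, 0, 1, 1, 0, 0, 0]
recon = [1, 0, 0, 1, 0, 0, 0, 0]
```

A small Hamming distance (the paper reports 0.04479) means the reconstruction disagrees with the real core at only a few percent of voxel sites.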
Funding: supported by the Open Project of the Key Laboratory of Computational Aerodynamics, AVIC Aerodynamics Research Institute (Grant No. YL2022XFX0409).
Abstract: In this work, a three-dimensional (3D) convolutional neural network (CNN) model based on image slices of various normal and pathological vocal folds is proposed for accurate and efficient prediction of glottal flows. The 3D CNN model is composed of a feature extraction block and a regression block. The feature extraction block is capable of learning low-dimensional features from the high-dimensional image data of the glottal shape, and the regression block is employed to flatten the output from the feature extraction block and obtain the desired glottal flow data. The input image data is the condensed set of 2D image slices captured in the axial plane of the 3D vocal folds, where these glottal shapes are synthesized based on the equations of normal vibration modes. The output flow data is the corresponding flow rate, averaged glottal pressure, and nodal pressure distributions over the glottal surface. The 3D CNN model is built to establish the mapping between the input image data and output flow data. The ground-truth flow variables of each glottal shape in the training and test datasets are obtained by a high-fidelity sharp-interface immersed-boundary solver. The proposed model is trained to predict the concerned flow variables for glottal shapes in the test set. The present 3D CNN model is more efficient than traditional Computational Fluid Dynamics (CFD) models while the accuracy can still be retained, and more powerful than previous data-driven prediction models because more details of the glottal flow can be provided. The prediction performance of the trained 3D CNN model in accuracy and efficiency indicates that this model could be promising for future clinical applications.
Funding: National Key Research and Development Program of China (2019YFC1521102); National Natural Science Foundation of China (61701403, 61806164, 62101439, 61906154); China Postdoctoral Science Foundation (2018M643719); Natural Science Foundation of Shaanxi Province (2020JQ-601); Young Talent Support Program of the Shaanxi Association for Science and Technology (20190107); Key Research and Development Program of Shaanxi Province (2019GY-215, 2021ZDLSF06-04); Major Research and Development Project of Qinghai (2020-SF-143).
Abstract: Cerenkov Luminescence Tomography (CLT) is a novel and promising imaging modality that can display the three-dimensional distribution of radioactive probes. However, because of the severely ill-posed inverse problem, obtaining accurate reconstruction results is still a challenge for traditional model-based methods. The recently emerged deep learning-based methods can directly learn the mapping relation between the surface photon intensity and the distribution of the radioactive source, which effectively improves the performance of CLT reconstruction. However, the previously proposed deep learning-based methods cannot work well when the order of the input is disarranged. In this paper, a novel 3D graph convolution-based residual network, GCR-Net, is proposed, which can obtain a robust and accurate reconstruction result from the photon intensity of the surface. Additionally, it is proved that the network is insensitive to the order of the input. The performance of this method was evaluated with numerical simulations and in vivo experiments. The results demonstrated that, compared with existing methods, the proposed method can achieve efficient and accurate reconstruction in localization and shape recovery by utilizing three-dimensional information.
Funding: supported in part by the National Natural Science Foundation of China under Grant Nos. U20A20197 and 62306187, and by the Foundation of the Ministry of Industry and Information Technology (TC220H05X-04).
Abstract: In recent years, semantic segmentation of 3D point cloud data has attracted much attention. Unlike 2D images, where pixels are distributed regularly in the image domain, 3D point clouds in non-Euclidean space are irregular and inherently sparse. Therefore, it is very difficult to extract long-range contexts and effectively aggregate local features for semantic segmentation in 3D point cloud space. Most current methods focus either on local feature aggregation or on long-range context dependency, but fail to directly establish a global-local feature extractor for point cloud semantic segmentation tasks. In this paper, we propose a Transformer-based stratified graph convolutional network (SGT-Net), which enlarges the effective receptive field and builds direct long-range dependency. Specifically, we first propose a novel dense-sparse sampling strategy that provides dense local vertices and sparse long-distance vertices for the subsequent graph convolutional network (GCN). Secondly, we propose a multi-key self-attention mechanism based on the Transformer to further strengthen crucial neighboring relationships and enlarge the effective receptive field. In addition, to further improve the efficiency of the network, we propose a similarity measurement module to determine whether the neighborhood near the center point is effective. We demonstrate the validity and superiority of our method on the S3DIS and ShapeNet datasets. Through ablation experiments and segmentation visualization, we verify that the SGT model can improve the performance of point cloud semantic segmentation.
Abstract: Deep learning, especially through convolutional neural networks (CNNs) such as the U-Net 3D model, has revolutionized fault identification from seismic data, representing a significant leap over traditional methods. Our review traces the evolution of CNNs, emphasizing the adaptation and capabilities of the U-Net 3D model in automating seismic fault delineation with unprecedented accuracy. We find: 1) The transition from basic neural networks to sophisticated CNNs has enabled remarkable advancements in image recognition, which are directly applicable to analyzing seismic data. The U-Net 3D model, with its innovative architecture, exemplifies this progress by providing a method for detailed and accurate fault detection with reduced manual interpretation bias. 2) The U-Net 3D model has demonstrated its superiority over traditional fault identification methods in several key areas: it has enhanced interpretation accuracy, increased operational efficiency, and reduced the subjectivity of manual methods. 3) Despite these achievements, challenges remain, such as the need for effective data preprocessing, the acquisition of high-quality annotated datasets, and achieving model generalization across different geological conditions. Future research should therefore focus on developing more complex network architectures and innovative training strategies to further refine fault identification performance. Our findings confirm the transformative potential of deep learning, particularly CNNs like the U-Net 3D model, in the geosciences, advocating for its broader integration to revolutionize geological exploration and seismic analysis.
Abstract: In the display, manipulation, and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through interpolation, and cubic convolution interpolation is widely used because of its good tradeoff between computational cost and accuracy. In this paper, we present a unified framework for 3D medical image interpolation based on cubic convolution and formulate in detail six methods, each with a different sharpness control parameter. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with a recommendation for 3D medical images under different situations.
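A concrete instance of cubic convolution with a sharpness control parameter is Keys' kernel, where the parameter `a` (commonly -0.5) tunes edge sharpness. The 1D sketch below is a minimal illustration; clamping at the borders is one common boundary-handling assumption, not necessarily the paper's:

```python
import math

def keys_kernel(x, a=-0.5):
    """Keys' cubic convolution kernel; `a` is the sharpness control parameter."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def interp1d(samples, t, a=-0.5):
    """Interpolate uniformly spaced samples at fractional position t."""
    i = math.floor(t)
    total = 0.0
    for k in range(i - 1, i + 3):          # the 4-sample support of the kernel
        idx = min(max(k, 0), len(samples) - 1)  # clamp indices at the borders
        total += samples[idx] * keys_kernel(t - k, a)
    return total
```

Because the kernel satisfies W(0) = 1 and W(±1) = 0, the interpolant passes exactly through the original samples; extending this to 3D volumes applies the same 1D pass separably along each axis.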
Funding: Supported by the National Science Foundation of China (No. 81800878); the Interdisciplinary Program of Shanghai Jiao Tong University (No. YG2017QN24); the Key Technological Research Projects of Songjiang District (No. 18sjkjgg24); and the Bethune Langmu Ophthalmological Research Fund for Young and Middle-aged People (No. BJ-LM2018002J).
Abstract: AIM: To explore a segmentation algorithm based on deep learning to achieve accurate diagnosis and treatment of patients with retinal fluid. METHODS: A two-dimensional (2D) fully convolutional network for retinal segmentation was employed. To address the category imbalance in retinal optical coherence tomography (OCT) images, the network parameters and loss function of the 2D fully convolutional network were modified. This network, however, ignores the correlations of corresponding positions among adjacent images in space. Thus, we proposed a three-dimensional (3D) fully convolutional network for segmentation of retinal OCT images. RESULTS: The algorithm was evaluated according to segmentation accuracy, Kappa coefficient, and F1 score. For the 3D fully convolutional network proposed in this paper, the overall segmentation accuracy rate is 99.56%, the Kappa coefficient is 98.47%, and the F1 score for retinal fluid is 95.50%. CONCLUSION: OCT image segmentation algorithms based on deep learning have primarily been founded on the 2D convolutional network. The 3D network architecture proposed in this paper reduces the influence of category imbalance, realizes end-to-end segmentation of volume images, and achieves optimal segmentation results. The segmentation maps are practically the same as the manual annotations of doctors and can provide doctors with more accurate diagnostic data.
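One standard way to modify the loss function for category imbalance, as described above, is to weight each class's cross-entropy term inversely to its frequency. The scheme below is a common convention offered for illustration, not necessarily the paper's exact modification:

```python
import math

def class_weights(counts):
    """Inverse-frequency weights: rare classes (e.g. fluid pixels) get
    proportionally larger weights than abundant background pixels."""
    total = sum(counts.values())
    return {c: total / (len(counts) * n) for c, n in counts.items()}

def weighted_nll(prob_of_true, true_class, weights):
    """Per-pixel weighted negative log-likelihood term."""
    return -weights[true_class] * math.log(prob_of_true)

# Hypothetical pixel counts: fluid is 9x rarer than background.
counts = {"fluid": 10, "background": 90}
w = class_weights(counts)
```

With these weights, a misclassified fluid pixel contributes nine times more to the loss than a misclassified background pixel, counteracting the imbalance.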
Funding: supported in part by the National Natural Science Foundation of China under Grants 61973065, U20A20197, and 61973063.
Abstract: Previous multi-view 3D human pose estimation methods neither correlate different human joints within each view nor explicitly model learnable correlations between the same joints in different views, meaning that skeleton structure information is not utilized and multi-view pose information is not completely fused. Moreover, existing graph convolutional operations do not consider the specificity of different joints and different views of pose information when processing skeleton graphs, so the correlation weights between nodes in the graph and their neighborhood nodes are shared. Existing Graph Convolutional Networks (GCNs) therefore cannot efficiently extract global and deep-level skeleton structure information and view correlations. To solve these problems, pre-estimated multi-view 2D poses are organized as a multi-view skeleton graph that fuses skeleton priors and view correlations explicitly to handle occlusion, with the skeleton-edge and symmetry-edge representing the structure correlations between adjacent joints in each view of the skeleton graph and the view-edge representing the view correlations between the same joints in different views. To make the graph convolution operation mine elaborate and sufficient skeleton structure information and view correlations, different correlation weights are assigned to different categories of neighborhood nodes and further assigned to each node in the graph. Based on this graph convolution operation, a Residual Graph Convolution (RGC) module is designed as the basic module and combined with a simplified Hourglass architecture to construct Hourglass-GCN as our 3D pose estimation network. Hourglass-GCN, with a symmetrical and concise architecture, processes three scales of multi-view skeleton graphs to efficiently extract local-to-global scale and shallow-to-deep level skeleton features. Experimental results on the common large 3D pose datasets Human3.6M and MPI-INF-3DHP show that Hourglass-GCN outperforms several excellent methods in 3D pose estimation accuracy.
Funding: supported in part by the Basic and Applied Basic Research Foundation of Guangdong Province (2025A1515011566); in part by the State Key Laboratory for Novel Software Technology, Nanjing University (KFKT2024B08); in part by Leading Talents in Gusu Innovation and Entrepreneurship (ZXL2023170); and in part by the Basic Research Programs of Taicang 2024 (TC2024JC32).
Abstract: Deep convolutional neural networks (CNNs) have demonstrated remarkable performance in video super-resolution (VSR). However, the ability of most existing methods to recover fine details in complex scenes is often hindered by the loss of shallow texture information during feature extraction. To address this limitation, we propose a 3D Convolutional Enhanced Residual Video Super-Resolution Network (3D-ERVSNet). This network employs a forward and backward bidirectional propagation module (FBBPM) that aligns features across frames using explicit optical flow from a lightweight SPyNet. By incorporating an enhanced residual structure (ERS) with skip connections, shallow and deep features are effectively integrated, enhancing texture restoration capabilities. Furthermore, a 3D convolution module (3DCM) is applied after the backward propagation module to implicitly capture spatio-temporal dependencies. The architecture synergizes these components: FBBPM extracts aligned features, ERS fuses hierarchical representations, and 3DCM refines temporal coherence. Finally, a deep feature aggregation module (DFAM) fuses the processed features, and a pixel-upsampling module (PUM) reconstructs the high-resolution (HR) video frames. Comprehensive evaluations on the REDS, Vid4, UDM10, and Vim4 benchmarks demonstrate strong performance, including 30.95 dB PSNR / 0.8822 SSIM on REDS and 32.78 dB / 0.8987 on Vim4. 3D-ERVSNet achieves significant gains over baselines while maintaining high efficiency, with only 6.3M parameters and a 77 ms/frame runtime (i.e., 20x faster than RBPN). The network's effectiveness stems from its task-specific asymmetric design that balances explicit alignment and implicit fusion.
Funding: Supported by the National Natural Science Foundation of China (61772328).
Abstract: As a kind of flexible three-dimensional geometric data, point clouds can support many challenging tasks as long as the rich information in their geometric topology can be deeply analyzed. Because point cloud data is sparse, disordered, and rotation-invariant, the success of convolutional neural networks on 2D images cannot be directly reproduced on point clouds. In this paper, we propose WECNN, the Weight-Edge Convolution Neural Network, which has an excellent ability to utilize local structural features. As the core of WECNN, a novel convolution operator called WEConv captures structural features by constructing a fixed number of directed graphs and extracting the edge information of each graph to further analyze local regions of the point cloud. Moreover, a weight function is designed for different tasks to assign weights to the edges, so that feature extraction on the edges can be more fine-grained and robust. WECNN achieves an overall accuracy of 93.8% and a mean class accuracy of 91.6% on the ModelNet40 dataset, and a mean IoU of 85.5% on the ShapeNet Part dataset. Results of extensive experiments show that our WECNN outperforms other classification and segmentation approaches on challenging benchmarks.
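The idea of building directed graphs over local neighborhoods and weighting their edges can be sketched as a k-nearest-neighbor graph with a distance-based weight. The inverse-distance weight here is an illustrative stand-in for WECNN's task-specific weight function:

```python
# Build a directed kNN graph over a point cloud and attach an edge weight.
# Each edge (i, j, w) points from a point to one of its k nearest neighbors.

def knn_edges(points, k):
    edges = []
    for i, p in enumerate(points):
        # Squared Euclidean distance to every other point.
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(p, q)), j)
            for j, q in enumerate(points) if j != i
        )
        for d2, j in dists[:k]:
            weight = 1.0 / (1.0 + d2)  # closer neighbors weigh more (assumed)
            edges.append((i, j, weight))
    return edges

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (5.0, 5.0, 5.0)]
edges = knn_edges(pts, k=2)
```

In the actual operator, features of each edge (e.g., coordinate offsets) would then be aggregated per node by a learned convolution rather than by the fixed weight used here.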
Funding: supported by the Fundamental Research Funds for the Central Universities (Grant No. 300102278402).
Abstract: Lithofacies classification is essential for oil and gas reservoir exploration and development. The traditional method of lithofacies classification is based on "core calibration logging" and the experience of geologists. This approach has strong subjectivity, low efficiency, and high uncertainty, and this uncertainty may be one of the key factors affecting the results of 3D modeling of tight sandstone reservoirs. In recent years, deep learning, a cutting-edge artificial intelligence technology, has attracted attention from various fields, but the study of deep-learning techniques for lithofacies classification has not been sufficient. Therefore, this paper proposes a novel hybrid deep-learning model that combines the efficient feature-extraction ability of convolutional neural networks (CNN) with the ability of long short-term memory networks (LSTM) to describe time-dependent features, and applies it to lithofacies-classification experiments. A series of experiments shows that the hybrid CNN-LSTM model achieved an average accuracy of 87.3% and the best classification performance compared with the CNN, the LSTM, and three commonly used machine learning models (support vector machine, random forest, and gradient boosting decision tree). In addition, the borderline synthetic minority oversampling technique (BSMOTE) is introduced to address the class-imbalance issue in the raw data; the results show that balancing the data can significantly improve the accuracy of lithofacies classification. Furthermore, based on the fine lithofacies constraints, the sequential indicator simulation method is used to establish a three-dimensional lithofacies model, completing the fine description of the spatial distribution of tight sandstone reservoirs in the study area. According to this comprehensive analysis, the proposed CNN-LSTM model, which eliminates class imbalance, can be effectively applied to lithofacies classification and is expected to improve the realism of the geological model for tight sandstone reservoirs.
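The core interpolation step of SMOTE-style oversampling (BSMOTE adds a borderline-sample selection stage first, omitted here for brevity) places a synthetic minority-class sample on the segment between a minority sample and one of its minority-class neighbors:

```python
import random

def synthesize(sample, neighbor, rng):
    """Create one synthetic minority sample between `sample` and `neighbor`.
    The gap g is drawn uniformly from [0, 1)."""
    g = rng.random()
    return [s + g * (n - s) for s, n in zip(sample, neighbor)]

rng = random.Random(0)           # seeded for reproducibility
x  = [1.0, 2.0]                  # a minority-class feature vector (toy values)
nb = [3.0, 4.0]                  # one of its minority-class neighbors
new = synthesize(x, nb, rng)
```

Repeating this for many sample/neighbor pairs grows the minority class until the lithofacies counts are balanced, after which the classifier is trained on the augmented set.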
Funding: This paper was partially supported by a project of the Shanghai Science and Technology Committee (18510760300); the Anhui Natural Science Foundation (1908085MF178); and the Anhui Excellent Young Talents Support Program (gxyqZD2019069).
Abstract: In computer vision, 3D object recognition is one of the most important tasks for many real-world applications. Three-dimensional convolutional neural networks (CNNs) have demonstrated their advantages in 3D object recognition. In this paper, we propose to use the principal curvature directions of 3D objects (from a CAD model) to represent geometric features as inputs for a 3D CNN. Our framework, CurveNet, learns perceptually relevant salient features and predicts object class labels. Curvature directions incorporate complex surface information of a 3D object, which helps our framework produce more precise and discriminative features for object recognition. Multitask learning is motivated by sharing features between two related tasks: we consider pose classification as an auxiliary task to enable CurveNet to better generalize object label classification. Experimental results show that our proposed framework using curvature vectors performs better than voxels as an input for 3D object classification. We further improved the performance of CurveNet by combining two networks that take both the curvature directions and the voxels of a 3D object as inputs, adopting a Cross-Stitch module to learn effective shared features across the multiple representations. We evaluated our methods using three publicly available datasets and achieved competitive performance in the 3D object recognition task.
Funding: supported by the National Natural Science Foundation of China (U21A20205); Key Projects of the Natural Science Foundation of Hubei Province (2021CFA059); the Fundamental Research Funds for the Central Universities (2021ZKPY006); and cooperative funding between Huazhong Agricultural University and the Shenzhen Institute of Agricultural Genomics (SZYJY2021005, SZYJY2021007).
Abstract: Self-occlusions are common in rice canopy images and strongly influence the calculation accuracy of panicle traits. Such interference can be largely eliminated if panicles are phenotyped at the 3D level, but research on 3D panicle phenotyping has been limited. Given that existing 3D modeling techniques do not focus on specified parts of a target object, an efficient method for panicle modeling of large numbers of rice plants is lacking. This paper presents an automatic and nondestructive method for 3D panicle modeling. The proposed method integrates shoot rice reconstruction with shape from silhouette, 2D panicle segmentation with a deep convolutional neural network, and 3D panicle segmentation with ray tracing and supervoxel clustering. A multiview imaging system was built to acquire image sequences of rice canopies with an efficiency of approximately 4 min per rice plant. The execution time of panicle modeling per rice plant using 90 images was approximately 26 min. The outputs of the algorithm for a single rice plant are a shoot rice model, a surface shoot rice model, a panicle model, and a surface panicle model, all represented by lists of spatial coordinates. The efficiency and performance were evaluated and compared with the classical structure-from-motion algorithm. The results demonstrated that the proposed method is well qualified to recover the 3D shapes of rice panicles from multiview images and is readily adaptable to rice plants of diverse accessions and growth stages. The proposed algorithm is superior to the structure-from-motion method in terms of texture preservation and computational efficiency. The sample images and an implementation of the algorithm are available online. This automatic, cost-efficient, and nondestructive method of 3D panicle modeling may be applied to high-throughput 3D phenotyping of large rice populations.
Funding: This work was supported in part by the National Natural Science Foundation of China (No. 51805312); in part by the Shanghai Sailing Program (No. 18YF1409400); in part by the Training and Funding Program of Shanghai College Young Teachers (No. ZZGCD15102); in part by a Scientific Research Project of Shanghai University of Engineering Science (No. 2016-19); and in part by the Shanghai University of Engineering Science Innovation Fund for Graduate Students (No. 18KY0613).
Abstract: Vision-based technologies have been extensively applied for on-street parking space sensing, aiming to provide timely and accurate information for drivers and improve daily travel convenience. However, this task faces great challenges, as partial visibility regularly occurs owing to occlusion by static or dynamic objects or the limited perspective of the camera. This paper presents an imagery-based framework that infers parking space status by generating the 3D bounding box of each vehicle. A specially designed convolutional neural network based on ResNet and a feature pyramid network is proposed to overcome the challenges of partial visibility and occlusion. It predicts 3D box candidates on multi-scale feature maps with five different 3D anchors, which are generated by clustering diverse scales of ground-truth boxes according to different vehicle templates in the source data set. Subsequently, a vehicle distribution map is constructed jointly from the coordinates of the vehicle boxes and manually segmented parking spaces, where the normative degree of a parked vehicle is calculated by computing the intersection over union between the vehicle's box and the parking space edge. In space status inference, to further eliminate mutual vehicle interference, three adjacent spaces are combined into one unit and a multinomial logistic regression model is trained to refine the status of the unit. Experiments on the KITTI benchmark and Shanghai roads show that the proposed method outperforms most monocular approaches in 3D box regression and achieves satisfactory accuracy in space status inference.
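The intersection-over-union computation used above to score how a vehicle occupies a parking space reduces, for axis-aligned rectangles in the ground plane, to a few lines. This is a simplified 2D version; the paper works with projected 3D boxes:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # 0 if no overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A vehicle box with high IoU against a marked space indicates a normatively parked car; a low but nonzero IoU can flag a vehicle straddling two spaces.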
Funding: Supported by the Shaanxi Province Key Research and Development Project (No. 2021GY-280); the Shaanxi Province Natural Science Basic Research Program (No. 2021JM-459); and the National Natural Science Foundation of China (No. 61772417).
Abstract: Because behavior recognition is based on video frame sequences, this paper proposes a behavior recognition algorithm that combines a 3D residual convolutional neural network (R3D) and long short-term memory (LSTM). First, the residual module is extended to three dimensions, so that features can be extracted in the time and space domains simultaneously. Second, the size of the pooling-layer window is changed to preserve the integrity of the time-domain features; at the same time, to overcome the difficulty of network training and over-fitting problems, a batch normalization (BN) layer and a dropout layer are added. After that, because the global average pooling (GAP) layer is affected by the size of the feature map and the network cannot be deepened further, a convolution layer and a max-pooling layer are added to the R3D network. Finally, because LSTM can memorize information and extract more abstract temporal features, an LSTM network is introduced into the R3D network. Experimental results show that the R3D+LSTM network achieves a 91% recognition rate on the UCF-101 dataset.
Funding: supported by the National Key Research and Development Program of China under Grant No. 2018YFE0206900; the National Natural Science Foundation of China under Grant No. 61871440; and the CAAI-Huawei MindSpore Open Fund.
Abstract: Tumour segmentation in medical images (especially 3D tumour segmentation) is highly challenging due to the possible similarity between tumours and adjacent tissues, the occurrence of multiple tumours, and variable tumour shapes and sizes. Popular deep learning-based segmentation algorithms generally rely on the convolutional neural network (CNN) and the Transformer. The former cannot extract global image features effectively, while the latter lacks inductive bias and involves complicated computation for 3D volume data. Existing hybrid CNN-Transformer networks provide only limited performance improvement, or even poorer segmentation performance than a pure CNN. To address these issues, a short-term and long-term memory self-attention network is proposed. Firstly, a distinctive self-attention block uses the Transformer to explore the correlation among region features at different levels extracted by the CNN. Then, the memory structure filters and combines this information to exclude similar regions and detect multiple tumours. Finally, multi-layer reconstruction blocks predict the tumour boundaries. Experimental results demonstrate that our method outperforms other methods in terms of subjective visual and quantitative evaluation. Compared with the most competitive method, the proposed method provides Dice (82.4% vs. 76.6%) and 95% Hausdorff distance (HD95) (10.66 vs. 11.54 mm) on KiTS19, as well as Dice (80.2% vs. 78.4%) and HD95 (9.632 vs. 12.17 mm) on LiTS.
Funding: Macao Polytechnic University Grants (RP/FCSD-01/2022, RP/FCA-05/2022); Science and Technology Development Fund of Macao (0105/2022/A).
Abstract: Background: Deep convolutional neural networks have garnered considerable attention in numerous machine learning applications, particularly in visual recognition tasks such as image and video analysis. There is growing interest in applying this technology to diverse applications in medical image analysis. Automated three-dimensional breast ultrasound is a vital tool for detecting breast cancer, and computer-assisted diagnosis software developed with deep learning can effectively assist radiologists in diagnosis. However, the network model is prone to overfitting during training owing to challenges such as insufficient training data. This study attempts to solve the problem caused by small datasets and to improve model detection performance. Methods: We propose a breast cancer detection framework based on deep learning (a transfer learning method based on cross-organ cancer detection) and a contrastive learning method based on the breast imaging reporting and data system (BI-RADS). Results: When using cross-organ transfer learning and BI-RADS-based contrastive learning, the average sensitivity of the model increased by a maximum of 16.05%. Conclusion: Our experiments demonstrate that the parameters and experience of cross-organ cancer detection can be mutually referenced, and that a contrastive learning method based on BI-RADS can improve the detection performance of the model.
Abstract: Lip-reading technology, based on visual speech decoding and automatic speech recognition, offers a promising solution for overcoming communication barriers, particularly for individuals with temporary or permanent speech impairments. However, most Visual Speech Recognition (VSR) research has primarily focused on the English language and general-purpose applications, limiting its practical applicability in medical and rehabilitative settings. This study introduces the first Deep Learning (DL) based lip-reading system for the Italian language designed to assist individuals with vocal cord pathologies in daily interactions, facilitating communication for patients recovering from vocal cord surgeries, whether temporarily or permanently impaired. To ensure relevance and effectiveness in real-world scenarios, a carefully curated vocabulary of twenty-five Italian words was selected, encompassing critical semantic fields such as Needs, Questions, Answers, Emergencies, Greetings, Requests, and Body Parts. These words were chosen to address both essential daily communication and urgent requests for medical assistance. Our approach combines a spatiotemporal Convolutional Neural Network (CNN) with a bidirectional Long Short-Term Memory (BiLSTM) recurrent network and a Connectionist Temporal Classification (CTC) loss function to recognize individual words without requiring predefined word boundaries. The experimental results demonstrate the system's robust performance, reaching an average accuracy of 96.4% in individual word recognition and suggesting that the system is particularly well suited to offering support in constrained clinical and caregiving environments, where quick and reliable communication is critical. In conclusion, the study highlights the importance of developing language-specific, application-driven VSR solutions, particularly for non-English languages with limited linguistic resources. By bridging the gap between deep learning-based lip-reading and real-world clinical needs, this research advances assistive communication technologies, paving the way for more inclusive and medically relevant applications of VSR in rehabilitation and healthcare.
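The role of the CTC loss mentioned above is to let the network emit a symbol per frame without knowing word boundaries; at decoding time, repeated symbols are merged and blank symbols dropped. A minimal greedy-decoding collapse, with an illustrative blank symbol and toy per-frame labels, looks like:

```python
BLANK = "-"  # the CTC blank symbol (representation chosen for illustration)

def ctc_collapse(frames):
    """Greedy CTC decoding rule: merge consecutive repeats, then drop blanks."""
    out = []
    prev = None
    for sym in frames:
        if sym != prev and sym != BLANK:
            out.append(sym)
        prev = sym
    return "".join(out)

# Toy per-frame argmax outputs for a short utterance.
decoded = ctc_collapse(list("--cc-ii--aa-oo--"))
```

Note that a blank between two identical symbols separates them ("aa-a" collapses to "aa", not "a"), which is how CTC represents genuinely doubled letters.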