Journal Articles
119,363 articles found
1. An Unsupervised Online Detection Method for Foreign Objects in Complex Environments
Authors: YANG Xiaoyang, YANG Yanzhu, DENG Haiping. Journal of Donghua University (English Edition), 2026, Issue 1, pp. 140-151 (12 pages)
In modern industrial production, foreign object detection in complex environments is crucial to ensure product quality and production safety. Detection systems based on deep-learning image processing algorithms often face challenges with handling high-resolution images and achieving accurate detection against complex backgrounds. To address these issues, this study employs the PatchCore unsupervised anomaly detection algorithm combined with data augmentation techniques to enhance the system's generalization capability across varying lighting conditions, viewing angles, and object scales. The proposed method is evaluated in a complex industrial detection scenario involving the bogie of an electric multiple unit (EMU). A dataset consisting of complex backgrounds, diverse lighting conditions, and multiple viewing angles is constructed to validate the performance of the detection system in real industrial environments. Experimental results show that the proposed model achieves an average area under the receiver operating characteristic curve (AUROC) of 0.92 and an average F1 score of 0.85. Combined with data augmentation, the proposed model exhibits improvements in AUROC by 0.06 and F1 score by 0.03, demonstrating enhanced accuracy and robustness for foreign object detection in complex industrial settings. In addition, the effects of key factors on detection performance are systematically analyzed, providing practical guidance for parameter selection in real industrial applications.
Keywords: foreign object detection; unsupervised learning; data augmentation; complex environment; bogie; dataset
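The abstract above reports AUROC and F1 as its headline metrics. As a reminder of what those numbers measure, here is a minimal, dependency-free sketch (not the paper's code; the scores and labels below are made up):

```python
# Toy illustration of the two metrics reported for the PatchCore-based detector.
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties count 0.5.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f1(preds, labels):
    """Harmonic mean of precision and recall for binary predictions."""
    tp = sum(1 for p, y in zip(preds, labels) if p and y)
    fp = sum(1 for p, y in zip(preds, labels) if p and not y)
    fn = sum(1 for p, y in zip(preds, labels) if (not p) and y)
    return 2 * tp / (2 * tp + fp + fn)

scores = [0.9, 0.8, 0.3, 0.2]   # hypothetical anomaly scores
labels = [1, 1, 0, 0]           # 1 = foreign object present
print(auroc(scores, labels))                    # -> 1.0 for this toy split
print(f1([s > 0.5 for s in scores], labels))    # -> 1.0
```

On real detector outputs one would normally use a library implementation such as scikit-learn's `roc_auc_score` and `f1_score`; this toy version just makes the definitions concrete.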
2. Ontological exploration of geospatial objects in context
Authors: Mohammad H. VAHIDNIA, Ali A. ALESHEIKH. Geo-Spatial Information Science (SCIE, EI), 2014, Issue 2, pp. 129-138 (10 pages)
Structured study of spatial objects and their relationships leads to a better cognition of geospatial information and creates the concept of context at a higher level of abstraction. This study aims to provide a comprehensive definition of context for geospatial objects. A combination of binary qualitative spatial relationships (i.e., direction, distance, and topological relations) among the members of a set of spatial objects is used accordingly. In addition, by incorporating the general concept of context, obtained from either static data (attributes in a database) or dynamic data (sensors), the compact context of spatial objects is introduced. Our framework for presenting the involved knowledge and conceptions about objects in context is also explored using ontology and description logic, owing to their powerful, integral conceptualization of both spatial and non-spatial relationships. For this purpose, the hierarchies of the main structure and object properties are formed first. The constraints and characteristics of classes (such as subclasses, equivalent classes, and cardinality) and of object properties (such as being functional, transitive, symmetric, asymmetric, inverse functional, or disjoint) are discovered and presented in more detail using the Web Ontology Language in description logic mode. The implementation is then performed within the framework of semantic web and extensible markup language syntaxes. The method ultimately facilitates spatial reasoning through effective querying in a semantic framework using the Pellet reasoner and SPARQL (a recursive acronym for SPARQL Protocol and RDF Query Language).
Keywords: spatial objects' context; geographic information system; qualitative spatial relation; ontology; description logic; OWL
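The abstract describes declaring object properties as symmetric or transitive in OWL so that a reasoner such as Pellet can infer new facts. A toy illustration of that inference pattern, with hypothetical place names and relation names (not the paper's ontology or an actual OWL reasoner):

```python
# Saturate a set of (subject, relation, object) triples under symmetric and
# transitive property declarations, mimicking the simplest reasoner behavior.
def close_relations(facts, symmetric=(), transitive=()):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for s, r, o in list(facts):
            # A symmetric relation licenses the reversed triple.
            if r in symmetric and (o, r, s) not in facts:
                facts.add((o, r, s)); changed = True
        for s1, r1, o1 in list(facts):
            for s2, r2, o2 in list(facts):
                # A transitive relation chains matching triples.
                if r1 == r2 and r1 in transitive and o1 == s2:
                    if (s1, r1, o2) not in facts:
                        facts.add((s1, r1, o2)); changed = True
    return facts

facts = {("Park", "touches", "River"), ("City", "contains", "Park"),
         ("Region", "contains", "City")}
inferred = close_relations(facts, symmetric={"touches"}, transitive={"contains"})
print(("River", "touches", "Park") in inferred)    # symmetric inference: True
print(("Region", "contains", "Park") in inferred)  # transitive inference: True
```

In the actual framework such inferences would come from an OWL reasoner queried via SPARQL rather than hand-rolled saturation.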
3. Transforming Education with Photogrammetry: Creating Realistic 3D Objects for Augmented Reality Applications
Authors: Kaviyaraj Ravichandran, Uma Mohan. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, Issue 1, pp. 185-208 (24 pages)
Augmented reality (AR) is an emerging dynamic technology that effectively supports education across different levels. The increased use of mobile devices has an even greater impact. As the demand for AR applications in education continues to increase, educators actively seek innovative and immersive methods to engage students in learning. However, exploring these possibilities also entails identifying and overcoming existing barriers to optimal educational integration. Concurrently, this surge in demand has prompted the identification of specific barriers, one of which is three-dimensional (3D) modeling. Creating 3D objects for augmented reality education applications can be challenging and time-consuming for educators. To address this, we have developed a pipeline that creates realistic 3D objects from two-dimensional (2D) photographs. Applications for augmented and virtual reality can then utilize these created 3D objects. We evaluated the proposed pipeline based on the usability of the 3D objects and performance metrics. Quantitatively, with 117 respondents, the co-creation team was surveyed with open-ended questions to evaluate the precision of the 3D objects created by the proposed photogrammetry pipeline. We analyzed the survey data using descriptive-analytical methods and found that the proposed pipeline produces 3D models that are rated as accurate when compared to real-world objects, with an average mean score above 8. This study adds new knowledge on creating 3D objects for augmented reality applications using the photogrammetry technique; finally, it discusses potential problems and future research directions for 3D objects in the education sector.
Keywords: augmented reality; education; immersive learning; 3D object creation; photogrammetry; structure from motion
4. Study on Color Difference of Color Reproduction of 3D Objects
Authors: GU Chong, DENG Yi-qiang. 印刷与数字媒体技术研究 (Research on Printing and Digital Media Technology; Peking University Core), 2025, Issue 4, pp. 33-38, 69 (7 pages)
To investigate the applicability to 3D objects of four color difference formulas commonly used in the printing field (CIELAB, CIE94, CMC(1:1), and CIEDE2000), as well as the impact of four standard light sources (D65, D50, A, and TL84) on 3D color difference evaluations, 50 glossy spheres with a diameter of 2 cm were created on the Sailner J400 3D color printing device. These spheres were centered around the five colors recommended by the CIE (gray, red, yellow, green, and blue). Color differences were calculated according to the four formulas, and 111 pairs of experimental samples meeting the CIELAB gray scale color difference requirements (1.0-14.0) were selected. Ten observers, aged between 22 and 27 with normal color vision, participated in this study, using the gray scale method from psychophysical experiments to conduct color difference evaluations under the four light sources, with repeated experiments for each observer. The results indicated that the overall effect of the D65 light source on 3D object color differences was minimal. In contrast, the D50 and A light sources had a significant impact within the small color difference range, while the TL84 light source influenced both large and small color differences considerably. Among the four color difference formulas, CIEDE2000 demonstrated the best predictive performance for color differences in 3D objects, followed by CMC(1:1), CIE94, and CIELAB.
Keywords: color difference formula; 3D objects; light source; gray scale; normalized residual sum of squares
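Of the four formulas compared above, the oldest, CIELAB Delta E*ab, is simply the Euclidean distance in L*a*b* space. A minimal sketch with hypothetical sample values (CIEDE2000, which performed best in the study, adds lightness, chroma, and hue weighting terms omitted here):

```python
import math

# CIELAB Delta E*ab: Euclidean distance between two (L*, a*, b*) triples.
def delta_e_cielab(lab1, lab2):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

gray_1 = (50.0, 0.0, 0.0)   # hypothetical L*, a*, b* measurements
gray_2 = (52.0, 1.0, -1.0)
print(round(delta_e_cielab(gray_1, gray_2), 3))  # -> 2.449
```

The study's 1.0-14.0 sample range is expressed in exactly these CIELAB units.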
5. Global-local feature optimization based RGB-IR fusion object detection on drone view (cited 1 time)
Authors: Zhaodong CHEN, Hongbing JI, Yongquan ZHANG. Chinese Journal of Aeronautics, 2026, Issue 1, pp. 436-453 (18 pages)
Visible and infrared (RGB-IR) fusion object detection plays an important role in security, disaster relief, etc. In recent years, deep-learning-based RGB-IR fusion detection methods have been developing rapidly, but still struggle to deal with the complex and changing scenarios captured by drones, mainly for two reasons: (A) RGB-IR fusion detectors are susceptible to inferior inputs that degrade performance and stability; (B) RGB-IR fusion detectors are susceptible to redundant features that reduce accuracy and efficiency. In this paper, an innovative RGB-IR fusion detection framework based on global-local feature optimization, named GLFDet, is proposed to improve the detection performance and efficiency for drone-captured objects. The key components of GLFDet are a Global Feature Optimization (GFO) module, a Local Feature Optimization (LFO) module, and a Channel Separation Fusion (CSF) module. Specifically, GFO calculates the information content of the input image in the frequency domain and optimizes the features holistically. Then, LFO dynamically selects high-value features and filters out low-value features before fusion, which significantly improves the efficiency of fusion. Finally, CSF fuses the RGB and IR features across the corresponding channels, which avoids rearranging the channel relationships and enhances model stability. Extensive experimental results show that the proposed method achieves the best performance on three popular RGB-IR datasets: DroneVehicle, VEDAI, and LLVIP. In addition, GLFDet is more lightweight than other comparable models, making it more appealing for edge devices such as drones. The code is available at https://github.com/laochen330/GLFDet.
Keywords: object detection; deep learning; RGB-IR fusion; drones; global feature; local feature
6. Transorbital craniocerebral injury caused by metallic foreign objects
Authors: Chongqing Yang, Hongguang Cui, Xiawei Wang, Chenying Yu, Yan Long. World Journal of Emergency Medicine, 2025, Issue 3, pp. 277-279 (3 pages)
Transorbital craniocerebral injury is a relatively rare type of penetrating head injury that poses a significant threat to the ocular and cerebral structures [1]. The clinical prognosis of transorbital craniocerebral injury is closely related to the size, shape, speed, nature, and trajectory of the foreign object, as well as the incidence of central nervous system damage and secondary complications. The foreign objects reported to have caused these injuries are categorized into wooden items, metallic items [2-8], and other materials, which penetrate the intracranial region via five major pathways: the orbital roof (OR), superior orbital fissure (SOF), inferior orbital fissure (IOF), optic canal (OC), and sphenoid wing. Herein, we present eight cases of transorbital craniocerebral injury caused by an unusual metallic foreign body.
Keywords: transorbital craniocerebral injury; ocular and cerebral structures; foreign objects; central nervous system damage; penetrating head injury; metallic foreign objects; clinical prognosis
7. ASL-OOD: Hierarchical Contextual Feature Fusion with Angle-Sensitive Loss for Oriented Object Detection
Authors: Kexin Wang, Jiancheng Liu, Yuqing Lin, Tuo Wang, Zhipeng Zhang, Wanlong Qi, Xingye Han, Runyuan Wen. Computers, Materials & Continua, 2025, Issue 2, pp. 1879-1899 (21 pages)
Detecting oriented targets in remote sensing images amidst complex and heterogeneous backgrounds remains a formidable challenge in the field of object detection. Current frameworks for oriented detection modules are constrained by intrinsic limitations, including excessive computational and memory overheads, discrepancies between predefined anchors and ground-truth bounding boxes, intricate training processes, and feature alignment inconsistencies. To overcome these challenges, we present ASL-OOD (Angle-based SIOU Loss for Oriented Object Detection), a novel, efficient, and robust one-stage framework tailored for oriented object detection. The ASL-OOD framework comprises three core components: the Transformer-based Backbone (TB), the Transformer-based Neck (TN), and the Angle-SIOU (Scylla Intersection over Union) based Decoupled Head (ASDH). By leveraging the Swin Transformer, the TB and TN modules offer several key advantages, such as the capacity to model long-range dependencies, preserve high-resolution feature representations, seamlessly integrate multi-scale features, and enhance parameter efficiency. These improvements empower the model to accurately detect objects across varying scales. The ASDH module further enhances detection performance by incorporating angle-aware optimization based on SIOU, ensuring precise angular consistency and bounding box coherence. This approach effectively harmonizes shape loss and distance loss during optimization, thereby significantly boosting detection accuracy. Comprehensive evaluations and ablation studies on standard benchmark datasets, such as DOTA with a mean Average Precision (mAP) of 80.16%, HRSC2016 with an mAP of 91.07%, MAR20 with an mAP of 85.45%, and UAVDT with an mAP of 39.7%, demonstrate the clear superiority of ASL-OOD over state-of-the-art oriented object detection models. These findings underscore the model's efficacy as an advanced solution for challenging remote sensing object detection tasks.
Keywords: oriented object detection; transformer; deep learning
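The SIOU-based loss described above builds on the plain intersection-over-union term. For orientation (this is not the paper's implementation, which uses oriented boxes and adds angle, shape, and distance terms), here is axis-aligned IoU in a few lines:

```python
# IoU between two axis-aligned boxes given as (x1, y1, x2, y2).
def iou(a, b):
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # -> 0.14285714285714285 (i.e., 1/7)
```

An IoU-based loss is typically `1 - iou(pred, target)` plus the extra penalty terms the abstract names.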
8. Exploration of the Application of Artificial Intelligence Technology in the Transformation of Old Objects
Authors: Tonghuan Zhang, Xinyu Yang, Ying Chen, Qiufan Xie. Journal of Electronic Research and Application, 2025, Issue 2, pp. 51-57 (7 pages)
With the rapid development of technology, artificial intelligence (AI) is increasingly being applied in various fields. In today's context of resource scarcity and the pursuit of sustainable development and resource reuse, the transformation of old objects is particularly important. This article analyzes the current status of old object transformation and the opportunities the internet brings to old objects, and delves into the application of artificial intelligence in old object transformation. The focus is on five aspects: intelligent identification and classification, intelligent evaluation and prediction, automation integration, intelligent design and optimization, and the integration of 3D printing technology. Finally, the process of redesigning an old piece of furniture, such as a wooden desk, through AI technology is described, including the recycling, identification, detection, design, transformation, and final user feedback of the old wooden desk. This illustrates the unlimited potential of the "AI + old object transformation" approach, advocates for strengthening green environmental protection, and drives sustainable development.
Keywords: artificial intelligence (AI); old object transformation; environmental protection
9. Transformer-Driven Multimodal for Human-Object Detection and Recognition for Intelligent Robotic Surveillance
Authors: Aman Aman Ullah, Yanfeng Wu, Shaheryar Najam, Nouf Abdullah Almujally, Ahmad Jalal, Hui Liu. Computers, Materials & Continua, 2026, Issue 4, pp. 1364-1383 (20 pages)
Human object detection and recognition is essential for elderly monitoring and assisted living; however, models relying solely on pose or scene context often struggle in cluttered or visually ambiguous settings. To address this, we present SCENET-3D, a transformer-driven multimodal framework that unifies human-centric skeleton features with scene-object semantics for intelligent robotic vision through a three-stage pipeline. In the first stage, scene analysis, rich geometric and texture descriptors are extracted from RGB frames, including surface-normal histograms, angles between neighboring normals, Zernike moments, directional standard deviation, and Gabor-filter responses. In the second stage, scene-object analysis, non-human objects are segmented and represented using local feature descriptors and complementary surface-normal information. In the third stage, human-pose estimation, silhouettes are processed through an enhanced MoveNet to obtain 2D anatomical keypoints, which are fused with depth information and converted into RGB-based point clouds to construct pseudo-3D skeletons. Features from all three stages are fused and fed into a transformer encoder with multi-head attention to resolve visually similar activities. Experiments on UCLA (95.8%), ETRI-Activity3D (89.4%), and CAD-120 (91.2%) demonstrate that combining pseudo-3D skeletons with rich scene-object fusion significantly improves generalizable activity recognition, enabling safer elderly care, natural human-robot interaction, and robust context-aware robotic perception in real-world environments.
Keywords: human object detection; elderly care; RGB-based pose estimation; scene context analysis; object recognition; Gabor features; point cloud reconstruction
10. Physics-Informed Graph Learning for Shape Prediction in Robot Manipulate of Deformable Linear Objects
Authors: Meixuan Wang, Junliang Wang, Jie Zhang, Xinting Liao, Guojin Li. Chinese Journal of Mechanical Engineering, 2025, Issue 6, pp. 154-165 (12 pages)
Shape prediction of deformable linear objects (DLO) plays a critical role in robotics, medical devices, aerospace, and manufacturing, especially in manipulating objects such as cables, wires, and fibers. Due to the inherent flexibility of DLO and their complex deformation behaviors, such as bending and torsion, it is challenging to predict their dynamic characteristics accurately. Although traditional physical modeling methods can simulate the complex deformation behavior of DLO, their computational cost is high and it is difficult to meet the demands of real-time prediction. In addition, the scarcity of data resources also limits the prediction accuracy of existing models. To solve these problems, a method for fiber shape prediction based on a physics-informed graph neural network (PIGNN) is proposed in this paper. This method combines the powerful expressive capacity of graph neural networks with the strict constraints of physical laws. Specifically, we learn the initial deformation model of the fiber through graph neural networks (GNN) to provide a good initial estimate for the model, which helps alleviate the problem of data resource scarcity. During the training process, we incorporate physical prior knowledge of the fiber's dynamic deformation into the loss function as a constraint, which is then fed back to the network model. This ensures that the predicted shape of the fiber gradually approaches the true target shape, effectively solving the complex nonlinear behavior prediction problem of deformable linear objects. Experimental results demonstrate that, compared to traditional methods, the proposed method significantly reduces execution time and prediction error when handling the complex deformations of deformable fibers. This showcases its potential application value and superiority in fiber manipulation.
Keywords: deformable linear objects; fiber; physics-informed graph neural network (PIGNN); shape prediction
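The abstract's core training idea, adding a physics-residual penalty to the data loss, can be sketched in a few lines. The function name, weighting, and toy residual below are illustrative assumptions, not the paper's actual formulation:

```python
# Physics-informed loss sketch: mean-squared data error plus a weighted
# mean-squared physics residual (e.g., violation of a bending constraint).
def physics_informed_loss(pred, target, physics_residual, weight=0.1):
    data_loss = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    physics_loss = sum(r ** 2 for r in physics_residual) / len(physics_residual)
    return data_loss + weight * physics_loss

pred, target = [1.0, 2.0], [1.0, 2.5]   # hypothetical predicted vs. true node positions
residual = [0.2, -0.1]                  # hypothetical physics-constraint violations
print(physics_informed_loss(pred, target, residual))
```

In the actual PIGNN the data term would come from GNN-predicted node positions and the residual from the DLO's dynamic deformation equations.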
11. Implementing Convolutional Neural Networks to Detect Dangerous Objects in Video Surveillance Systems
Authors: Carlos Rojas, Cristian Bravo, Carlos Enrique Montenegro-Marín, Rubén González-Crespo. Computers, Materials & Continua, 2025, Issue 12, pp. 5489-5507 (19 pages)
The increasing prevalence of violent incidents in public spaces has created an urgent need for intelligent surveillance systems capable of detecting dangerous objects in real time. While traditional video surveillance relies on human monitoring, this approach suffers from limitations such as fatigue and delayed response times. This study addresses these challenges by developing an automated detection system using advanced deep learning techniques to enhance public safety. Our approach leverages state-of-the-art convolutional neural networks (CNNs), specifically You Only Look Once version 4 (YOLOv4) and EfficientDet, for real-time object detection. The system was trained on a comprehensive dataset of over 50,000 images, enhanced through data augmentation techniques to improve robustness across varying lighting conditions and viewing angles. Cloud-based deployment on Amazon Web Services (AWS) ensured scalability and efficient processing. Experimental evaluations demonstrated high performance, with YOLOv4 achieving 92% accuracy and processing images in 0.45 s, while EfficientDet reached 93% accuracy with a slightly longer processing time of 0.55 s per image. Field tests in high-traffic environments such as train stations and shopping malls confirmed the system's reliability, with a false alarm rate of only 4.5%. The integration of automatic alerts enabled rapid security responses to potential threats. The proposed CNN-based system provides an effective solution for real-time detection of dangerous objects in video surveillance, significantly improving response times and public safety. While YOLOv4 proved more suitable for speed-critical applications, EfficientDet offered marginally better accuracy. Future work will focus on optimizing the system for low-light conditions and further reducing false positives. This research contributes to the advancement of AI-driven surveillance technologies, offering a scalable framework adaptable to various security scenarios.
Keywords: automatic detection of objects; convolutional neural networks; deep learning; real-time image processing; video surveillance systems; automatic alerts
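One-stage detectors such as YOLOv4, mentioned above, end with a non-maximum suppression (NMS) step that collapses overlapping raw detections into one box per object. A framework-free sketch of greedy NMS (boxes, scores, and the 0.5 threshold are illustrative, not from the paper):

```python
# Greedy non-maximum suppression over axis-aligned boxes (x1, y1, x2, y2).
def nms(boxes, scores, iou_thresh=0.5):
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        ua = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
        return inter / ua if ua else 0.0
    # Visit detections from highest to lowest confidence; keep a detection
    # only if it does not overlap an already-kept one too strongly.
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the two overlapping boxes collapse to one kept index
```

Production systems use the batched GPU implementations shipped with the detection framework; the logic is the same.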
12. Semantic segmentation of camouflage objects via fusing reconstructed multispectral and RGB images
Authors: Feng Huang, Gonghan Yang, Jing Chen, Yixuan Xu, Jingze Su, Guimin Huang, Shu Wang, Wenxi Liu. Defence Technology, 2025, Issue 8, pp. 324-337 (14 pages)
Accurate segmentation of camouflage objects in aerial imagery is vital for improving the efficiency of UAV-based reconnaissance and rescue missions. However, camouflage object segmentation is increasingly challenging due to advances in both camouflage materials and biological mimicry. Although multispectral-RGB based technology shows promise, conventional dual-aperture multispectral-RGB imaging systems are constrained by imprecise and time-consuming registration and fusion across different modalities, limiting their performance. Here, we propose the Reconstructed Multispectral-RGB Fusion Network (RMRF-Net), which reconstructs RGB images into multispectral ones, enabling efficient multimodal segmentation using only an RGB camera. Specifically, RMRF-Net employs a divergent-similarity feature correction strategy to minimize reconstruction errors and includes an efficient boundary-aware decoder to enhance object contours. Notably, we establish the first real-world aerial multispectral-RGB semantic segmentation dataset of camouflage objects, covering 11 object categories. Experimental results demonstrate that RMRF-Net outperforms existing methods, achieving 17.38 FPS on the NVIDIA Jetson AGX Orin with only a 0.96% drop in mIoU compared to the RTX 3090, showing its practical applicability in multimodal remote sensing.
Keywords: camouflage object detection; reconstructed multispectral image (MSI); unmanned aerial vehicle (UAV); semantic segmentation; remote sensing
13. An intelligent detection method for directional bolt hole objects of shield tunnel lining structures
Authors: Yiding Ma, Dechun Lu, Fanchao Kong, Tao Tian, Dongmei Zhang, Xiuli Du. Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 12, pp. 7555-7569 (15 pages)
Most image-based object detection methods employ horizontal bounding boxes (HBBs) to capture objects in tunnel images. However, these bounding boxes often fail to effectively enclose objects oriented in arbitrary directions, resulting in reduced accuracy and suboptimal detection performance. Moreover, HBBs cannot provide directional information for rotated objects. This study proposes a rotated detection method for identifying apparent defects in shield tunnels. Specifically, the oriented region-convolutional neural network (oriented R-CNN) is utilized to detect rotated objects in tunnel images. To enhance feature extraction, a novel hybrid backbone combining CNN-based networks with Swin Transformers is proposed. A feature fusion strategy is employed to integrate features extracted from both networks. Additionally, a neck network based on the bidirectional feature pyramid network (Bi-FPN) is designed to combine multi-scale object features. The bolt hole dataset is curated to evaluate the efficacy of the proposed method. In addition, a dedicated pre-processing approach is developed for large-sized images to accommodate the rotated, dense, and small-scale characteristics of objects in tunnel images. Experimental results demonstrate that the proposed method achieves a more than 4% improvement in mAP50-95 compared to other rotated detectors and a 6.6%-12.7% improvement over mainstream horizontal detectors. Furthermore, the proposed method outperforms mainstream methods by 6.5%-14.7% in detecting leakage bolt holes, underscoring its significant engineering applicability.
Keywords: apparent defects of shield tunnels; rotated object detection; Swin Transformer; oriented region-convolutional neural network (oriented R-CNN)
14. A Blockchain-Based Efficient Verification Scheme for Context Semantic-Aware Ciphertext Retrieval
Authors: Haochen Bao, Lingyun Yuan, Tianyu Xie, Han Chen, Hui Dai. Computers, Materials & Continua, 2026, Issue 1, pp. 550-579 (30 pages)
In the age of big data, ensuring data privacy while enabling efficient encrypted data retrieval has become a critical challenge. Traditional searchable encryption schemes face difficulties in handling complex semantic queries. Additionally, they typically rely on honest-but-curious cloud servers, which introduces the risk of repudiation. Furthermore, the combined operations of search and verification increase system load, thereby reducing performance. Traditional verification mechanisms, which rely on complex hash constructions, suffer from low verification efficiency. To address these challenges, this paper proposes a blockchain-based contextual semantic-aware ciphertext retrieval scheme with efficient verification. Building on existing single- and multi-keyword search methods, the scheme uses vector models to semantically train the dataset, enabling it to retain semantic information and achieve context-aware encrypted retrieval, significantly improving search accuracy. Additionally, a blockchain-based updatable master-slave chain storage model is designed, where the master chain stores encrypted keyword indexes and the slave chain stores verification information generated by zero-knowledge proofs, thus balancing system load while improving search and verification efficiency. Finally, an improved non-interactive zero-knowledge proof mechanism is introduced, reducing the computational complexity of verification and ensuring efficient validation of search results. Experimental results demonstrate that the proposed scheme offers stronger security, balanced overhead, and higher search verification efficiency.
Keywords: searchable encryption; blockchain; context semantic awareness; zero-knowledge proof
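For background on the verification step the scheme accelerates: the simplest classical check has the server return search results together with a digest the client can recompute. The sketch below shows only that hash-based baseline using Python's hashlib; it is not the paper's zero-knowledge mechanism, and the result identifiers are hypothetical:

```python
import hashlib

# Order-independent SHA-256 digest over a result set, so the client can
# detect a tampered or truncated result list returned by the server.
def digest(results):
    h = hashlib.sha256()
    for r in sorted(results):   # sorting makes the digest order-independent
        h.update(r.encode("utf-8"))
    return h.hexdigest()

server_results = ["doc17", "doc42"]
proof = digest(server_results)
print(digest(["doc42", "doc17"]) == proof)  # reordered results still verify: True
print(digest(["doc42"]) == proof)           # a dropped result is detected: False
```

Zero-knowledge proofs go further: they let the client verify correctness without the server revealing the underlying index, which is what motivates the paper's improved non-interactive construction.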
15. GLMCNet: A Global-Local Multiscale Context Network for High-Resolution Remote Sensing Image Semantic Segmentation
Authors: Yanting Zhang, Qiyue Liu, Chuanzhao Tian, Xuewen Li, Na Yang, Feng Zhang, Hongyue Zhang. Computers, Materials & Continua, 2026, Issue 1, pp. 2086-2110 (25 pages)
High-resolution remote sensing images (HRSIs) are now an essential data source for gathering surface information due to advancements in remote sensing data capture technologies. However, their significant scale changes and wealth of spatial details pose challenges for semantic segmentation. While convolutional neural networks (CNNs) excel at capturing local features, they are limited in modeling long-range dependencies. Conversely, transformers utilize multi-head self-attention to integrate global context effectively, but this approach often incurs a high computational cost. This paper proposes a global-local multiscale context network (GLMCNet) to extract both global and local multiscale contextual information from HRSIs. A detail-enhanced filtering module (DEFM) is proposed at the end of the encoder to further refine the encoder outputs, thereby enhancing the key details extracted by the encoder and effectively suppressing redundant information. In addition, a global-local multiscale transformer block (GLMTB) is proposed in the decoding stage to enable the modeling of rich multiscale global and local information. We also design a stair fusion mechanism to progressively transmit deep semantic information from deep to shallow layers. Finally, we propose the semantic awareness enhancement module (SAEM), which further enhances the representation of multiscale semantic features through spatial attention and covariance channel attention. Extensive ablation analyses and comparative experiments were conducted to evaluate the performance of the proposed method. Specifically, our method achieved a mean Intersection over Union (mIoU) of 86.89% on the ISPRS Potsdam dataset and 84.34% on the ISPRS Vaihingen dataset, outperforming existing models such as ABCNet and BANet.
Keywords: multiscale context; attention mechanism; remote sensing images; semantic segmentation
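The mIoU figures quoted above are per-class intersection-over-union averaged across classes. A minimal sketch over flattened label maps (the class values below are hypothetical):

```python
# Mean Intersection over Union for semantic segmentation: per-class IoU over
# two label maps, averaged across the classes that actually appear.
def miou(pred, truth, num_classes):
    ious = []
    for c in range(num_classes):
        inter = sum(p == c and t == c for p, t in zip(pred, truth))
        union = sum(p == c or t == c for p, t in zip(pred, truth))
        if union:  # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)

pred  = [0, 0, 1, 1, 2, 2]   # hypothetical flattened prediction map
truth = [0, 0, 1, 2, 2, 2]   # hypothetical flattened ground truth
print(round(miou(pred, truth, 3), 4))  # -> 0.7222
```

On real imagery the same computation runs over per-pixel confusion-matrix counts accumulated across the whole test set.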
TSMixerE: Entity Context-Aware Method for Static Knowledge Graph Completion
16
Authors: Jianzhong Chen, Yunsheng Xu, Zirui Guo, Tianmin Liu, Ying Pan. Computers, Materials & Continua, 2026, Issue 4, pp. 2207-2230 (24 pages)
The rapid development of information technology and accelerated digitalization have led to an explosive growth of data across various fields. As a key technology for knowledge representation and sharing, knowledge graphs play a crucial role by constructing structured networks of relationships among entities. However, data sparsity and numerous unexplored implicit relations result in the widespread incompleteness of knowledge graphs. In static knowledge graph completion, most existing methods rely on linear operations or simple interaction mechanisms for triple encoding, making it difficult to fully capture the deep semantic associations between entities and relations. Moreover, many methods focus only on the local information of individual triples, ignoring the rich semantic dependencies embedded in the neighboring nodes of entities within the graph structure, which leads to incomplete embedding representations. To address these challenges, we propose Two-Stage Mixer Embedding (TSMixerE), a static knowledge graph completion method based on entity context. In the unit semantic extraction stage, TSMixerE leverages multi-scale circular convolution to capture local features at multiple granularities, enhancing the flexibility and robustness of feature interactions. A channel attention mechanism amplifies key channel responses to suppress noise and irrelevant information, thereby improving the discriminative power and semantic depth of feature representations. For contextual information fusion, a multi-layer self-attention mechanism enables deep interactions among contextual cues, effectively integrating local details with global context. Simultaneously, type embeddings clarify the semantic identities and roles of each component, enhancing the model's sensitivity and fusion capabilities for diverse information sources. Furthermore, TSMixerE constructs contextual unit sequences for entities, fully exploring neighborhood information within the graph structure to model complex semantic dependencies, thus improving the completeness and generalization of embedding representations.
Keywords: knowledge graph; knowledge graph completion; convolutional neural network; feature interaction; context
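The circular (wrap-around) convolution named in this abstract is a common interaction operator in embedding-based KG completion. A minimal plain-Python sketch; the kernel sizes and values are illustrative assumptions, not the paper's configuration:

```python
# Sketch of 1-D circular convolution over an embedding vector:
# indices wrap around, so every position sees a full-size window.

def circular_conv(x, kernel):
    """Circular convolution of vector x with a kernel (wrap-around indexing)."""
    n, k = len(x), len(kernel)
    return [sum(kernel[j] * x[(i + j) % n] for j in range(k)) for i in range(n)]

def multi_scale(x, kernels):
    """Apply several kernel sizes ('scales') and concatenate the outputs."""
    out = []
    for kern in kernels:
        out.extend(circular_conv(x, kern))
    return out

emb = [1.0, 2.0, 3.0, 4.0]
print(circular_conv(emb, [1.0, 1.0]))  # last element pairs with the first
```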
Hybrid Quantum Gate Enabled CNN Framework with Optimized Features for Human-Object Detection and Recognition
17
Authors: Nouf Abdullah Almujally, Tanvir Fatima Naik Bukht, Shuaa S. Alharbi, Asaad Algarni, Ahmad Jalal, Jeongmin Park. Computers, Materials & Continua, 2026, Issue 4, pp. 2254-2271 (18 pages)
Recognising human-object interactions (HOI) is a challenging task for traditional machine learning models, including convolutional neural networks (CNNs). Existing models show limited transferability across complex datasets such as D3D-HOI and SYSU 3D HOI. The conventional architecture of CNNs restricts their ability to handle HOI scenarios with high complexity. HOI recognition requires improved feature extraction methods to overcome the current limitations in accuracy and scalability. This work proposes a novel quantum gate-enabled hybrid CNN (QEH-CNN) for effective HOI recognition. The model enhances CNN performance by integrating quantum computing components. The framework begins with bilateral image filtering, followed by multi-object tracking (MOT) and Felzenszwalb superpixel segmentation. A watershed algorithm refines object boundaries by cleaning merged superpixels. Feature extraction combines a histogram of oriented gradients (HOG), Global Image Statistics for Texture (GIST) descriptors, and a novel 23-joint keypoint extraction method using relative joint angles and joint proximity measures. A fuzzy optimization process refines the extracted features before feeding them into the QEH-CNN model. The proposed model achieves 95.06% accuracy on the D3D-HOI dataset and 97.29% on the SYSU 3D HOI dataset. The integration of quantum computing enhances feature optimization, leading to improved accuracy and overall model efficiency.
Keywords: pattern recognition; image segmentation; computer vision; object detection
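The "relative joint angle" and "joint proximity" features mentioned for the 23-joint keypoint representation can be sketched as follows. The joint coordinates are invented for illustration; the real pipeline would read them from a pose estimator:

```python
import math

# Sketch of two per-joint features: the angle at a joint formed by its
# neighbouring segments, and the Euclidean proximity between two joints.

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) between segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def proximity(a, b):
    """Euclidean distance between two joints."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

shoulder, elbow, wrist = (0.0, 0.0), (1.0, 0.0), (1.0, 1.0)
print(joint_angle(shoulder, elbow, wrist))  # right angle at the elbow
print(proximity(shoulder, wrist))
```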
Enhanced Multi-Scale Feature Extraction Lightweight Network for Remote Sensing Object Detection
18
Authors: Xiang Luo, Yuxuan Peng, Renghong Xie, Peng Li, Yuwen Qian. Computers, Materials & Continua, 2026, Issue 3, pp. 2097-2118 (22 pages)
Deep learning has made significant progress in the field of oriented object detection for remote sensing images. However, existing methods still face challenges with difficult tasks such as multi-scale targets, complex backgrounds, and small objects. Keeping the model lightweight to address resource constraints in remote sensing scenarios while improving task performance remains a research hotspot. Therefore, we propose an enhanced multi-scale feature extraction lightweight network, EM-YOLO, based on the YOLOv8s architecture and specifically optimized for the large target scale variations, diverse orientations, and numerous small objects characteristic of remote sensing images. Our innovations lie in two main aspects. First, a dynamic snake convolution (DSC) is introduced into the backbone network to enhance the model's feature extraction capability for oriented targets. Second, an innovative focusing-diffusion module is designed in the feature fusion neck to effectively integrate multi-scale feature information. Finally, we introduce the Layer-Adaptive Sparsity for magnitude-based Pruning (LASP) method to perform lightweight network pruning for resource-constrained scenarios. Experimental results on the lightweight Orin platform demonstrate that the proposed method significantly outperforms the original YOLOv8s model in oriented remote sensing object detection tasks, and achieves comparable or superior performance to state-of-the-art methods on three authoritative remote sensing datasets (DOTA v1.0, DOTA v1.5, and HRSC2016).
Keywords: deep learning; object detection; feature extraction; feature fusion; remote sensing
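Magnitude-based pruning, the family LASP belongs to, zeroes out each layer's smallest-magnitude weights at a per-layer sparsity ratio. A minimal sketch with toy weights and ratios that are assumptions, not EM-YOLO's actual settings or LASP's ratio-selection rule:

```python
# Sketch of per-layer magnitude pruning: the `sparsity` fraction of
# smallest-|w| weights in a layer is set to zero. Layer-adaptive schemes
# such as LASP additionally choose a different sparsity per layer.

def prune_layer(weights, sparsity):
    """Zero out roughly the `sparsity` fraction of smallest-magnitude weights."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    # Note: ties at the threshold may prune slightly more than k weights.
    return [0.0 if abs(w) <= threshold else w for w in weights]

layer = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
print(prune_layer(layer, 0.5))  # -> [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```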
FMCSNet: Mobile Devices-Oriented Lightweight Multi-Scale Object Detection via Fast Multi-Scale Channel Shuffling Network Model
19
Authors: Lijuan Huang, Xianyi Liu, Jinping Liu, Pengfei Xu. Computers, Materials & Continua, 2026, Issue 1, pp. 1292-1311 (20 pages)
The ubiquity of mobile devices has driven advancements in mobile object detection. However, challenges in multi-scale object detection in open, complex environments persist due to limited computational resources. Traditional approaches like network compression, quantization, and lightweight design often sacrifice accuracy or feature representation robustness. This article introduces the Fast Multi-scale Channel Shuffling Network (FMCSNet), a novel lightweight detection model optimized for mobile devices. FMCSNet integrates a fully convolutional Multilayer Perceptron (MLP) module, offering global perception without significantly increasing parameters and effectively bridging the gap between CNNs and Vision Transformers. FMCSNet achieves a delicate balance between computation and accuracy mainly through two key modules: the ShiftMLP module, comprising a shift operation and an MLP module, and a Partial group Convolutional (PGConv) module, which reduces computation while enhancing information exchange between channels. With a computational complexity of 1.4G FLOPs and 1.3M parameters, FMCSNet outperforms CNN-based and DWConv-based ShuffleNetv2 by 1% and 4.5% mAP on the Pascal VOC 2007 dataset, respectively. Additionally, FMCSNet achieves a mAP of 30.0 (0.5:0.95 IoU threshold) with only 2.5G FLOPs and 2.0M parameters. It reaches 32 FPS on low-performance i5-series CPUs, meeting real-time detection requirements. The adaptability of the PGConv module across scenarios further highlights FMCSNet as a promising solution for real-time mobile object detection.
Keywords: object detection; lightweight network; partial group convolution; multilayer perceptron
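The channel-shuffle operation that FMCSNet's name refers to (popularized by ShuffleNet) interleaves channels across groups so that grouped convolutions can exchange information. A minimal sketch over a flat channel list; real implementations perform the same reshape-transpose-flatten on 4-D tensors:

```python
# Sketch of channel shuffle: split channels into `groups`, then interleave
# so each output group mixes one channel from every input group.

def channel_shuffle(channels, groups):
    """Reshape to (groups, n), transpose, and flatten back to one list."""
    n = len(channels) // groups
    grouped = [channels[g * n:(g + 1) * n] for g in range(groups)]
    return [grouped[g][i] for i in range(n) for g in range(groups)]

# Three groups of two channels each ("a", "b", "c" mark the source group).
print(channel_shuffle(["a0", "a1", "b0", "b1", "c0", "c1"], 3))
```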
A Comprehensive Literature Review on YOLO-Based Small Object Detection: Methods, Challenges, and Future Trends
20
Authors: Hui Yu, Jun Liu, Mingwei Lin. Computers, Materials & Continua, 2026, Issue 4, pp. 258-309 (52 pages)
Small object detection has been a focus of attention since the emergence of deep learning-based object detection. Although classical object detection frameworks have made significant contributions to the development of object detection, there are still many issues to be resolved in detecting small objects due to the inherent complexity and diversity of real-world visual scenes. In particular, the YOLO (You Only Look Once) series of detection models, renowned for their real-time performance, have undergone numerous adaptations aimed at improving the detection of small targets. In this survey, we summarize the state-of-the-art YOLO-based small object detection methods. This review presents a systematic categorization of YOLO-based approaches for small-object detection, organized into four methodological avenues: attention-based feature enhancement, detection-head optimization, loss function design, and multi-scale feature fusion strategies. We then examine the principal challenges addressed by each category. Finally, we analyze the performance of these methods on public benchmarks and, by comparing current approaches, identify limitations and outline directions for future research.
Keywords: small object detection; YOLO; real-time detection; feature fusion; deep learning
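The loss-function avenue this survey covers mostly builds on box Intersection over Union (IoU), which also underlies the mAP benchmarks the review compares. A minimal axis-aligned IoU sketch with invented toy boxes:

```python
# Sketch of IoU between two axis-aligned boxes given as (x1, y1, x2, y2).
# IoU-based losses (GIoU, DIoU, etc.) extend this basic overlap ratio.

def box_iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # partial overlap -> 1/7
```

Small-object detection is sensitive to this ratio: for tiny boxes a localization error of a few pixels collapses the IoU, which is one reason specialized losses are surveyed here.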