Journal Articles — 2 results found
1. Binary Cyclic Codes and Minimal Codewords
Author: Selda Calkavur. Computer Technology and Application, 2013, Issue 9, pp. 486-489 (4 pages)
Cyclic codes form an important class of codes with a very interesting algebraic structure. Furthermore, many important codes, such as binary Hamming codes, Golay codes, and BCH codes, are equivalent to cyclic codes. Minimal codewords in linear codes are widely used in constructing decoding algorithms and in studying linear secret sharing schemes. In this paper, we show that in the binary cyclic code, all codewords except 0 and 1 are minimal. We then obtain a result about the number of minimal codewords in binary cyclic codes.
Keywords: linear code, cyclic code, binary cyclic code, generator polynomial, minimal codeword, secret sharing
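The abstract's claim is easy to check computationally on a small instance. Below is a minimal Python sketch (not from the paper) that builds the [7,4] binary Hamming code as a cyclic code from its generator polynomial g(x) = 1 + x + x^3, which divides x^7 + 1, and verifies that every nonzero codeword except the all-ones word 1 is minimal; all function names here are illustrative.

```python
from itertools import product

def gf2_cyclic_mul(m, g, n):
    """Multiply binary polynomials m(x)*g(x) over GF(2), reducing mod x^n + 1."""
    c = [0] * n
    for i, mi in enumerate(m):
        if mi:
            for j, gj in enumerate(g):
                if gj:
                    c[(i + j) % n] ^= 1
    return tuple(c)

def cyclic_code(g, n, k):
    """All 2^k codewords m(x)g(x) of the length-n cyclic code generated by g(x)."""
    return {gf2_cyclic_mul(msg, g, n) for msg in product([0, 1], repeat=k)}

def support(c):
    return {i for i, bit in enumerate(c) if bit}

def is_minimal(c, code):
    """A nonzero binary codeword is minimal if no other nonzero codeword's
    support is strictly contained in its own support."""
    s = support(c)
    return not any(any(d) and support(d) < s for d in code)

# The [7,4] binary Hamming code as a cyclic code: g(x) = 1 + x + x^3.
n, k = 7, 4
g = (1, 1, 0, 1)                       # coefficients of g(x), lowest degree first
code = cyclic_code(g, n, k)

ones = tuple([1] * n)
non_minimal = [c for c in code if any(c) and not is_minimal(c, code)]
print(ones in code)                    # True: the all-ones word is a codeword
print(non_minimal == [ones])           # True: every codeword except 0 and 1 is minimal
```

The all-ones word fails minimality because its support contains every other codeword's support, while the zero word is excluded by definition, matching the "except 0 and 1" statement in the abstract.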
2. Robust Multi-Label Cartoon Character Classification on the Novel Kral Sakir Dataset Using Deep Learning Techniques
Authors: Candan Tumer, Erdal Guvenoglu, Volkan Tunali. Computers, Materials & Continua, 2025, Issue 12, pp. 5135-5158 (24 pages)
Automated cartoon character recognition is crucial for applications in content indexing, filtering, and copyright protection, yet it faces a significant challenge in animated media due to high intra-class visual variability, where characters frequently alter their appearance. To address this problem, we introduce the novel Kral Sakir dataset, a public benchmark of 16,725 images specifically curated for the task of multi-label cartoon character classification under these varied conditions. This paper conducts a comprehensive benchmark study, evaluating the performance of state-of-the-art pretrained Convolutional Neural Networks (CNNs), including DenseNet, ResNet, and VGG, against a custom baseline model trained from scratch. Our experiments, evaluated using F1-score, accuracy, and Area Under the ROC Curve (AUC), demonstrate that fine-tuning pretrained models is a highly effective strategy. The best-performing model, DenseNet121, achieved an F1-score of 0.9890 and an accuracy of 0.9898, significantly outperforming our baseline CNN (F1-score of 0.9545). The findings validate the power of transfer learning for this domain and establish a strong performance benchmark. The introduced dataset provides a valuable resource for future research into developing robust and accurate character recognition systems.
Keywords: cartoon character recognition, multi-label classification, deep learning, transfer learning, predictive modelling, artificial intelligence-enhanced (AI-enhanced) systems, Kral Sakir dataset
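The abstract does not spell out the training setup, but the multi-label fine-tuning it describes can be sketched in a few lines. Below is a hedged PyTorch/torchvision illustration of the general approach: replace DenseNet121's ImageNet classifier with one logit per character and train with a per-label sigmoid loss. NUM_CHARACTERS and the hyperparameters are hypothetical placeholders, not values from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CHARACTERS = 10   # hypothetical label count; the paper's exact number is not given here

# Start from ImageNet-pretrained DenseNet121 and swap in a multi-label head:
# one independent logit per character, since several characters may appear
# in the same image (multi-label, not mutually exclusive multi-class).
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CHARACTERS)

# BCEWithLogitsLoss applies a sigmoid to each logit independently,
# the standard loss for multi-label classification.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One fine-tuning step; labels is a float tensor of 0/1 flags per character."""
    optimizer.zero_grad()
    logits = model(images)            # shape: (batch, NUM_CHARACTERS)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# At inference, threshold each sigmoid output independently (0.5 is a common default).
model.eval()
with torch.no_grad():
    dummy = torch.randn(1, 3, 224, 224)            # DenseNet's usual input size
    predicted = torch.sigmoid(model(dummy)) > 0.5  # boolean flag per character
```

The same head swap carries over to the other backbones the paper benchmarks: ResNet exposes its final layer as model.fc and VGG as the last entry of model.classifier.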