Journal Articles
13,476 articles found
1. A Fast and Effective Multiple Kernel Clustering Method on Incomplete Data (Cited: 1)
Authors: Lingyun Xiang, Guohan Zhao, Qian Li, Gwang-Jun Kim, Osama Alfarraj, Amr Tolba. Computers, Materials & Continua (SCIE, EI), 2021, No. 4, pp. 267-284.
Multiple kernel clustering is an unsupervised data analysis method used in scenarios where data are easy to collect but hard to label. However, multiple kernel clustering on incomplete data is a critical yet challenging task. Although existing absent multiple kernel clustering methods have achieved remarkable performance on this task, they may fail when the data have a high value-missing rate, and they can easily fall into a local optimum. To address these problems, this paper proposes an absent multiple kernel clustering (AMKC) method for incomplete data. AMKC first clusters the initialized incomplete data. It then constructs a new multiple-kernel-based data space, referred to as K-space, from multiple sources to learn kernel combination coefficients. Finally, it seamlessly integrates an incomplete-kernel-imputation objective, a multiple-kernel-learning objective, and a kernel-clustering objective to achieve absent multiple kernel clustering. The three stages are carried out simultaneously until the convergence condition is met. Experiments on six datasets with various characteristics demonstrate that the kernel imputation and clustering performance of the proposed method is significantly better than that of state-of-the-art competitors, while also converging quickly.
Keywords: multiple kernel clustering; absent-kernel imputation; incomplete data; kernel k-means clustering
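The kernel-clustering objective that AMKC integrates builds on kernel k-means, which assigns points using feature-space distances computed purely from kernel entries. A minimal NumPy sketch of plain kernel k-means on a single complete kernel (illustrative only: the RBF kernel, anchor-based seeding, and all parameters are our assumptions, not the AMKC algorithm):

```python
import numpy as np

def kernel_kmeans(K, k, labels, iters=50):
    """Kernel k-means on a precomputed PSD kernel matrix K (n x n).

    Feature-space distance to a cluster mean needs only kernel entries:
    ||phi(x_i) - m_c||^2 = K_ii - 2*mean_j K_ij + mean_jl K_jl (j, l in c).
    """
    n = K.shape[0]
    for _ in range(iters):
        D = np.empty((n, k))
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:            # keep empty clusters out of argmin
                D[:, c] = np.inf
                continue
            # the constant K_ii term is dropped: it does not affect argmin
            D[:, c] = -2.0 * K[:, idx].mean(axis=1) + K[np.ix_(idx, idx)].mean()
        new = D.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# Demo: two far-apart Gaussian blobs under an RBF kernel; seed the
# initial labels from two anchor rows of K (points 0 and 39).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)
init = np.argmax(K[[0, 39]], axis=0)
labels = kernel_kmeans(K, 2, init)
```

AMKC additionally learns the kernel combination and imputes missing kernel entries inside the same loop; the sketch above is only the clustering step on one complete kernel.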
2. DATA PREPROCESSING AND RE KERNEL CLUSTERING FOR LETTER
Authors: Zhu Changming, Gao Daqi. Journal of Electronics (China), 2014, No. 6, pp. 552-564.
Many classifiers and methods have been proposed to deal with the letter recognition problem. Among them, clustering is a widely used method, but a single round of clustering is not adequate. Here, we adopt data preprocessing and a re kernel clustering method to tackle the letter recognition problem. To validate the effectiveness and efficiency of the proposed method, we introduce re kernel clustering into Kernel Nearest Neighbor classification (KNN), Radial Basis Function Neural Network (RBFNN), and Support Vector Machine (SVM). Furthermore, we compare re kernel clustering with one-time kernel clustering, denoted kernel clustering for short. Experimental results validate that re kernel clustering forms fewer and more feasible kernels and attains higher classification accuracy.
Keywords: data preprocessing; kernel clustering; Kernel Nearest Neighbor (KNN); re kernel clustering
3. Identification of reservoir types in deep carbonates based on mixed-kernel machine learning using geophysical logging data
Authors: Jin-Xiong Shi, Xiang-Yuan Zhao, Lian-Bo Zeng, Yun-Zhao Zhang, Zheng-Ping Zhu, Shao-Qun Dong. Petroleum Science (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 1632-1648.
Identification of reservoir types in deep carbonates has always been a great challenge due to complex logging responses caused by the heterogeneous scale and distribution of storage spaces. Traditional cross-plot analysis and empirical-formula methods for identifying reservoir types from geophysical logging data have high uncertainty and low efficiency, and cannot accurately capture the nonlinear relationship between reservoir types and logging data. Recently, kernel Fisher discriminant analysis (KFD), a kernel-based machine learning technique, has attracted attention in many fields because of its strong nonlinear processing ability. However, the overall performance of a KFD model may be limited because a single kernel function cannot simultaneously extrapolate and interpolate well, especially for highly complex data. To address this issue, this study establishes a mixed kernel Fisher discriminant analysis (MKFD) model and applies it to identify reservoir types of the deep Sinian carbonates in the central Sichuan Basin, China. The MKFD model was trained and tested with 453 datasets from 7 coring wells, using GR, CAL, DEN, AC, CNL and RT logs as input variables. Particle swarm optimization (PSO) was adopted for hyper-parameter optimization of the MKFD model. To evaluate model performance, prediction results of MKFD were compared with those of basic-kernel KFD, RF and SVM models. The built MKFD model was then applied in a blind-well test, and a variable-importance analysis was conducted. The comparison and blind-test results demonstrate that MKFD outperformed traditional KFD, RF and SVM in the identification of reservoir types, providing higher accuracy and stronger generalization. MKFD can therefore be a reliable method for identifying reservoir types in deep carbonates.
Keywords: reservoir type identification; geophysical logging data; kernel Fisher discriminant analysis; mixed-kernel function; deep carbonates
4. A Quantized Kernel Least Mean Square Scheme with Entropy-Guided Learning for Intelligent Data Analysis (Cited: 5)
Authors: Xiong Luo, Jing Deng, Ji Liu, Weiping Wang, Xiaojuan Ban, Jenq-Haur Wang. China Communications (SCIE, CSCD), 2017, No. 7, pp. 127-136.
The quantized kernel least mean square (QKLMS) algorithm is an effective nonlinear adaptive online learning algorithm that constrains the growth of the network size by quantizing the input space. It can serve as a powerful tool for complex computing in network services and applications. To compress the input and further improve learning performance, this article proposes a novel QKLMS with entropy-guided learning, called EQ-KLMS. Under the consecutive square entropy learning framework, the basic idea of the entropy-guided learning technique is to measure the uncertainty of the input vectors used by QKLMS and delete those with larger uncertainty, which are insignificant or likely to cause learning errors; the dataset is thereby compressed. Consequently, by using square entropy, the learning performance of the proposed EQ-KLMS is improved, with high precision and low computational cost. The proposed EQ-KLMS is validated on a weather-related dataset, and the results demonstrate the desirable performance of our scheme.
Keywords: quantized kernel least mean square (QKLMS); consecutive square entropy; data analysis
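The quantization step that keeps the QKLMS network small is easy to state: each new input either merges into its nearest dictionary center (updating that center's coefficient) or, if no center lies within a threshold, grows the dictionary. A textbook-style sketch of plain QKLMS (not the entropy-guided EQ-KLMS variant; the step size, quantization radius, and kernel width below are illustrative guesses):

```python
import numpy as np

class QKLMS:
    """Quantized Kernel LMS with a Gaussian kernel (a minimal sketch)."""
    def __init__(self, eta=0.6, eps=0.3, sigma=0.7):
        self.eta, self.eps, self.sigma = eta, eps, sigma
        self.centers, self.alphas = [], []

    def _k(self, a, b):
        return np.exp(-np.sum((a - b) ** 2) / (2 * self.sigma ** 2))

    def predict(self, x):
        return sum(a * self._k(c, x) for c, a in zip(self.centers, self.alphas))

    def update(self, x, d):
        e = d - self.predict(x)                     # prediction error
        if not self.centers:
            self.centers.append(x); self.alphas.append(self.eta * e)
            return e
        dists = [np.linalg.norm(x - c) for c in self.centers]
        j = int(np.argmin(dists))
        if dists[j] <= self.eps:                    # quantize: merge into nearest center
            self.alphas[j] += self.eta * e
        else:                                       # otherwise grow the network
            self.centers.append(x); self.alphas.append(self.eta * e)
        return e

# Learn y = sin(x) online; the dictionary stays far smaller than the data.
rng = np.random.default_rng(0)
f = QKLMS()
for x in rng.uniform(-3, 3, 600):
    f.update(np.array([x]), np.sin(x))
```

Because a new center is added only when it is more than eps away from every existing center, the dictionary on a bounded input range has a hard size limit regardless of how many samples stream in.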
5. ID-Based Public Auditing Protocol for Cloud Storage Data Integrity Checking with Strengthened Authentication and Security (Cited: 1)
Authors: Jiang Hong, Xie Mingming, Kang Baoyuan, Li Chunqing, Si Lin. Wuhan University Journal of Natural Sciences (CAS, CSCD), 2018, No. 4, pp. 362-368.
Cloud storage services reduce the burden on data users by storing their data files in the cloud. But the files might be modified in the cloud, so data users want to check file integrity periodically. In a public auditing protocol, a trusted auditor has the ability to help users check the integrity of their data files. With the advantages of no public key management and verification, researchers have recently focused on public auditing protocols in ID-based cryptography. However, some existing protocols are vulnerable to forgery attacks. In this paper, based on ID-based signature technology, and by strengthening information authentication and the computing power of the auditor, we propose an ID-based public auditing protocol for cloud data integrity checking. We also prove that the proposed protocol is secure in the random oracle model under the assumption that the Diffie-Hellman problem is hard. Furthermore, we compare the proposed protocol with two other ID-based auditing protocols in terms of security features, communication efficiency, and computation cost. The comparisons show that the proposed protocol satisfies more security features with lower computation cost.
Keywords: ID-based auditing; data integrity checking; digital signature; security; bilinear map
6. Data Integrity and Risk (Cited: 1)
Author: Sasidhar Duggineni. Open Journal of Optimization, 2023, No. 2, pp. 25-33.
Data integrity is a critical component of data lifecycle management, and its importance increases even more in a complex and dynamic landscape. Actions like unauthorized access, unauthorized modification, data manipulation, audit tampering, data backdating, data falsification, phishing, and spoofing are no longer restricted to rogue individuals; they are also prevalent in organized groups and states. Data security therefore requires strong data integrity measures and the associated technical controls. Without a properly customized framework in place, organizations are exposed to a high risk of financial and reputational losses, lost revenue, bankruptcy, and legal penalties, which we discuss throughout this paper. We also explore some improvised and innovative techniques in product development to better meet the challenges and requirements of data security and integrity.
Keywords: data governance; data integrity; data management; data security; technical controls; regulations
7. An iterative modified kernel based on training data (Cited: 2)
Authors: Zhou Zhixiang, Han Fengqing. Applied Mathematics and Mechanics (English Edition) (SCIE, EI), 2009, No. 1, pp. 121-128.
To improve the performance of support vector regression, a new method for a modified kernel function is proposed. In this method, information from all samples is included in the kernel function through conformal mapping, making the kernel function data-dependent. Starting from a random initial parameter, the kernel function is modified repeatedly until a satisfactory result is achieved. Compared with the conventional model, the improved approach does not need to select kernel-function parameters. Simulations are carried out on a one-dimensional continuous function and on a case of strong earthquakes. The results show that the improved approach has better learning ability and forecasting precision than the traditional model. As the number of iterations increases, the figure of merit decreases and converges; the speed of convergence depends on the parameters used in the algorithm.
Keywords: support vector regression; data-dependent kernel function; iteration
8. Data-driven computing in elasticity via kernel regression (Cited: 2)
Author: Yoshihiro Kanno. Theoretical & Applied Mechanics Letters (CAS, CSCD), 2018, No. 6, pp. 361-365.
This paper presents a simple nonparametric regression approach to data-driven computing in elasticity. We apply kernel regression to the material data set and formulate a system of nonlinear equations that is solved to obtain a static equilibrium state of an elastic structure. Preliminary numerical experiments illustrate that, compared with existing methods, the proposed method finds a reasonable solution even if the data points are coarsely distributed in the material data set.
Keywords: data-driven computational mechanics; model-free method; nonparametric method; kernel regression; Nadaraya-Watson estimator
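The Nadaraya-Watson estimator named in the keywords is a locally weighted average of the observed data: f(x) is estimated as the kernel-weighted mean of the responses near x. A one-dimensional NumPy sketch (the bandwidth and data are illustrative, not the paper's elasticity setting):

```python
import numpy as np

def nadaraya_watson(xq, X, Y, h=0.1):
    """Nadaraya-Watson kernel regression with a Gaussian kernel:
    f_hat(x) = sum_i K((x - x_i)/h) y_i / sum_i K((x - x_i)/h),
    i.e. a weighted average whose weights decay with distance from
    the query points xq. Bandwidth h controls the smoothing."""
    w = np.exp(-0.5 * ((xq[:, None] - X[None, :]) / h) ** 2)
    return (w * Y[None, :]).sum(axis=1) / w.sum(axis=1)

# Noise-free demo: recover y = x^2 at x = 0.5 from sampled pairs.
X = np.linspace(0.0, 1.0, 200)
Y = X ** 2
```

Smaller h tracks the data more closely but needs denser sampling; the paper's observation is that this estimator degrades gracefully when the material data points are coarse.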
9. PIR-based data integrity verification method in sensor network
Authors: Yong-Ki Kim, Kwangnam Choi, Jaesoo Kim, JungHo Seok. Journal of Central South University (SCIE, EI, CAS), 2014, No. 10, pp. 3883-3888.
Because a sensor node handles wireless data transmission and reception and is installed in harsh environments, it is easily exposed to attacks such as data transformation and sniffing. It is therefore necessary to verify data integrity in order to respond properly to an adversary's ill-intentioned data modification. In a sensor network environment, conventional data integrity verification checks only the final data and requires multiple communications. An energy-efficient private information retrieval (PIR)-based data integrity verification method is proposed. Because the proposed method verifies the integrity of data between parent and child nodes, it is more efficient than the existing method, which verifies data integrity only after receiving data from the entire network or a cluster. Since the number of verification messages is reduced, energy is also used more efficiently. Finally, the merits of the proposed method are verified through performance evaluation.
Keywords: data integrity; verification; private information retrieval; sensor network
10. Utilizing Machine Learning with Unique Pentaplet Data Structure to Enhance Data Integrity
Author: Abdulwahab Alazeb. Computers, Materials & Continua (SCIE, EI), 2023, No. 12, pp. 2995-3014.
Data protection in databases is critical for any organization, as unauthorized access or manipulation can have severe negative consequences, and intrusion detection systems are essential for keeping databases secure. Advances in technology will lead to significant changes in the medical field, improving healthcare services through real-time information sharing; however, reliability and consistency remain unsolved, and safeguards against cyber-attacks are necessary given the risks of unauthorized access to sensitive information and potential data corruption. Disruptions to data items can propagate throughout the database, so fraudulent transactions must be reversed without delay, especially in the healthcare industry, where real-time data access is vital. This research presents a role-based access control architecture for an anomaly detection technique. Additionally, Structured Query Language (SQL) queries are stored in a new data structure called a pentaplet. Pentaplets maintain the correlation between SQL statements within the same transaction by employing transaction-log entry information, thereby increasing detection accuracy, particularly for insiders exhibiting unusual behavior. To identify anomalous queries, the system employs a supervised machine learning technique, the Support Vector Machine (SVM). According to experimental findings, the proposed model performed well in terms of detection accuracy, achieving 99.92% with SVM using one-hot encoding and principal component analysis (PCA).
Keywords: database intrusion detection system; data integrity; machine learning; pentaplet data structure
11. DEEPNOISE: Learning Sensor and Process Noise to Detect Data Integrity Attacks in CPS
Authors: Yuan Luo, Long Cheng, Yu Liang, Jianming Fu, Guojun Peng. China Communications (SCIE, CSCD), 2021, No. 9, pp. 192-209.
Cyber-physical systems (CPS) have been widely deployed in critical infrastructures and are vulnerable to various attacks. Data integrity attacks, which manipulate sensor measurements and cause control systems to fail, are among the most prominent threats to CPS. Anomaly detection methods have been proposed to secure CPS, but existing studies usually require expert knowledge (e.g., system-model-based approaches) or lack interpretability (e.g., deep-learning-based approaches). In this paper, we present DEEPNOISE, a deep-learning-based anomaly detection method for CPS with interpretability. Specifically, we use the sensor and process noise to detect data integrity attacks; such noise represents the intrinsic characteristics of the physical devices and the production process in CPS. One key enabler is a robust deep autoencoder that automatically extracts the noise from measurement data. An LSTM-based detector is then designed to inspect the obtained noise and detect anomalies. Data integrity attacks change the noise patterns and are thus identified by DEEPNOISE as the root cause of anomalies. Evaluated on the SWaT testbed, DEEPNOISE achieves higher accuracy and recall than state-of-the-art model-based and deep-learning-based methods. On average, when detecting direct attacks, precision is 95.47%, recall is 96.58%, and F1 is 95.98%; when detecting stealthy attacks, precision, recall, and F1 scores are between 96% and 99.5%.
Keywords: cyber-physical systems; anomaly detection; data integrity attacks
12. Large Deviations for a Test of Symmetry Based on Kernel Density Estimator of Directional Data
Authors: Mingzhou Xu, Kun Cheng. Journal of Mathematical Research with Applications (CSCD), 2021, No. 6, pp. 639-647.
Assume that f_n is the nonparametric kernel density estimator of directional data based on a kernel function K and a sequence of independent and identically distributed random variables taking values on the d-dimensional unit sphere S^(d-1). We establish that the large deviation principle for {sup_{x in S^(d-1)} |f_n(x) - f_n(-x)|, n >= 1} holds if the kernel function has bounded variation and the density function f of the random variables is continuous and symmetric.
Keywords: symmetry test; kernel density estimator; directional data; large deviations
13. Blockchain and Data Integrity Authentication Technique for Secure Cloud Environment
Authors: A. Ramachandran, P. Ramadevi, Ahmed Alkhayyat, Yousif Kerrar Yousif. Intelligent Automation & Soft Computing (SCIE), 2023, No. 5, pp. 2055-2070.
Nowadays, numerous applications are associated with the cloud, and user data are collected globally and stored in cloud units. In addition to shared data storage, cloud computing offers the user multiple advantages through different distribution designs such as hybrid cloud, public cloud, community cloud, and private cloud. Although cloud-based computing solutions are highly convenient to users, they also bring a challenge: the security of the shared data. Hence, in the current research paper, blockchain with a data integrity authentication technique is developed for efficient and secure operation with user authentication. Blockchain technology is utilized in this study to enable efficient and secure operation, which not only strengthens cloud security but also avoids threats and attacks. Additionally, the data integrity authentication technique is utilized to limit unwanted access to data in the cloud storage unit. The major objective of the projected technique is to strengthen data security and user authentication in cloud computing environments. To improve the proposed authentication process, a cuckoo filter and a Merkle Hash Tree (MHT) are utilized. The proposed methodology was validated using performance metrics such as processing time, uploading time, downloading time, authentication time, consensus time, waiting time, initialization time, and storage overhead. The proposed method was compared with conventional cloud security techniques, and the outcomes establish its superiority.
Keywords: blockchain; security; data integrity; authentication; cloud computing; signature; hash tree
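A Merkle Hash Tree like the one named in the abstract lets a verifier detect any change to a stored block from a single root hash: leaves are hashed, adjacent hashes are hashed pairwise, and the process repeats up to the root. A minimal sketch (SHA-256 and the odd-node promotion rule are our choices; the paper's exact construction may differ):

```python
import hashlib

def merkle_root(leaves):
    """Merkle Hash Tree root over byte-string leaves.
    Odd leftover nodes are promoted unchanged to the next level."""
    level = [hashlib.sha256(x).digest() for x in leaves]
    while len(level) > 1:
        nxt = [hashlib.sha256(level[i] + level[i + 1]).digest()
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:          # odd node carried up as-is
            nxt.append(level[-1])
        level = nxt
    return level[0]

blocks = [b"block-%d" % i for i in range(5)]
root = merkle_root(blocks)
```

Any single-byte change in any block changes the root, so the cloud only needs to protect (or sign) 32 bytes to commit to the whole data set.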
14. Block Level Data Integrity Assurance Using Matrix Dialing Method towards High Performance Data Security on Cloud Storage
Authors: P. Premkumar, D. Shanthi. Circuits and Systems, 2016, No. 11, pp. 3626-3644.
Data outsourcing through cloud storage enables users to share on-demand resources with cost-effective IT services, but several security issues arise, such as confidentiality, integrity, and authentication, each of which plays an important role in the successful achievement of the others. In cloud computing, data integrity assurance is one of the major challenges because the user has no control over the security mechanisms that protect the data. Data integrity ensures that the data received are the same as the data stored; it follows from data security, but it refers to the validity and accuracy of data rather than their protection. Data security refers to the protection of data against unauthorized access, modification, or corruption, and it is necessary to ensure data integrity. This paper proposes a new block-level approach using a Matrix Dialing Method to enhance the performance of both data integrity and data security without a Third Party Auditor (TPA). In this approach, the data are partitioned into blocks, and each block is converted into a square matrix. A determinant factor of each matrix is generated dynamically to ensure data integrity. This model also implements a combination of the AES algorithm and the SHA-1 algorithm for digital signature generation, and data coloring on the digital signature is applied to ensure data security with better performance. Performance analysis using a cloud simulator shows that the proposed scheme is highly efficient and secure, as it overcomes the server computation time and accuracy limitations of previous approaches that achieve data security through encryption and decryption algorithms and data integrity assurance through a TPA.
Keywords: cloud computing; data integrity; data security; SHA-1; digital signature; AES encryption and decryption
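The block-level integrity idea can be sketched directly: partition the byte stream into fixed-size blocks, view each block as a square matrix, and keep each determinant as that block's integrity factor. The padding rule and tag encoding below are our assumptions; the paper additionally combines this with AES/SHA-1 signature generation and data coloring, which are omitted here:

```python
import numpy as np

def block_det_tags(data: bytes, m=4):
    """Per-block integrity tags following the abstract's block/matrix
    idea: split data into m*m-byte blocks (zero-padded at the end),
    reshape each block into an m x m matrix, and store its determinant
    as the block's tag. The exact padding and encoding are assumptions."""
    size = m * m
    pad = (-len(data)) % size
    buf = np.frombuffer(data + b"\x00" * pad, dtype=np.uint8).astype(np.int64)
    return [int(round(np.linalg.det(block.reshape(m, m))))
            for block in buf.reshape(-1, size)]

tags = block_det_tags(b"hello world, integrity!")
```

The tags are cheap to recompute on retrieval, though unlike a cryptographic hash a determinant is not collision-resistant, which is presumably why the scheme pairs it with SHA-1 signatures.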
15. Discussion on Data Integrity of Pharmaceutical Batch Production Records
Author: Gao Rui. 《外文科技期刊数据库(文摘版)医药卫生》, 2021, No. 3, pp. 257-260.
Information and data from the pharmaceutical production process are recorded and presented through pharmaceutical batch production records. To ensure the integrity of the data in batch production records, the records must be filled in with the information and data from the production process in a timely, complete, and correct manner. Drawing on domestic and foreign literature on the data integrity of pharmaceutical batch production records, and through discussion of data integrity and review of these quantities and information, the general level of management and implementation can be observed. Data integrity is a basic requirement for ensuring the production and quality of pharmaceutical products.
Keywords: drug production; data integrity; GMP; research and analysis
16. Visualising data distributions with kernel density estimation and reduced chi-squared statistic (Cited: 8)
Authors: C.J. Spencer, C. Yakymchuk, M. Ghaznavi. Geoscience Frontiers (SCIE, CAS, CSCD), 2017, No. 6, pp. 1247-1252.
Applying frequency-distribution statistics to data provides an objective means to assess the nature of the data distribution and the viability of the numerical models used to visualize and interpret data. Two commonly used tools are kernel density estimation and the reduced chi-squared statistic used in combination with a weighted mean. Given the wide applicability of these tools, we present a Java-based computer application called KDX to facilitate the visualization of data and the use of these numerical tools.
Keywords: data visualisation; kernel density estimation; reduced chi-squared statistic; mean square weighted deviation; geostatistics
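The reduced chi-squared statistic paired with a weighted mean, often reported as the mean square weighted deviation (MSWD), measures whether the scatter of the data exceeds what the stated analytical uncertainties allow (MSWD near 1 indicates consistency). A short sketch of the calculation (not the KDX implementation):

```python
import numpy as np

def weighted_mean_mswd(x, s):
    """Inverse-variance weighted mean, its standard error, and the
    reduced chi-squared statistic (MSWD). MSWD ~ 1 means the scatter
    matches the stated 1-sigma uncertainties s; >> 1 means excess
    scatter beyond the analytical uncertainty."""
    w = 1.0 / s ** 2
    mean = np.sum(w * x) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    mswd = np.sum(w * (x - mean) ** 2) / (x.size - 1)
    return mean, se, mswd

# Four measurements with equal 1-sigma uncertainties (illustrative)
ages = np.array([100.0, 101.0, 99.0, 100.5])
errs = np.array([1.0, 1.0, 1.0, 1.0])
mean, se, mswd = weighted_mean_mswd(ages, errs)
```

With equal uncertainties the weighted mean reduces to the arithmetic mean, which makes the example easy to check by hand.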
17. NONLINEAR DATA RECONCILIATION METHOD BASED ON KERNEL PRINCIPAL COMPONENT ANALYSIS (Cited: 6)
Authors: Yan Weiwu, Shao Huihe (Department of Automation, Shanghai Jiaotong University, Shanghai 200030, China). Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2003, No. 2, pp. 117-119.
In industrial processes, principal component analysis (PCA) is a common method for data reconciliation. However, PCA is sometimes unsuited to nonlinear feature analysis and is of limited use in nonlinear industrial processes. Kernel PCA (KPCA) is an extension of PCA that can be used for nonlinear feature analysis. A nonlinear data reconciliation method based on KPCA is proposed. The basic idea is to first map the original data to a high-dimensional feature space by a nonlinear function and perform PCA in that feature space; nonlinear features are then analyzed, and the data are reconstructed using the kernel. The KPCA-based data reconciliation method is applied to a ternary distillation column. Simulation results show that the method can filter the noise in measurements of a nonlinear process, and the reconciled data represent the true behavior of the nonlinear process.
Keywords: principal component analysis; kernel; data reconciliation; nonlinear
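The first step of the method, performing PCA in the implicitly mapped feature space, needs only the centered kernel matrix. A NumPy sketch of that feature-extraction step (score extraction only; the paper's kernel-based reconstruction of reconciled data is not reproduced, and the RBF kernel is an illustrative choice):

```python
import numpy as np

def kpca(X, n_comp=2, gamma=1.0):
    """Kernel PCA scores: build an RBF kernel, double-center it
    (centering in feature space), then project onto the leading
    eigenvectors scaled by sqrt(eigenvalue)."""
    n = X.shape[0]
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # feature-space centering
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_comp]        # leading components
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

Because the kernel matrix is centered, the extracted scores have zero mean and mutually orthogonal columns, mirroring ordinary PCA scores but for nonlinear features.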
18. A Distinctive Workflow of Leading International Biomedical Journals for In-Depth Verification of Scholarly Images: Dedicated Data Integrity Analysis Review (Cited: 7)
Authors: Han Lei, Ye Qing, Zheng Yunfei, Qiu Yuan. Acta Editologica (CSSCI), 2021, No. 2, pp. 231-236.
This paper introduces a distinctive workflow used by leading international biomedical journals for in-depth verification of scholarly images: a dedicated data integrity analysis review. These journals include a dedicated image data integrity analysis position in the review workflow, in which image analysts with extensive Photoshop experience technically examine the images in peer-reviewed manuscripts, together with the source data submitted by the authors, to check for image problems of all kinds, especially fabrication. The results of this review show that various image problems, including fabrication, still exist in manuscripts that have passed peer review, and that this workflow is effective and necessary for detecting image fabrication.
Keywords: biomedicine; scientific journals; images; academic misconduct; data integrity
19. Enhanced Lithofacies Classification of Tight Sandstone Reservoirs Using a Hybrid CNN-GRU Model with BSMOTE and Heat Kernel Imputation
Authors: Li Pan, Meng Jia-bing, Li Jun, Chen Qi-jing. Applied Geophysics, 2025, No. 4, pp. 1141-1157.
Accurate lithofacies classification in low-permeability sandstone reservoirs remains challenging due to class imbalance in well-log data and the difficulty of modeling vertical lithological dependencies. Traditional core-based interpretation introduces subjectivity, while conventional deep learning models often fail to capture stratigraphic sequences effectively. To address these limitations, we propose a hybrid CNN-GRU framework that integrates spatial feature extraction and sequential modeling. Heat kernel imputation is applied to reconstruct missing log data, and Borderline-SMOTE (BSMOTE) improves class balance by augmenting boundary-case minority samples. The CNN component extracts localized petrophysical features, and the GRU component captures depth-wise lithological transitions, enabling spatial-sequential feature fusion. Experiments on real-well datasets from tight sandstone reservoirs show that the proposed model achieves an average accuracy of 93.3% and a macro F1-score of 0.934, outperforming baseline models including RF (87.8%), GBDT (81.8%), CNN-only (87.5%), and GRU-only (86.1%). Leave-one-well-out validation further confirms strong generalization ability. These results demonstrate that the proposed approach effectively addresses data imbalance and enhances classification robustness, offering a scalable and automated solution for lithofacies interpretation under complex geological conditions.
Keywords: lithofacies classification; deep learning; CNN-GRU model; imbalanced data processing; heat kernel imputation
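Borderline-SMOTE, used here for class balance, oversamples only the minority points that sit near the class boundary: a minority sample is "in danger" when more than half (but not all) of its k nearest neighbours belong to other classes, and synthetic samples are interpolated between such points and minority neighbours. A small NumPy sketch (our simplification, not the paper's exact BSMOTE; imblearn's BorderlineSMOTE is a production alternative):

```python
import numpy as np

def borderline_smote(X, y, minority=1, k=5, n_new=20, seed=0):
    """Simplified Borderline-SMOTE: interpolate new minority samples
    around 'danger' points, i.e. minority points whose k nearest
    neighbours are mostly (but not entirely) from other classes."""
    rng = np.random.default_rng(seed)
    Xm = X[y == minority]
    danger = []
    for xi in Xm:
        d = np.linalg.norm(X - xi, axis=1)
        nn = np.argsort(d)[1:k + 1]          # skip the point itself
        frac_other = np.mean(y[nn] != minority)
        if 0.5 <= frac_other < 1.0:          # borderline, but not pure noise
            danger.append(xi)
    if not danger:
        return np.empty((0, X.shape[1]))
    danger = np.array(danger)
    new = []
    for _ in range(n_new):
        a = danger[rng.integers(len(danger))]
        b = Xm[rng.integers(len(Xm))]        # pull toward a minority peer
        new.append(a + rng.random() * (b - a))
    return np.array(new)

# 1-D demo: a minority cluster (label 1) wedged between majority points
X = np.array([[0.8], [0.9], [1.3], [1.4], [1.5], [1.0], [1.1], [1.2]])
y = np.array([0, 0, 0, 0, 0, 1, 1, 1])
S = borderline_smote(X, y, n_new=10)
```

Because interpolation happens only between minority points, the synthetic samples stay inside the minority region rather than bleeding into the majority classes.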
20. DCFI-Checker: Checking Kernel Dynamic Control Flow Integrity with Performance Monitoring Counter (Cited: 2)
Authors: Shi Wenchang, Zhou Hongwei, Yuan Jinhui, Liang Bin. China Communications (SCIE, CSCD), 2014, No. 9, pp. 31-46.
Verifying the integrity of dynamic control flows is a challenge because of their dynamic and volatile nature. To meet the challenge, existing solutions usually implant an "attachment" in each control transfer, but the attachment introduces additional costs beyond the performance penalty; for example, it must be unique or modified only under restrictions. In this paper, we propose a novel approach that detects the integrity of dynamic control flows by counting executed branch instructions, without involving any attachment. Our solution is based on the following observation: if a control flow is compromised, the number of executed branch instructions increases abnormally, because intruders usually hijack control flows for malicious execution, which inevitably introduces additional branch instructions. Inspired by this observation, we devise a novel system named DCFI-Checker, which detects integrity corruption of dynamic control flows with the support of a Performance Monitoring Counter (PMC). We have developed a proof-of-concept prototype of DCFI-Checker on Linux Fedora 5. Our experiments with existing kernel rootkits and a buffer overflow attack show that DCFI-Checker is effective at detecting compromised dynamic control transfers, and performance evaluations indicate that the penalty induced by DCFI-Checker is acceptable.
Keywords: integrity; dynamic control flow; kernel; branch; performance monitoring counter