Journal Articles
4,035 articles found
ISTIRDA: An Efficient Data Availability Sampling Scheme for Lightweight Nodes in Blockchain
1
Authors: Jiaxi Wang, Wenbo Sun, Ziyuan Zhou, Shihua Wu, Jiang Xu, Shan Ji 《Computers, Materials & Continua》 2026, No. 4, pp. 685-700 (16 pages)
Lightweight nodes are crucial for blockchain scalability, but verifying the availability of complete block data puts significant strain on bandwidth and latency. Existing data availability sampling (DAS) schemes either require trusted setups or suffer from high communication overhead and low verification efficiency. This paper presents ISTIRDA, a DAS scheme that lets light clients certify availability by sampling small random codeword symbols. Built on ISTIR, an improved Reed-Solomon interactive oracle proof of proximity, ISTIRDA combines adaptive folding with dynamic code rate adjustment to preserve soundness while lowering communication. This paper formalizes opening consistency and proves security with bounded error in the random oracle model, giving polylogarithmic verifier queries and no trusted setup. In a prototype compared with FRIDA under equal soundness, ISTIRDA reduces communication by 40.65% to 80%. For data larger than 16 MB, ISTIRDA verifies faster and the advantage widens; at 128 MB, proofs are about 60% smaller and verification time is roughly 25% shorter, while prover overhead remains modest. In peer-to-peer emulation under injected latency and loss, ISTIRDA reaches confidence more quickly and is less sensitive to packet loss and load. These results indicate that ISTIRDA is a scalable and provably secure DAS scheme suitable for high-throughput, large-block public blockchains, substantially easing bandwidth and latency pressure on lightweight nodes.
Keywords: blockchain scalability; data availability sampling; lightweight nodes
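The light-client principle behind schemes like ISTIRDA can be illustrated with a short sketch. Assuming a rate-1/2 erasure code, an adversary must withhold at least half the coded symbols to prevent reconstruction, so each uniformly random query detects withholding with probability at least 1/2. Function names below are illustrative, not from the paper:

```python
import math
import random

def samples_needed(confidence: float, hidden_fraction: float = 0.5) -> int:
    """Queries a light client needs so that the probability of missing an
    adversary who withholds >= hidden_fraction of the symbols is at most
    1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - hidden_fraction))

def looks_available(block_symbols, k: int) -> bool:
    """Query k uniformly random symbol positions; True if all are served."""
    n = len(block_symbols)
    return all(block_symbols[random.randrange(n)] is not None for _ in range(k))

# With a rate-1/2 code, 30 queries already give ~1 - 2**-30 confidence.
print(samples_needed(1 - 2**-30))  # -> 30
```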
Data partitioning based on sampling for power load streams
2
Authors: 王永利, 徐宏炳, 董逸生, 钱江波, 刘学军 《Journal of Southeast University (English Edition)》 EI CAS 2005, No. 3, pp. 293-298 (6 pages)
A novel data streams partitioning method is proposed to resolve problems of range-aggregation continuous queries over parallel streams for the power industry. The first step of this method is to sample the data in parallel, implemented as an extended reservoir-sampling algorithm. A skip factor based on the change ratio of data values is introduced to describe the distribution characteristics of data values adaptively. The second step is to partition the fluxes of the data streams evenly, implemented with two alternative equal-depth histogram generating algorithms that fit different cases: one for incremental maintenance based on heuristics, and the other for periodical updates to generate an approximate partition vector. Experimental results on actual data prove that the method is efficient, practical and suitable for processing time-varying data streams.
Keywords: data streams; continuous queries; parallel processing; sampling; data partitioning
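The extended reservoir-sampling algorithm and its skip factor are not specified in the abstract, but the classic Algorithm R that reservoir sampling builds on is a few lines (a sketch, with illustrative names):

```python
import random

def reservoir_sample(stream, k: int):
    """Algorithm R: keep a uniform random sample of size k from a stream
    of unknown length, in one pass with O(k) memory."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = random.randrange(i + 1)      # uniform on [0, i]
            if j < k:
                reservoir[j] = item          # item kept with probability k/(i+1)
    return reservoir

print(reservoir_sample(range(100_000), 5))
```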
Super point detection based on sampling and data streaming algorithms
3
Authors: 程光, 强士卿 《Journal of Southeast University (English Edition)》 EI CAS 2009, No. 2, pp. 224-227 (4 pages)
In order to improve the precision of super point detection and control measurement resource consumption, this paper proposes a super point detection method based on sampling and data streaming algorithms (SDSD), and proves that only sources or destinations with many flows can be sampled probabilistically using the SDSD algorithm. The SDSD algorithm uses both an IP table and a flow Bloom filter (BF) data structure to maintain the IP and flow information. The IP table is used to judge whether an IP address has been recorded. If the IP exists, then all its subsequent flows will be recorded into the flow BF; otherwise, the IP flow is sampled. This paper also analyzes the accuracy and memory requirements of the SDSD algorithm, and tests them using the CERNET trace. The theoretical analysis and experimental tests demonstrate that most relative errors of the super points estimated by the SDSD algorithm are less than 5%, whereas the results of other algorithms are about 10%. Because of the BF structure, the SDSD algorithm is also better than previous algorithms in terms of memory consumption.
Keywords: super point; flow sampling; data streaming
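As a rough illustration of the record-or-sample logic the abstract describes (IP table plus flow Bloom filter), here is a minimal sketch. The Bloom filter parameters, hash construction, and sampling rate are assumptions, not the paper's settings:

```python
import hashlib
import random

class BloomFilter:
    """Minimal Bloom filter standing in for the paper's flow BF."""
    def __init__(self, m_bits: int = 1 << 20, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)
    def _positions(self, item: str):
        for i in range(self.k):
            h = hashlib.md5(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)
    def __contains__(self, item: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def process_flow(ip_table: set, flow_bf: BloomFilter, counts: dict,
                 src_ip: str, flow_id: str, sample_rate: float = 0.01) -> None:
    """Record-or-sample logic from the abstract: flows of already-recorded IPs
    go into the flow BF (counted once); unseen IPs enter only when sampled."""
    if src_ip in ip_table:
        if flow_id not in flow_bf:
            flow_bf.add(flow_id)
            counts[src_ip] = counts.get(src_ip, 0) + 1
    elif random.random() < sample_rate:
        ip_table.add(src_ip)
        counts[src_ip] = 1
```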
Scaling up the DBSCAN Algorithm for Clustering Large Spatial Databases Based on Sampling Technique (Cited by 9)
4
Authors: Guan Jihong (1), Zhou Shuigeng (2), Bian Fuling (3), He Yanxiang (1) (1. School of Computer, Wuhan University, Wuhan 430072, China; 2. State Key Laboratory of Software Engineering, Wuhan University, Wuhan 430072, China; 3. College of Remote Sensing…) 《Wuhan University Journal of Natural Sciences》 CAS 2001, No. Z1, pp. 467-473 (7 pages)
Clustering, in data mining, is a useful technique for discovering interesting data distributions and patterns in the underlying data, and has many application fields, such as statistical data analysis, pattern recognition, and image processing. We combine a sampling technique with the DBSCAN algorithm to cluster large spatial databases, and two sampling-based DBSCAN (SDBSCAN) algorithms are developed. One algorithm introduces the sampling technique inside DBSCAN, and the other uses a sampling procedure outside DBSCAN. Experimental results demonstrate that our algorithms are effective and efficient in clustering large-scale spatial databases.
Keywords: spatial databases; data mining; clustering; sampling; DBSCAN algorithm
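The two SDBSCAN variants are not detailed in the abstract. A minimal sketch of the outside-sampling idea (cluster a random subset with DBSCAN, then assign remaining points to the nearest core point) might look like the following, using scikit-learn; frac, eps, and min_samples are illustrative defaults:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def sampled_dbscan(X, frac=0.1, eps=0.5, min_samples=5, seed=0):
    """Cluster a random subset with DBSCAN, then label every remaining point
    by its nearest core point if that core point lies within eps (else noise)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=max(min_samples, int(frac * len(X))), replace=False)
    db = DBSCAN(eps=eps, min_samples=min_samples).fit(X[idx])
    core = db.components_                        # core points found in the sample
    if len(core) == 0:
        return np.full(len(X), -1)               # sample yielded no clusters
    core_labels = db.labels_[db.core_sample_indices_]
    dist, j = NearestNeighbors(n_neighbors=1).fit(core).kneighbors(X)
    return np.where(dist.ravel() <= eps, core_labels[j.ravel()], -1)
```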
Characteristics analysis on high density spatial sampling seismic data (Cited by 12)
5
Authors: Cai Xiling, Liu Xuewei, Deng Chunyan, Lv Yingme 《Applied Geophysics》 SCIE CSCD 2006, No. 1, pp. 48-54 (7 pages)
China's continental deposition basins are characterized by complex geological structures and various reservoir lithologies. Therefore, high precision exploration methods are needed. High density spatial sampling is a new technology to increase the accuracy of seismic exploration. We briefly discuss point source and receiver technology, analyze the high density spatial sampling in situ method, introduce the symmetric sampling principles presented by Gijs J. O. Vermeer, and discuss high density spatial sampling technology from the point of view of wave field continuity. We emphasize the analysis of the high density spatial sampling characteristics, including the advantages of high density first breaks for investigating near-surface structure, improved static correction precision, the use of dense receiver spacing at short offsets to increase effective coverage at shallow depth, and the accuracy of reflection imaging. Coherent noise is not aliased, and noise analysis precision and suppression increase as a result. High density spatial sampling enhances wave field continuity and the accuracy of various mathematical transforms, which benefits wave field separation. Finally, we point out that the difficult part of high density spatial sampling technology is the data processing. More research needs to be done on methods of analyzing and processing huge amounts of seismic data.
Keywords: high density spatial sampling; symmetric sampling; static correction; noise suppression; wave field separation; data processing
Over-sampling algorithm for imbalanced data classification (Cited by 14)
6
Authors: XU Xiaolong, CHEN Wen, SUN Yanfei 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD 2019, No. 6, pp. 1182-1191 (10 pages)
For imbalanced datasets, the focus of classification is to identify samples of the minority class. The performance of current data mining algorithms is not good enough for processing imbalanced datasets. The synthetic minority over-sampling technique (SMOTE) is specifically designed for learning from imbalanced datasets, generating synthetic minority class examples by interpolating between nearby minority class examples. However, SMOTE encounters the over-generalization problem. Density-based spatial clustering of applications with noise (DBSCAN) is not rigorous when dealing with samples near the borderline. We optimize the DBSCAN algorithm for this problem to make clustering more reasonable. This paper integrates the optimized DBSCAN and SMOTE, and proposes a density-based synthetic minority over-sampling technique (DSMOTE). First, the optimized DBSCAN is used to divide the samples of the minority class into three groups (core samples, borderline samples and noise samples), and the noise samples of the minority class are then removed so that more effective samples can be synthesized. In order to make full use of the information in core samples and borderline samples, different strategies are used to over-sample core samples and borderline samples. Experiments show that DSMOTE can achieve better results compared with SMOTE and Borderline-SMOTE in terms of precision, recall and F-value.
Keywords: imbalanced data; density-based spatial clustering of applications with noise (DBSCAN); synthetic minority over-sampling technique (SMOTE); over-sampling
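DSMOTE itself is not reproduced here, but the basic SMOTE interpolation step it builds on is short enough to sketch; names and defaults below are illustrative:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote(X_min, n_new, k=5, seed=0):
    """Each synthetic sample lies on the segment between a minority point
    and one of its k nearest minority-class neighbors."""
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)  # +1: first hit is the point itself
    _, neigh = nn.kneighbors(X_min)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = neigh[i][rng.integers(1, k + 1)]             # skip self at position 0
        out.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.vstack(out)
```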
Effective data sampling strategies and boundary condition constraints of physics-informed neural networks for identifying material properties in solid mechanics (Cited by 4)
7
Authors: W. WU, M. DANEKER, M. A. JOLLEY, K. T. TURNER, L. LU 《Applied Mathematics and Mechanics (English Edition)》 SCIE EI CSCD 2023, No. 7, pp. 1039-1068 (30 pages)
Material identification is critical for understanding the relationship between mechanical properties and the associated mechanical functions. However, material identification is a challenging task, especially when the characteristic of the material is highly nonlinear in nature, as is common in biological tissue. In this work, we identify unknown material properties in continuum solid mechanics via physics-informed neural networks (PINNs). To improve the accuracy and efficiency of PINNs, we develop efficient strategies to nonuniformly sample observational data. We also investigate different approaches to enforce Dirichlet-type boundary conditions (BCs) as soft or hard constraints. Finally, we apply the proposed methods to a diverse set of time-dependent and time-independent solid mechanics examples that span linear elastic and hyperelastic material space. The estimated material parameters achieve relative errors of less than 1%. As such, this work is relevant to diverse applications, including optimizing structural integrity and developing novel materials.
Keywords: solid mechanics; material identification; physics-informed neural network (PINN); data sampling; boundary condition (BC) constraint
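The paper's specific nonuniform sampling strategies are not given in the abstract. One common heuristic in the PINN literature, drawing points with probability proportional to a power of the residual magnitude, can serve as a hedged sketch; residual_fn and all parameters are placeholders:

```python
import numpy as np

def residual_weighted_sample(candidates, residual_fn, n, power=2.0, seed=0):
    """Draw n training points from a candidate pool with probability
    proportional to |residual|**power, so regions the current network fits
    poorly are sampled more densely."""
    rng = np.random.default_rng(seed)
    r = np.abs(residual_fn(candidates)) ** power
    idx = rng.choice(len(candidates), size=n, replace=False, p=r / r.sum())
    return candidates[idx]

# Typical use: re-draw the training set every few hundred optimizer steps.
pool = np.random.rand(10_000, 2)
pts = residual_weighted_sample(pool, lambda x: np.sin(8 * x[:, 0]) * x[:, 1], 256)
```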
Brittleness index predictions from Lower Barnett Shale well-log data applying an optimized data matching algorithm at various sampling densities (Cited by 2)
8
Authors: David A. Wood 《Geoscience Frontiers》 SCIE CAS CSCD 2021, No. 6, pp. 444-457 (14 pages)
The capability of accurately predicting mineralogical brittleness index (BI) from basic suites of well logs is desirable as it provides a useful indicator of the fracability of tight formations. Measuring mineralogical components in rocks is expensive and time consuming. However, the basic well log curves are not well correlated with BI, so correlation-based, machine-learning methods are not able to derive highly accurate BI predictions using such data. A correlation-free, optimized data-matching algorithm is configured to predict BI on a supervised basis from well log and core data available from two published wells in the Lower Barnett Shale Formation (Texas). This transparent open box (TOB) algorithm matches data records by calculating the sum of squared errors between their variables and selecting the best matches as those with the minimum squared errors. It then applies optimizers to adjust the weights applied to individual variable errors to minimize the root mean square error (RMSE) between calculated and predicted BI. The prediction accuracy achieved by TOB using just five well logs (Gr, ρb, Ns, Rs, Dt) to predict BI depends on the density of the data records sampled. At a sampling density of about one sample per 0.5 ft, BI is predicted with RMSE ~0.056 and R^(2) ~0.790. At a sampling density of about one sample per 0.1 ft, BI is predicted with RMSE ~0.008 and R^(2) ~0.995. Adding a stratigraphic height index as an additional (sixth) input variable improves BI prediction accuracy to RMSE ~0.003 and R^(2) ~0.999 for the two wells, with only 1 record in 10,000 yielding a BI prediction error of >±0.1. The model has the potential to be applied on an unsupervised basis to predict BI from basic well log data in surrounding wells lacking mineralogical measurements but with similar lithofacies and burial histories. The method could also be extended to predict elastic rock properties and seismic attributes from well and seismic data to improve the precision of brittleness index and fracability mapping spatially.
Keywords: well-log brittleness index estimates; data record sample densities; zoomed-in data interpolation; correlation-free prediction analysis; mineralogical and elastic influences
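The abstract describes the TOB matching rule concretely enough to sketch: rank training records by weighted sum of squared errors against the query and average the targets of the best matches. The weight-optimization step is omitted, and names are illustrative:

```python
import numpy as np

def tob_predict(X_train, y_train, X_query, weights=None, n_matches=3):
    """Rank training records by weighted sum of squared errors against each
    query record and average the targets of the closest matches."""
    w = np.ones(X_train.shape[1]) if weights is None else np.asarray(weights)
    preds = []
    for q in np.atleast_2d(X_query):
        sse = ((X_train - q) ** 2 * w).sum(axis=1)   # weighted squared-error match
        best = np.argsort(sse)[:n_matches]
        preds.append(y_train[best].mean())
    return np.array(preds)
```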
Cooperative Iteration Matching Method for Aligning Samples from Heterogeneous Industrial Datasets
9
Authors: LI Han, SHI Guohong, LIU Zhao, ZHU Ping 《Journal of Shanghai Jiaotong University (Science)》 2025, No. 2, pp. 375-384 (10 pages)
Industrial data mining usually deals with data from different sources. These heterogeneous datasets describe the same object from different views. However, samples from some of the datasets may be lost, so the remaining samples no longer correspond one-to-one correctly. Mismatched datasets caused by missing samples make the industrial data unavailable for further machine learning. In order to align the mismatched samples, this article presents a cooperative iteration matching method (CIMM) based on modified dynamic time warping (DTW). The proposed method regards the sequentially accumulated industrial data as time series. Mismatched samples are aligned by the DTW. In addition, dynamic constraints are applied to the warping distance of the DTW process to make the alignment more efficient. Then a series of models are trained with the accumulated samples iteratively. Several groups of numerical experiments on different missing patterns and missing locations are designed and analyzed to prove the effectiveness and applicability of the proposed method.
Keywords: dynamic time warping; mismatched samples; sample alignment; industrial data; missing data
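The modified DTW is not specified beyond its dynamic warping-distance constraint. Plain DTW with a Sakoe-Chiba band, a common way to constrain warping, gives the flavor (a sketch, not the paper's CIMM):

```python
import numpy as np

def dtw_distance(a, b, window=None):
    """Dynamic time warping cost between two 1-D sequences, optionally
    restricted to a Sakoe-Chiba band of half-width `window`."""
    n, m = len(a), len(b)
    w = max(window if window is not None else max(n, m), abs(n - m))
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw_distance([0, 1, 2, 3], [0, 0, 1, 2, 3], window=2))  # -> 0.0
```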
Consensus of heterogeneous multi-agent systems based on sampled data with a small sampling delay (Cited by 1)
10
Authors: 王娜, 吴治海, 彭力 《Chinese Physics B》 SCIE EI CAS CSCD 2014, No. 10, pp. 617-625 (9 pages)
In this paper, consensus problems of heterogeneous multi-agent systems based on sampled data with a small sampling delay are considered. First, a consensus protocol based on sampled data with a small sampling delay for heterogeneous multi-agent systems is proposed. Then, algebraic graph theory, the matrix method, the stability theory of linear systems, and some other techniques are employed to derive the necessary and sufficient conditions guaranteeing that heterogeneous multi-agent systems asymptotically achieve stationary consensus. Finally, simulations are performed to demonstrate the correctness of the theoretical results.
Keywords: heterogeneous multi-agent systems; consensus; sampled-data; small sampling delay
Power Analysis and Sample Size Determination for Crossover Trials with Application to Bioequivalence Assessment of Topical Ophthalmic Drugs Using Serial Sampling Pharmacokinetic Data
11
Authors: YU Yong Pei, YAN Xiao Yan, YAO Chen, XIA Jie Lai 《Biomedical and Environmental Sciences》 SCIE CAS CSCD 2019, No. 8, pp. 614-623 (10 pages)
Objective: To develop methods for determining a suitable sample size for bioequivalence assessment of generic topical ophthalmic drugs using a crossover design with serial sampling schemes. Methods: The power functions of the Fieller-type confidence interval and the asymptotic confidence interval in crossover designs with serial-sampling data are derived. Simulation studies were conducted to evaluate the derived power functions. Results: Simulation studies show that the two power functions can provide precise power estimates when normality assumptions are satisfied, and yield conservative estimates of power when data are log-normally distributed. The intra-correlation showed a positive correlation with the power of the bioequivalence test. When the expected ratio of the AUCs was less than or equal to 1, the power of the Fieller-type confidence interval was larger than that of the asymptotic confidence interval. If the expected ratio of the AUCs was larger than 1, the asymptotic confidence interval had greater power. Sample size can be calculated through numerical iteration with the derived power functions. Conclusion: The Fieller-type power function and the asymptotic power function can be used to determine sample sizes of crossover trials for bioequivalence assessment of topical ophthalmic drugs.
Keywords: serial-sampling data; crossover design; topical ophthalmic drug; bioequivalence; sample size
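The derived power functions are not reproduced in the abstract, but the numerical iteration they feed into is generic: increase n until the power function clears the target. A sketch, with a toy power curve standing in for the paper's Fieller-type or asymptotic function:

```python
import math

def sample_size(power_fn, target=0.8, n_start=4, n_max=10_000):
    """Smallest n with power_fn(n) >= target, by simple forward iteration."""
    for n in range(n_start, n_max + 1):
        if power_fn(n) >= target:
            return n
    raise ValueError("target power not reached within n_max")

# Toy monotone power curve for illustration only (not the paper's function):
print(sample_size(lambda n: 1 - math.exp(-0.05 * n)))  # -> 33
```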
Novel Stability Criteria for Sampled-Data Systems With Variable Sampling Periods (Cited by 2)
12
Authors: Hanyong Shao, Jianrong Zhao, Dan Zhang 《IEEE/CAA Journal of Automatica Sinica》 EI CSCD 2020, No. 1, pp. 257-262 (6 pages)
This paper is concerned with a novel Lyapunov-like functional approach to the stability of sampled-data systems with variable sampling periods. The Lyapunov-like functional has four striking characteristics compared with usual ones. First, it is time-dependent. Second, it may be discontinuous. Third, not every term of it is required to be positive definite. Fourth, the Lyapunov functional includes not only the state and the sampled state but also the integral of the state. By using a recently reported inequality to estimate the derivative of this Lyapunov functional, a sampling-interval-dependent stability criterion with reduced conservatism is obtained. The stability criterion is further extended to sampled-data systems with polytopic uncertainties. Finally, three examples are given to illustrate the reduced conservatism of the stability criteria.
Keywords: Lyapunov functional; sampled-data systems; sampling-interval-dependent stability
A two-sample KS test with m out of n bootstrap for massive data
13
Authors: XIE Xiaoyue, TIAN Xinyu, SHI Jian 《纯粹数学与应用数学》 2025, No. 4, pp. 744-760 (17 pages)
In the era of big data, traditional statistical inference methods face great challenges. Taking the two-sample distribution test for big data as an example, this paper proposes the BB-KS test based on the m out of n bootstrap to address single-machine memory and computing constraints. The feasibility and effectiveness of the proposed test method are verified through theoretical analysis and numerical simulation. The results show that the BB-KS test can improve the computational efficiency of the test to a certain extent in the single-machine scenario.
Keywords: massive data; two-sample test; BB-KS method; m out of n bootstrap
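The BB-KS calibration details are not given in the abstract. The m out of n bootstrap skeleton it rests on, subsampling at size m and aggregating the two-sample KS statistics, can be sketched with SciPy; aggregation by averaging is an assumption:

```python
import numpy as np
from scipy.stats import ks_2samp

def bb_ks_statistic(x, y, m, n_boot=200, seed=0):
    """Average two-sample KS statistic over n_boot subsamples of size m
    (m << len(x)), so no single KS computation touches the full arrays."""
    rng = np.random.default_rng(seed)
    stats = [ks_2samp(rng.choice(x, m, replace=False),
                      rng.choice(y, m, replace=False)).statistic
             for _ in range(n_boot)]
    return float(np.mean(stats))
```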
Security control of Markovian jump neural networks with stochastic sampling subject to false data injection attacks
14
Authors: Lan Yao, Xia Huang, Zhen Wang, Min Xiao 《Communications in Theoretical Physics》 SCIE CAS CSCD 2023, No. 10, pp. 146-154 (9 pages)
The security control of Markovian jumping neural networks (MJNNs) is investigated under false data injection attacks (FDIAs) that take place in the shared communication network. Stochastic sampled-data control is employed to study the exponential synchronization of MJNNs under FDIAs, since it can alleviate the impact of the FDIAs on the performance of the system by adjusting the sampling periods. A multi-delay error system model is established through the input-delay approach. To reduce the conservatism of the results, a sampling-period-probability-dependent looped Lyapunov functional is constructed. In light of some less conservative integral inequalities, a synchronization criterion is derived, and an algorithm is provided that can be solved to determine the controller gain. Finally, a numerical simulation is presented to confirm the efficiency of the proposed method.
Keywords: Markovian jumping neural networks; stochastic sampling; looped functional; false data injection attack
Big Data Analytics:Deep Content-Based Prediction with Sampling Perspective
15
Authors: Waleed Albattah, Saleh Albahli 《Computer Systems Science & Engineering》 SCIE EI 2023, No. 4, pp. 531-544 (14 pages)
The world of information technology is more than ever being flooded with huge amounts of data, nearly 2.5 quintillion bytes every day. This large stream of data is called big data, and the amount is increasing each day. This research uses a technique called sampling, which selects a representative subset of the data points, manipulates and analyzes this subset to identify patterns and trends in the larger dataset being examined, and finally creates models. Sampling uses a small proportion of the original data for analysis and model training, so that it is relatively faster while maintaining data integrity and achieving accurate results. Two deep neural networks, AlexNet and DenseNet, were used in this research to test two sampling techniques, namely sampling with replacement and reservoir sampling. The dataset used for this research was divided into three classes: acceptable, flagged as easy, and flagged as hard. The base models were trained with the whole dataset, whereas the other models were trained on 50% of the original dataset. There were four combinations of model and sampling technique. The F-measure for the AlexNet base model was 0.807, while that for the DenseNet base model was 0.808. Combination 1 was the AlexNet model with sampling with replacement, achieving an average F-measure of 0.8852. Combination 3 was the AlexNet model with reservoir sampling; it had an average F-measure of 0.8545. Combination 2 was the DenseNet model with sampling with replacement, achieving an average F-measure of 0.8017. Finally, combination 4 was the DenseNet model with reservoir sampling; it had an average F-measure of 0.8111. Overall, we conclude that both models trained on a sampled dataset gave equal or better results compared with the base models, which used the whole dataset.
Keywords: sampling; big data; deep learning; AlexNet; DenseNet
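Reservoir sampling is sketched under the power-load-streams entry above; the other technique tested here, sampling with replacement, is simpler still (an illustrative sketch):

```python
import random

def sample_with_replacement(data, k, seed=0):
    """k independent draws from data; duplicates are allowed."""
    rng = random.Random(seed)
    return [data[rng.randrange(len(data))] for _ in range(k)]

# e.g. train on a 50% sample, mirroring the paper's setting:
dataset = list(range(100_000))
subset = sample_with_replacement(dataset, len(dataset) // 2)
```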
Minimum Data Sampling Method in the Inverse Scattering Problem
16
Authors: Yu Wenhua (Res. Inst. of EM Field and Microwave Tech., Southwest Jiaotong University, Chengdu 610031, China), Peng Zhongqiu (Beijing Remote Sensing and Information Institute, Beijing 100011, China), Ren Lang (Res. Inst. of EM Field and Microwave Tech., Southwest Jiaotong University) 《Journal of Modern Transportation》 1994, No. 2, pp. 114-118 (5 pages)
The Fourier transform is the basis of the analysis. This paper presents a method that uses minimum sampling data to determine the profile of the inverted object in inverse scattering.
Keywords: inverse scattering; nonuniqueness; sampling data
Impact of Data Processing Techniques on AI Models for Attack-Based Imbalanced and Encrypted Traffic within IoT Environments
17
Authors: Yeasul Kim, Chaeeun Won, Hwankuk Kim 《Computers, Materials & Continua》 2026, No. 1, pp. 247-274 (28 pages)
With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception processes. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning a range from non-encrypted devices to fully encrypted ones. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed using two ensemble models and three deep learning (DL) models from various perspectives. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score of encrypted traffic was approximately 0.98, which is 4.3% higher than that of unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that the quality of the dataset and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, the recall in the UNSW-NB15 (encrypted) dataset improved by up to 23.0%, and in the CICIoT-2023 (encrypted) dataset by 20.26%, showing a similar level of improvement. Notably, in CICIoT-2023, the F1-score and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments. However, the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
Keywords: encrypted traffic; attack detection; data sampling technique; AI-based detection; IoT environment
Major Data of China's 1% National Population Sampling Survey, 1995
18
《China Population Today》 1996, No. 4, pp. 7-8 (2 pages)
Keywords: major data; China's 1% National Population Sampling Survey; 1995
Discrete Data Clustering Based on Dependency Structure and Gibbs Sampling
19
Authors: 王双成, 俞时权, 程新章 《计算机工程》 CAS CSCD PKU Core 2006, No. 9, pp. 28-30 (3 pages)
A new clustering method for discrete data is established. The method combines the dependency structure among variables with Gibbs sampling to cluster discrete data, which significantly improves sampling efficiency and avoids the problems introduced by using the EM algorithm for clustering. Experimental results show that the method clusters discrete data effectively.
Keywords: clustering; discrete data; dependency structure; Gibbs sampling; MDL criterion
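The abstract does not describe the dependency-structure model. A collapsed Gibbs sampler for a plain finite mixture of independent categorical variables shows the sampling mechanics such a method builds on (a generic stand-in, with illustrative hyperparameters):

```python
import numpy as np

def gibbs_cluster(X, K, iters=100, alpha=1.0, beta=1.0, seed=0):
    """Collapsed Gibbs sampling for a mixture of independent categoricals.
    X: (n, d) integer-coded discrete data; returns cluster assignments."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    V = int(X.max()) + 1                      # categories per variable (shared for brevity)
    z = rng.integers(K, size=n)               # initial random assignments
    nk = np.bincount(z, minlength=K).astype(float)
    counts = np.zeros((K, d, V))
    for i in range(n):
        counts[z[i], np.arange(d), X[i]] += 1
    for _ in range(iters):
        for i in range(n):
            k = z[i]                          # remove point i from its cluster
            nk[k] -= 1
            counts[k, np.arange(d), X[i]] -= 1
            # p(z_i = k | rest) is proportional to (n_k + alpha) * prod_j p(x_ij | cluster k)
            lik = (counts[:, np.arange(d), X[i]] + beta) / (nk[:, None] + V * beta)
            p = (nk + alpha) * lik.prod(axis=1)   # use logs if d is large
            k = rng.choice(K, p=p / p.sum())
            z[i] = k                          # reinsert with the sampled label
            nk[k] += 1
            counts[k, np.arange(d), X[i]] += 1
    return z
```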
Fuzzy data envelopment analysis approach based on sample decision making units (Cited by 11)
20
Authors: Muren, Zhanxin Ma, Wei Cui 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD 2012, No. 3, pp. 399-407 (9 pages)
The conventional data envelopment analysis (DEA) measures the relative efficiencies of a set of decision making units with exact values of inputs and outputs. In real-world problems, however, inputs and outputs typically have some level of fuzziness. To analyze a decision making unit (DMU) with fuzzy input/output data, previous studies provided the fuzzy DEA model and proposed an associated evaluating approach. Nonetheless, numerous deficiencies must still be improved, including the α-cut approaches, types of fuzzy numbers, and ranking techniques. Moreover, a fuzzy sample DMU still cannot be evaluated with the fuzzy DEA model. Therefore, this paper proposes a fuzzy DEA model based on sample decision making units (FSDEA). Five evaluation approaches and the related algorithm and ranking methods are provided to test the fuzzy sample DMU of the FSDEA model. A numerical experiment is used to demonstrate and compare the results with those obtained using alternative approaches.
Keywords: fuzzy mathematical programming; sample decision making unit; fuzzy data envelopment analysis; efficiency; α-cut