Journal Articles
1,529 articles found
1. Search Processes in the Exploration of Complex Data under Different Display Conditions
Authors: Charles Tatum, David Dickason. Journal of Data Analysis and Information Processing, 2021, No. 2, pp. 51-62 (12 pages).
The study investigated user experience, display complexity, display type (tables versus graphs), and task difficulty as variables affecting the user's ability to navigate through complex visual data. A total of 64 participants, 39 undergraduate students (novice users) and 25 graduate students (intermediate-level users), took part in the study. The experiment used a 2 × 2 × 2 × 3 mixed design with two between-subject variables (display complexity, user experience) and two within-subject variables (display format, question difficulty). The results indicated that response time was superior for graphs (relative to tables), especially when the questions were difficult. The intermediate users seemed to adopt more extensive search strategies than novices, as revealed by an analysis of the number of changes they made to the display prior to answering questions. It was concluded that designers of data displays should consider the (a) type of display, (b) difficulty of the task, and (c) expertise level of the user to obtain optimal levels of performance.
Keywords: computer users; data displays; data visualization; data tables; data graphs; visual search; data complexity; visual displays; visual data
2. Diversity, Complexity, and Challenges of Viral Infectious Disease Data in the Big Data Era: A Comprehensive Review [Cited by 1]
Authors: Yun Ma, Lu-Yao Qin, Xiao Ding, Ai-Ping Wu. Chinese Medical Sciences Journal, 2025, No. 1, pp. 29-44, I0005 (17 pages).
Viral infectious diseases, characterized by their intricate nature and wide-ranging diversity, pose substantial challenges in the domain of data management. The vast volume of data generated by these diseases, spanning from the molecular mechanisms within cells to large-scale epidemiological patterns, has surpassed the capabilities of traditional analytical methods. In the era of artificial intelligence (AI) and big data, there is an urgent need to optimize these analytical methods so that the information can be handled and utilized more effectively. Despite the rapid accumulation of data associated with viral infections, the lack of a comprehensive framework for integrating, selecting, and analyzing these datasets has left numerous researchers uncertain about which data to select, how to access them, and how to utilize them most effectively in their research. This review endeavors to fill these gaps by exploring the multifaceted nature of viral infectious diseases and summarizing relevant data across multiple levels, from the molecular details of pathogens to broad epidemiological trends. The scope extends from the micro-scale to the macro-scale, encompassing pathogens, hosts, and vectors. In addition to data summarization, this review thoroughly investigates various dataset sources, traces the historical evolution of data collection in the field of viral infectious diseases, highlighting the progress achieved over time, and evaluates the current limitations that impede data utilization. Furthermore, we propose strategies to surmount these challenges, focusing on the development and application of advanced computational techniques, AI-driven models, and enhanced data integration practices. By providing a comprehensive synthesis of existing knowledge, this review is designed to guide future research and contribute to more informed approaches in the surveillance, prevention, and control of viral infectious diseases, particularly within the context of the expanding big-data landscape.
Keywords: viral infectious diseases; big data; data diversity and complexity; data standardization; artificial intelligence; data analysis
3. Linear Mixed-Effects Model for Longitudinal Complex Data with Diversified Characteristics [Cited by 2]
Authors: Zhichao Wang, Huiwen Wang, Shanshan Wang, Shan Lu, Gilbert Saporta. Journal of Management Science and Engineering, 2020, No. 2, pp. 105-124 (20 pages).
The increasing richness of data encourages a comprehensive understanding of economic and financial activities, where variables of interest may include not only scalar (point-like) indicators, but also functional (curve-like) and compositional (pie-like) ones. In many research topics, the variables are also chronologically collected across individuals, which falls into the paradigm of longitudinal analysis. The complicated nature of the data, however, increases the difficulty of modeling these variables under the classic longitudinal framework. In this study, we investigate the linear mixed-effects model (LMM) for such complex data. Different types of variables are first consistently represented using the corresponding basis expansions so that the classic LMM can then be conducted on them, which generalizes the theoretical framework of the LMM to complex data analysis. A number of simulation studies indicate the feasibility and effectiveness of the proposed model. We further illustrate its practical utility in a real data study on the Chinese stock market and show that the proposed method can enhance the performance and interpretability of the regression for complex data with diversified characteristics.
Keywords: longitudinal complex data; linear mixed-effects model; compositional data analysis; functional data analysis; Chinese stock market; online investors' sentiment
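The abstract describes representing scalar, functional, and compositional variables via basis expansions before fitting the LMM. As a minimal illustration of handling the compositional (pie-like) part, here is a centered log-ratio (clr) transform, a standard device in compositional data analysis; the shares below are invented, and the paper's actual representation may differ.

```python
import math

def clr(composition):
    """Centered log-ratio transform: maps compositional parts, which are
    positive and sum to a constant, into unconstrained coordinates."""
    logs = [math.log(x) for x in composition]
    g = sum(logs) / len(logs)  # log of the geometric mean
    return [l - g for l in logs]

shares = [0.5, 0.3, 0.2]  # hypothetical composition summing to 1
z = clr(shares)
print([round(v, 4) for v in z])  # clr coordinates sum to zero
```

The transformed coordinates can then enter a linear model without the unit-sum constraint distorting the fit.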
4. Lithological Discrimination of the Mafic-Ultramafic Complex, Huitongshan, Beishan, China: Using ASTER Data [Cited by 11]
Authors: Lei Liu, Jun Zhou, Dong Jiang, Dafang Zhuang, Lamin R. Mansaray. Journal of Earth Science (SCIE, CAS, CSCD), 2014, No. 3, pp. 529-536 (8 pages).
The Beishan area has more than seventy mafic-ultramafic complexes sparsely distributed across it and holds great potential for mineral resources related to mafic-ultramafic intrusions. Many mafic-ultramafic intrusions, mostly small in size, have been overlooked by previous work. This research takes Huitongshan, a major district for mafic-ultramafic occurrences in Beishan, as the study area. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data were processed and interpreted for mapping the mafic-ultramafic complex, using techniques selected on the basis of image reflectance and laboratory emissivity spectra. The visible near-infrared (VNIR) and shortwave infrared (SWIR) data were transformed using band ratios and the minimum noise fraction (MNF), while the thermal infrared (TIR) data were processed using the mafic index (MI) and principal component analysis (PCA). ASTER band ratios (6/8, 5/4, 2/1) in an RGB image and MNF (1, 2, 4) in an RGB image were powerful in distinguishing the subtle differences between the various rock units. PCA applied to all five bands of ASTER TIR imagery highlighted marked differences among the mafic rock units and was more effective than the MI in differentiating mafic-ultramafic rocks. Our results were consistent with information derived from local geological maps. Based on the remote sensing results and field inspection, eleven gabbroic intrusions and a pyroxenite occurrence were recognized for the first time, and a new geologic map of the Huitongshan area was created by integrating the remote sensing results, previous geological maps, and field inspection. It is concluded that the workflow of ASTER image processing, interpretation, and ground inspection has great potential for identifying mafic-ultramafic rocks and targeting related minerals in the sparsely vegetated arid regions of northwestern China.
Keywords: mafic-ultramafic complex; ASTER data; band ratio; minimum noise fraction; mafic index; principal component analysis
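The band-ratio composite named in the abstract reduces to simple array arithmetic. A minimal sketch, with synthetic random arrays standing in for calibrated ASTER reflectance bands (the real workflow would load georeferenced rasters):

```python
import numpy as np

# Synthetic stand-ins for ASTER bands 1, 2, 4, 5, 6, 8 (reflectance in [0.05, 0.6])
rng = np.random.default_rng(0)
bands = {b: rng.uniform(0.05, 0.6, size=(4, 4)) for b in (1, 2, 4, 5, 6, 8)}

def band_ratio(num, den, eps=1e-6):
    """Per-pixel ratio of two bands, guarded against division by zero."""
    return bands[num] / (bands[den] + eps)

def stretch(a):
    """Linear stretch to [0, 1] for display."""
    return (a - a.min()) / (a.max() - a.min())

# RGB composite from the abstract: R = 6/8, G = 5/4, B = 2/1
rgb = np.dstack([stretch(band_ratio(6, 8)),
                 stretch(band_ratio(5, 4)),
                 stretch(band_ratio(2, 1))])
print(rgb.shape)  # (4, 4, 3)
```

Ratioing suppresses common illumination and topography effects, which is why subtle lithological contrasts stand out in such composites.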
5. Source Complexity of the 2016 Mw 7.8 Kaikoura (New Zealand) Earthquake Revealed from Teleseismic and InSAR Data [Cited by 4]
Authors: HaiLin Du, Xu Zhang, LiSheng Xu, WanPeng Feng, Lei Yi, Peng Li. Earth and Planetary Physics, 2018, No. 4, pp. 310-326 (17 pages).
On November 13, 2016, an Mw 7.8 earthquake struck Kaikoura in the South Island of New Zealand. By means of back-projection of array recordings, ASTF analysis of global seismic recordings, and joint inversion of global seismic data and co-seismic InSAR data, we investigated the complexity of the earthquake source. The results show that the 2016 Mw 7.8 Kaikoura earthquake ruptured for about 100 s unilaterally from south to northeast (~N28°-33°E), producing a rupture area about 160 km long and about 50 km wide and releasing a scalar moment of 1.01×10^21 N·m. In particular, the rupture area consisted of two slip asperities, one close to the initial rupture point with a maximal slip of ~6.9 m, and the other farther to the northeast with a maximal slip of ~9.3 m. The first asperity slipped for about 65 s, and the second started 40 s after the first had initiated; the two slipped simultaneously for about 25 s. Furthermore, the first showed nearly pure thrust slip while the second showed both thrust and strike slip. Interestingly, the rupture velocity was not constant, and the whole process may be divided into 5 stages in which the velocities were estimated to be 1.4 km/s, 0 km/s, 2.1 km/s, 0 km/s, and 1.1 km/s, respectively. The high-frequency sources were distributed nearly along the lower edge of the rupture area, high-frequency radiation mainly occurred at the launching of the asperities, and no high-frequency energy appears to have been radiated as the rupture was about to stop.
Keywords: 2016 Mw 7.8 Kaikoura earthquake; back-projection of array recordings; ASTF analysis of global recordings; joint inversion of teleseismic and InSAR data; source complexity
6. Data Complexity-Based Batch Sanitization Method against Poison in Distributed Learning
Authors: Silv Wang, Kai Fan, Kuan Zhang, Hui Li, Yintang Yang. Digital Communications and Networks (SCIE, CSCD), 2024, No. 2, pp. 416-428 (13 pages).
The security of Federated Learning (FL) / Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy the usability of the model by contaminating training samples; such attacks are called causative availability indiscriminate attacks. Because existing data sanitization methods are hard to apply in real-time applications due to their tedious processes and heavy computation, we propose a new supervised batch detection method for poison that can quickly sanitize the training dataset before local model training. We design a training dataset generation method that helps to enhance accuracy and uses data complexity features to train a detection model, which is then used in an efficient batch hierarchical detection process. Our model stockpiles knowledge about poison, which can be expanded by retraining to adapt to new attacks. Being neither attack-specific nor scenario-specific, our method is applicable to FL/DML as well as other online or offline scenarios.
Keywords: distributed machine learning security; federated learning; data poisoning attacks; data sanitization; batch detection; data complexity
7. Data Driven Uncertainty Evaluation for Complex Engineered System Design [Cited by 1]
Authors: LIU Boyuan, HUANG Shuangxi, FAN Wenhui, XIAO Tianyuan, James HUMANN, LAI Yuyang, JIN Yan. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2016, No. 5, pp. 889-900 (12 pages).
Complex engineered systems are often difficult to analyze and design due to the tangled interdependencies among their subsystems and components. Conventional design methods often require exact modeling or accurate structural decomposition, which limits their practical application. The rapid expansion of data makes using data to guide and improve system design indispensable in practical engineering. In this paper, a data-driven uncertainty evaluation approach is proposed to support the design of complex engineered systems. The core of the approach is a data-mining-based uncertainty evaluation method that predicts the uncertainty level of a specific system design by analyzing association relations along different system attributes and synthesizing the information entropy of the covered attribute areas; a quantitative measure of system uncertainty can be obtained accordingly. Monte Carlo simulation is introduced to obtain the uncertainty extrema, and the possible data distributions under different situations are discussed in detail. The uncertainty values can be normalized using the simulation results and then used to compare different system designs. A prototype system was established, and two case studies were carried out. The case of an inverted pendulum system validates the effectiveness of the proposed method, and the case of an oil sump design shows its practicability when two or more design plans need to be compared. This research can be used to evaluate the uncertainty of complex engineered systems relying entirely on data, and is ideally suited for plan selection and performance analysis in system design.
Keywords: complex engineered system design; uncertainty; data-driven evaluation; Monte Carlo simulation
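As a rough sketch of the entropy-plus-Monte-Carlo idea in the abstract: compute the information entropy of an attribute's observed distribution, then normalize by extrema found through simulation. The scoring function, data, and parameters below are assumptions for illustration, not the paper's formulation.

```python
import math
from collections import Counter
from random import Random

def shannon_entropy(values):
    """Shannon entropy (bits) of the empirical distribution of `values`."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def normalized_uncertainty(values, levels, trials=2000, seed=1):
    """Hypothetical normalization: scale observed entropy by the maximum
    entropy seen over Monte Carlo draws of uniform random designs."""
    rng = Random(seed)
    max_h = max(
        shannon_entropy([rng.randrange(levels) for _ in range(len(values))])
        for _ in range(trials)
    )
    return shannon_entropy(values) / max_h

# A concentrated attribute distribution scores lower than a spread-out one.
low = normalized_uncertainty([0, 0, 0, 0, 1, 0, 0, 0], levels=4)
high = normalized_uncertainty([0, 1, 2, 3, 0, 1, 2, 3], levels=4)
print(low < high)  # True
```

The normalized score makes uncertainty comparable across design plans with different attribute alphabets, which is the role the paper assigns to its simulation-derived extrema.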
8. Analysis of Complex Correlated Interval-Censored HIV Data from a Population-Based Survey
Authors: Khangelani Zuma, Goitseone Mafoko. Open Journal of Statistics, 2015, No. 2, pp. 120-126 (7 pages).
In studies of HIV, interval-censored data occur naturally: HIV infection time is usually not known exactly, only that infection occurred before the survey, within some time interval, or had not occurred by the time of the survey. Infections are often clustered within geographical areas such as enumerator areas (EAs), thus inducing unobserved frailty. In this paper we consider an approach for estimating parameters when infection time is unknown and assumed correlated within an EA, where dependency is modeled through frailties, assuming a normal distribution for the frailties and a Weibull distribution for the baseline hazards. The data came from a household-based population survey that used a multi-stage stratified sample design to randomly select 23,275 interviewed individuals from 10,584 households, of whom 15,851 were further tested for HIV (crude prevalence = 9.1%). A further test conducted among those who tested HIV positive found 181 (12.5%) recently infected. Results show a high degree of heterogeneity in HIV distribution between EAs, translating to a modest correlation of 0.198. Intervention strategies should target geographical areas that contribute disproportionately to the HIV epidemic. Further research needs to identify such hot-spot areas and understand what factors make them prone to HIV.
Keywords: complex correlated data; interval-censored HIV data; population-based survey
9. A Complexity Analysis and Entropy for Different Data Compression Algorithms on Text Files [Cited by 1]
Authors: Mohammad Hjouj Btoush, Ziad E. Dawahdeh. Journal of Computer and Communications, 2018, No. 1, pp. 301-315 (15 pages).
In this paper, we analyze the complexity and entropy of different data compression algorithms: LZW, Huffman, fixed-length code (FLC), and Huffman after using fixed-length code (HFLC). We test these algorithms on files of different sizes and conclude that LZW performs best across all the compression scales tested, especially on large files, followed by Huffman, HFLC, and FLC, respectively. Data compression remains an important research topic with many applications. We therefore suggest continuing work in this field, for example by combining two techniques to reach a better one, or by using another source mapping (Hamming), such as embedding a linear array into a hypercube, together with proven techniques like Huffman coding.
Keywords: text files; data compression; Huffman coding; LZW; Hamming; entropy; complexity
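The entropy baseline that such comparisons rest on can be computed directly. A small sketch using the standard-library `zlib` as a stand-in for an LZ-family coder (the paper tested LZW specifically, which is a different LZ variant):

```python
import math
import zlib
from collections import Counter

text = b"abracadabra " * 20  # toy repetitive text, 240 bytes

# Shannon entropy in bits/symbol: the floor any symbol-by-symbol code
# (Huffman, fixed-length) is measured against.
counts = Counter(text)
n = len(text)
entropy = -sum(c / n * math.log2(c / n) for c in counts.values())

# A fixed-length code needs ceil(log2(alphabet size)) bits per symbol.
flc_bits = math.ceil(math.log2(len(counts)))

# An LZ-family coder exploits repeated substrings, not just symbol frequencies.
lz_bytes = len(zlib.compress(text))

print(round(entropy, 3), flc_bits, lz_bytes)
```

On repetitive input the LZ-family output falls well below the per-symbol entropy bound, which is the effect behind LZW's lead on large files in the paper.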
10. Baddeleyite from Large Complex Deposits: Significance for Archean-Paleozoic Plume Processes in the Arctic Region (NE Fennoscandian Shield) Based on U-Pb (ID-TIMS) and LA-ICP-MS Data [Cited by 1]
Authors: Tamara Bayanova, Viktor Subbotin, Svetlana Drogobuzhskaya, Anatoliy Nikolaev, Ekaterina Steshenko. Open Journal of Geology, 2019, No. 8, pp. 474-496 (23 pages).
Baddeleyite is an important mineral geochronometer. It is valued in U-Pb (ID-TIMS) geochronology even more than zircon because of its magmatic origin, whereas zircon can be metamorphic or hydrothermal, or occur as xenocrysts. Detailed mineralogical (BSE, KL, etc.) research on baddeleyite started in the Fennoscandian Shield in the 1990s. The mineral was first extracted from the Paleozoic Kovdor deposit, the second-biggest baddeleyite deposit in the world after Phalaborwa (2.1 Ga), South Africa, and was successfully introduced into the U-Pb systematics. This study provides new U-Pb and LA-ICP-MS data on Archean Ti-Mgt and BIF deposits, Paleoproterozoic layered PGE intrusions with Pt-Pd and Cu-Ni reefs, and Paleozoic complex deposits (baddeleyite, apatite, foscorite ores, etc.) in the NE Fennoscandian Shield. Data on REE concentrations in baddeleyite and on the closure temperature of the U-Pb system are also provided. It is shown that baddeleyite plays an important role in the geological history of the Earth, in particular in the break-up of supercontinents.
Keywords: baddeleyite; PGE; U-Pb isotope data; geochronology; Paleoproterozoic PGE layered intrusions; complex deposits; Paleozoic; Fennoscandian Shield
11. Multilevel Modeling of Binary Outcomes with Three-Level Complex Health Survey Data
Authors: Shafquat Rozi, Sadia Mahmud, Gillian Lancaster, Wilbur Hadden, Gregory Pappas. Open Journal of Epidemiology, 2017, No. 1, pp. 27-43 (17 pages).
Complex survey designs often involve unequal selection probabilities of clusters or of units within clusters. When estimating models for complex survey data, scaled weights are incorporated into the likelihood, producing a pseudo-likelihood. In a 3-level weighted analysis for a binary outcome, we implemented two methods for scaling the sampling weights in the National Health Survey of Pakistan (NHSP). For the NHSP, with health care utilization as a binary outcome, we found age, gender, household (HH) goods, urban/rural status, community development index, province, and marital status to be significant predictors of health care utilization (p-value < 0.05). The variance of the random intercepts using scaling method 1 is estimated as 0.0961 (standard error 0.0339) at the PSU level and 0.2726 (standard error 0.0995) at the household level. Both estimates are significantly different from zero (p-value < 0.05) and indicate considerable heterogeneity in health care utilization across households and PSUs. The NHSP data analysis showed that all three analyses, weighted (two scaling methods) and un-weighted, converged to almost identical results with few exceptions. This may have occurred because of the large number of 3rd- and 2nd-level clusters and the relatively small ICC. We performed a simulation study to assess the effect of varying prevalence and intra-class correlation coefficients (ICCs) on the bias of fixed-effect parameters and variance components in a multilevel pseudo maximum likelihood (weighted) analysis. The simulation results showed that the performance of the scaled weighted estimators is satisfactory for both scaling methods. Incorporating simulation into the analysis of complex multilevel surveys allows the integrity of the results to be tested and is recommended as good practice.
Keywords: health care utilization; complex health survey with sampling weights; simulations for complex surveys; pseudo-likelihood; three-level data
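Two weight-scaling rules of the kind compared here are commonly defined as follows in the multilevel-survey literature; whether the NHSP analysis used exactly these labels is an assumption, so treat this as a sketch of the general idea rather than the paper's code.

```python
def scale_effective(weights):
    """Scale within-cluster weights to sum to the effective sample size:
    w*_ij = w_ij * (sum w) / (sum w^2)."""
    s, s2 = sum(weights), sum(w * w for w in weights)
    return [w * s / s2 for w in weights]

def scale_cluster_size(weights):
    """Scale within-cluster weights to sum to the actual cluster size n_j:
    w*_ij = w_ij * n_j / (sum w)."""
    n, s = len(weights), sum(weights)
    return [w * n / s for w in weights]

cluster = [1.5, 1.5, 3.0, 4.0]  # hypothetical within-cluster sampling weights
m2 = scale_cluster_size(cluster)
print([round(w, 3) for w in m2], round(sum(m2), 3))  # sums to n_j = 4
```

Both rules preserve the relative weights within a cluster and differ only in what the scaled weights sum to, which is what drives the variance-component estimates compared in the abstract.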
12. Pinning Sampled-Data Synchronization for Complex Networks with Probabilistic Coupling Delay
Authors: 王健安 (Wang Jian-An), 聂瑞兴 (Nie Rui-Xing), 孙志毅 (Sun Zhi-Yi). Chinese Physics B (SCIE, EI, CAS, CSCD), 2014, No. 5, pp. 172-179 (8 pages).
We deal with the problem of pinning sampled-data synchronization for a complex network with probabilistic time-varying coupling delay. The sampling period considered here is assumed to be less than a given bound. Without using the Kronecker product, a new synchronization error system is constructed by using the properties of the random variable and the input delay approach. Based on Lyapunov theory, a delay-dependent pinning sampled-data synchronization criterion is derived in terms of linear matrix inequalities (LMIs) that can be solved effectively with the MATLAB LMI Toolbox. Numerical examples are provided to demonstrate the effectiveness of the proposed scheme.
Keywords: complex network; probabilistic time-varying coupling delay; sampled-data synchronization; pinning control
13. Reversible Data Hiding Algorithm in Encrypted Images Based on Adaptive Median Edge Detection and Ciphertext-Policy Attribute-Based Encryption
Authors: Zongbao Jiang, Minqing Zhang, Weina Dong, Chao Jiang, Fuqiang Di. Computers, Materials & Continua (SCIE, EI), 2024, No. 10, pp. 1123-1155 (33 pages).
With the rapid advancement of cloud computing technology, reversible data hiding in encrypted images (RDH-EI) has developed into an important field of study concentrated on safeguarding privacy in distributed cloud environments. However, existing algorithms often suffer from low embedding capacities and are inadequate for complex data access scenarios. To address these challenges, this paper proposes a novel reversible data hiding algorithm in encrypted images based on adaptive median edge detection (AMED) and ciphertext-policy attribute-based encryption (CP-ABE). The proposed algorithm enhances conventional median edge detection (MED) by incorporating dynamic variables to improve pixel prediction accuracy. The carrier image is subsequently reconstructed using the Huffman coding technique. Encrypted image generation is then achieved by encrypting the image based on system user attributes and data access rights, with the hierarchical embedding of the group's secret data seamlessly integrated during the encryption process using the CP-ABE scheme. Ultimately, the encrypted image is transmitted to the data hider, enabling independent embedding of the secret data and the creation of the marked encrypted image. This approach allows only the receiver to extract the authorized group's secret data, thereby enabling fine-grained, controlled access. Test results indicate that, in contrast to current algorithms, the method introduced here considerably improves the embedding rate while preserving lossless image recovery. Specifically, the average maximum embedding rates for the (3,4)-threshold and (6,6)-threshold schemes reach 5.7853 bits per pixel (bpp) and 7.7781 bpp, respectively, across the BOSSbase, BOW-2, and USD databases. Furthermore, the algorithm provides permission-granting and joint-decryption capabilities. The paper also conducts a comprehensive examination of the algorithm's robustness using metrics such as image correlation, information entropy, and number of pixel change rate (NPCR), confirming its high level of security. Overall, the algorithm can be applied in multi-user, multi-level cloud service environments to realize the secure storage of carrier images and secret data.
Keywords: ciphertext-policy attribute-based encryption; complex data access structure; reversible data hiding; large embedding space
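The baseline MED predictor that the paper's AMED extends is a standard, easily stated rule (the same predictor used in JPEG-LS). A sketch of that baseline; the paper's adaptive dynamic variables are not reproduced here:

```python
def med_predict(a, b, c):
    """Median edge detector (MED) predictor.
    a = left neighbor, b = upper neighbor, c = upper-left neighbor."""
    if c >= max(a, b):
        return min(a, b)  # likely edge: predict with the smaller neighbor
    if c <= min(a, b):
        return max(a, b)  # likely edge: predict with the larger neighbor
    return a + b - c      # smooth region: planar prediction

print(med_predict(100, 100, 100))  # 100 (flat area)
print(med_predict(50, 120, 130))   # 50  (edge: c above both neighbors)
print(med_predict(10, 20, 12))     # 18 = 10 + 20 - 12 (planar prediction)
```

Small prediction errors are what create the embedding room in prediction-error-based RDH schemes, so sharpening this predictor directly raises capacity.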
14. The GeoDatabase Data Model and the Application of Its Geometric Network in Topological Analysis [Cited by 17]
Authors: 邵永社 (Shao Yong-She), 李晶 (Li Jing). Engineering of Surveying and Mapping (测绘工程) (CSCD), 2005, No. 1, pp. 17-19 (3 pages).
This paper describes the GeoDatabase data model of the ArcInfo software and its characteristics, introduces the geometric network of the GeoDatabase data model, and, based on the characteristics of the model's geometric network, analyzes the application of geometric networks in topological analysis within geographic information systems.
Keywords: geometric network; data model; ArcInfo software; topological analysis; geographic information system
15. Interference in Complex CDMA-OFDM/OQAM for Better Performance at Low SNR
Author: Chrislin Martial Lélé. International Journal of Communications, Network and System Sciences, 2024, No. 8, pp. 113-128 (16 pages).
This article concerns orthogonal frequency-division multiplexing with offset quadrature amplitude modulation (OFDM/OQAM) combined with code division multiple access (CDMA) for complex data transmission. It presents a method that uses two interfering subsets to improve the performance of the transmission scheme. The idea is to spread some data coherently across two different codes belonging to the two different subsets involved in complex CDMA-OFDM/OQAM. This raises the useful signal level at the receiving side and therefore improves the decoding process, especially at low signal-to-noise ratio. However, the procedure introduces some interference with other codes, creating a certain noise that becomes noticeable at high signal-to-noise ratio.
Keywords: CDMA; OFDM/OQAM; complex data
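The core mechanism, spreading complex symbols over orthogonal codes and recovering them by correlation, can be sketched with Walsh-Hadamard codes. The code assignment and symbols below are illustrative, and the article's subset-interference scheme itself is not modeled:

```python
import numpy as np

# Rows of a 4x4 Hadamard matrix serve as mutually orthogonal spreading codes.
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]])

s1, s2 = 1 + 2j, -1 + 0.5j   # complex data symbols for two users/codes
tx = s1 * H[1] + s2 * H[2]   # superposed chips on the shared channel

# Despreading: correlate with each code and divide by the code length.
r1 = tx @ H[1] / 4
r2 = tx @ H[2] / 4
print(r1, r2)  # each symbol is recovered exactly thanks to orthogonality
```

Spreading the same symbol over codes from *both* subsets, as the article proposes, would add the two correlation outputs coherently, boosting the useful term at low SNR at the cost of controlled cross-code interference.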
16. Hold the Drones: Fostering the Development of Big Data Paradigms through Regulatory Frameworks [Cited by 1]
Authors: Robert Spousta, Steve Chan. Journal of Communication and Computer (通讯和计算机, Chinese-English edition), 2015, No. 3, pp. 135-145 (11 pages).
Keywords: unmanned aircraft systems; big data paradigms; regulatory frameworks; regulation; aircraft systems; historical lessons; unmanned aerial vehicles
17. Empirical Topological Investigation of Practical Supply Chains Based on Complex Networks
Authors: 廖好 (Liao Hao), 沈婧 (Shen Jing), 吴兴桐 (Wu Xing-Tong), 陈博奎 (Chen Bo-Kui), 周明洋 (Zhou Ming-Yang). Chinese Physics B (SCIE, EI, CAS, CSCD), 2017, No. 11, pp. 144-150 (7 pages).
Industrial supply chain networks capture the circulation of social resources, dominating the stability and efficiency of the industrial system. In this paper, we provide an empirical study of the topology of a smartphone supply chain network, constructed using open online data. Our experimental results show that the smartphone supply chain network has the small-world feature with a scale-free degree distribution, in which a few high-degree nodes play a key role in its function and can effectively reduce communication cost. We also detect the community structure to find the basic functional units. The findings show that information communication between nodes is crucial to improving resource utilization, and that attention should be paid to global resource configuration in such electronic production management.
Keywords: China; supply chain networks; complex networks; data science; network science
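The topology metrics behind claims like these (hub degree, clustering, short path lengths) can be computed on a toy graph with nothing but BFS. The firms and edges below are invented placeholders, not the paper's dataset:

```python
from collections import deque

# Toy supplier relationships, flattened to an undirected graph.
edges = [("AppleInc", "Foxconn"), ("Foxconn", "TSMC"), ("Foxconn", "Sharp"),
         ("TSMC", "ARM"), ("Sharp", "TSMC"), ("AppleInc", "TSMC"),
         ("ARM", "Qualcomm"), ("Qualcomm", "TSMC")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def clustering(node):
    """Fraction of a node's neighbor pairs that are themselves linked."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2 * links / (k * (k - 1))

def avg_path_length():
    """Mean shortest-path length over all reachable ordered pairs (BFS)."""
    total, pairs = 0, 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for n, d in dist.items() if n != s)
        pairs += len(dist) - 1
    return total / pairs

hub = max(adj, key=lambda n: len(adj[n]))
print(hub, round(clustering(hub), 3), round(avg_path_length(), 3))
```

A short average path length combined with clustering well above a random graph's is the usual operational test for the small-world feature reported in the abstract.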
18. Correlation between Mortality of Prehospital Trauma Patients and Their Heart Rate Complexity
Authors: Gholamhussian Erjaee, Ali Foroutan, Sara Keshtkar, Pegah ShojaMozafar, Alham Benabas. International Journal of Clinical Medicine, 2012, No. 7, pp. 569-574 (6 pages).
Recently, nonlinear analysis of the R-to-R interval (RRI) of the heart rate has attracted research attention in medicine as a way to improve the predictive accuracy of treatment for severely injured patients. It appears that conventional vital-sign information used to identify critically injured patients, such as heart rate and blood pressure, may eventually be replaced in trauma centers by heart rate complexity (HRC) analysis of the patients' electrocardiogram (ECG). Different nonlinear analysis tools, such as power spectra, entropy, fractal dimension, and the auto-correlation function, have been adapted for this complexity analysis of the ECG signal. Reidbord and Redington [1] gave one of the early reports on applications of nonlinear analysis to heart physiology. Moody and his colleagues confidently predicted survival in heart-failure cases using fully automated methods for deriving nonlinear and conventional indices of heart rate dynamics [2]. Further studies were reported for cases of arrhythmia or general anesthesia by Pomfrett [3], Fortrat [4], Lass [5] and references therein. Recently, noteworthy work by Batchinsky and coworkers has shown that prehospital loss of RRI complexity is associated with mortality in trauma patients [6-8]. They have also shown that prediction of trauma survival by analysis of heart rate complexity remains possible when the data set size is reduced from 800-beat to 200-beat or smaller sets. In this article, we use different nonlinear data analysis tools, such as the power spectrum, entropy, Lyapunov exponent, capacity dimension, and correlation function, to analyze HRC as a sensitive indicator of physiologic deterioration. In these analyses, we use real data from 270-beat sections of ECG from 45 emergency patients brought to the Shiraz Rejaee Hospital trauma center prior to any medication. As we show, some manipulation of the raw data provides more informative vital signs in our nonlinear analyses.
Keywords: nonlinear analysis of complex data; electrocardiography; trauma patients
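One widely used complexity measure in this line of work is sample entropy. A compact sketch on synthetic RRI-like series; the values are illustrative, not clinical data, and the authors' exact tool set may differ:

```python
import math
from random import Random

def sample_entropy(series, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts pairs of length-m templates
    within tolerance r (Chebyshev distance) and A does the same for length
    m+1; self-matches are excluded. Lower values mean a more regular signal."""
    def matches(mm):
        templates = [series[i:i + mm] for i in range(len(series) - mm + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(x - y) for x, y in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits
    b, a = matches(m), matches(m + 1)
    return math.inf if a == 0 or b == 0 else -math.log(a / b)

regular = [0.8, 1.0] * 20                              # strictly periodic "RRI"
rng = Random(7)
irregular = [0.8 + 0.4 * rng.random() for _ in range(40)]  # noisy "RRI"
print(sample_entropy(regular), sample_entropy(irregular))
```

A periodic series scores lower than an irregular one, which is the direction of the clinical finding: loss of RRI complexity (entropy collapsing toward regularity) flags deterioration.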
19. Product Data Model for Performance-driven Design [Cited by 2]
Authors: Guang-Zhong Hu, Xin-Jian Xu, Shou-Ne Xiao, Guang-Wu Yang, Fan Pu. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2017, No. 5, pp. 1112-1122 (11 pages).
When designing large, complex machinery products, the design focus is always on overall performance; however, there has been no design theory or method driven by performance. In view of this deficiency in existing design theory, and according to the performance features of complex mechanical products, performance indices are introduced into the traditional "Requirement-Function-Structure" design theory to construct a new five-domain design theory of "Client Requirement-Function-Performance-Structure-Design Parameter". To support design practice based on this new theory, a product data model is established using performance indices and the mapping relationships between them and the other four domains. By applying the product data model to high-speed train design, and combining existing research results and relevant standards, the corresponding data model and its structure involving the five domains of high-speed trains are established. This can provide technical support for studying the relationships between typical performance indices and design parameters and for quickly arriving at a high-speed train scheme design. The five domains provide a reference for the design specification and evaluation criteria of high-speed trains and a new approach to the train's parameter design.
Keywords: complex product design; performance driven; data model; mapping relationship; high-speed train
20. A State of Art Analysis of Telecommunication Data by k-Means and k-Medoids Clustering Algorithms
Author: T. Velmurugan. Journal of Computer and Communications, 2018, No. 1, pp. 190-202 (13 pages).
Cluster analysis is one of the major data analysis methods widely used in many practical applications in emerging areas of data mining. A good clustering method produces high-quality clusters with high intra-cluster similarity and low inter-cluster similarity. Clustering techniques are applied in different domains to predict future trends in available data. This research work evaluates the performance of two of the most widely used partition-based clustering algorithms, k-means and k-medoids. A comparative analysis of the two algorithms is implemented, and performance is analyzed based on clustering result quality, by means of execution time and other components. Telecommunication data is the source data for this analysis: connection-oriented broadband data is given as input to assess the clustering quality of the algorithms, with the distance between server locations and their connections used for clustering. The execution time of each algorithm is analyzed and the results are compared with one another. The results of the comparison study are satisfactory for the chosen application.
Keywords: k-means algorithm; k-medoids algorithm; data clustering; time complexity; telecommunication data
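The key contrast between the two algorithms, means versus medoids as cluster centers, already shows up in one dimension. A dependency-free sketch on toy "connection distance" data with one outlier (not the paper's telecom dataset):

```python
from statistics import mean

def assign(points, centers):
    """Label each point with the index of its nearest center."""
    return [min(range(len(centers)), key=lambda k: abs(p - centers[k]))
            for p in points]

def k_means(points, centers, iters=10):
    """Lloyd's k-means on 1-D data: each center is its cluster's mean."""
    for _ in range(iters):
        labels = assign(points, centers)
        centers = [mean([p for p, l in zip(points, labels) if l == k]
                        or [centers[k]])
                   for k in range(len(centers))]
    return centers, assign(points, centers)

def k_medoids(points, centers, iters=10):
    """PAM-style k-medoids: each center must be an actual data point,
    minimizing total absolute distance within its cluster."""
    for _ in range(iters):
        labels = assign(points, centers)
        centers = [min([p for p, l in zip(points, labels) if l == k]
                       or [centers[k]],
                       key=lambda c: sum(abs(c - p)
                                         for p, l in zip(points, labels)
                                         if l == k))
                   for k in range(len(centers))]
    return centers, assign(points, centers)

data = [1.0, 1.2, 1.1, 5.0, 5.2, 100.0]
means, _ = k_means(data, [1.0, 5.0])
medoids, _ = k_medoids(data, [1.0, 5.0])
print(means, medoids)  # the outlier captures a k-means center outright;
                       # the k-medoids centers stay on real observations
```

The outlier's pull on the mean (but not the medoid) is the usual reason k-medoids is preferred on noisy distance data, at the cost of the heavier per-iteration search the paper's timing comparison measures.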