Journal Articles
947 articles found
1. Data partitioning based on sampling for power load streams
Authors: 王永利, 徐宏炳, 董逸生, 钱江波, 刘学军. Journal of Southeast University (English Edition), EI CAS, 2005, No. 3, pp. 293-298.
A novel data streams partitioning method is proposed to resolve problems of range-aggregation continuous queries over parallel streams for the power industry. The first step of this method is to sample the data in parallel, implemented as an extended reservoir-sampling algorithm. A skip factor based on the change ratio of data values is introduced to describe the distribution characteristics of data values adaptively. The second step is to partition the flux of the data streams evenly, implemented with two alternative equal-depth histogram generating algorithms that fit different cases: one for incremental maintenance based on heuristics, the other for periodic updates to generate an approximate partition vector. Experimental results on actual data show that the method is efficient, practical, and suitable for processing time-varying data streams.
Keywords: data streams; continuous queries; parallel processing; sampling; data partitioning
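A minimal sketch of the classical reservoir-sampling step used in the first stage (illustrative only; the paper's value-change-based skip factor extension is not reproduced here):

```python
import random

def reservoir_sample(stream, k, rng=random.Random(42)):
    """Keep a uniform random sample of k items from a stream of unknown length."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            # item i replaces a random reservoir slot with probability k / (i + 1)
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir
```

Each item in the stream ends up in the sample with equal probability, regardless of stream length, which is what makes the technique suitable for unbounded streams.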
2. An Improved Hilbert Curve for Parallel Spatial Data Partitioning (cited: 7)
Authors: MENG Lingkui, HUANG Changqing, ZHAO Chunyu, LIN Zhiyong. Geo-Spatial Information Science, 2007, No. 4, pp. 282-286.
A novel Hilbert curve is introduced for parallel spatial data partitioning, taking into account the huge volume of spatial information and the variable-length characteristic of vector data items. Based on the improved Hilbert curve, an algorithm can be designed to achieve almost-uniform spatial data partitioning among multiple disks in parallel spatial databases. Thus, data imbalance can be largely avoided, and search and query efficiency can be enhanced.
Keywords: parallel spatial database; spatial data partitioning; data imbalance; Hilbert curve
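The Hilbert-curve idea can be illustrated with the textbook curve (not the paper's improved variant): order spatial items by their Hilbert index, then cut the ordering into runs of roughly equal total size to accommodate variable-length records.

```python
def xy2d(n, x, y):
    """Map grid cell (x, y) to its index along an n-by-n Hilbert curve
    (n must be a power of two). Standard iterative formulation."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:  # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def partition_by_hilbert(items, n, num_disks):
    """Order variable-length items by Hilbert index, then cut the ordering
    into runs of roughly equal total size (not equal item counts)."""
    ordered = sorted(items, key=lambda it: xy2d(n, it["x"], it["y"]))
    total = sum(it["size"] for it in ordered)
    target = total / num_disks
    parts, current, acc = [], [], 0.0
    for it in ordered:
        current.append(it)
        acc += it["size"]
        if acc >= target and len(parts) < num_disks - 1:
            parts.append(current)
            current, acc = [], 0.0
    parts.append(current)
    return parts
```

Because the Hilbert curve preserves spatial locality, items placed on the same disk tend to be spatial neighbors, which benefits range queries.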
3. Clustering method based on data division and partition (cited: 1)
Authors: 卢志茂, 刘晨, S. Massinanke, 张春祥, 王蕾. Journal of Central South University, SCIE EI CAS, 2014, No. 1, pp. 213-222.
Many classical clustering algorithms do good jobs on their prerequisites but do not scale well to very large data sets (VLDS). In this work, a novel division and partition clustering method (DP) was proposed to solve the problem. DP cuts the source data set into data blocks and extracts the eigenvector of each data block to form a local feature set. The local feature set is used in a second round of characteristics polymerization over the source data to find the global eigenvectors. Ultimately, the data set is assigned according to the global eigenvectors by the criterion of minimum distance. The experimental results show that the method is more robust than conventional clustering algorithms. Its insensitivity to data dimensionality, distribution, and the number of natural clusters gives it a wide range of applications in clustering VLDS.
Keywords: clustering; division; partition; very large data sets (VLDS)
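The final assignment step described above can be sketched as a nearest-representative rule; the function and field names here are illustrative, not the paper's:

```python
def assign_by_min_distance(points, global_eigenvectors):
    """Assign each point to the nearest global representative
    by (squared) Euclidean distance."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    labels = []
    for p in points:
        labels.append(min(range(len(global_eigenvectors)),
                          key=lambda k: dist2(p, global_eigenvectors[k])))
    return labels
```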
4. A rate-distortion optimized rate shaping scheme for H.264 data partitioned video bitstream
Authors: 张锦锋. High Technology Letters, EI CAS, 2009, No. 1, pp. 65-69.
To enable quality scalability and further improve reconstructed video quality in rate shaping, a rate-distortion optimized packet dropping scheme for H.264 data partitioned video bitstreams is proposed in this paper. Some side information is generated for each video bitstream in advance; while streaming, this side information is exploited by a greedy algorithm to drop partitions in a rate-distortion optimized way. Quality scalability is supported by adopting the data partition, instead of the whole frame, as the dropping unit. Simulation results show that the proposed scheme achieves a large gain in reconstructed video quality over two typical frame dropping schemes, thanks to the fine granularity of the dropping unit as well as rate-distortion optimization.
Keywords: rate shaping; frame dropping; rate-distortion optimization; data partition; H.264
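A greedy rate-distortion drop can be sketched as follows, assuming per-partition side information of (bits, distortion cost) pairs; the data layout and cost model are illustrative assumptions, not the paper's exact formulation:

```python
def rd_optimized_drop(partitions, target_bits):
    """Greedily drop the data partitions whose removal costs the least
    distortion per bit saved, until the bitstream fits the target rate."""
    total = sum(p["bits"] for p in partitions)
    # cheapest quality loss per saved bit is dropped first
    order = sorted(partitions, key=lambda p: p["distortion"] / p["bits"])
    dropped = []
    for p in order:
        if total <= target_bits:
            break
        total -= p["bits"]
        dropped.append(p["id"])
    return dropped, total
```

Dropping whole partitions rather than whole frames is what gives the scheme its finer granularity.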
5. A Partition Checkpoint Strategy Based on Data Segment Priority
Authors: LIANG Ping, LIU Yunsheng. Wuhan University Journal of Natural Sciences, CAS, 2012, No. 2, pp. 109-113.
A partition checkpoint strategy based on data segment priority is presented to meet the timing constraints of the data and transactions in embedded real-time main memory database systems (ERTMMDBS), as well as to reduce the number of transactions missing their deadlines and the recovery time. The partition checkpoint strategy takes into account the characteristics of the data and the transactions associated with it; moreover, it partitions the database according to data segment priority and sets a corresponding checkpoint frequency for each partition for independent checkpoint operation. The simulation results show that the partition checkpoint strategy decreases the ratio of transactions missing their deadlines.
Keywords: embedded real-time main memory database systems; database recovery; partition checkpoint; data segment priority
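The idea of tying checkpoint frequency to data segment priority can be sketched like this; the inverse-priority scaling rule and field names are illustrative assumptions, not the paper's formulas:

```python
def checkpoint_intervals(partitions, base_interval):
    """Give higher-priority data segments shorter checkpoint intervals,
    i.e., more frequent independent checkpoints."""
    return {p["name"]: base_interval / p["priority"] for p in partitions}
```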
6. Highly Available Hypercube Tokenized Sequential Matrix Partitioned Data Sharing in Large P2P Networks
Authors: C. G. Ravichandran, J. Lourdu Xavier. Circuits and Systems, 2016, No. 9, pp. 2109-2119.
Peer-to-peer (P2P) networking is a distributed architecture that partitions tasks or data between peer nodes. In this paper, an efficient Hypercube Sequential Matrix Partition (HS-MP) for data sharing in P2P networks using a tokenizer method is proposed to resolve the problems of larger P2P networks. The availability of data is first measured by the tokenizer using Dynamic Hypercube Organization, which efficiently coordinates and assists the peers in the P2P network, ensuring data availability at many locations. Each data item in a peer is then assigned a valid ID by the tokenizer using the Sequential Self-Organizing (SSO) ID generation model. This ensures data sharing with other nodes in a large P2P network within a minimum time interval, obtained through the proximity of data availability. To validate the HS-MP framework, performance is evaluated using traffic traces collected from data sharing applications. Simulations conducted using Network Simulator-2 show that the proposed framework outperforms conventional streaming models. The performance of the proposed system is analyzed using energy consumption, average latency, and average data availability rate with respect to the number of peer nodes, data size, amount of data shared, and execution time. The proposed method reduces energy consumption by 43.35% for transpose traffic, 35.29% for bitrev traffic, and 25% for bitcomp traffic patterns.
Keywords: Peer-to-Peer (P2P); video-on-demand; hypercube; sequential matrix partition; data mapping; data availability
7. Data-Aware Partitioning Schema in MapReduce
Authors: Liang Junjie, Liu Qiongni, Yin Li, Yu Dunhui. 《国际计算机前沿大会会议论文集》, 2015, No. 1, pp. 28-29.
Building on the advantages of the MapReduce programming model for parallel computing and for processing data and tasks on large-scale clusters, a data-aware partitioning schema in MapReduce for large-scale high-dimensional data is proposed. It optimizes the partitioning of data blocks so that each block contributes equally to the computation in MapReduce. Using a two-stage data partitioning strategy, the data are uniformly distributed into data blocks by clustering and partitioning. The experiments show that the data-aware partitioning schema is effective and extensible for improving the query efficiency of high-dimensional data.
Keywords: cloud computing; MapReduce; high-dimensional data; data-aware partitioning
8. A HEVC Video Steganalysis Algorithm Based on PU Partition Modes (cited: 3)
Authors: Zhonghao Li, Laijing Meng, Shutong Xu, Zhaohong Li, Yunqing Shi, Yuanchang Liang. Computers, Materials & Continua, SCIE EI, 2019, No. 5, pp. 563-574.
Steganalysis is a technique for detecting the existence of secret information embedded in cover media such as images and videos. Currently, with higher Internet speeds, video has become one of the main methods of transferring information. The latest video coding standard, High Efficiency Video Coding (HEVC), shows better coding performance than the earlier H.264/AVC standard; since HEVC was published, HEVC videos have therefore been widely used as carriers of hidden information. In this paper, a steganalysis algorithm is proposed to detect the latest HEVC video steganography method, which is based on the modification of Prediction Unit (PU) partition modes. To detect the embedded data, all PU partition modes are extracted from P pictures, and the probability of each PU partition mode in cover videos and stego videos is adopted as the classification feature. Furthermore, feature optimization is applied, reducing the 25-dimensional steganalysis feature to a 3-dimensional feature. A Support Vector Machine (SVM) is then used to identify stego videos. Experimental results demonstrate that the proposed steganalysis algorithm can effectively detect stego videos and achieves much higher classification accuracy than state-of-the-art work.
Keywords: video steganalysis; PU partition modes; data hiding; HEVC videos
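The classification feature described above is a first-order statistic: the empirical probability of each PU partition mode. A minimal sketch (the mode names below are illustrative, not the exact HEVC mode vocabulary used by the paper):

```python
from collections import Counter

def pu_mode_feature(pu_modes, mode_vocab):
    """Empirical probability of each PU partition mode observed in the
    P pictures of one video, in a fixed vocabulary order."""
    counts = Counter(pu_modes)
    n = len(pu_modes)
    return [counts[m] / n for m in mode_vocab]
```

Feature vectors like this, computed for cover and stego videos, would then be fed to an SVM classifier.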
9. Tailored Partitioning for Healthcare Big Data: A Novel Technique for Efficient Data Management and Hash Retrieval in RDBMS Relational Architectures
Authors: Ehsan Soltanmohammadi, Neset Hikmet, Dilek Akgun. Journal of Data Analysis and Information Processing, 2025, No. 1, pp. 46-65.
Efficient data management in healthcare is essential for providing timely and accurate patient care, yet traditional partitioning methods in relational databases often struggle with the high volume, heterogeneity, and regulatory complexity of healthcare data. This research introduces a tailored partitioning strategy leveraging the MD5 hashing algorithm to enhance data insertion, query performance, and load balancing in healthcare systems. By applying a consistent hash function to patient IDs, our approach achieves uniform distribution of records across partitions, optimizing retrieval paths and reducing access latency while ensuring data integrity and compliance. We evaluated the method through experiments focusing on partitioning efficiency, scalability, and fault tolerance. The partitioning efficiency analysis compared our MD5-based approach with standard round-robin methods, measuring insertion times, query latency, and data distribution balance. Scalability tests assessed system performance across increasing dataset sizes and varying partition counts, while fault tolerance experiments examined data integrity and retrieval performance under simulated partition failures. The experimental results demonstrate that the MD5-based partitioning strategy significantly reduces query retrieval times by optimizing data access patterns, achieving up to X% better performance compared to round-robin methods. It also scales effectively with larger datasets, maintaining low latency and ensuring robust resilience under failure scenarios. This novel approach offers a scalable, efficient, and fault-tolerant solution for healthcare systems, facilitating faster clinical decision-making and improved patient care in complex data environments.
Keywords: healthcare data partitioning; relational database management systems (RDBMS); big data management; load balance; query performance improvement; data integrity and fault tolerance; big data in healthcare; dynamic data distribution; healthcare information systems; partitioning algorithms; performance evaluation in databases
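Hashing a patient ID into a partition number, as described above, can be sketched in a few lines (the function name is illustrative; the paper's full scheme also covers insertion and retrieval paths):

```python
import hashlib

def md5_partition(patient_id, num_partitions):
    """Deterministically map a patient ID to a partition by hashing it
    with MD5 and reducing the digest modulo the partition count."""
    digest = hashlib.md5(str(patient_id).encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions
```

The same ID always lands in the same partition, and because MD5 output is effectively uniform, records spread evenly across partitions without any coordination.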
10. A Partitioning Methodology That Optimizes the Communication Cost for Reconfigurable Computing Systems
Authors: Ramzi Ayadi, Bouraoui Ouni, Abdellatif Mtibaa. International Journal of Automation and Computing, EI, 2012, No. 3, pp. 280-287.
This paper focuses on the design process for reconfigurable architectures. Our contribution is a new temporal partitioning algorithm, based on a typical mathematical flow for solving the temporal partitioning problem. The algorithm optimizes the transfer of data required between design partitions and the reconfiguration overhead. Results show that our algorithm considerably decreases communication cost and latency compared with other well-known algorithms.
Keywords: temporal partitioning; data flow graph; communication cost; reconfigurable computing systems; field-programmable gate array (FPGA)
11. A Study on Associated Rules and Fuzzy Partitions for Classification
Authors: Yeu-Shiang Huang, Jyi-Feng Yao. Intelligent Information Management, 2012, No. 5, pp. 217-224.
The amount of data available for decision making has increased tremendously in the age of the digital economy. Decision makers who fail to proficiently manipulate the data produced may make incorrect decisions and therefore harm their business. Thus, the task of efficiently and effectively extracting and classifying useful information from huge amounts of computational data is of special importance. In this paper, we consider that the attributes of data can be both crisp and fuzzy. By examining suitable partial data, segments with different classes are formed; then a multithreaded computation is performed to generate crisp rules (if possible); and finally, the fuzzy partition technique is employed to deal with the fuzzy attributes for classification. The rules generated in classifying the overall data can be used to gain more knowledge from the data collected.
Keywords: data mining; fuzzy partition; partial classification; association rule; knowledge discovery
12. Hybrid Graph Partitioning with OLB Approach in Distributed Transactions
Authors: Rajesh Bharati, Vahida Attar. Intelligent Automation & Soft Computing, SCIE, 2023, No. 7, pp. 763-775.
Online Transaction Processing (OLTP) relies on data partitioning to achieve better performance and scalability. The primary objective of database and application developers is to provide scalable and reliable database systems. This research presents a novel method of data partitioning and load balancing for scalable transactions. Data is efficiently partitioned using a hybrid graph partitioning method. An optimized load balancing (OLB) approach is applied to calculate the weight factor, average workload, and partition efficiency. The presented approach is appropriate for various online data transaction applications. The quality of the proposed approach is examined using an OLTP database benchmark. The proposed methodology significantly outperformed existing approaches on metrics such as throughput, response time, and CPU utilization.
Keywords: data partitioning; scalability; optimization; throughput
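One common way to realize a weight-factor-based load balancer is to place each incoming workload on the partition with the smallest load-to-capacity ratio. This is a sketch under that assumption; the paper's actual weight factor and efficiency formulas are not given in the abstract:

```python
def olb_assign(workloads, capacities):
    """Greedy load balancing: each workload goes to the partition whose
    weight factor (current load / capacity) is currently smallest."""
    loads = [0.0] * len(capacities)
    placement = []
    for w in workloads:
        weights = [loads[i] / capacities[i] for i in range(len(capacities))]
        best = weights.index(min(weights))
        loads[best] += w
        placement.append(best)
    return placement, loads
```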
13. An Animated GIF Steganography Using Variable Block Partition Scheme
Authors: Maram Abdullah M. Alyahya, Arshiya S. Ansari, Mohammad Sajid Mohammadi. Computer Systems Science & Engineering, SCIE EI, 2022, No. 12, pp. 897-914.
The paper presents a novel Graphics Interchange Format (GIF) steganography system. The algorithm applies a secured, variable image partition scheme for data embedding to an animated GIF video. The secret data can be any text, image, audio file, or video file, converted into bits. The proposed method uses a variable partition scheme for data embedding in the GIF video and estimates the capacity of the cover GIF image frames to embed data bits. Our method builds variable partition blocks in an empty frame separately and incorporates them into randomly selected GIF frames; in this way each GIF frame is divided into the same variable blocks as the empty frame. The algorithm then embeds the secret data at appropriate pixels of the GIF frame. Each partition block selected for data embedding can store a different number of data bits based on its block size, so intruders can never learn the exact position of the secret data in the stego frame. All GIF frames are then rebuilt to make the animated stego GIF video. The performance of the proposed GIF algorithm was tested and evaluated on different input parameters, such as Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) values. The results are compared with some existing methods, and our method shows promising results.
Keywords: GIF steganography; frame partition; variable data insertion; data encapsulation
14. An Improved FP-Growth Algorithm Based on SOM Partition
Authors: Kuikui Jia, Haibin Liu. 《国际计算机前沿大会会议论文集》, 2017, No. 1, pp. 42-44.
FP-growth is an algorithm for mining association rules without generating candidate sets and has high practical value in many fields. However, it is a memory-resident algorithm and can only handle small data sets; it is powerless when dealing with massive data sets. This paper improves the FP-growth algorithm. The core idea of the improved algorithm is to partition a massive data set into small data sets that are processed separately. First, systematic sampling is used to extract representative samples from the large data set, and these samples are used for SOM (Self-Organizing Map) cluster analysis. Then, the large data set is partitioned into several subsets according to the clustering results. Lastly, FP-growth is executed on each subset, and association rules are mined. The experimental results show that the improved algorithm reduces memory consumption and shortens data mining time, enhancing the processing capacity and efficiency on massive data.
Keywords: FP-growth; SOM; data mining; cluster partition
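The systematic-sampling first stage mentioned above can be sketched in a couple of lines (illustrative; the SOM clustering and per-subset FP-growth stages are omitted):

```python
def systematic_sample(records, sample_size):
    """Draw every k-th record so the sample covers the full data set
    evenly, where k = len(records) // sample_size."""
    step = max(1, len(records) // sample_size)
    return records[::step][:sample_size]
```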
15. LRP: learned robust data partitioning for efficient processing of large dynamic queries
Authors: Pengju LIU, Pan CAI, Kai ZHONG, Cuiping LI, Hong CHEN. Frontiers of Computer Science, 2025, No. 9, pp. 43-60.
The interconnection between query processing and data partitioning is pivotal for accelerating massive data processing during query execution, primarily by minimizing the number of scanned block files. Existing partitioning techniques predominantly focus on query accesses to numeric columns when constructing partitions, often overlooking non-numeric columns and thus limiting optimization potential. Additionally, although these techniques create fine-grained partitions from representative queries to enhance system performance, they suffer notable performance declines under unpredictable fluctuations in future queries. To tackle these issues, we introduce LRP, a learned robust partitioning system for dynamic query processing. LRP first proposes a data and query encoding method that captures comprehensive column access patterns from historical queries. It then employs Multi-Layer Perceptron and Long Short-Term Memory networks to predict shifts in the distribution of historical queries. To create high-quality, robust partitions based on these predictions, LRP adopts a greedy beam search algorithm for optimal partition division and implements a data redundancy mechanism to share frequently accessed data across partitions. Experimental evaluations reveal that LRP yields partitions with more stable performance under incoming queries and significantly surpasses state-of-the-art partitioning methods.
Keywords: data partitioning; data encoding; query prediction; beam search; data redundancy
16. Effects of dataset partitioning and preprocessing methods on near-infrared quantitative models of tobacco leaf chemical components
Authors: 付博, 杨永锋, 刘向真, 牛洋洋, 刘茂林, 赵森森, 于建军, 彭桂新, 姬小明. 《河南农业大学学报》 (Journal of Henan Agricultural University), Peking University Core, 2025, No. 3, pp. 516-527.
[Objective] To identify dataset partitioning methods, partition ratios, and data preprocessing methods suited to model construction, laying a foundation for accurate and stable analysis models of tobacco leaf chemical components. [Methods] Taking 210 tobacco leaf samples as the research object, the contents of conventional chemical components such as total sugar, reducing sugar, total nitrogen, nicotine, potassium, and chlorine were measured, and spectral data of the samples were collected. The effects of random partitioning (RS), equal-interval partitioning (LS), sample set partitioning based on joint x-y distances (SPXY), and Kennard-Stone (KS) partitioning, as well as spectral preprocessing methods and their combinations, on the prediction accuracy of partial least squares (PLS) quantitative models of conventional tobacco chemical components were studied. [Results] The calibration and prediction sets partitioned by SPXY were more evenly distributed, and models built with a prediction-set ratio of 24% had stronger predictive ability. The best preprocessing combination for the total sugar and chloride models was multiplicative scatter correction (MSC) + moving average smoothing (MA) + wavelet transform (WAVE), with prediction-set correlation coefficients (r_p) of 0.9840 and 0.9860, respectively; for reducing sugar and nicotine, it was range normalization (MAXMIN) + MSC + WAVE, with r_p of 0.9900 and 0.9852; for potassium, MSC + WAVE (r_p = 0.9694); for total nitrogen, the model built on the raw spectra had the strongest predictive ability (r_p = 0.9709). [Conclusion] After optimizing dataset partitioning and preprocessing, the accuracy of near-infrared quantitative models of conventional tobacco leaf chemical components was improved.
Keywords: tobacco leaf; near-infrared spectroscopy; dataset partitioning; data preprocessing; quantitative model
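The Kennard-Stone split compared above can be sketched in pure Python; SPXY follows the same selection loop but augments the feature-space distance with a distance on the response values y (not shown here):

```python
def kennard_stone(X, n_select):
    """Select n_select sample indices from X (a list of feature vectors)
    that spread evenly over feature space: start with the two most distant
    samples, then repeatedly add the sample farthest from its nearest
    already-selected neighbor."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    n = len(X)
    first = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                key=lambda p: dist(X[p[0]], X[p[1]]))
    selected = [first[0], first[1]]
    remaining = [i for i in range(n) if i not in selected]
    while len(selected) < n_select:
        nxt = max(remaining,
                  key=lambda i: min(dist(X[i], X[j]) for j in selected))
        selected.append(nxt)
        remaining.remove(nxt)
    return selected
```

The selected indices form the calibration set; the rest become the prediction set.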
17. Framework-attribute coupled modeling based on geological background: a case study of the Jinzhou planning area
Authors: 李旭光, 马天宇, 吴季寰, 江山, 赵岩, 于慧明, 邹君, 富建华. 《地质与勘探》 (Geology and Exploration), Peking University Core, 2025, No. 3, pp. 545-555.
Three-dimensional geological models are an indispensable visual data resource in urban space development and utilization, and developing high-precision 3D geological models that combine geological background conditions with spatial accuracy is a key breakthrough direction in digital geology. Taking the Jinzhou planning area as an example, this study builds a framework-attribute coupled modeling technique with data compilation, framework delineation, mesh generation, and attribute assignment as its basic modules. Borehole data, geological plan maps, and surface elevation serve as the model's information sources. An automatic fault splitting-aggregation algorithm is used to finely delineate fault surface geometry, and stratigraphic interfaces are generated with a deformation-field-based fault restoration method to construct a geological interface framework model. Inside the framework, grid node arrangement patterns are chosen according to the geological background of each stratum to generate truncated rectangular grids, and attribute data are upscaled to the grid nodes containing the sampling points. Variogram analysis of the existing attributes is used to characterize their distribution and to match interpolation algorithms for assigning attribute values to the grid nodes in the model space. This technique integrates and refines the hierarchical relations of multiple types of geological information and accurately reproduces stratum properties; the resulting model is accurate both in displaying the spatial intersection relations of geological bodies and in expressing the geological background.
Keywords: 3D geological model; geological background; multi-source data fusion; mesh generation; attribute interpolation; Jinzhou
18. A heterogeneity-aware stream partitioning algorithm for power-law graphs
Authors: 杨巍, 白璐, 宁俊义, 董建军, 单春海, 信俊昌. 《计算机应用》 (Journal of Computer Applications), Peking University Core, 2025, No. S1, pp. 177-182.
Graph partitioning plays a key role in the distributed processing of large-scale graph data. By balancing the workload and communication cost across nodes, graph partitioning algorithms improve the efficiency of power-law graph processing on homogeneous clusters. However, the computing and communication capabilities of nodes in a heterogeneous cluster are inconsistent: nodes take different amounts of time to process the same workload, and the slowest node becomes the system bottleneck. To solve this problem, a Streaming Heterogeneity-Aware Partitioning (SHAP) algorithm is proposed. SHAP adopts a one-pass streaming neighborhood-heuristic partitioning strategy that minimizes inter-partition graph processing time according to node performance. Through replication factor analysis, the partition quality of SHAP is shown to have a theoretical upper bound. Graph processing experiments on a heterogeneous cluster with four real-world graphs show that, compared with the high-degree replicated first (HDRF) graph partitioning algorithm, SHAP reduces graph processing time by up to 67.49%, and its replication factor can be as low as 47.06% of HDRF's.
Keywords: heterogeneous environment; graph partitioning; distributed computing; graph computation; data management
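A one-pass streaming partitioner of this flavor can be sketched as a greedy rule that trades vertex locality against capacity-normalized load, so slower nodes fill up more slowly. The scoring formula below is an illustrative assumption, not SHAP's actual heuristic:

```python
def heterogeneous_stream_partition(edges, speeds, alpha=1.0):
    """Assign each streamed edge to the partition with the best score:
    locality (endpoints already present) minus load normalized by the
    node's relative speed."""
    parts = [{"vertices": set(), "load": 0} for _ in speeds]
    placement = []
    for u, v in edges:
        def score(i):
            p = parts[i]
            locality = (u in p["vertices"]) + (v in p["vertices"])
            balance = p["load"] / speeds[i]  # slower node -> fills faster
            return locality - alpha * balance
        best = max(range(len(parts)), key=score)
        parts[best]["vertices"].update((u, v))
        parts[best]["load"] += 1
        placement.append(best)
    return placement, parts
```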
19. Predicting bike-sharing rentals and returns with a multi-channel graph aggregation attention mechanism
Authors: 王福建, 张泽天, 陈喜群, 王殿海. 《浙江大学学报(工学版)》 (Journal of Zhejiang University, Engineering Science), Peking University Core, 2025, No. 9, pp. 1986-1995.
To address the small spatial coverage, limited spatio-temporal feature capture, and limited accuracy of short-term bike-sharing rental/return prediction, a prediction method based on a multi-channel graph aggregation attention mechanism is proposed. According to bike-sharing flows in different areas, a flow-adjusted virtual station partitioning method divides the city into multiple virtual stations, and dynamic adjacency matrices are built from the inter-station rental/return matrices to form a bike-sharing graph structure. A multi-channel graph aggregation module captures station spatial information over different time periods, combined with a multi-head self-attention module that captures temporal correlations. A cross-attention module incorporates exogenous variables to capture latent relations among the different variables. Experiments in Shenzhen and New York show that, compared with other deep learning methods, the model performs significantly better across time periods and regions, maintaining stable and low prediction error, and demonstrate that the dynamic adjacency matrices and the cross-attention module fusing external features effectively improve the prediction accuracy of bike-sharing rentals and returns.
Keywords: bike sharing; multi-source data; deep learning; virtual station partitioning; dynamic adjacency matrix; cross-attention
20. Distributed storage of seismic data based on geological information encoding
Authors: 彭成. 《计算机应用与软件》 (Computer Applications and Software), Peking University Core, 2025, No. 8, pp. 55-62.
To address shortcomings in the access efficiency of existing seismic data storage, a distributed storage method based on geological information encoding is provided. Spatial information gridding divides the globe into multi-level grids; for each data item to be stored distributively, a geological information code is generated according to its geographic location. The data are then distributed across multiple machines according to their generated codes. Data retrieval is achieved by locating the specific distributed data through the spatial information grid stored on the distributed servers. By combining geological information codes with geological data, data from nearby regions are stored on the same machine, making better use of the geographic information in the data while managing it more effectively.
Keywords: distributed; seismic data; grid partitioning; spatial positioning; geological information encoding
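A multi-level grid code of the kind described above can be sketched as a quadtree-style encoding, where each digit refines the cell and nearby locations share long prefixes; routing by prefix then keeps neighboring regions on the same machine. The encoding and routing rules here are illustrative assumptions, not the paper's exact scheme:

```python
def geo_code(lat, lon, levels):
    """Quadtree-style multi-level grid code over the globe: each base-4
    digit halves the latitude and longitude ranges. Nearby locations
    share long code prefixes."""
    code = ""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    for _ in range(levels):
        lat_mid = (lat_lo + lat_hi) / 2.0
        lon_mid = (lon_lo + lon_hi) / 2.0
        quad = (2 if lat >= lat_mid else 0) + (1 if lon >= lon_mid else 0)
        code += str(quad)
        if lat >= lat_mid: lat_lo = lat_mid
        else: lat_hi = lat_mid
        if lon >= lon_mid: lon_lo = lon_mid
        else: lon_hi = lon_mid
    return code

def storage_node(code, prefix_len, num_nodes):
    """Route by code prefix so data from nearby cells lands on the same machine."""
    return int(code[:prefix_len], 4) % num_nodes
```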