The public has shown great interest in the data factor and data transactions, but current attention is overly focused on personal behavioral data and on transactions happening at Data Exchanges. To deliver a complete picture of data flow and transaction, this paper presents a systematic overview of the flow and transaction of personal, corporate, and public data on the basis of data factor classification from various perspectives. Drawing on various sources of information, this paper estimates the volume of data generation and storage and the volume and trend of data market transactions for the world's major economies, with the following findings: (i) Data classification is diverse owing to the broad variety of application scenarios, and data transaction and profit distribution are complex owing to the heterogeneous entities, ownerships, information density, and other attributes of different data types. (ii) Global data transactions exhibit the characteristics of productization, servitization, and platform-based operation. (iii) Across major economies there is a commonly observed disequilibrium between data generation scale and storage scale, which is particularly striking for China. (iv) The global data market is in a nascent stage of rapid development, with a transaction volume of about 100 billion US dollars; China's data market is even more underdeveloped and accounts for only some 10% of the world total. All sectors of society should be fully aware of the diversity and complexity of data factor classification and data transactions, as well as the arduous and long-term nature of developing and improving the relevant institutional systems. Adapting to these features, efforts should be made to improve data classification, enhance computing infrastructure development, foster professional data transaction and development institutions, and perfect the data governance system.
The low-pass filtering effect of the Earth results in the absorption and attenuation of the high-frequency components of seismic signals by the stratum during propagation; hence, seismic data have low resolution. Considering the limitations of traditional high-frequency compensation methods, this paper presents a new method based on the adaptive generalized S transform. The method builds on a study of the frequency-spectrum attenuation law of seismic signals: the Gaussian window function of the adaptive generalized S transform is fitted to the attenuation trend of the seismic signal to find the optimal Gaussian window function. The amplitude-spectrum compensation function constructed from the optimal Gaussian window function is then used to modify the time-frequency spectrum of the adaptive generalized S transform of the seismic signal, and the signal is reconstructed to compensate for high-frequency attenuation. Practical data processing results show that the method can compensate for the high-frequency components absorbed and attenuated by the stratum, thereby effectively improving the resolution and quality of seismic data. Funding: the National Science and Technology Major Project of China (No. 2011ZX05024-001-03), the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2021JQ-588), and the Innovation Fund for Graduate Students of Xi'an Shiyou University (No. YCS17111017).
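As a rough illustration of the idea (not the authors' code), the sketch below implements a frequency-domain generalized S transform with an adjustable Gaussian window and a toy compensation step that fits an exponential decay to the mean amplitude spectrum and boosts the attenuated band. The window parameters lam and p, the reference frequency f_ref, and the gain cap alpha_max are illustrative assumptions.

```python
import numpy as np

def generalized_s_transform(trace, lam=1.0, p=1.0):
    # Frequency-domain generalized S transform; lam and p shape the Gaussian
    # window (lam = p = 1 recovers the standard S transform). Returns an
    # (n//2, n) matrix of positive-frequency "voices" for an even-length trace.
    n = len(trace)
    spec = np.fft.fft(trace)
    m = np.fft.fftfreq(n) * n                      # signed frequency index
    voices = np.zeros((n // 2, n), dtype=complex)
    for k in range(1, n // 2):                     # the DC voice is skipped
        window = np.exp(-2.0 * np.pi ** 2 * lam ** 2 * m ** 2 / k ** (2 * p))
        voices[k] = np.fft.ifft(np.roll(spec, -k) * window)
    return voices

def compensate_high_frequency(trace, dt, f_ref=20.0, alpha_max=3.0):
    # Toy compensation: fit an exponential decay to the mean amplitude spectrum
    # above f_ref (Hz) and boost the attenuated band by the inverse decay,
    # capped at alpha_max; both parameters are illustrative.
    n = len(trace)
    voices = generalized_s_transform(trace)
    freqs = np.arange(n // 2) / (n * dt)
    amp = np.abs(voices).mean(axis=1) + 1e-12
    band = freqs > f_ref
    slope, _ = np.polyfit(freqs[band], np.log(amp[band]), 1)   # slope < 0 for decay
    gain = np.ones(n // 2)
    gain[band] = np.minimum(np.exp(-slope * (freqs[band] - f_ref)), alpha_max)
    # Summing a voice over time recovers the corresponding Fourier coefficient,
    # so the compensated spectrum can be rebuilt and inverse-transformed.
    spec = np.fft.fft(trace)
    half = voices.sum(axis=1) * gain
    half[0] = spec[0]                               # keep the original DC term
    full = np.concatenate([half, [spec[n // 2]]])   # append the Nyquist bin
    return np.fft.irfft(full, n)
```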
Blockchain is a viable solution for providing data integrity for the enormous volume of 5G IoT social data, but the throughput bottleneck of blockchain must be overcome. Sharding is a promising technology for solving the problem of low throughput in blockchains; however, cross-shard communication hinders the effective improvement of blockchain throughput, so it is critical to allocate transactions to shards reasonably. Existing research on blockchain sharding mainly focuses on shard formation, configuration, and consensus, while ignoring the negative impact of cross-shard communication on throughput. Aiming to maximize the throughput of transaction processing, this paper studies how to allocate blockchain transactions to shards. We propose an Associated Transaction assignment algorithm based on Closest Fit (ATCF). ATCF classifies associated transactions into transaction groups, which are then assigned to shards periodically in non-ascending order of transaction group size. Within each epoch, ATCF tries to select a shard that can handle all the transactions of a transaction group; if there are multiple such shards, it selects the shard whose remaining processing capacity is closest to the number of transactions in the group, and if no such shard exists, it chooses the shard with the largest remaining processing capacity. Transaction groups that cannot be completely processed within the current epoch are allocated in subsequent epochs. We prove that ATCF is a 2-approximation algorithm for the associated transaction assignment problem. Simulation results show that ATCF can effectively improve blockchain throughput and reduce the number of cross-shard transactions. Funding: the Anhui Provincial Key R&D Program of China (202004a05020040), the open project of the State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System (CEMEE2018Z0102B), and the open fund of the Intelligent Interconnected Systems Laboratory of Anhui Province (PA2021AKSK0114), Hefei University of Technology.
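A minimal sketch of the Closest Fit rule described above, assuming hypothetical inputs (named transaction groups, a shard count, and a per-epoch processing capacity); the real ATCF also reasons about cross-shard costs within groups, which is not modeled here.

```python
from typing import Dict, List

def atcf_assign(group_sizes: Dict[str, int], num_shards: int, capacity: int) -> List[Dict[str, int]]:
    # Closest-Fit sketch: every epoch each shard can process `capacity`
    # transactions; groups are placed largest-first, preferring the shard whose
    # spare capacity is closest to (but not below) the group size. Returns one
    # {group: shard} plan per epoch; unfinished groups spill into later epochs.
    assert num_shards > 0 and capacity > 0
    remaining = {g: s for g, s in group_sizes.items() if s > 0}
    schedule: List[Dict[str, int]] = []
    while remaining:
        free = [capacity] * num_shards
        plan: Dict[str, int] = {}
        for group, size in sorted(remaining.items(), key=lambda kv: -kv[1]):
            fitting = [s for s in range(num_shards) if free[s] >= size]
            if fitting:                                # closest fit among shards that can take it all
                shard = min(fitting, key=lambda s: free[s] - size)
            else:                                      # otherwise the shard with the most room
                shard = max(range(num_shards), key=lambda s: free[s])
            placed = min(size, free[shard])
            if placed == 0:
                continue                               # every shard is full; retry next epoch
            free[shard] -= placed
            plan[group] = shard
            if placed == size:
                del remaining[group]
            else:
                remaining[group] = size - placed       # leftover handled in a later epoch
        schedule.append(plan)
    return schedule
```

For instance, atcf_assign({"g1": 50, "g2": 30, "g3": 30}, num_shards=2, capacity=60) finishes in one epoch, placing g1 on one shard and g2 and g3 together on the other.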
To address private data management problems and realize privacy-preserving data sharing, a blockchain-based transaction system named Ecare, featuring information transparency, fairness, and scalability, is proposed. The system formulates multiple private data access control strategies and realizes data trading and sharing through on-chain transactions, which makes transaction records transparent and immutable. In the system, private data are encrypted, and a role-based account model ensures that access to the data requires the owner's authorization. Moreover, a new consensus protocol named Proof of Transactions (PoT) is proposed to improve consensus efficiency. The value of Ecare lies not only in aggregating telemedicine, data transactions, and other features, but also in translating these actions into transaction events stored on the blockchain, making them transparent and immutable to all participants. The proposed system can be extended to more general big data privacy protection and data transaction scenarios. Funding: the National Key R&D Program of China (No. 2018YFB1700100) and the National Natural Science Foundation of China (No. 61873317).
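The abstract does not spell out the authorization mechanism; the minimal sketch below only illustrates the owner-authorized access pattern, with an HMAC standing in for whatever on-chain signature scheme Ecare actually uses, and with illustrative function names.

```python
import hashlib
import hmac
import json

def grant_access(owner_key: bytes, data_id: str, grantee: str) -> str:
    # Owner-issued authorization token for one (data_id, grantee) pair; the HMAC
    # is a stand-in for the on-chain signature of the role-based account model.
    msg = json.dumps({"data": data_id, "to": grantee}, sort_keys=True).encode()
    return hmac.new(owner_key, msg, hashlib.sha256).hexdigest()

def check_access(owner_key: bytes, data_id: str, grantee: str, token: str) -> bool:
    # The encrypted record is released only if the requester presents a token
    # that the data owner actually issued for this record.
    return hmac.compare_digest(grant_access(owner_key, data_id, grantee), token)
```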
As more and more application systems related to big data are developed, NoSQL (Not Only SQL) database systems are becoming increasingly popular. Many scholars have tried different techniques to add transaction features to NoSQL database systems; unfortunately, there is a lack of research on transactions in Redis in the existing literature. This paper proposes a transaction model for key-value NoSQL databases, including Redis, that allows users to access data in the ACID (Atomicity, Consistency, Isolation, and Durability) manner; the model is vividly called the surfing concurrence transaction model. Its architecture, important features, and implementation principle are described in detail, the key algorithms are given as pseudocode, and the performance is evaluated. With the proposed model, transactions in key-value NoSQL databases can be performed without locks and without MVCC (Multi-Version Concurrency Control). This work fills a gap overlooked in the field and contributes to the further development of NoSQL technology.
As the crypto-asset ecosystem matures, the use of high-frequency data has become increasingly common in the decentralized finance literature. Using bibliometric analysis, we characterize the existing cryptocurrency literature that employs high-frequency data. We highlight the most influential authors, articles, and journals based on 189 articles from the Scopus database from 2015 to 2022. This approach enables us to identify emerging trends and research hotspots with the aid of co-citation and cartographic analyses, and co-authorship analysis shows knowledge expansion through authors' collaboration in cryptocurrency research. We identify four major streams of research: (i) return prediction and measurement of cryptocurrency volatility, (ii) (in)efficiency of cryptocurrencies, (iii) price dynamics and bubbles in cryptocurrencies, and (iv) the diversification, safe haven, and hedging properties of Bitcoin. We conclude that the investment features and economic outcomes of highly traded cryptocurrencies are analyzed predominantly on a tick-by-tick basis. The study also provides recommendations for future research.
Modeling volatility and correlation is important for calculating hedge ratios, value-at-risk estimates, CAPM (Capital Asset Pricing Model) betas, derivative prices, and for risk management in general. Recent access to intra-daily high-frequency data for two of the most liquid contracts at the Nord Pool exchange has made it possible to apply new and promising methods for analyzing volatility and correlation. The concepts of realized volatility and realized correlation are applied, and this study statistically describes the distribution (both distributional properties and temporal dependencies) of electricity forward data from 2005 to 2009. The main findings are that logarithmic realized volatility is approximately normally distributed, whereas realized correlation seems not to be. Further, realized volatility and realized correlation exhibit long memory. There also appears to be a high correlation between realized correlation and volatilities, and positive relations between trading volume and realized volatility and between trading volume and realized correlation. These results are largely consistent with earlier studies of stylized facts in other financial and commodity markets.
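For reference, the standard estimators behind these concepts are simple to state; the sketch below computes daily realized volatility and realized correlation from intraday price series (a generic definition under the usual assumption of a common sampling grid, not the study's code).

```python
import numpy as np

def realized_volatility(prices: np.ndarray) -> float:
    # Daily realized volatility: square root of the sum of squared
    # intraday log returns.
    r = np.diff(np.log(prices))
    return float(np.sqrt(np.sum(r ** 2)))

def realized_correlation(prices_a: np.ndarray, prices_b: np.ndarray) -> float:
    # Realized correlation: realized covariance normalized by the two
    # realized volatilities; both series sampled on the same intraday grid.
    ra, rb = np.diff(np.log(prices_a)), np.diff(np.log(prices_b))
    cov = np.sum(ra * rb)
    return float(cov / (np.sqrt(np.sum(ra ** 2)) * np.sqrt(np.sum(rb ** 2))))
```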
High-frequency surface wave radar (HFSWR) and the automatic identification system (AIS) are the two most important sensors used for vessel tracking. HFSWR can track all vessels in a detection area, while AIS is usually used to verify the information of cooperative vessels. Because of interference from sea clutter, single-frequency HFSWR tracking may miss vessels located in the blind zones around the Bragg peaks; analyzing changes across detection frequencies is an effective way to address this deficiency. A vessel fusion tracking solution is therefore proposed using dual-frequency HFSWR data calibrated by AIS. Since different systematic biases exist between the HFSWR measurements at each frequency and the AIS measurements, AIS information is used to estimate and correct the HFSWR systematic biases at each frequency. First, AIS point measurements of cooperative vessels are associated with the HFSWR measurements using a JVC assignment algorithm. From the association results for the cooperative vessels, the systematic biases in the dual-frequency HFSWR data are estimated and corrected. Then, based on the corrected dual-frequency HFSWR data, the vessels are tracked using a dual-frequency fusion joint probabilistic data association (JPDA)-unscented Kalman filter (UKF) algorithm. Experimental results using real-life detection data show that the proposed method can track vessels efficiently in real time and improves tracking capability and accuracy compared with single-frequency tracking. Funding: the National Natural Science Foundation of China under contract No. 61362002 and the Marine Scientific Research Special Funds for Public Welfare of China under contract No. 201505002.
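As an illustration of the bias-calibration step (not the authors' implementation), the sketch below associates HFSWR detections with AIS reports using a Hungarian-style assignment as a stand-in for the JVC algorithm and estimates the systematic bias as the mean residual of the associated pairs; the association gate is an assumed threshold.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def estimate_systematic_bias(hfswr: np.ndarray, ais: np.ndarray, gate: float = 2.0) -> np.ndarray:
    # hfswr: (N, d) radar measurements at one frequency, ais: (M, d) AIS
    # reports for cooperative vessels, columns e.g. [range, azimuth].
    # A Hungarian-style assignment stands in for the JVC algorithm.
    cost = np.linalg.norm(hfswr[:, None, :] - ais[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    keep = cost[rows, cols] < gate                   # discard implausible pairings
    residuals = hfswr[rows[keep]] - ais[cols[keep]]
    return residuals.mean(axis=0)                    # per-component systematic bias

# corrected = hfswr - estimate_systematic_bias(hfswr, ais)
```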
Cryptocurrency, as a typical application of blockchain, has attracted broad interest from both the industrial and academic communities. With its rapid development, cryptocurrency transaction network embedding (CTNE) has become a hot topic. CTNE embeds transaction nodes into a low-dimensional feature space while effectively maintaining the network structure, thereby discovering patterns that reveal involved users' normal and abnormal behaviors. Based on a wide investigation into the state of the art of CTNE, this survey makes the following efforts: 1) categorizing recent progress in CTNE methods, 2) summarizing the publicly available cryptocurrency transaction network datasets, 3) evaluating several widely adopted methods to show their performance under several typical evaluation protocols, and 4) discussing future trends of CTNE. By doing so, it strives to provide a systematic and comprehensive overview of existing CTNE methods from static to dynamic perspectives, thereby promoting further research into this emerging and important field. Funding: supported in part by the National Natural Science Foundation of China (62272078), the CAAI-Huawei MindSpore Open Fund (CAAIXSJLJJ-2021-035A), and the Doctoral Student Talent Training Program of Chongqing University of Posts and Telecommunications (BYJS202009).
DNS (domain name system) query log analysis has been a popular research topic in recent years. CLOPE, a representative transactional clustering algorithm, can readily be used for DNS query log mining; however, it is inefficient when processing large-scale data. The MR-CLOPE algorithm is proposed as an extension and improvement of CLOPE based on MapReduce. Unlike previous parallel clustering methods, a two-stage MapReduce implementation framework is proposed, with each stage implemented as one kind of MapReduce task. In the first stage, the DNS query logs are divided into multiple splits and the CLOPE algorithm is executed on each split. The second stage usually iterates many times to merge the small clusters into larger, satisfactory ones. Across these two stages, a novel partition process is designed to randomly spread out the original sub-clusters, which are then moved and merged in the map phase of the second stage according to the defined merge criteria. In this way, the advantages of the original CLOPE algorithm are kept while its disadvantages are addressed, achieving better clustering performance. The experimental results show that MR-CLOPE is not only faster but also achieves better clustering quality than CLOPE on DNS query logs. Funding: Project 61103046, supported in part by the National Natural Science Foundation of China; Project B201312, supported by the DHU Distinguished Young Professor Program, China; Project LY14F020007, supported by the Zhejiang Provincial Natural Science Funds of China; and Project 2014A610072, supported by the Natural Science Foundation of Ningbo City, China.
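For context, CLOPE places each transaction in the cluster that maximizes a histogram-based profit gain; the sketch below shows that core criterion in a single allocation pass over hypothetical query-log transactions. MR-CLOPE additionally runs such passes inside MapReduce splits and then merges the resulting sub-clusters, which is not shown here.

```python
from collections import Counter
from typing import Dict, List

def delta_add(cluster: Dict, txn: List[str], r: float) -> float:
    # CLOPE's profit gain from adding `txn` to `cluster`, where the cluster
    # keeps occ (item -> count), s (total item occurrences), w (distinct
    # items) and n (number of transactions); r is the repulsion parameter.
    s_new = cluster["s"] + len(txn)
    w_new = len(set(cluster["occ"]) | set(txn))
    old = cluster["s"] * cluster["n"] / (cluster["w"] ** r) if cluster["n"] else 0.0
    return s_new * (cluster["n"] + 1) / (w_new ** r) - old

def clope_pass(txns: List[List[str]], r: float = 2.0) -> List[Dict]:
    # One allocation pass: each transaction goes to the existing cluster
    # (or a fresh one) that maximizes the profit gain.
    clusters: List[Dict] = []
    for txn in txns:
        empty = {"occ": Counter(), "s": 0, "w": 0, "n": 0}
        best = max(clusters + [empty], key=lambda c: delta_add(c, txn, r))
        if best is empty:
            clusters.append(empty)
        best["occ"].update(txn)
        best["s"] += len(txn)
        best["w"] = len(best["occ"])
        best["n"] += 1
    return clusters
```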
This paper deals with the security of stock market transactions within financial markets, particularly that of the West African Economic and Monetary Union (UEMOA). Because the confidentiality and integrity of sensitive stock market data are crucial, implementing robust systems that guarantee trust between the different actors is essential. After analyzing the limits of several security approaches in the literature, we therefore propose an architecture based on blockchain technology that makes it possible both to identify and to reduce the vulnerabilities linked to the design, implementation, or use of the web applications used for transactions. Our proposal strengthens the security of investors' accounts through two-factor authentication via the blockchain and automatically records transactions in the blockchain while guaranteeing the integrity of stock market operations; it also provides an application vulnerability report. To validate the approach, we compared our results with those of three other security tools across different metrics, and our approach achieved the best performance in each case.
In the contemporary era, characterized by the Internet and digitalization as fundamental features, the operation and application of digital currency have gradually developed into a comprehensive structural system. This system restores the essential characteristics of currency while providing auxiliary services related to the formation, circulation, storage, application, and promotion of digital currency. Compared with traditional currency management technologies, the big data analysis technology embedded in digital currency systems enables the rapid acquisition of information, facilitates the identification of standard associations within currency data, and provides technical support for the operational framework of digital currency.
Imaging offset VSP data in a local phase space can improve the image of the subsurface structure near the well. This paper presents a migration scheme for imaging VSP data in a local phase space, which uses the Gabor-Daubechies tight-frame-based extrapolator (G-D extrapolator) and its high-frequency asymptotic expansion to extrapolate wavefields, and formulates an improved correlation imaging condition in the local angle domain. The results of migrating synthetic and real VSP data demonstrate that applying the high-frequency asymptotic expansion of the G-D extrapolator effectively decreases computational complexity, and that the local-angle-domain correlation imaging condition can weaken migration artifacts without increasing computation. Funding: the National Hi-Tech Research and Development Program of China (Grant No. 2006AA09A102-11) and the National Natural Science Fund of China (Grant Nos. 40730424 and 40674064).
Accurate prediction of natural gas well production data is crucial for effective resource management and innovation, particularly amid the global transition to sustainable energy. Traditional models struggle with the high-frequency, high-dimensional datasets generated by digital transformation in the oil and gas industry. This study explores four Transformer-based models for forecasting high-frequency natural gas production data: Transformer, Informer, Autoformer, and the Patch Time Series Transformer (PatchTST). These models use self-attention mechanisms to capture long-term dependencies and to process large-scale datasets efficiently. Autoformer achieves predictive success through its seasonal decomposition attention mechanism, which effectively extracts trend-seasonality patterns; however, our experiments show that Autoformer is sensitive to dataset changes, as its performance declines when old parameters are reused instead of retraining, highlighting its reliance on dataset-specific retraining. Experimental results demonstrate that increasing the sampling frequency significantly enhances prediction accuracy, reducing MAPE from 0.556 to 0.239, and the models consistently track actual production trends across extended forecast horizons. Notably, PatchTST maintains stable performance with either pretrained or retrained parameters, showing superior adaptability and generalization, which makes it particularly suitable for real-world applications where frequent retraining may not be feasible. Overall, the findings validate the applicability of Transformer-based models, particularly PatchTST, for dynamic and precise natural gas production forecasting, and the study provides insights for advancing adaptive, data-driven resource management strategies. Funding: supported by the National Natural Science Foundation of China (Grant Nos. 52376159 and 52474064) and the Frontier Interdisciplinary Exploration Research Program of China University of Petroleum, Beijing (Grant No. 2462024XKQY005).
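To make two of the quantitative ingredients concrete, the sketch below shows PatchTST-style patching of a production series and the MAPE metric quoted above; the patch length and stride are common defaults, not the settings used in the study.

```python
import numpy as np

def to_patches(series: np.ndarray, patch_len: int = 16, stride: int = 8) -> np.ndarray:
    # Split a 1-D production series into overlapping patches, the input
    # tokenization used by PatchTST; patch_len/stride here are typical defaults.
    starts = range(0, len(series) - patch_len + 1, stride)
    return np.stack([series[s:s + patch_len] for s in starts])

def mape(actual: np.ndarray, forecast: np.ndarray) -> float:
    # Mean absolute percentage error, the accuracy metric quoted above
    # (assumes no zero values in `actual`).
    return float(np.mean(np.abs((actual - forecast) / actual)))
```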
The smart grid is an evolving critical infrastructure that combines renewable energy with advanced information and communication technologies to provide more economical and secure power supply services. To cope with the intermittency of ever-increasing renewable energy and to ensure the security of the smart grid, state estimation, which serves as a basic tool for understanding the true states of a smart grid, should be performed at high frequency, and more complete system state data are needed to support it. The data completeness problem for smart grid state estimation is therefore studied in this paper. The problem of improving data completeness by recovering high-frequency data from low-frequency data is formulated as a super resolution perception (SRP) problem, and a novel machine-learning-based SRP approach is proposed. The proposed method, the Super Resolution Perception Net for State Estimation (SRPNSE), consists of three steps: feature extraction, information completion, and data reconstruction. Case studies demonstrate the effectiveness and value of the proposed SRPNSE approach in recovering high-frequency data from low-frequency data for state estimation. Funding: the Training Program of the Major Research Plan of the National Natural Science Foundation of China (91746118) and the Shenzhen Municipal Science and Technology Innovation Committee Basic Research Project (JCYJ20170410172224515).
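As a hedged illustration of the three-step structure (feature extraction, information completion, data reconstruction), the following toy PyTorch model upsamples a low-frequency measurement sequence to a higher sampling rate; the layer sizes and scale factor are assumptions, not SRPNSE's actual architecture.

```python
import torch
from torch import nn

class TinySRP(nn.Module):
    # Toy stand-in for SRPNSE's three stages: extract features from the
    # low-frequency sequence, complete the information, upsample by `scale`,
    # and reconstruct the high-frequency sequence. Sizes are illustrative.
    def __init__(self, scale: int = 4, channels: int = 32):
        super().__init__()
        self.extract = nn.Sequential(nn.Conv1d(1, channels, 5, padding=2), nn.ReLU())
        self.complete = nn.Sequential(nn.Conv1d(channels, channels, 3, padding=1), nn.ReLU())
        self.upsample = nn.Upsample(scale_factor=scale, mode="linear", align_corners=False)
        self.reconstruct = nn.Conv1d(channels, 1, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, T_low) -> (batch, 1, T_low * scale)
        h = self.complete(self.extract(x))
        return self.reconstruct(self.upsample(h))

# Training would minimize e.g. nn.MSELoss() between the model output and the
# true high-frequency states on historical data.
```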