The Fourth Industrial Revolution has endowed the concept of state sovereignty with new era-specific connotations, leading to the emergence of the theory of data sovereignty. While countries refine their domestic legislation to establish their data sovereignty, they are also actively negotiating cross-border data flow rules within international trade agreements to construct data sovereignty. During these negotiations, countries express differing regulatory claims, with some focusing on safeguarding sovereignty and protecting human rights, some prioritizing economic promotion and security assurance, and others targeting traditional and innovative digital trade barriers. These varied approaches reflect the tension between three pairs of values: collectivism and individualism, freedom and security, and tradition and innovation. Based on their distinct value pursuits, three representative models of data sovereignty construction have emerged globally. At the current juncture, when international rules for digital trade are still in their nascent stages, China should promptly establish its data sovereignty rules, actively participate in global data sovereignty competition, and balance its sovereignty interests with other interests. Specifically, China should explore the scope of system-acceptable digital trade barriers through free trade zones; integrate domestic and international legal frameworks to ensure that China's data governance legislation aligns with its obligations under international trade agreements; and use the development of the "Digital Silk Road" as a starting point to prioritize the formation of digital trade rules with countries participating in the Belt and Road Initiative, promoting Chinese solutions internationally.
Nowadays, data are increasingly used for intelligent modeling and prediction, and the comprehensive evaluation of data quality is receiving growing attention as a necessary means of measuring whether data are usable. However, most comprehensive evaluation methods for data quality involve the subjective judgment of the evaluator, so how to evaluate data comprehensively and objectively has become a bottleneck in research on comprehensive evaluation methods. In order to evaluate data more comprehensively, objectively and with better differentiation, a novel comprehensive evaluation method based on particle swarm optimization (PSO) and grey correlation analysis (GCA) is presented in this paper. First, an improved GCA evaluation model based on the technique for order preference by similarity to an ideal solution (TOPSIS) is proposed. Then, an objective function model of the maximum difference of the comprehensive evaluation values is built, and the PSO algorithm is used to optimize the weights of the improved GCA evaluation model based on this objective function. Finally, the performance of the proposed method is investigated through parameter analysis. A performance comparison on traffic flow data is carried out, and the simulation results show that the maximum average difference between the evaluation results and their mean value (MDR) of the proposed comprehensive evaluation method is 33.24% higher than that of TOPSIS-GCA and 6.86% higher than that of GCA. The proposed method has better differentiation than the other methods, which means that it evaluates the data objectively and comprehensively in terms of both relevance and differentiation, and its results more effectively reflect differences in data quality, providing more effective data support for intelligent modeling, prediction and other applications.
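The pipeline described in this abstract can be made concrete with a small sketch. The code below is an illustration only, not the authors' implementation: the min-max normalisation, the toy traffic-flow matrix, and names such as `evaluate` and `differentiation_objective` are assumptions. It shows a TOPSIS-style GCA score (closeness to ideal versus anti-ideal reference sequences) and the kind of spread-maximising objective a PSO run could use when tuning the indicator weights.

```python
import numpy as np

def grey_relational_coeffs(ref, matrix, rho=0.5):
    """Grey relational coefficients of every row of `matrix` against the reference sequence `ref`."""
    diff = np.abs(matrix - ref)
    return (diff.min() + rho * diff.max()) / (diff + rho * diff.max())

def evaluate(data, weights, rho=0.5):
    """TOPSIS-style GCA score: relative closeness to the ideal vs. the anti-ideal reference sequence."""
    spread = data.max(axis=0) - data.min(axis=0) + 1e-12
    norm = (data - data.min(axis=0)) / spread                 # min-max normalise each indicator
    ideal, anti = norm.max(axis=0), norm.min(axis=0)          # ideal and anti-ideal reference rows
    r_pos = grey_relational_coeffs(ideal, norm, rho) @ weights
    r_neg = grey_relational_coeffs(anti, norm, rho) @ weights
    return r_pos / (r_pos + r_neg)

def differentiation_objective(weights, data):
    """Objective a PSO run could maximise: mean absolute deviation of the scores (an MDR-like spread)."""
    w = np.abs(weights) / np.abs(weights).sum()
    scores = evaluate(data, w)
    return np.mean(np.abs(scores - scores.mean()))

# toy quality matrix: rows = traffic-flow samples, columns = quality indicators (made-up values)
data = np.array([[0.9, 0.2, 0.7],
                 [0.4, 0.8, 0.3],
                 [0.6, 0.5, 0.9]])
print(evaluate(data, np.array([1/3, 1/3, 1/3])))
print(differentiation_objective(np.array([0.5, 0.2, 0.3]), data))
```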
Characteristics of the Internet traffic data flow are studied based on chaos theory. A phase space that is isometric with the network dynamic system is reconstructed using the single-variable time series of a network flow. Some parameters, such as the correlation dimension and the Lyapunov exponent, are calculated, and the chaos characteristic is proved to exist in Internet traffic data flows. A neural network model based on radial basis functions (RBF) is constructed to forecast actual Internet traffic data flow. Simulation results show that, compared with forecasts from a feed-forward neural network, the forecast of the RBF neural network based on chaos theory has faster learning and higher forecasting accuracy.
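A minimal sketch of the phase-space reconstruction step that this kind of analysis relies on is given below. It is not the paper's code: the synthetic series stands in for measured flow volumes, the embedding dimension and delay are assumed values, and the RBF forecaster itself is replaced by a naive nearest-neighbour prediction in the reconstructed space.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens delay embedding: reconstruct a phase space from a single scalar time series.
    Row i is the state vector [x[i], x[i+tau], ..., x[i+(dim-1)*tau]]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# synthetic stand-in for a measured traffic series (the study uses recorded network flow data)
rng = np.random.default_rng(0)
t = np.arange(2000)
x = np.sin(0.07 * t) + 0.3 * np.sin(0.31 * t) + 0.05 * rng.standard_normal(t.size)

dim, tau = 5, 8                                  # embedding dimension and delay are assumed values
states = delay_embed(x, dim, tau)
print(states.shape)                              # (1968, 5) reconstructed state vectors

# naive one-step forecast: find the most similar past state and take the value that followed it
current = states[-1]
dists = np.linalg.norm(states[:-1] - current, axis=1)
nearest = int(np.argmin(dists))
forecast = x[nearest + (dim - 1) * tau + 1]
print(forecast, x[-1])                           # predicted next value vs. the latest observation
```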
Since heat flow bears valuable information on the energy balance between various processes occurring at depth, heat flow data attract increasing attention from the earth sciences community. In China, three heat flow values from the Mesozoic basin of NE China were reported in 1966, but their quality and accuracy seemed unsatisfactory. The first portion of 25 reliable heat flow data was published by
This paper proposes a method of data-flow testing for Web services composition. Firstly, to facilitate data flow analysis and constraint collection, the existing model representation of the business process execution language (BPEL) is modified along with an analysis of data dependency, and an exact representation of dead path elimination (DPE) is proposed, which overcomes the difficulties DPE brings to data flow analysis. Then, definition and use information based on data flow rules is collected by parsing BPEL and Web services description language (WSDL) documents, and the def-use annotated control flow graph is created. Based on this model, data-flow anomalies that indicate potential errors can be discovered by traversing the paths of the graph, and the all-du-paths used in dynamic data flow testing for Web services composition are generated automatically; testers can then design test cases according to the constraints collected for each selected path.
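The def-use path generation at the heart of this method can be sketched in a few lines. The fragment below is illustrative only: the node names, the variables and the edges are invented, and a real BPEL process would be parsed from WSDL/BPEL documents rather than written as a dictionary. It enumerates def-clear paths from each definition of a variable to each reachable use, which is the all-du-paths criterion the abstract refers to.

```python
# Hypothetical def/use-annotated control flow graph for one BPEL variable; node names, the
# variables and the edges are illustrative, not taken from any real process definition.
nodes = {
    "receive": ({"order"},   set()),        # (definitions, uses) at each node
    "assign":  ({"invoice"}, {"order"}),
    "invoke":  (set(),       {"invoice"}),
    "reply":   (set(),       {"order"}),
}
edges = {"receive": ["assign"], "assign": ["invoke"], "invoke": ["reply"], "reply": []}

def all_du_paths(var):
    """Enumerate def-clear paths from every definition of `var` to every reachable use."""
    results = []

    def walk(node, path, visited):
        defs, uses = nodes[node]
        if var in uses:
            results.append(path + [node])         # a use reached with no intervening redefinition
        if var in defs and path:
            return                                # redefinition kills the current definition
        for nxt in edges[node]:
            if nxt not in visited:                # keep the sketch acyclic-safe
                walk(nxt, path + [node], visited | {nxt})

    for start, (defs, _) in nodes.items():
        if var in defs:
            walk(start, [], {start})
    return results

for path in all_du_paths("order"):
    print(" -> ".join(path))                      # receive -> assign, and receive -> ... -> reply
```

Data-flow anomalies (e.g., a use with no prior definition on some path) can be flagged during the same traversal.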
With its high repeatability, the airgun source has been used to monitor temporal variations of subsurface structures. However, under different working conditions there will be subtle differences in the airgun source signals. To some extent, deconvolution can eliminate changes in the recorded signals due to source variations. Generally speaking, in order to remove the airgun source wavelet and obtain the Green's functions between the airgun source and the stations, we need to select an appropriate method to perform the deconvolution of the seismic waveform data. Frequency-domain water-level deconvolution and time-domain iterative deconvolution are two deconvolution methods widely used in the field of receiver functions, among others. We use the Binchuan (Yunnan Province, China) airgun data as an example to compare the performance of these two deconvolution methods in airgun source data processing. The results indicate that frequency-domain water-level deconvolution is better in terms of computational efficiency, while time-domain iterative deconvolution is better in terms of the signal-to-noise ratio (SNR), and the initial motion of the P-wave is also clearer. We further discuss the order of deconvolution and stacking in multiple-shot airgun data processing. Finally, we propose a general processing flow for the airgun source data to extract the Green's functions between the airgun source and the stations.
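Frequency-domain water-level deconvolution, the faster of the two methods compared here, follows a standard recipe that can be sketched compactly. The example below is a generic illustration under assumed parameters (the water level of 0.01 and the synthetic wavelet and spike train are made up), not the authors' processing code.

```python
import numpy as np

def water_level_deconvolution(record, source, level=0.01):
    """Frequency-domain water-level deconvolution: divide spectra while clipping the source
    power spectrum from below to stabilise the division (a standard receiver-function recipe)."""
    n = len(record)
    R = np.fft.rfft(record, n)
    S = np.fft.rfft(source, n)
    power = np.abs(S) ** 2
    floor = level * power.max()                  # the "water level" relative to the peak power
    denom = np.maximum(power, floor)             # clip small spectral amplitudes
    G = R * np.conj(S) / denom                   # estimate of the Green's function spectrum
    return np.fft.irfft(G, n)

# synthetic example: a two-spike Green's function convolved with an assumed airgun wavelet
rng = np.random.default_rng(1)
wavelet = np.exp(-0.5 * ((np.arange(64) - 20) / 4.0) ** 2) * np.sin(0.6 * np.arange(64))
green = np.zeros(512); green[100] = 1.0; green[230] = 0.5
record = np.convolve(green, wavelet)[:512] + 0.01 * rng.standard_normal(512)

estimate = water_level_deconvolution(record, np.pad(wavelet, (0, 512 - 64)))
print(int(np.argmax(estimate)))                  # expect a peak near sample 100 (the first spike)
```

Time-domain iterative deconvolution would instead subtract scaled, shifted copies of the wavelet one spike at a time, which is slower but typically yields a higher SNR, as the comparison above reports.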
Volatile nitrosamines (VNAs) are a group of compounds classified as probable (group 2A) and possible (group 2B) carcinogens in humans. Along with certain foods and contaminated drinking water, VNAs are detected at high levels in tobacco products and in both mainstream and side-stream smoke. Our laboratory monitors six urinary VNAs—N-nitrosodimethylamine (NDMA), N-nitrosomethylethylamine (NMEA), N-nitrosodiethylamine (NDEA), N-nitrosopiperidine (NPIP), N-nitrosopyrrolidine (NPYR), and N-nitrosomorpholine (NMOR)—using isotope dilution GC-MS/MS (QQQ) for large population studies such as the National Health and Nutrition Examination Survey (NHANES). In this paper, we report for the first time a new automated sample preparation method to quantitate these VNAs more efficiently. Automation is done using Hamilton STAR™ and Caliper Staccato™ workstations. This new automated method reduces sample preparation time from 4 hours to 2.5 hours while maintaining precision (inter-run CV < 10%) and accuracy (85% - 111%). More importantly, this method increases sample throughput while maintaining a low limit of detection (<10 pg/mL) for all analytes. A streamlined sample data flow was created in parallel to the automated method, in which samples can be tracked from receiving to final LIMS output with minimal human intervention, further minimizing human error in the sample preparation process. This new automated method and the sample data flow are currently applied in bio-monitoring of VNAs in the US non-institutionalized population in the NHANES 2013-2014 cycle.
In order to guarantee the correctness of business processes, not only control-flow errors but also data-flow errors should be considered. Work on control-flow errors mainly focuses on deadlock, livelock, soundness, and so on, whereas few methods exist for detecting data-flow errors. This paper defines Petri nets with data operations (PN-DO) that can model operations on data such as read, write and delete. Based on PN-DO, we define several data-flow errors. We construct a reachability graph with data operations for each PN-DO and then propose a method to reduce the reachability graph. Based on the reduced reachability graph, data-flow errors can be detected rapidly. A case study is given to illustrate the effectiveness of our methods.
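The detection idea can be illustrated with a toy net. The following sketch is not the paper's PN-DO formalism or reduction algorithm: the net, the (read, write, delete) annotations and the single "bad read" check are assumptions chosen to show how exploring (marking, defined-data) states exposes data-flow errors.

```python
from collections import deque

# Hypothetical PN-DO fragment: each transition carries (pre-places, post-places, reads, writes, deletes).
transitions = {
    "t1": ({"p0"}, {"p1"}, set(),       {"order"}, set()),
    "t2": ({"p1"}, {"p2"}, {"order"},   set(),     set()),
    "t3": ({"p1"}, {"p2"}, {"invoice"}, set(),     set()),      # reads data that is never written
    "t4": ({"p2"}, {"p3"}, set(),       set(),     {"order"}),
    "t5": ({"p3"}, {"p4"}, {"order"},   set(),     set()),      # reads data that was already deleted
}

def data_flow_errors(initial_marking={"p0"}):
    """Explore the reachability graph of (marking, defined-data) states and flag bad reads."""
    errors, seen = [], set()
    queue = deque([(frozenset(initial_marking), frozenset())])
    while queue:
        marking, defined = queue.popleft()
        if (marking, defined) in seen:
            continue
        seen.add((marking, defined))
        for name, (pre, post, reads, writes, deletes) in transitions.items():
            if not pre <= marking:
                continue                                   # transition not enabled
            for d in reads - defined:
                errors.append(f"{name} reads '{d}' before it is written (or after deletion)")
            new_marking = frozenset((marking - pre) | post)
            new_defined = frozenset((defined | writes) - deletes)
            queue.append((new_marking, new_defined))
    return errors

print("\n".join(data_flow_errors()))
```

The reduction proposed in the paper would shrink this state space before the check is run; the sketch explores it exhaustively for clarity.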
SOZL (structured methodology + object-oriented methodology + Z language) is a language that attempts to integrate the structured method, the object-oriented method and the formal method. The core of this language is the predicate data flow diagram (PDFD). In order to eliminate the ambiguity of predicate data flow diagrams and their associated textual specifications, a formalization of the syntax and semantics of predicate data flow diagrams is necessary. In this paper we use Z notation to define an abstract syntax and the related structural constraints for the PDFD notation, and provide it with an axiomatic semantics based on the concepts of data availability and the functionality of predicate operations. Finally, an example is given to establish functionality-consistent decomposition on hierarchical PDFDs (HPDFD).
A new synthetic knowledge representation model that integrates the attribute grammar model with the semantic network model is presented. The model mainly uses the symbols of attribute grammars to establish a set of syntax and semantic rules suitable for a semantic network. Based on the model, the paper introduces a formal method for defining data flow diagrams (DFD) and briefly explains how to use the method.
Architectures based on the data flow computing model provide an alternative to the conventional von Neumann architecture that is widely used for general-purpose computing. Processors based on the data flow architecture employ fine-grained, data-driven parallelism. These architectures have the potential to exploit the inherent parallelism in compute-intensive applications such as signal processing and image and video processing, and can thus achieve higher throughput and power efficiency. In this paper, several data flow computing architectures are explored and their main architectural features are studied. Furthermore, a classification of the processors is presented based on whether they employ the data flow execution model exclusively or in combination with the control flow model; they are accordingly grouped as exclusive data flow or hybrid architectures. The hybrid category is further subdivided into conjoint or accelerator-style architectures depending on how they deploy and separate the data flow and control flow execution models within their execution blocks. Lastly, a brief comparison and discussion of their advantages and drawbacks is given. From this study we conclude that although data flow architectures have matured significantly, issues such as data-structure handling and the lack of efficient placement and scheduling algorithms have prevented them from becoming commercially viable.
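The "data-driven" execution model that distinguishes these processors from control-flow machines is easy to show in miniature. The sketch below is a software illustration of the principle only (the graph, token names and scheduling loop are assumptions, not a model of any surveyed processor): a node fires as soon as all of its input tokens are available, with no program counter dictating order.

```python
# Minimal data-driven execution sketch: each node is (function, input tokens, output tokens).
graph = {
    "add": (lambda a, b: a + b, ["x", "y"], ["s"]),
    "mul": (lambda a, b: a * b, ["s", "z"], ["p"]),
    "neg": (lambda a: -a,       ["s"],      ["n"]),
}

def run(inputs):
    """Fire every node whose inputs have arrived, repeating until no further node can fire."""
    tokens = dict(inputs)                             # edge name -> value, i.e. available tokens
    fired = set()
    progress = True
    while progress:
        progress = False
        for node, (fn, ins, outs) in graph.items():
            if node in fired or not all(i in tokens for i in ins):
                continue                              # not yet enabled by its input tokens
            result = fn(*(tokens[i] for i in ins))    # fire the node
            result = result if isinstance(result, tuple) else (result,)
            tokens.update(zip(outs, result))          # emit output tokens for downstream nodes
            fired.add(node)
            progress = True
    return tokens

print(run({"x": 2, "y": 3, "z": 4}))                  # 's': 5, 'p': 20, 'n': -5
```

A hybrid architecture, in the paper's classification, would interleave regions executed this way with conventionally sequenced control-flow regions.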
We consider the problem of data flow fuzzy control of discrete queuing systems with three servers of different service rates. The objective is to dynamically assign customers to idle servers based on the state of the system so as to minimize the mean sojourn time of customers. Simulation shows the validity of the fuzzy controller.
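A minimal sketch of the dispatch idea is shown below. It is not the paper's controller: the membership functions, the two rules and the service rates are all assumptions, chosen only to show how fuzzy rules on the system state can rank the idle servers before a customer is assigned.

```python
# Illustrative fuzzy-dispatch sketch: score each idle server by a tiny rule base on queue length.
service_rates = {"fast": 1.0, "medium": 0.6, "slow": 0.3}     # assumed rates for three servers

def short(q):        # membership of "queue is short"
    return max(0.0, 1.0 - q / 5.0)

def long(q):         # membership of "queue is long"
    return min(1.0, q / 10.0)

def score(server, queue_len):
    """Rule 1: long queue -> prefer a high service rate. Rule 2: short queue -> save the fast server."""
    rate = service_rates[server]
    return long(queue_len) * rate + short(queue_len) * (1.0 - rate)

def dispatch(idle_servers, queue_len):
    """Assign the arriving customer to the idle server with the highest aggregated rule score."""
    return max(idle_servers, key=lambda s: score(s, queue_len), default=None)

print(dispatch(["medium", "slow"], queue_len=8))   # long queue: picks the faster idle server
print(dispatch(["fast", "slow"], queue_len=1))     # short queue: keeps the fast server free
```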
Debris flows are the type of natural disaster most closely associated with human activities. They are characterized as being widely distributed and frequently activated. Rainfall is an important component of debris flows and is the most active factor when debris flows occur. Rainfall also determines the temporal and spatial distribution characteristics of the hazards. A reasonable rainfall threshold is essential to ensuring the accuracy of debris flow pre-warning. Such a threshold is important for studying the mechanisms of debris flow formation, predicting the characteristics of future activity and designing prevention and engineering control measures. Most mountainous areas have little data regarding rainfall and hazards, especially in debris flow forming regions. Therefore, neither the traditional demonstration method nor the frequency-calculation method can satisfy the debris flow pre-warning requirements. This study presents the characteristics of pre-warning regions, including the rainfall, hydrologic and topographic conditions. An analogous area with abundant data and the same conditions as the pre-warning region was selected, and the rainfall threshold was calculated by proxy. This method resolves the problem of debris flow pre-warning in areas lacking data and provides a new approach for debris flow pre-warning in mountainous areas.
Metro systems have experienced a rapid rise globally over the past decades. However, few studies have paid attention to the evolution of system usage as the network expands. This paper's main objectives are to analyze passenger flow characteristics and evaluate travel time reliability for the Nanjing Metro network by visualizing smart card data from April 2014, April 2015 and April 2016. We performed visualization techniques and comparative analyses to examine the changes in system usage before and after the system expansion. Specifically, workdays, holidays and weekends were segmented separately for analysis. Results showed that workdays had obvious morning and evening peak hours due to daily commuting, while no obvious peak hours existed on weekends and holidays, where daily traffic was evenly distributed. Besides, some metro stations had a serious directional imbalance, especially during the morning and evening peak hours of workdays. Serious unreliability occurred in morning peaks on workdays, the reliability of new lines was relatively low, and new stations had negative effects on existing stations in terms of reliability. Monitoring the evolution of system usage over the years enables the identification of system performance and can serve as an input for improving metro system quality.
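The kind of aggregation behind these findings can be sketched with a few lines of pandas. The example is illustrative only: the column names, the day-type rule and the 95th-percentile buffer index are assumptions and not necessarily the measures used in the paper, and the four records stand in for millions of smart card transactions.

```python
import pandas as pd

# toy smart-card records: one row per trip (entry/exit timestamps and stations are made up)
trips = pd.DataFrame({
    "entry_time": pd.to_datetime(["2016-04-04 08:10", "2016-04-04 08:40",
                                  "2016-04-09 14:05", "2016-04-04 18:20"]),
    "exit_time":  pd.to_datetime(["2016-04-04 08:35", "2016-04-04 09:20",
                                  "2016-04-09 14:30", "2016-04-04 18:55"]),
    "origin": ["A", "A", "B", "C"], "destination": ["B", "C", "C", "A"],
})

trips["travel_min"] = (trips["exit_time"] - trips["entry_time"]).dt.total_seconds() / 60
trips["day_type"] = trips["entry_time"].dt.dayofweek.map(lambda d: "weekend" if d >= 5 else "workday")
trips["hour"] = trips["entry_time"].dt.hour

# hourly entry volumes by day type: the peak-hour profile discussed in the abstract
hourly = trips.groupby(["day_type", "hour"]).size().rename("entries")
print(hourly)

# one common travel-time reliability measure: a buffer-time index per origin-destination pair
reliability = trips.groupby(["origin", "destination"])["travel_min"].agg(
    mean="mean", p95=lambda s: s.quantile(0.95))
reliability["buffer_index"] = (reliability["p95"] - reliability["mean"]) / reliability["mean"]
print(reliability)
```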
The digital development rights in developing countries are based on establishing a new international economic order and ensuring equal participation in the digital globalization process to achieve people's well-rounded development in the digital society. The relationship between cross-border data flows and the realization of digital development rights in developing countries is quite complex. Currently, developing countries seek to safeguard their existing digital interests through unilateral regulation to protect data sovereignty and multilateral regulation for cross-border data cooperation. However, developing countries still have to face internal conflicts between national digital development rights and individual and corporate digital development rights during the process of realizing digital development rights. They also encounter external contradictions such as developed countries interfering with developing countries' data sovereignty, developed countries squeezing the policy space of developing countries through dominant rules, and conflicts between developing countries' domestic and international rules. This article argues that balancing openness and security on digital trade platforms is the optimal solution for developing countries to realize their digital development rights. The establishment of WTO digital trade rules should inherently reflect the fundamental demands of developing countries in cross-border data flows. At the same time, given China's dual role as a digital powerhouse and a developing country, it should actively promote the realization of digital development rights in developing countries.
Named-data Networking (NDN) is a promising future Internet architecture that introduces some evolutionary elements into layer 3, e.g., consumer-driven communication, soft state on the data forwarding plane and hop-by-hop traffic control. These elements ensure that data holders return the requested data only within the lifetime of the request, instead of pushing data whenever needed and whatever it is. Despite the dispute over these advantages and their costs, this pattern requires data consumers to keep sending requests at the right moments for continuous data transmission, resulting in significant forwarding cost and sophisticated application design. In this paper, we propose the Interest Set (IS) mechanism, which compresses a set of similar Interests into one request and maintains a relatively long-term data-returning path with soft state and continuous feedback from upstream. In this way, IS relaxes the above requirement and scales NDN data forwarding by reducing the forwarded requests and the soft state needed to retrieve a given set of data.
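A toy sketch of the Interest Set idea is given below. It is not the paper's packet format or forwarding state machine: the class fields, the sliding "next expected sequence" state and the lifetime value are assumptions, used only to show how one request can stand in for a range of similar Interests while soft state on the path tracks which Data have already been returned.

```python
from dataclasses import dataclass, field

@dataclass
class InterestSet:
    prefix: str                 # shared name prefix of the similar Interests, e.g. "/video/seg"
    first_seq: int              # first segment requested by this set
    last_seq: int               # last segment requested by this set
    lifetime_ms: int = 4000     # assumed lifetime of the soft state on the returning path

@dataclass
class Forwarder:
    """Keeps soft state for one Interest Set and passes on matching Data as it arrives."""
    pending: dict = field(default_factory=dict)    # prefix -> [next expected seq, last seq]

    def on_interest_set(self, iset: InterestSet):
        self.pending[iset.prefix] = [iset.first_seq, iset.last_seq]

    def on_data(self, prefix: str, seq: int):
        state = self.pending.get(prefix)
        if state and state[0] <= seq <= state[1]:
            state[0] = seq + 1                     # feedback slides the window; no new Interest needed
            if state[0] > state[1]:
                del self.pending[prefix]           # set fully satisfied, soft state removed
            return True                            # forward this Data downstream
        return False                               # unsolicited or expired, drop

fwd = Forwarder()
fwd.on_interest_set(InterestSet("/video/seg", 0, 2))            # one request covers segments 0-2
print([fwd.on_data("/video/seg", s) for s in (0, 1, 2, 3)])     # [True, True, True, False]
```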
Data sourcing challenges in African nations have led many African urban infrastructure developments to be implemented with minimal scientific backing to support their success. In some cases this may directly impact a city's ability to reach service delivery, economic growth and human development goals, let alone the city's ability to protect the ecosystem services upon which it relies. As an attempt to fill this gap, this paper describes an exploratory process used to determine city-level demographic, economic and resource flow data for African nations. The approach makes use of scaling and clustering techniques to form acceptable and utilizable representations of selected African cities. Variables that may serve as the strongest predictors of resource consumption intensity in African nations and cities were explored, in particular the Köppen climate zones, estimates of average urban income and GDP, and the influence of urban primacy. It is expected that the approach examined will provide a step towards estimating and understanding African cities and their resource profiles.
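The scaling-and-clustering step can be sketched generically. The code below is an illustration under assumed inputs: the city names, indicator values and number of clusters are invented and do not come from the study; the point is only that standardised indicators let data-rich cluster members stand in for data-poor ones.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

cities = ["City A", "City B", "City C", "City D", "City E", "City F"]
# columns: average urban income (USD), GDP per capita (USD), urban primacy share, climate-zone code
X = np.array([
    [1200, 2500, 0.45, 1],
    [1100, 2300, 0.50, 1],
    [4200, 9000, 0.20, 2],
    [3900, 8500, 0.25, 2],
    [ 800, 1500, 0.60, 3],
    [ 900, 1700, 0.55, 3],
], dtype=float)

X_scaled = StandardScaler().fit_transform(X)        # put indicators on a comparable scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)

for city, label in zip(cities, labels):
    print(f"{city}: cluster {label}")               # cities in one cluster share a resource profile
```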