Startups form an information network that reflects their growth trajectories through information flow channels established by shared investors. However, traditional static network metrics overlook temporal dynamics and rely on single indicators to assess startups' roles in predicting future success, failing to comprehensively capture topological variations and structural diversity. To address these limitations, we construct a temporal information network using 14547 investment records from 1013 global blockchain startups between 2004 and 2020, sourced from Crunchbase. We propose two dynamic methods to characterize the information flow: temporal random walk (sTRW) for modeling information flow trajectories and temporal betweenness centrality (tTBET) for identifying key information hubs. These methods enhance walk coverage while ensuring random stability, allowing for more effective identification of influential startups. By integrating sTRW and tTBET, we develop a comprehensive metric to evaluate a startup's influence within the network. In experiments assessing startups' potential for future success (where successful startups are defined as those that have undergone M&A or IPO), incorporating this metric improves accuracy, recall, and F1 score by 0.035, 0.035, and 0.042, respectively. Our findings indicate that information flow from key startups to others diminishes as the network distance increases. Additionally, successful startups generally exhibit higher information inflows than outflows, suggesting that actively seeking investment-related information contributes to startup growth. Our research provides valuable insights for formulating startup development strategies and offers practical guidance for market regulators.
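The abstract does not spell out the sTRW procedure, but the core idea of a time-respecting (temporal) random walk can be sketched as follows: each hop must use a co-investment edge whose timestamp is no earlier than that of the previous hop, so walks trace feasible information-flow trajectories. The edge list, node names, and visit-count scoring below are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

# Hypothetical temporal co-investment edges: (startup_u, startup_v, year of shared investment)
edges = [("A", "B", 2010), ("A", "C", 2011), ("B", "C", 2012),
         ("B", "D", 2013), ("C", "D", 2015)]

adj = defaultdict(list)
for u, v, t in edges:
    adj[u].append((v, t))
    adj[v].append((u, t))  # shared-investor links are treated as undirected here

def temporal_random_walk(start, walk_len, rng=random):
    """Walk along edges with non-decreasing timestamps (a time-respecting path)."""
    walk, node, t_now = [start], start, float("-inf")
    for _ in range(walk_len - 1):
        candidates = [(v, t) for v, t in adj[node] if t >= t_now]
        if not candidates:
            break  # no edge into the future: the walk stops
        node, t_now = rng.choice(candidates)
        walk.append(node)
    return walk

# Visit counts over many walks give a rough, purely illustrative influence score per startup.
visits = defaultdict(int)
for _ in range(1000):
    for n in temporal_random_walk("A", walk_len=5):
        visits[n] += 1
print(dict(visits))
```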
In view of the imperfect supply chain management of prefabricated buildings, inadequate information interaction among the participating subjects, and untimely information updates, the integration of BIM technology with the prefabricated building supply chain is analyzed, and the problems existing in the current supply chain and the applications of BIM technology at various stages are elaborated. By analyzing the structural composition of the prefabricated building supply chain, an information sharing platform framework for the prefabricated building supply chain based on BIM was established, which serves as a valuable reference for managing prefabricated building supply chains. BIM technology aligns well with assembly construction, laying a solid foundation for their synergistic development and offering novel research avenues for the prefabricated building supply chain.
Information on Land Use and Land Cover Maps (LULCM) is essential for environmental and socioeconomic applications. Such maps are generally derived from Multispectral Remote Sensing Images (MRSI) via classification. The classification process can be described as information flow from images to maps through a trained classifier. Characterizing the information flow is essential for understanding the classification mechanism, providing solutions to theoretical issues such as “what is the maximum number of classes that can be classified from a given MRSI?” and “how much information gain can be obtained?” Consequently, two interesting questions naturally arise: (i) How can we characterize the information flow? and (ii) What is the mathematical form of the information flow? To answer these two questions, this study first hypothesizes that thermodynamic entropy is the appropriate measure of information for both MRSI and LULCM. This hypothesis is then supported by kinetic-theory-based experiments. Thereafter, based on this entropy, a generalized Jarzynski equation is formulated to mathematically model the information flow, with parameters including the thermodynamic entropy of the MRSI, the thermodynamic entropy of the LULCM, the weighted F1-score (classification accuracy), and the total number of classes. This generalized Jarzynski equation has been successfully validated by hypothesis-driven experiments in which 694 Sentinel-2 images are classified into 10 classes by four classical classifiers. This study provides a way to link thermodynamic laws and concepts to the characterization and understanding of information flow in land cover classification, opening a new door for constructing domain knowledge.
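The abstract names thermodynamic entropy and a generalized Jarzynski equation but does not give their formulas, so no attempt is made to reproduce them here. As a loose, stand-in illustration of the quantities being compared, the sketch below computes a simple Shannon entropy for a synthetic quantized image band and for a synthetic 10-class land-cover map; both arrays and the choice of Shannon entropy are my assumptions, not the paper's measure.

```python
import numpy as np

def shannon_entropy(raster):
    """Shannon entropy (bits) of the value distribution of a raster."""
    _, counts = np.unique(raster, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
image_band = rng.integers(0, 256, size=(100, 100))  # stand-in for one quantized MRSI band
lulc_map = rng.integers(0, 10, size=(100, 100))     # stand-in for a 10-class land-cover map

# Classification compresses information: the map's entropy is far below the band's.
print("entropy of the image band:", round(shannon_entropy(image_band), 3))
print("entropy of the LULC map:  ", round(shannon_entropy(lulc_map), 3))
```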
The 'central dogma' of molecular biology states that the direction of genetic information flow is from DNA → RNA → protein. However, up to now, the central dogma has not obtained sufficient theoretical support from cybernetics and information theory. In addition, some special cases in biology are very difficult to explain clearly by the central dogma: for example, although the scrapie prion is irreversibly inactivated by alkali, five procedures with more specificity for modifying nucleic acids failed to cause inactivation; and when a resting cell is activated by some factor and division occurs, protein synthesis begins before DNA synthesis. A broad outline of a mechanism for reverse translation can easily be 'designed' based on the normal translation process, and this serves both to show that there is no fundamental theoretical reason for the central dogma and to illustrate why the redundancy of the genetic code is not a problem. This paper, building on previous research work of the authors and taking the viewpoint of cybernetics, information theory, and theoretical biology, explores the possibility of protein as a genetic information carrier, the probable pairing ways between amino acids and codons, and the direction of genetic information flows, by theoretically comparing and analyzing the characteristics of the information carriers existing in DNA and protein. The authors infer that protein may perhaps take part in information transfer as a genetic information carrier, and that the direction of genetic information flow, besides the path described by the central dogma, may also take another form, namely genetic information flowing from protein → DNA (RNA) → protein, which also encompasses the genetic information flow of the central dogma. Undoubtedly, research on the position and roles of protein during genetic information transfer will have an important effect on the investigation and development of molecular biology, molecular genetics, and gene engineering.
The Indonesian Throughflow (ITF) plays important roles in global ocean circulation and climate systems. Previous studies suggested that the ITF interannual variability is driven by both El Niño-Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD) events. The detailed processes by which ENSO- and/or IOD-induced anomalies impact the ITF, however, are still not clear. In this study, this issue is investigated through causal relation, statistical, and dynamical analyses based on satellite observations. The results show that the driving mechanisms of ENSO on the ITF include two aspects. Firstly, the ENSO-related wind field anomalies drive anomalous cyclonic ocean circulation in the western Pacific, and off-equatorial upwelling Rossby waves propagate westward to the western boundary of the Pacific; both tend to induce negative sea surface height anomalies (SSHA) in the western Pacific, favoring ITF reduction from the development of El Niño through the following year. Secondly, ENSO events modulate equatorial Indian Ocean zonal winds through the Walker Circulation, which in turn trigger eastward propagating upwelling Kelvin waves and westward propagating downwelling Rossby waves. The Rossby waves are reflected into downwelling Kelvin waves, which then propagate eastward along the equator and the Sumatra-Java coast in the Indian Ocean. As a result, the wave dynamics tend to generate negative (positive) SSHA in the eastern Indian Ocean, and thus enhance (reduce) the ITF transport with time lags of 0-6 months (9-12 months), respectively. Under IOD conditions, the wave dynamics also tend to enhance the ITF in the positive IOD year and reduce the ITF in the following year.
Accelerated processors, efficient software, and pervasive connections provide sensor nodes with more powerful computation and storage abilities, which can offer various services to users. Based on these atomic services, different sensor nodes can cooperate and compose with each other to complete more complicated tasks for users. However, because of the regional characteristics of sensor nodes, merging data with different sensitivities becomes a primary requirement for composite services, and information flow security should be intensively considered during service composition. In order to mitigate the great cost caused by the complexity of modeling and the heavy load of single-node verification on energy-limited sensor nodes, in this paper we propose a new distributed verification framework to enforce information flow security on composite services of a smart sensor network. We analyze the information flows in composite services and specify security constraints for each service participant. We then propose an algorithm over the distributed verification framework that involves each sensor node in the composite service verification based on the security constraints. The experimental results indicate that our approach can reduce the cost of verification and provide better load balance.
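The verification framework itself is not specified in the abstract; as a minimal sketch of the kind of per-node constraint such a framework enforces, the snippet below checks locally, for each hop of a hypothetical composite service, that data never flows from a higher sensitivity label to a lower clearance. The label lattice, node names, and hop list are assumptions for illustration only.

```python
# A totally ordered sensitivity lattice (an assumption; real deployments may use richer lattices).
LEVELS = {"public": 0, "internal": 1, "secret": 2}

def flow_allowed(data_label, receiver_clearance):
    """Data may move to a node only if the receiver's clearance dominates the data's label."""
    return LEVELS[receiver_clearance] >= LEVELS[data_label]

# Hypothetical hops of a composite service: (from_node, to_node, data label, receiver clearance)
hops = [("temp_sensor", "aggregator", "internal", "secret"),
        ("aggregator", "public_gateway", "secret", "public")]  # this hop violates the constraint

for src, dst, label, clearance in hops:
    verdict = "ok" if flow_allowed(label, clearance) else "VIOLATION"
    print(f"{src} -> {dst}: {verdict}")
```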
Cloud computing provides services to users through the Internet. This open mode not only facilitates access by users, but also brings potential security risks. In cloud computing, the risk of data leakage exists between users and virtual machines. Whether direct or indirect, data leakage can be regarded as illegal information flow. Methods such as access control models can control the information flow, but not the covert information flow. Therefore, noninterference models are needed to detect the existence of illegal information flow in a cloud computing architecture. Typical noninterference models are not suitable for certifying information flow in cloud computing architectures. In this paper, we propose several information flow models for cloud architectures. One model is for transitive cloud computing architectures; the others are for intransitive cloud computing architectures. When concurrent access actions execute in the cloud architecture, we want security domains not to affect each other, that is, no information flow between security domains. In practice, however, there will be more or less indirect information flow between security domains. Our models are concerned with how much information is allowed to flow. For example, in the CIP model, the other domain can learn the sequence of actions, but in the CTA model, the other domain cannot learn this information. Which security model is used in an architecture depends on the security requirements of that architecture.
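The CIP and CTA models are not defined in detail in the abstract; the toy example below only illustrates the classical transitive noninterference check that such models generalize: a Low domain's view of the system must be unchanged when all High actions are purged from the input sequence. The two-counter machine and action names are invented for illustration.

```python
from itertools import product

def step(state, action):
    """Hypothetical machine: High and Low each bump their own counter, but the High
    action 'h_leak' also bumps the Low-visible counter, i.e. it interferes."""
    high, low = state
    if action == "h":      return (high + 1, low)
    if action == "l":      return (high, low + 1)
    if action == "h_leak": return (high + 1, low + 1)
    return state

def low_view(state):
    return state[1]                      # Low observes only its own counter

def purge(seq):
    return [a for a in seq if a == "l"]  # drop every High action

def run(seq):
    s = (0, 0)
    for a in seq:
        s = step(s, a)
    return s

# Noninterference requires low_view(run(seq)) == low_view(run(purge(seq))) for all seq.
violations = [seq for seq in product(["h", "l", "h_leak"], repeat=3)
              if low_view(run(seq)) != low_view(run(purge(seq)))]
print(f"{len(violations)} interfering sequences, e.g. {violations[0]}")
```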
Based on Bayesian networks (BN) and information flow (IF), a new machine learning-based model named IFBN is put forward to interpolate missing time series of multiple ocean variables. An improved BN structural learning algorithm with IF is designed to mine causal relationships among ocean variables in order to build the network structure. The nondirectional inference mechanism of BN is applied to achieve synchronous interpolation of multiple missing time series. With the IFBN, all ocean variables are placed visually in a causal network, making full use of information about related variables to fill missing data. More importantly, the synchronous interpolation of multiple variables avoids model retraining when the interpolation targets change. Interpolation experiments show that IFBN achieves better interpolation accuracy, effectiveness, and stability than existing methods.
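The abstract does not state which IF estimator is used; one widely used bivariate formulation is Liang's (2014) maximum-likelihood information flow rate, sketched below on synthetic series in which one variable lags the other. The synthetic data, and the assumption that this particular estimator corresponds to the paper's "IF", are mine rather than the authors'.

```python
import numpy as np

def liang_information_flow(x1, x2, dt=1.0):
    """Estimated rate of information flowing from series x2 to series x1 (Liang 2014 bivariate form)."""
    dx1 = (x1[1:] - x1[:-1]) / dt                   # Euler-forward estimate of dx1/dt
    C = np.cov(np.vstack([x1[:-1], x2[:-1], dx1]))  # joint sample covariances
    c11, c22, c12 = C[0, 0], C[1, 1], C[0, 1]
    c1d1, c2d1 = C[0, 2], C[1, 2]
    return (c11 * c12 * c2d1 - c12**2 * c1d1) / (c11**2 * c22 - c11 * c12**2)

rng = np.random.default_rng(1)
x = rng.standard_normal(2000).cumsum()              # an autonomous random-walk "driver"
y = np.zeros(2000)
y[1:] = 0.6 * x[:-1] + rng.standard_normal(1999)    # y responds to x with a one-step lag

print("IF x -> y:", liang_information_flow(y, x))   # flow from the driver into the response
print("IF y -> x:", liang_information_flow(x, y))   # reverse direction, expected to be comparatively small
```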
Multi-product collaborative development is widely adopted in manufacturing enterprises, yet present multi-project planning models do not take the technical/data interactions of multiple products into account. To decrease the influence of technical/data interactions on project progress, information flow scheduling models based on an extended DSM are presented. Firstly, information dependencies are divided into four types: series, parallel, coupling, and similar. Secondly, the different types of dependencies are expressed as DSM units, and the extended DSM model, described as a block matrix, is brought forward. Furthermore, the information flow scheduling method is proposed, which involves four types of operations: partitioning and clustering algorithms are modified from DSM to ensure the progress of high-priority projects, while merging and converting are the computations specific to the extended DSM. Finally, the information flow scheduling of the development of two machine tools is analyzed as an example, where different project priorities correspond to different task sequences and total coordination costs. The proposed methodology provides detailed instruction for information flow scheduling in multi-product development, with special concern for technical/data interactions.
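The extended DSM operations (partitioning, clustering, merging, converting) are not detailed in the abstract; the snippet below only illustrates the basic DSM idea they build on: representing information dependencies as a binary matrix and detecting coupled task blocks via mutual reachability. The task names and dependency matrix are hypothetical.

```python
import numpy as np

tasks = ["spec", "layout", "analysis", "test"]
# dsm[i, j] = 1 means task j feeds information to task i (hypothetical dependencies)
dsm = np.array([[0, 0, 0, 0],
                [1, 0, 1, 0],   # layout needs spec and analysis
                [1, 1, 0, 0],   # analysis needs spec and layout -> coupled with layout
                [0, 1, 1, 0]])  # test needs layout and analysis

# Transitive closure by repeated matrix multiplication (boolean-style)
reach = dsm.copy()
for _ in range(len(tasks)):
    reach = ((reach + reach @ dsm) > 0).astype(int)

coupled = [(tasks[i], tasks[j])
           for i in range(len(tasks)) for j in range(i + 1, len(tasks))
           if reach[i, j] and reach[j, i]]
print("coupled task pairs:", coupled)   # expected: [('layout', 'analysis')]
```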
The information flow chart within the product life cycle is presented based on collaborative production commerce (CPC) concepts. In this chart, the separated information systems are integrated by means of enterprise knowledge assets that CPC promotes from production knowledge. The information flow in the R&D process is analyzed in the environment of a virtual R&D group and distributed PDM. In addition, the information flow throughout the manufacturing and marketing processes is analyzed in the CPC environment.
BACKGROUND: Neuro-rehabilitative training has been shown to promote motor function recovery in stroke patients, although the underlying mechanisms have not been fully clarified. OBJECTIVE: To investigate the effects of finger movement training on functional connectivity and information flow direction in cerebral motor areas of healthy people using electroencephalogram (EEG). DESIGN, TIME AND SETTING: A self-controlled, observational study was performed at the College of Life Science and Bioengineering, Beijing University of Technology between December 2008 and April 2009. PARTICIPANTS: Nineteen healthy adults, who seldom played musical instruments or keyboards, were included in the present study. METHODS: Specific finger movement training was performed, and all subjects were asked to separately press keys with their left or right hand fingers, according to instructions. The task comprised five sessions of test-train-test-train-test. Thirty-six-channel EEG signals were recorded in the different test sessions prior to and after training. Data were statistically analyzed using one-way analysis of variance. MAIN OUTCOME MEASURES: The number of effective performances, correct ratio, average response time, average movement time, correlation coefficient between pairs of EEG channels, and information flow direction in motor regions were analyzed and compared between different training sessions. RESULTS: Motor function of all subjects was significantly improved in the third test compared with the first test (P < 0.01). More than 80% of connections were strengthened in the motor-related areas following two training sessions, in particular the primary motor regions under the C4 electrode. Compared to the first test, a greater amount of information flowed from the Cz and Fcz electrodes (corresponding to the supplementary motor area) to the C4 electrode in the third test. CONCLUSION: Finger task training increased motor ability in subjects by strengthening connections and changing information flow in the motor areas. These results provided a greater understanding of the mechanisms involved in motor rehabilitation.
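The study's connectivity measure is the correlation coefficient between EEG channel pairs; the toy snippet below shows the basic computation on a synthetic multichannel segment. The data, channel count, and the 0.5 threshold are placeholders, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 4, 1000
eeg = rng.standard_normal((n_channels, n_samples))   # placeholder EEG segment
eeg[3] += 0.8 * eeg[2]                               # make two channels co-vary

corr = np.corrcoef(eeg)                              # channels-by-channels correlation matrix
strong = [(i, j) for i in range(n_channels) for j in range(i + 1, n_channels)
          if abs(corr[i, j]) > 0.5]                  # placeholder "strengthened connection" threshold
print(np.round(corr, 2))
print("strongly connected channel pairs:", strong)
```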
With the widespread use of computers, a new crime space and new methods have become available to criminals, so computer evidence plays a key part in criminal cases. Traditional computer evidence searches require that computer specialists know what is stored in the given computer. Binary-based information flow tracking, which concerns changes of control flow, is an effective way to analyze the behavior of a program. Existing systems, however, ignore modifications of the data flow, which may also constitute malicious behavior. We therefore introduce function recognition to improve information flow tracking. Function recognition is a technique that identifies function bodies in a software binary in order to analyze the binary code. The absence of false positives and false negatives in our experiments strongly indicates that our approach is effective.
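Binary-level data-flow tracking is far more involved than can be shown here; the following toy sketch only conveys the underlying idea of taint propagation: tag values at their sources and let every operation union the tags of its inputs, so that data-flow influence stays visible in the result. The class and tag names are invented.

```python
class Tainted:
    """A value that carries the set of sources that influenced it (toy taint tracking)."""
    def __init__(self, value, tags=frozenset()):
        self.value, self.tags = value, frozenset(tags)
    def __add__(self, other):
        return Tainted(self.value + other.value, self.tags | other.tags)
    def __mul__(self, other):
        return Tainted(self.value * other.value, self.tags | other.tags)
    def __repr__(self):
        return f"Tainted({self.value}, tags={sorted(self.tags)})"

secret = Tainted(42, {"secret_input"})   # data read from a sensitive source
public = Tainted(7)                      # untainted data
result = secret * public + Tainted(1)
print(result)   # the 'secret_input' tag survives through the data flow
```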
Flume, which implements decentralized information flow control (DIFC), allows a high-security-level process to "pre-create" secret files in a low-security-level directory. However, the pre-create mechanism makes some normal system calls unavailable; moreover, it needs a priori knowledge to create a large quantity of objects, which is difficult to estimate in practical operating systems. In this paper, we present an extended Flume file access control mechanism, named Effect, to substitute for the pre-create mechanism. Effect permits write operations (creating, deleting, and renaming a file) on directories and creates a file access virtual layer that allocates operational views for each process with noninterference properties. Finally, we present an analysis of the security of Effect. Our work makes it easier for multiple users to share confidential information in decentralized information flow control systems.
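Flume's actual label algebra includes ownership and capability rules that are omitted here; the fragment below is only a simplified DIFC-style secrecy check (a flow is allowed when the sink carries every secrecy tag of the source), with made-up process and file labels, to illustrate the kind of information flow rule DIFC systems such as Flume enforce.

```python
def can_flow(src_secrecy, dst_secrecy):
    """Flow allowed only if the sink carries every secrecy tag of the source."""
    return src_secrecy <= dst_secrecy   # subset test on tag sets

# Hypothetical labels: the editor has read Alice's secret; the mailer has not.
process_labels = {"editor": {"alice_secret"}, "mailer": set()}
file_labels = {"/home/alice/draft": {"alice_secret"}, "/tmp/out": set()}

for proc, p_label in process_labels.items():
    for path, f_label in file_labels.items():
        verdict = "allowed" if can_flow(p_label, f_label) else "denied"
        print(f"write {proc} -> {path}: {verdict}")
```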
This paper combines the characteristics of highway transport with the application of information technology in highway transport services, analyzes the promotion and substitution effects of information flow on highway transport, and proposes a highway transport development strategy based on the impact of information flow.
The management of information flow for production improvement has always been a research target. In this paper, the focus is on a model for analyzing the characteristics of information flow in shop floor operations, based on the influence that the dimension (support or medium), direction, and quality of an information flow have on its value, using machine learning classification algorithms. The classification algorithms used to analyze the value of information flow are Decision Trees (DT) and Random Forest (RF), with a score of 0.99 and a mean absolute error of 0.005. The results also show that, when information flow is managed using DT or RF, the dimension of information, such as digital information, has the greatest value of information flow in shop floor operations when the shop floor is fully digitalized. The direction of information flow does not have a great influence on shop floor operation processes when those processes are digitalized or performed by operators as well as machines.
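The paper's dataset and feature encoding are not given in the abstract; the sketch below just shows the modeling setup it describes, i.e., training decision tree and random forest classifiers on categorical descriptors of an information flow (dimension/medium, direction, quality) to predict a value class. The synthetic data and the labeling rule are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
# Hypothetical encodings: dimension (0 = paper, 1 = digital), direction (0/1), quality (0..2)
X = np.column_stack([rng.integers(0, 2, n), rng.integers(0, 2, n), rng.integers(0, 3, n)])
y = ((X[:, 0] == 1) & (X[:, 2] == 2)).astype(int)   # "high value" when digital and high quality

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for model in (DecisionTreeClassifier(random_state=0), RandomForestClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "accuracy:", round(model.score(X_te, y_te), 3))
```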
We develop a series of mathematical models to describe the flow of information in different periods of time and the relationship between the flow of information and its inherent value. We optimize and improve the diffusion mechanism of information based on the SEIR model. In order to explore how the inherent value of information affects its flow, we simulate the model using MATLAB. We also use data on the number of people connected to the Internet in Canada from 2009 to 2014 to analyze the model's reliability. We then use the model to predict communication networks' relationships and capacities around the year 2050. Finally, we perform a sensitivity analysis by making small changes to the parameters of the simulation experiment. The results of the experiment are helpful for modeling how public interest and opinion can be changed in a complex network.
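The paper's modified SEIR equations and parameters are not given in the abstract; below is a plain SEIR system with illustrative parameters, of the sort the authors adapt to information diffusion (S: not yet exposed, E: aware but not sharing, I: actively spreading, R: no longer interested).

```python
import numpy as np
from scipy.integrate import odeint

def seir(state, t, beta, sigma, gamma):
    s, e, i, r = state
    n = s + e + i + r
    ds = -beta * s * i / n              # susceptible users encounter spreaders
    de = beta * s * i / n - sigma * e   # exposed: aware but not yet sharing
    di = sigma * e - gamma * i          # infected: actively spreading the information
    dr = gamma * i                      # recovered: interest has faded
    return [ds, de, di, dr]

t = np.linspace(0, 60, 300)
# Illustrative parameters and initial shares, not values from the paper.
sol = odeint(seir, [0.99, 0.01, 0.0, 0.0], t, args=(0.9, 0.3, 0.2))
peak = t[np.argmax(sol[:, 2])]
print(f"spreading peaks around t = {peak:.1f} with an active share of {sol[:, 2].max():.2f}")
```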
Microblogging is a new Internet-featured product that has seen rapid development in recent years, and researchers from different countries are conducting various technical analyses of microblogging applications. In this study, using natural language processing (NLP) and data mining, we analyzed the information content transmitted via a microblog, users' social networks, and their interactions, and carried out an empirical analysis of the dissemination process of one particular piece of information via Sina Weibo. Based on the results of these analyses, we attempt to develop a better understanding of the rules and mechanisms of informal information flow in microblogging.
Hierarchical networks are frequently encountered in animal groups, gene networks, and artificial engineering systems such as multiple robots, unmanned vehicle systems, smart grids, wind farm networks, and so forth. The structure of a large directed hierarchical network is often strongly influenced by reverse edges from lower- to higher-level nodes, such as lagging birds' howl in a flock or the opinions of lower-level individuals feeding back to higher-level ones in a social group. This study reveals that, for most large-scale real hierarchical networks, the majority of the reverse edges do not affect the synchronization process of the entire network; the synchronization process is influenced only by a small part of these reverse edges along specific paths. More surprisingly, a single effective reverse edge can slow down the synchronization of a huge hierarchical network by over 60%. The effect of such edges depends not on the network size but only on the average in-degree of the involved subnetwork. The overwhelming majority of active reverse edges turn out to have some kind of "bunching" effect on the information flows of hierarchical networks, which slows down synchronization processes. This finding refines the current understanding of the role of reverse edges in many natural, social, and engineering hierarchical networks, which might be beneficial for precisely tuning the synchronization rhythms of these networks. Our study also proposes an effective way to attack a hierarchical network by adding a malicious reverse edge to it and provides some guidance for protecting a network by screening out the specific small proportion of vulnerable nodes.
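The paper's synchronization model is not reproduced here; as a small numerical illustration of the reported effect, the snippet below compares the convergence rate of a simple consensus dynamics dx/dt = -Lx on a directed two-level tree with and without one reverse edge from a lower-level node back to its parent, taking the rate as the smallest nonzero real part of the Laplacian spectrum. The network and the dynamics are my assumptions, not the authors' setup.

```python
import numpy as np

def laplacian(n, edges):
    """In-degree Laplacian of a directed graph; edge (u, v) means node v listens to node u."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[v, u] = 1.0
    return np.diag(A.sum(axis=1)) - A

def convergence_rate(L):
    """Rate of dx/dt = -Lx toward consensus: smallest nonzero real part of the spectrum."""
    ev = np.linalg.eigvals(L).real
    nonzero = ev[ev > 1e-9]
    return nonzero.min() if nonzero.size else 0.0

n = 7
tree = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]  # root 0 broadcasts down two levels
reverse_edge = (3, 1)                                    # a lower-level node feeds back to its parent

print("rate without reverse edge:", round(convergence_rate(laplacian(n, tree)), 3))
print("rate with one reverse edge:", round(convergence_rate(laplacian(n, tree + [reverse_edge])), 3))
# In this toy case the single reverse edge lowers the rate from 1.0 to about 0.38.
```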
Bigeye tuna Thunnus obesus is an important migratory species that forages deeply, and El Niño events highly influence its distribution in the eastern Pacific Ocean. While sea surface temperature is widely recognized as the main factor affecting bigeye tuna (BET) distribution during El Niño events, the roles of different types of El Niño and of subsurface oceanic signals, such as ocean heat content and mixed layer depth, remain unclear. To address this knowledge gap, we conducted a spatial-temporal analysis to investigate the relationship among BET distribution, El Niño events, and the underlying oceanic signals. We used monthly purse seine fisheries data for BET in the eastern tropical Pacific Ocean (ETPO) from 1994 to 2012 and extracted central-Pacific El Niño (CPEN) indices based on the Niño 3 and Niño 4 indexes. Furthermore, we employed Explainable Artificial Intelligence (XAI) models to identify the main patterns and feature importance of the six environmental variables, and used information flow analysis to determine the causality between the selected factors and BET distribution. Finally, we analyzed Argo datasets to calculate the vertical, horizontal, and zonal mean temperature differences between CPEN and normal years to clarify the differences in oceanic thermodynamic structure between the two types of years. Our findings reveal that BET distribution during CPEN years is mainly driven by advection feedback of subsurface warmer thermal signals and by vertically warmer habitats in the CPEN domain area, especially in high-yield fishing areas. The high frequency of CPEN events will likely lead to a westward shift of fisheries centers.