Funding: The first author of this paper would like to thank Prof. Joseph Sifakis, 2007 Turing Award winner, for his invaluable help with this research, and Dr. Kevin Lu at Brunel University, UK, for his excellent suggestions on this paper. This work was supported by the National Natural Science Foundation of China under Grant No. 61003079 and the China Postdoctoral Science Foundation under Grant No. 2012M511588.
Abstract: To combat the well-known state-space explosion problem in Propositional Linear Temporal Logic (PLTL) model checking, a novel algorithm that efficiently translates PLTL formulas into Nondeterministic Automata (NA) is proposed. The algorithm first transforms PLTL formulas into their non-free forms, then translates the non-free formulas into their Normal Forms (NFs), next constructs Normal Form Graphs (NFGs) for the NF formulas, and finally transforms the NFGs into an NA that accepts both finite and infinite words. Experimental data show that the new algorithm reduces the average number of nodes of the target NA both for a benchmark formula set and for selected formulas from the literature. These results indicate that a PLTL model checking technique employing the new algorithm generates a smaller state space in the verification of concurrent systems.
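The normal-form step this abstract outlines can be illustrated with the standard PLTL expansion laws, e.g. p U q ≡ q ∨ (p ∧ X(p U q)). The following is a minimal Python sketch of such a one-step expansion, not the paper's actual algorithm; the tuple encoding of formulas is an illustrative assumption.

```python
# Minimal sketch (not the paper's algorithm) of the standard PLTL normal-form
# expansion: a formula is rewritten as a disjunction of (now, next) pairs,
# meaning "`now` holds in the current state and `next` holds from the next
# state on". Formulas are nested tuples, e.g. ("U", "p", "q") stands for p U q.

def normal_form(f):
    """Return a list of (now, next) pairs whose disjunction is equivalent to f."""
    if isinstance(f, str):            # atomic proposition (or "true"/"false")
        return [(f, "true")]
    op = f[0]
    if op == "X":                     # X g: nothing required now, g next
        return [("true", f[1])]
    if op == "or":                    # union of both operands' expansions
        return normal_form(f[1]) + normal_form(f[2])
    if op == "U":                     # g U h  =  h  or  (g and X(g U h))
        _, g, h = f
        return [(h, "true"), (g, f)]
    if op == "R":                     # g R h  =  (g and h)  or  (h and X(g R h))
        _, g, h = f
        return [(("and", g, h), "true"), (h, f)]
    raise ValueError(f"unsupported operator: {op}")

# One-step expansion of p U q: satisfied now via q, or deferred via p and X(p U q).
print(normal_form(("U", "p", "q")))   # [('q', 'true'), ('p', ('U', 'p', 'q'))]
```

In a graph construction along the lines the abstract sketches, each formula would become a node and each (now, next) pair an edge labeled `now` leading to the node for `next`.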
Funding: the Ministry of Education Humanities and Social Science Project (Project No. 20YJA870003)
Abstract: The advent of the big data era has given rise to a novel discipline called Data Science. Data Science is interdisciplinary in nature, and the existing relevant studies can be categorized into domain-independent and domain-dependent ones, which are evolving into Domain-general Data Science (DGDS) and Domain-specific Data Science (DSDS), respectively. DGDS emphasizes Data Science in a general sense, involving concepts, theories, methods, technologies, and tools. DSDS is a variant of DGDS and varies from one domain to another; the most popular instances include Data Journalism, Industrial Data Science, Business Data Science, Health Data Science, Biological Data Science, Social Data Science, and Agile Data Science. The difference between DGDS and DSDS is rooted in their thinking paradigms: DGDS conforms to data-centered thinking, while DSDS is in line with knowledge-centered thinking. As a result, DGDS focuses on theoretical studies, while DSDS centers on applied ones; nevertheless, the two possess complementary advantages. Theoretical Data Science (TDS) is a new branch of Data Science that employs mathematical models and abstractions of data objects and systems to rationalize, explain, and predict big data phenomena. TDS contrasts with DSDS, which uses causal analysis, as well as with DGDS, which employs data-centered thinking to deal with big data problems, in that it balances the usability and the interpretability of Data Science practices. The main concerns of TDS are integrating data-centered thinking with knowledge-centered thinking and transforming correlation analysis into causal analysis. Hence, TDS can bridge the gap between DGDS and DSDS and balance the usability and interpretability of big data solutions. Studies of TDS should focus on the following research purposes: developing the theoretical foundations of TDS, taking advantage of the active property of big data, embracing design of experiments, enhancing causality analysis, and developing data products.
Abstract: With this work, we introduce a novel method for the unsupervised learning of conceptual hierarchies, or concept maps as they are sometimes called, aimed specifically at literary texts. This distinguishes it from the majority of the research literature on the topic, which focuses primarily on building ontologies from a vast array of data sources, both structured and unstructured, to support various forms of AI, in particular the Semantic Web as envisioned by Tim Berners-Lee. We first elaborate on the mutually informing disciplines of philosophy and computer science, or more specifically the relationship between metaphysics, epistemology, ontology, computing, and AI. This is followed by a technically in-depth discussion of DEBRA, our dependency-tree-based concept hierarchy constructor, which, as its name alludes to, constructs a concept map in the form of a directed graph illustrating the concepts, their respective relations, and the implied ontological structure of the concepts as encoded in the text, decoded with standard Python NLP libraries such as spaCy and NLTK. With this work we hope both to augment the Knowledge Representation literature in AI with more intuitive, less analytical, and well-known forms of knowledge representation from the cognitive science community, and to open up new areas of research between Computer Science and the Humanities in applying the latest NLP tools and techniques to literature of cultural significance, shedding light on methods of computation over documents in semantic space that allow, at the very least, the comparison and evolution of texts through time using vector-space math.
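The general shape of a dependency-tree-based concept-graph builder can be sketched with the standard library alone. This is not DEBRA's actual implementation, and the triples and function name below are hypothetical; in practice the (child, relation, head) triples would come from a parser, e.g. spaCy's `token.dep_` and `token.head` attributes.

```python
from collections import defaultdict

# Stdlib-only sketch of the general idea behind a dependency-tree-based
# concept-map builder; NOT DEBRA's actual implementation. The dependency
# triples below are hypothetical stand-ins for parser output.

def build_concept_graph(triples):
    """Group (child, relation, head) dependency triples into a directed
    graph mapping each head concept to its set of (relation, child) edges."""
    graph = defaultdict(set)
    for child, relation, head in triples:
        graph[head].add((relation, child))
    return dict(graph)

# Illustrative triples for the clause "the whale hunts the ship":
triples = [
    ("whale", "nsubj", "hunts"),   # subject edge
    ("ship", "dobj", "hunts"),     # object edge
]
graph = build_concept_graph(triples)
print(graph["hunts"])              # the two edges under the head concept "hunts"
```

The resulting mapping is a directed graph in adjacency-set form, which is the sort of structure the concept-map output described above requires.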
Abstract: We briefly survey a number of important recent achievements in Theoretical Computer Science (TCS), especially Computational Complexity Theory. We discuss the PCP Theorem and its implications for the inapproximability of combinatorial optimization problems; space-bounded computation, especially the deterministic logspace algorithm for the undirected graph connectivity problem; the deterministic polynomial-time primality test; lattice complexity and worst-case to average-case reductions; pseudorandomness and extractor constructions; and Valiant's new theory of holographic algorithms and reductions.
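Of the results surveyed, the deterministic primality test is the easiest to ground in code. The AKS algorithm itself is intricate and unconditional for all n; the sketch below instead uses the Miller-Rabin test with a fixed witness set reported sufficient to make it deterministic for all n below roughly 3.3 × 10^24, which is how deterministic primality checks are typically done in practice.

```python
# Deterministic primality check via Miller-Rabin with a fixed witness set.
# This is NOT the AKS algorithm the survey refers to; the fixed bases below
# are reported sufficient for all n < ~3.3 * 10**24, covering every 64-bit
# integer, so the check is deterministic over that range.

_WITNESSES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def is_prime(n):
    if n < 2:
        return False
    for p in _WITNESSES:              # trial-divide by the small primes first
        if n % p == 0:
            return n == p
    d, s = n - 1, 0                   # write n - 1 = d * 2**s with d odd
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in _WITNESSES:              # one Miller-Rabin round per witness
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False              # witness a proves n composite
    return True

print(is_prime(1_000_003))  # True
```

The contrast is instructive: this fixed-base variant is fast and simple but deterministic only up to a verified bound, whereas AKS settled the theoretical question of primality being in P for all inputs.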