To address the limitations of register-transfer-level verification, a new functional verification method based on random testing at the system level of a system-on-chip is proposed, and its validity is proven theoretically. Specifically, testcases are generated according to several randomization approaches. Moreover, a testbench for system-level verification under the proposed method is designed using an advanced modeling language. Because the testbench generates testcases quickly, hardware/software co-simulation and co-verification can be carried out and the hardware/software partitioning plan can be evaluated easily. A comparison method is used to evaluate testing validity. The evaluation shows that partition testing is more efficient than random testing only when one or more subdomains cover the error region; otherwise, random testing is generally the more efficient of the two. The experimental results indicate that the method achieves good functional coverage at low testing cost and can discover functional errors early.
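The random-versus-partition comparison described above can be illustrated with a small Monte Carlo sketch. All numbers here are hypothetical, not taken from the paper: the input domain, subdomain split, fault region, and test budget are invented for illustration.

```python
import random

def random_testing(domain_size, fault_set, n_tests, rng):
    """Draw n_tests inputs uniformly from the whole input domain."""
    return any(rng.randrange(domain_size) in fault_set for _ in range(n_tests))

def partition_testing(partitions, fault_set, tests_per_part, rng):
    """Draw tests_per_part inputs uniformly from each subdomain."""
    return any(rng.randrange(lo, hi) in fault_set
               for lo, hi in partitions
               for _ in range(tests_per_part))

# Hypothetical setup: input domain [0, 1000) split into 4 equal subdomains,
# with every faulty input inside a single subdomain, i.e. the case in which
# the abstract reports partition testing to be the more efficient strategy.
rng = random.Random(42)
partitions = [(i * 250, (i + 1) * 250) for i in range(4)]
fault_set = set(range(100, 150))

trials = 2000
rand_rate = sum(random_testing(1000, fault_set, 4, rng) for _ in range(trials)) / trials
part_rate = sum(partition_testing(partitions, fault_set, 1, rng) for _ in range(trials)) / trials
print(rand_rate, part_rate)  # detection rates under an equal budget of 4 tests
```

Both strategies spend four tests per trial; concentrating the faults in one subdomain is exactly the situation in which per-subdomain sampling gains an edge.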
With the development of Internet technology and human computing, the computing environment has changed dramatically over the last three decades. Cloud computing has emerged as a paradigm of Internet computing in which dynamic, scalable and often virtualized resources are provided as services. With virtualization technology, cloud computing offers diverse services (such as virtual computing, virtual storage and virtual bandwidth) to the public in a multi-tenancy mode. Although users enjoy the super-computing and mass-storage capabilities supplied by cloud computing, cloud security remains a pressing problem, which is in essence one of trust management between data owners and storage service providers. In this paper, we propose a data coloring method based on cloud watermarking to recognize and ensure mutual reputations. The experimental results show that the robustness of the reverse cloud generator can guarantee users' embedded social reputation identifications. Hence, our work provides a reference solution to the critical problem of cloud security.
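The forward and reverse cloud generators mentioned above come from cloud model theory, where a concept is summarized by three numerical characteristics (Ex, En, He). The sketch below shows the standard generator pair, not the paper's watermarking pipeline: the reverse generator recovers the characteristics from the drops, which is the robustness property the abstract relies on.

```python
import math
import random

def forward_cloud(ex, en, he, n, rng):
    """Forward normal cloud generator: n drops from the cloud C(Ex, En, He)."""
    drops = []
    for _ in range(n):
        en_i = abs(rng.gauss(en, he))      # per-drop entropy sample
        drops.append(rng.gauss(ex, en_i))  # the drop itself
    return drops

def backward_cloud(drops):
    """Reverse (backward) cloud generator: estimate (Ex, En, He) from drops."""
    n = len(drops)
    ex = sum(drops) / n
    en = math.sqrt(math.pi / 2) * sum(abs(x - ex) for x in drops) / n
    var = sum((x - ex) ** 2 for x in drops) / (n - 1)
    he = math.sqrt(max(var - en ** 2, 0.0))
    return ex, en, he

rng = random.Random(7)
drops = forward_cloud(10.0, 2.0, 0.2, 20000, rng)
ex, en, he = backward_cloud(drops)
print(round(ex, 2), round(en, 2), round(he, 2))
```

With enough drops the recovered Ex and En land close to the generating values, which is what lets an embedded identification survive the round trip.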
Large-scale object-oriented (OO) software systems have recently been found to share global network characteristics such as small-world and scale-free structure, which go beyond the scope of traditional software measurement and assessment methodologies. To measure complexity at various levels of granularity, namely graph, class (and object) and source code, we propose a hierarchical set of metrics in terms of coupling and cohesion, the most important characteristics of software, and analyze a sample of 12 open-source OO software systems to empirically validate the set. Experimental results on the correlations between cross-level metrics indicate that the graph measures of our set complement traditional software metrics well from the viewpoint of network thinking, and provide more effective information about fault-prone classes in practice.
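At the class level, the simplest network-style coupling measures are degree counts on the class-dependency graph. The class names and edges below are hypothetical; this is only a minimal sketch of the idea, not the paper's metric set.

```python
from collections import defaultdict

# Hypothetical class-dependency graph: an edge (A, B) means class A uses B.
edges = [("Order", "Customer"), ("Order", "Product"),
         ("Invoice", "Order"), ("Invoice", "Customer"),
         ("Report", "Invoice")]

out_deg = defaultdict(int)  # efferent coupling: classes this class depends on
in_deg = defaultdict(int)   # afferent coupling: classes that depend on it
for src, dst in edges:
    out_deg[src] += 1
    in_deg[dst] += 1

for cls in sorted(set(out_deg) | set(in_deg)):
    print(cls, out_deg[cls], in_deg[cls])
```

Classes with high in-degree are widely depended upon (change-sensitive), while high out-degree flags classes entangled with many others, the kind of signal the abstract links to fault-proneness.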
Despite the tremendous effort made by industry and academia, we are still searching for metrics that can characterize cyberspace and system security risks. In this paper, we study the class of security risks that are inherent to the dependence structure in software with vulnerabilities and that exhibit a "cascading" effect. We present a measurement framework for evaluating these metrics, and report a preliminary case study on evaluating the dependence-induced security risks in the Apache HTTP Server. The experimental results show that our framework can not only clearly analyze the root cause of the security risks but also quantitatively evaluate the attack consequences of the risks.
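The "cascading" effect over a dependence structure can be sketched as reachability on the reversed dependency graph: everything that (transitively) depends on a vulnerable component is potentially affected. The component names below are hypothetical and the traversal is only an illustration of the idea, not the paper's measurement framework.

```python
from collections import deque

# Hypothetical component graph: depends_on[A] lists what A depends on, so a
# vulnerability in B can cascade upward to everything that depends on B.
depends_on = {
    "core":   ["parser", "logger"],
    "parser": ["buffer"],
    "logger": [],
    "buffer": [],
}

def cascade(vulnerable):
    """Return every component reachable from a vulnerable one via reverse deps."""
    rev = {c: [] for c in depends_on}
    for comp, deps in depends_on.items():
        for dep in deps:
            rev[dep].append(comp)
    affected, queue = set(vulnerable), deque(vulnerable)
    while queue:
        for dependant in rev[queue.popleft()]:
            if dependant not in affected:
                affected.add(dependant)
                queue.append(dependant)
    return affected

print(sorted(cascade({"buffer"})))
```

A vulnerability in the leaf component `buffer` cascades through `parser` up to `core`, which is the root-cause chain such a framework would surface.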
The lasting evolution of computing environments, software engineering and interaction methods leads to cloud computing, which changes the configuration mode of resources on the Internet: all kinds of resources are virtualized and provided as services. Mass participation and online interaction with social annotations have become commonplace in daily life. People who share similar interests on the Internet may cluster naturally into scalable and boundless communities, from which collective intelligence emerges. Humans are thus taken as intelligent computing factors, and uncertainty becomes a basic property of cloud computing. Virtualization, soft computing and granular computing will become essential features of cloud computing. Compared with the engineering problems of IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service), collective intelligence and uncertain knowledge representation will be more important frontiers in cloud computing for researchers within the intelligence science community.
Frequent pattern mining plays an essential role in data mining. Most previous studies adopt an Apriori-like candidate set generation-and-test approach. However, candidate set generation is still costly, especially when there exist prolific patterns and/or long patterns. We build on the frequent pattern growth (FP-growth) method, which is efficient and scalable for mining both long and short frequent patterns without candidate generation, and propose a new projection frequent pattern tree (PFP-tree) algorithm, which not only inherits all the advantages of the FP-growth method but also avoids its bottleneck of database-size dependence when constructing the frequent pattern tree (FP-tree). Mining efficiency is achieved by introducing a projection technique, which avoids serially scanning the database for each frequent item; the cost is mainly related to the depth of the tree, namely the number of frequent items in the longest transaction in the database, rather than the total number of frequent items in the database, which greatly shortens tree-construction time. Our performance study shows that the PFP-tree method is efficient and scalable for mining large databases or data warehouses, and is even about an order of magnitude faster than the FP-growth method.
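The pattern-growth-by-projection idea can be sketched in a few lines: instead of generating and testing candidates, recurse on a projected database for each frequent item. This is a simplified illustration of projection-based growth, not the PFP-tree algorithm itself; the toy transactions are invented.

```python
from collections import Counter

def mine(transactions, min_sup, prefix=(), results=None):
    """Pattern growth by projection: for each frequent item, recurse on the
    transactions that contain it, projected onto the items after it."""
    if results is None:
        results = {}
    counts = Counter(item for t in transactions for item in set(t))
    for item in sorted(counts):
        if counts[item] < min_sup:
            continue
        pattern = prefix + (item,)
        results[pattern] = counts[item]
        # Projected database: keep only items lexicographically after `item`.
        projected = [[i for i in t if i > item] for t in transactions if item in t]
        mine(projected, min_sup, pattern, results)
    return results

db = [["a", "b", "c"], ["a", "c"], ["a", "d"], ["b", "c"]]
patterns = mine(db, min_sup=2)
print(patterns)
```

No candidate set is ever materialized: infrequent items are pruned before each recursive call, so the work tracks the depth of the pattern tree rather than the number of candidate combinations.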
Service-Oriented Software Engineering (SOSE) presents new challenges; in particular, how to promote interoperability and cooperation among loosely coupled service resources. This is critical for service resource sharing and for implementing on-demand services. This paper discusses key technologies of service virtualization, including encapsulation of service interoperability (for available resources); ontology-based Role, Goal, Process, and Service (RGPS) metamodeling (for interoperable aggregation and organization of virtualization services); registration and repository management of the Metamodel Framework for Interoperability (MFI) (for virtualization service management); and virtualization service ontology and its represented association with RGPS. The latest progress of the MFI and related ISO standards is also discussed.
The SCR (Software Cost Reduction) requirements method is an effective method for specifying software system requirements. This paper presents a formal model for analyzing SCR-style requirements. The analysis model mainly applies state transition rules, semantic computing rules and attributes to define the formal semantics of the tabular notation in the SCR requirements method, and may be used to analyze requirements specifications written in the SCR requirements method. Using a simple example, this paper shows how to analyze the consistency and completeness of requirements specifications.
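One consistency check on an SCR-style tabular specification is that no (mode, event) pair maps to two different target modes. The table below is a hypothetical toy example, not from the paper; the check sketches the flavor of the analysis.

```python
# Hypothetical SCR-style mode transition table: in mode `mode`, when `event`
# occurs, the system moves to mode `target`.
table = [
    ("Off",     "power_on",  "Standby"),
    ("Standby", "start",     "Running"),
    ("Running", "stop",      "Standby"),
    ("Standby", "power_off", "Off"),
]

def consistent(rows):
    """Consistency: no (mode, event) pair may map to two different targets."""
    seen = {}
    for mode, event, target in rows:
        if seen.get((mode, event), target) != target:
            return False
        seen[(mode, event)] = target
    return True

print(consistent(table))
print(consistent(table + [("Off", "power_on", "Running")]))
```

A completeness check would go the other way, flagging (mode, event) pairs with no row at all; both are mechanical once the table has a formal semantics.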
Most work on the time complexity analysis of evolutionary algorithms has focused on artificial binary problems; the time complexity of such algorithms on combinatorial optimisation has not been well understood. This paper considers the time complexity of an evolutionary algorithm for a classical combinatorial optimisation problem: finding a maximum cardinality matching in a graph. It is shown that the evolutionary algorithm can produce a matching with nearly maximum cardinality in average polynomial time.
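A typical evolutionary algorithm for matching is a (1+1) EA over edge subsets: flip each edge in or out with probability 1/m and keep the offspring if it is still a matching and no smaller. This sketch is a generic (1+1) EA on a toy path graph, not necessarily the exact algorithm analyzed in the paper.

```python
import random

def is_matching(edge_set):
    """A set of edges is a matching iff no two edges share a vertex."""
    seen = set()
    for u, v in edge_set:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def one_plus_one_ea(edges, iterations, rng):
    """(1+1) EA: flip each edge in/out with prob 1/m, keep the offspring
    if it is still a matching and no smaller than the parent."""
    m = len(edges)
    current = set()
    for _ in range(iterations):
        offspring = set(current)
        for e in edges:
            if rng.random() < 1.0 / m:
                offspring.symmetric_difference_update({e})
        if is_matching(offspring) and len(offspring) >= len(current):
            current = offspring
    return current

rng = random.Random(1)
path_edges = [(0, 1), (1, 2), (2, 3)]  # path graph; maximum matching size is 2
best = one_plus_one_ea(path_edges, 5000, rng)
print(len(best))
```

On this three-edge path the EA reaches the maximum matching {(0, 1), (2, 3)} quickly; the paper's contribution is bounding the expected time of this kind of search on general graphs.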
The quality of GIServices (QoGIS) is an important consideration for service sharing and interoperation. However, QoGIS is a complex concept and difficult to evaluate reasonably. Most current studies have focused on static, non-scalable evaluation methods but have ignored location sensitivity, resulting in inaccurate QoGIS values. Because of their intensive geodata and computation, GIServices are more sensitive to location than general services. This paper proposes a location-aware GIServices quality prediction model based on collaborative filtering (LAGCF). The model uses a mixed CF method based on a time-zone feature, from the perspectives of both the user and the GIService. The time zone is taken as the location factor and mapped into the prediction process. A time-zone-adjusted Pearson correlation coefficient algorithm was designed to measure the similarity between candidate GIServices and the target, helping to identify highly similar GIServices. By adopting a coefficient of confidence in the final generation phase, the QoGIS value most similar to that of the target service plays a dominant role in the comprehensive result. Two series of experiments on large-scale QoGIS data were conducted to verify the effectiveness of LAGCF. The results showed that LAGCF can significantly improve the accuracy of QoGIS prediction.
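The core ingredient, a Pearson similarity damped by time-zone distance, can be sketched as follows. The damping formula, the `alpha` parameter, and the sample QoGIS vectors are all hypothetical; the paper's exact adjustment may differ.

```python
import math

def pearson(a, b):
    """Plain Pearson correlation over co-observed QoGIS values."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (da * db) if da and db else 0.0

def tz_adjusted_pearson(a, b, tz_a, tz_b, alpha=12.0):
    """Hypothetical adjustment: damp similarity as the time-zone gap grows."""
    gap = min(abs(tz_a - tz_b), 24 - abs(tz_a - tz_b))  # circular hour distance
    return pearson(a, b) * (1.0 - gap / alpha)

qos_a = [0.9, 0.7, 0.8, 0.6]    # observed quality of one GIService at 4 probes
qos_b = [0.85, 0.65, 0.8, 0.55]
s_near = tz_adjusted_pearson(qos_a, qos_b, tz_a=8, tz_b=8)
s_far = tz_adjusted_pearson(qos_a, qos_b, tz_a=8, tz_b=2)
print(round(s_near, 3), round(s_far, 3))
```

Two services with identical quality profiles but distant time zones end up less similar, which is how location sensitivity enters the neighbor selection.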
Semantic refinement of stakeholders' requirements is a fundamental issue in requirements engineering. Facing the on-demand collaboration problem among heterogeneous, autonomous, and dynamic service resources on the Web, service requirements refinement becomes extremely important, and its key issue is semantic interoperability aggregation. A method for creating connecting ontologies driven by a requirement sign ontology is proposed. Based on connecting ontologies, a method for semantic interoperability aggregation in requirements refinement is proposed. In addition, we find that the necessary condition for semantic interoperability is semantic similarity, and the sufficient condition is the coverability of the agreed mediation ontology. Based on this viewpoint, a metric framework for calculating semantic interoperability capability is proposed. This methodology provides a semantic representation mechanism for refining users' requirements; meanwhile, since users' requirements on the Web usually originate from different domains, it also provides semantic interoperability guidance for networked service discovery, and is an effective approach to realizing on-demand service integration. The methodology will be beneficial in service-oriented software engineering and cloud computing.
Patent prior art search uses dispersed information to retrieve, from a massive patent database, all the relevant documents, which exhibit strong ambiguity. This challenging task consists of patent reduction and patent expansion. Existing studies on patent reduction ignore the relevance between technical characteristics and technical domains, and result in ambiguous queries. Work on patent expansion expands terms from external resources by selecting words with similar distribution or similar semantics; however, this splits the relevance between the distribution and the semantics of the terms. Besides, a common repository hardly meets the requirements of patent expansion for uncommon semantics and unusual terms. To solve these problems, we first present a novel composite-domain perspective model that converts the technical characteristics of a query patent to a specific composite classified domain and generates aspect queries. We then implement patent expansion with double consistency by combining distribution and semantics simultaneously. We also propose to train semantic vector spaces via word embedding under the specific classified domains, so as to provide a domain-aware expansion resource. Finally, multiple retrieval results for the same topic are merged based on perspective weight and rank. Our experimental results on CLEF-IP 2010 demonstrate that our method is very effective: it achieves about 5.43% improvement in recall and nearly 12.38% improvement in PRES over the state-of-the-art, and also achieves the best performance balance in terms of recall, MAP and PRES.
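Query expansion from a domain-trained embedding space reduces, at its core, to nearest neighbors under cosine similarity. The terms and three-dimensional vectors below are toy illustrations (real embeddings would come from training on the classified domain's corpus), not the paper's vectors.

```python
import math

# Toy domain-specific embedding space (vectors are illustrative only).
vectors = {
    "battery":   [0.9, 0.1, 0.0],
    "cell":      [0.85, 0.15, 0.05],
    "anode":     [0.7, 0.3, 0.1],
    "antenna":   [0.1, 0.9, 0.2],
    "waveguide": [0.05, 0.95, 0.1],
}

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

def expand(term, k=2):
    """Return the k terms closest to `term` in the embedding space."""
    sims = [(cosine(vectors[term], v), w) for w, v in vectors.items() if w != term]
    return [w for _, w in sorted(sims, reverse=True)[:k]]

print(expand("battery"))
```

Training the space per classified domain is what keeps "cell" close to "battery" rather than to biology terms, the domain-awareness the abstract argues a common repository cannot provide.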
Resources shared in e-Science have critical security requirements, so subjective trust management is essential to safeguard users' collaborations and communications on such a promising infrastructure. As an important property of subjective trust, uncertainty should be preserved and exhibited in trust definition, representation and evolution. Considering the drawbacks of existing mechanisms based on random mathematics and fuzzy theory, this paper designs an uncertainty-enhanced trust evolution strategy based on cloud model theory. We define subjective trust as a trust cloud, and then propose new algorithms to propagate, aggregate and update trust. Furthermore, based on the concept of the similar cloud, a method to assess trust level is put forward. The simulation results show the effectiveness, rationality and efficiency of our proposed strategy.
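Assessing a trust level via similar clouds amounts to comparing a cloud's numerical characteristics (Ex, En, He) against template clouds and picking the nearest. The distance weights, template values, and labels below are hypothetical; the paper's similar-cloud measure may be defined differently.

```python
# Hypothetical similarity between trust clouds, each summarized by (Ex, En, He).
def cloud_distance(c1, c2, weights=(0.6, 0.3, 0.1)):
    """Weighted L1 distance over the three numerical characteristics."""
    return sum(w * abs(a - b) for w, a, b in zip(weights, c1, c2))

def trust_level(cloud, templates):
    """Assign the label of the template cloud nearest to `cloud`."""
    return min(templates, key=lambda name: cloud_distance(cloud, templates[name]))

templates = {
    "high":   (0.9, 0.05, 0.01),
    "medium": (0.6, 0.10, 0.02),
    "low":    (0.2, 0.15, 0.03),
}
print(trust_level((0.85, 0.07, 0.01), templates))
```

Keeping En and He in the comparison is what preserves uncertainty: two clouds with the same expected trust but very different entropy need not land in the same level.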
Market making (MM) strategies have played an important role in the electronic stock market. However, MM strategies without any forecasting power are not safe while trading. In this paper, we design and implement a two-tier framework, which includes a trading signal generator based on a supervised learning approach and an event-driven MM strategy. The proposed generator incorporates information from order book microstructure and market news to provide directional predictions. The MM strategy in the second tier trades on the signals and prevents itself from profit loss caused by market trending. Using half a year of price tick data from the Tokyo Stock Exchange (TSE) and the Shanghai Stock Exchange (SSE), and corresponding Thomson Reuters news from the same period, we conduct back-testing and simulation on an industrial near-to-reality simulator. From the empirical results, we find that 1) strategies with signals perform better than strategies without signals in terms of average daily profit and loss (PnL) and Sharpe ratio (SR), and 2) correct predictions help MM strategies readjust their quoting along with market trending, which keeps the strategies from triggering the stop-loss procedure that would otherwise realize the paper loss.
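One simple way a directional signal can readjust quoting is to skew both quotes toward the predicted trend instead of quoting symmetrically around mid. The skew rule and all numbers below are hypothetical, a minimal sketch rather than the paper's strategy.

```python
# Minimal sketch of signal-gated quoting: skew the bid/ask pair in the
# direction of the predicted trend (signal: +1 up, -1 down, 0 neutral).
def quotes(mid, half_spread, signal, skew=0.5):
    shift = skew * half_spread * signal
    bid = mid - half_spread + shift
    ask = mid + half_spread + shift
    return round(bid, 2), round(ask, 2)

print(quotes(100.0, 0.10, 0))
print(quotes(100.0, 0.10, +1))
```

With an up-trend signal the whole quote pair shifts up, so the market maker is less likely to be run over by the trend and forced into the stop-loss path.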
Mobile agents (MAs) have shown promise as a powerful means to complement and enhance existing technology in various application areas. In particular, existing work has demonstrated that MAs can simplify the development and improve the performance of certain classes of distributed applications, especially those running in a wide-area, heterogeneous, and dynamic networking environment like the Internet. In our previous work, we extended the application of MAs to the design of distributed control functions, which require the maintenance of logical relationships among, and/or coordination of, processing entities in a distributed system. A novel framework is presented for structuring and building distributed systems, in which cooperating mobile agents are used to carry out coordination and cooperation tasks. The framework has been used for designing various distributed control functions such as load balancing and mutual exclusion in our previous work. In this paper, we use the framework to propose a novel approach to detecting deadlocks in distributed systems by using mobile agents, which demonstrates their adaptiveness and flexibility. We first describe the MAEDD (Mobile Agent Enabled Deadlock Detection) scheme, in which mobile agents are dispatched to collect and analyze deadlock information distributed across the network sites and, based on the analysis, to detect and resolve deadlocks. Then the design of an adaptive hybrid algorithm derived from the framework is presented. The algorithm can dynamically adapt itself to changes in system state by using different deadlock detection strategies. The performance of the proposed algorithm has been evaluated using simulations.
The results show that the algorithm can outperform existing algorithms that use a fixed deadlock detection strategy.
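Whatever detection strategy the agents select, the analysis step reduces to cycle detection on the wait-for information they collect. The process names and snapshot below are hypothetical; the DFS is a standard cycle check, not the MAEDD algorithm itself.

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph (process -> processes it waits on)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def visit(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            state = color.get(q, WHITE)
            if state == GRAY:  # back edge: a wait cycle, hence a deadlock
                return True
            if state == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color.get(p, WHITE) == WHITE and visit(p) for p in wait_for)

# Hypothetical wait-for snapshot an agent could assemble from several sites.
snapshot = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"], "P4": []}
print(has_deadlock(snapshot))
print(has_deadlock({"P1": ["P2"], "P2": [], "P3": []}))
```

In the agent setting the interesting part is assembling a consistent snapshot across sites; once assembled, the cycle check itself is linear in the graph size.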
Funding: supported by the National High Technology Research and Development Program of China (863 Program) (2002AA1Z1490), the Specialized Research Fund for the Doctoral Program of Higher Education (20040486049), and the University Cooperative Research Fund of Huawei Technology Co., Ltd.
Funding: supported by the National Basic Research Program of China (973 Program) (No. 2007CB310800) and the China Postdoctoral Science Foundation (Nos. 20090460107 and 201003794).
Funding: supported by the National Grand Fundamental Research 973 Program of China under Grant No. 2007CB310800, the National Natural Science Foundation of China under Grant Nos. 60873083 and 60803025, the Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20090141120022, the Natural Science Foundation of Hubei Province of China under Grant Nos. 2008ABA379 and 2008CDB351, and the Fundamental Research Funds for the Central Universities of China under Grant No. 6082005.
Funding: supported by the Natural Science Foundation of China under award No. 61303024, the Natural Science Foundation of Jiangsu Province under award No. BK20130372, the National 973 Program of China under award No. 2014CB340600, the National High Tech 863 Program of China under award No. 2015AA016002, and the Natural Science Foundation of China under award No. 61272452; supported in part by ARO Grant No. W911NF-12-1-0286 and NSF Grant No. 1111925.
Funding: supported by the National Key Basic Research Program of China (973 Program) under Grant No. 2007CB310804 and the China Postdoctoral Science Foundation under Grant Nos. 20090460107 and 201003794.
Funding: supported by the National Natural Science Foundation of China (90104005).
Funding: funded by the National Basic Research Program of China ("973" Program) under Grant No. 2007CB310801, the National Natural Science Foundation of China under Grant Nos. 60970017, 60873083, 60803025, and 60903034, and the Foundation for Distinguished Young Scientists of Hubei Province of China under Grant No. 2008CDB351.
Funding: this work was partially supported by the National Basic Research Program of China (973 Program, No. 2007CB310806), the Hubei Province Natural Science Foundation of China (No. 2007ABA038), the Doctoral Subject Fund of the Ministry of Education (No. 20070486064), and the 111 Project (No. B07037).
Funding: supported by the National Natural Science Foundation of China.
Funding: supported by the Engineering and Physical Sciences Research Council (GR/R52541/01) and the State Key Lab of Software Engineering at Wuhan University.
Funding: National Natural Science Foundation of China [grant number 41401464], Open Foundation of LIESMARS [grant number 15I02], and Natural Science Foundation of Hubei Province [grant number 2016CFC769].
Funding: supported by the National Basic Research 973 Program of China under Grant No. 2007CB310801 and the National Natural Science Foundation of China under Grant Nos. 60970017 and 60903034.
Abstract: Semantic refinement of stakeholders' requirements is a fundamental issue in requirements engineering. Faced with the on-demand collaboration problem among heterogeneous, autonomous, and dynamic service resources on the Web, service requirements refinement becomes extremely important, and its key issue is semantic interoperability aggregation. A method for creating connecting ontologies driven by a requirement sign ontology is proposed. Based on connecting ontologies, a method for semantic interoperability aggregation in requirements refinement is proposed. In addition, we find that the necessary condition for semantic interoperability is semantic similarity, while the sufficient condition is the coverability of the agreed mediation ontology. Based on this viewpoint, a metric framework for calculating semantic interoperability capability is proposed. This methodology provides a semantic representation mechanism for refining users' requirements; meanwhile, since users' requirements on the Web usually originate from different domains, it can also provide semantic interoperability guidance for networked service discovery, and it is an effective approach to realizing on-demand service integration. The methodology will be beneficial to service-oriented software engineering and cloud computing.
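As an illustration only (the paper's metric framework is not detailed in the abstract), the necessary/sufficient-condition viewpoint can be sketched with concept sets, treating Jaccard overlap as a similarity proxy and mediation-ontology coverage as coverability; all three functions are hypothetical stand-ins:

```python
def semantic_similarity(a, b):
    # Jaccard overlap of two requirements' concept sets
    # as a crude semantic-similarity proxy.
    return len(a & b) / len(a | b) if a | b else 0.0

def coverage(requirement, mediation):
    # Fraction of the requirement's concepts that the agreed
    # mediation ontology can express.
    return len(requirement & mediation) / len(requirement) if requirement else 1.0

def interoperability_capability(req_a, req_b, mediation):
    # Composite metric following the paper's observation: similarity is
    # necessary (zero similarity yields zero capability), and full
    # coverage of both requirements by the mediation ontology is sufficient.
    sim = semantic_similarity(req_a, req_b)
    cov = min(coverage(req_a, mediation), coverage(req_b, mediation))
    return sim * cov
```

For two requirements sharing one of three total concepts and a mediation ontology covering everything, the capability is the similarity itself, 1/3.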
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 61232002 and 61572376), the Science and Technology Support Program of Hubei Province (2015BAA127), and the Wuhan Innovation Team Project (2014070504020237).
Abstract: Patent prior art search uses dispersed information to retrieve, from a massive patent database, all the relevant documents, which carry strong ambiguity. This challenging task consists of patent reduction and patent expansion. Existing studies on patent reduction ignore the relevance between technical characteristics and technical domains, resulting in ambiguous queries. Work on patent expansion expands terms from external resources by selecting words with similar distribution or similar semantics; however, this splits the relevance between the distribution and the semantics of the terms. Besides, a common repository hardly meets patent expansion's need for uncommon semantics and unusual terms. To solve these problems, we first present a novel composite-domain perspective model that converts the technical characteristics of a query patent into a specific composite classified domain and generates aspect queries. We then implement patent expansion with double consistency by combining distribution and semantics simultaneously. We also propose training semantic vector spaces via word embedding under the specific classified domains, so as to provide a domain-aware expansion resource. Finally, multiple retrieval results on the same topic are merged based on perspective weight and rank. Our experimental results on CLEF-IP 2010 demonstrate that our method is very effective: it achieves about a 5.43% improvement in recall and nearly a 12.38% improvement in PRES over the state of the art, and it achieves the best performance balance in terms of recall, MAP, and PRES.
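The "double consistency" idea — requiring an expansion term to agree with the query term in both document distribution and embedding-space semantics — can be sketched as below. The linear scoring form, the Jaccard distributional proxy, and the toy vocabulary are illustrative assumptions, not the paper's implementation:

```python
import math

def cosine(u, v):
    # Semantic closeness in a (domain-specific) embedding space.
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv) if nu and nv else 0.0

def jaccard(docs_a, docs_b):
    # Distributional agreement: overlap of the document sets
    # in which the two terms occur.
    union = docs_a | docs_b
    return len(docs_a & docs_b) / len(union) if union else 0.0

def expand(query_term, vocab, embeddings, postings, top_n=2, lam=0.5):
    # Double-consistency score: a candidate must be close to the query
    # term in BOTH embedding space and document distribution; `lam`
    # balances the two criteria.
    scores = []
    for term in vocab:
        if term == query_term:
            continue
        sem = cosine(embeddings[query_term], embeddings[term])
        dist = jaccard(postings[query_term], postings[term])
        scores.append((lam * sem + (1 - lam) * dist, term))
    scores.sort(reverse=True)
    return [t for _, t in scores[:top_n]]

# Toy domain: "motor" agrees with "engine" on both criteria; "wheel" on neither.
embeddings = {"engine": [1.0, 0.0], "motor": [0.9, 0.1], "wheel": [0.0, 1.0]}
postings = {"engine": {1, 2, 3}, "motor": {2, 3}, "wheel": {7}}
```

Training the embeddings per classified domain, as the abstract proposes, would make `embeddings` differ across aspect queries.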
Funding: Supported by the National Natural Science Foundation of China under Grant No. 60703048, the Open Foundation of the State Key Lab of Software Engineering of Wuhan University under Grant No. SKLSE20080720, and the Open Foundation of the State Key Laboratory for Novel Software Technology of Nanjing University under Grant No. KFKT2009B22.
Abstract: Resources shared in e-Science have critical security requirements, so subjective trust management is essential to guarantee users' collaborations and communications on such a promising infrastructure. As an important aspect of subjective trust, uncertainty should be preserved and exhibited in trust definition, representation, and evolution. Considering the drawbacks of existing mechanisms based on random mathematics and fuzzy theory, this paper designs an uncertainty-enhanced trust evolution strategy based on cloud model theory. We define subjective trust as a trust cloud, and we propose new algorithms to propagate, aggregate, and update trust. Furthermore, based on the concept of the similar cloud, a method to assess trust level is put forward. The simulation results show the effectiveness, rationality, and efficiency of the proposed strategy.
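In cloud model theory a cloud is commonly parameterized by expectation, entropy, and hyper-entropy (Ex, En, He), which is what lets a trust cloud carry uncertainty explicitly. The sketch below shows one simple weighted aggregation of recommendation clouds and a crude similar-cloud score; the specific formulas are hypothetical stand-ins, not the paper's algorithms:

```python
from dataclasses import dataclass

@dataclass
class TrustCloud:
    ex: float  # expectation of the trust degree
    en: float  # entropy: uncertainty of the trust degree
    he: float  # hyper-entropy: uncertainty of the entropy itself

def aggregate(clouds, weights):
    # Hypothetical weighted aggregation of recommendation clouds:
    # all three parameters combine with the same normalized weights,
    # so more-trusted recommenders dominate the result.
    total = sum(weights)
    w = [x / total for x in weights]
    ex = sum(wi * c.ex for wi, c in zip(w, clouds))
    en = sum(wi * c.en for wi, c in zip(w, clouds))
    he = sum(wi * c.he for wi, c in zip(w, clouds))
    return TrustCloud(ex, en, he)

def similarity(a, b):
    # Crude "similar cloud" score for mapping a cloud to a trust level:
    # closer parameter vectors give a higher score in (0, 1].
    d = abs(a.ex - b.ex) + abs(a.en - b.en) + abs(a.he - b.he)
    return 1.0 / (1.0 + d)
```

Assessing a trust level then amounts to comparing an entity's cloud against a set of reference clouds (e.g. "high", "medium", "low") and picking the most similar one.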
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 61173011 and 61103125). Thanks to Charles River Advisors Ltd., who provided their commercial exchange simulator for research use. Xiaotie Deng is supported by the National Natural Science Foundation of China (Grant No. 61173011) and a 985 project of Shanghai Jiaotong University, China.
Abstract: Market making (MM) strategies have played an important role in the electronic stock market, but MM strategies without any forecasting power are not safe while trading. In this paper, we design and implement a two-tier framework, which includes a trading signal generator based on a supervised learning approach and an event-driven MM strategy. The proposed generator incorporates information from the order book microstructure and market news to provide directional predictions. The MM strategy in the second tier trades on the signals and prevents itself from profit loss caused by market trending. Using half a year of price tick data from the Tokyo Stock Exchange (TSE) and the Shanghai Stock Exchange (SSE), together with corresponding Thomson Reuters news from the same period, we conduct back-testing and simulation on an industrial near-to-reality simulator. From the empirical results, we find that 1) strategies with signals perform better than strategies without signals in terms of average daily profit and loss (PnL) and Sharpe ratio (SR), and 2) correct predictions help MM strategies readjust their quoting along with the market trend, avoiding the stop-loss procedure that would otherwise realize the paper loss.
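The quote-readjustment behavior described — leaning quotes with the predicted trend so the stop-loss is not triggered — can be illustrated minimally as follows; the skew parameterization and the stop-loss rule are assumptions, not the paper's strategy:

```python
def quotes(mid, spread, signal, skew=0.5):
    # signal in {-1, 0, +1}: directional prediction from the first tier.
    # With an upward signal both quotes shift up, so the market maker
    # leans with the predicted trend instead of being run over by it.
    # `skew` (fraction of the half-spread to shift) is a hypothetical knob.
    shift = signal * skew * spread / 2
    return mid - spread / 2 + shift, mid + spread / 2 + shift

def should_stop(entry_mid, current_mid, inventory, max_loss=0.5):
    # Simple stop-loss trigger: an unrealized loss on current inventory
    # beyond `max_loss` flattens the position, realizing the paper loss.
    unrealized = inventory * (current_mid - entry_mid)
    return unrealized < -max_loss
```

A correct upward signal moves the bid toward the rising mid, so inventory is acquired earlier and `should_stop` is less likely to fire as the trend unfolds.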
Abstract: Mobile agents (MAs) have shown promise as a powerful means to complement and enhance existing technology in various application areas. In particular, existing work has demonstrated that MAs can simplify the development and improve the performance of certain classes of distributed applications, especially those running in a wide-area, heterogeneous, and dynamic networking environment like the Internet. In our previous work, we extended the application of MAs to the design of distributed control functions, which require the maintenance of logical relationships among, and/or coordination of, processing entities in a distributed system. A novel framework was presented for structuring and building distributed systems that use cooperating mobile agents to carry out coordination and cooperation tasks; it has been used for designing various distributed control functions such as load balancing and mutual exclusion. In this paper, we use the framework to propose a novel approach to detecting deadlocks in distributed systems by using mobile agents, which demonstrates the adaptiveness and flexibility of mobile agents. We first describe the MAEDD (Mobile Agent Enabled Deadlock Detection) scheme, in which mobile agents are dispatched to collect and analyze deadlock information distributed across the network sites and, based on the analysis, to detect and resolve deadlocks. Then the design of an adaptive hybrid algorithm derived from the framework is presented. The algorithm can dynamically adapt itself to changes in system state by using different deadlock detection strategies. The performance of the proposed algorithm has been evaluated using simulations; the results show that it outperforms existing algorithms that use a fixed deadlock detection strategy.
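Edge-chasing is a classic deadlock detection strategy that fits the probe-carrying-agent picture: an agent forwards a probe along wait-for edges, and a probe returning to its initiator proves a cycle. As a single-site illustration only (not the MAEDD scheme itself), the probe walk over a wait-for graph can be sketched as:

```python
def detect_deadlock(wait_for, start):
    # `wait_for` maps each process to the processes it is blocked on.
    # An "agent" carries a probe along wait-for edges; if the probe
    # reaches the initiator again, a cycle (deadlock) exists.
    visited = set()
    stack = [start]
    while stack:
        node = stack.pop()
        for target in wait_for.get(node, []):
            if target == start:
                return True  # probe came back: deadlock
            if target not in visited:
                visited.add(target)
                stack.append(target)
    return False
```

In a distributed deployment the `wait_for` edges are scattered across sites, which is where dispatching agents to collect and analyze the fragments, as MAEDD does, replaces this single centralized traversal.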
Funding: Partially supported by the National Natural Science Foundation of China (Grant Nos. 61332006 and 61232002), the National High-Tech Research and Development Program (863 Program) of China (2015AA015303), and Infosys.