Abstract: Finding the optimum solution for dispatching in concrete delivery is computationally intractable because it is an NP-hard (non-deterministic polynomial-time hard) problem. Heuristic methods are required to obtain satisfactory solutions. Inefficiencies in mathematical modeling still make concrete dispatching difficult to solve. In practice, complex dispatching systems are mostly handled by human experts, who are able to manage the assigned tasks well. However, the heavy dependence on human expertise is a considerable challenge for RMC (ready mixed concrete) companies. In this paper, a logical reconstruction of an expert's decision making is achieved by two machine learning techniques: decision trees and rule induction. This paper focuses on the expert dispatcher's prioritization of customer orders. The proposed method has been tested on a simulation model consisting of a batch plant and three customers per day. The scenarios generated by the simulation model were given to a dispatch manager who was asked to prioritize the customers for each day. The scenarios and the decisions were then input to the machine learning programs, which created generalizations of the expert's decisions. Both decision trees and rules approach 80% accuracy in reproducing the human performance.
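The abstract does not give the scenario features, labels, or toolkit used, so the following is only a minimal sketch of the decision-tree side of such an approach, assuming hypothetical features (order volume, travel time, requested start time) and synthetic stand-in "expert" priority labels; it uses scikit-learn rather than whatever the paper employed.

```python
# Minimal sketch: learn a dispatcher's customer prioritization with a decision
# tree. Features, labels, and thresholds below are assumptions for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical per-customer scenario features:
# [order volume (m^3), travel time to site (min), requested start (min after 7am)]
X = rng.uniform([10, 5, 0], [100, 60, 480], size=(300, 3))

# Stand-in "expert" priority labels (1 = serve first ... 3 = serve last),
# derived from a simple synthetic rule purely to make the sketch runnable.
y = np.digitize(X[:, 2] + 0.5 * X[:, 1], bins=[150, 350]) + 1

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

print(f"held-out accuracy: {tree.score(X_te, y_te):.2f}")
# The fitted tree can be read off as human-interpretable dispatching rules:
print(export_text(tree, feature_names=["volume", "travel_time", "requested_start"]))
```

Reading the tree back as rules mirrors the paper's pairing of decision trees with rule induction: both produce generalizations of the expert's choices that a dispatcher can inspect.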
Abstract: Mobile devices are resource-limited, and task migration has become an important and attractive feature of mobile clouds. To validate task migration, we propose a novel approach to the simulation of task migration in a pervasive cloud environment. Our approach is based on Colored Petri Nets (CPNs). In this research, we extended the semantics of a CPN and created two task migration models with different task migration policies: one that took account of context information and one that did not. We evaluated the two models using CPN-based simulation and analyzed their task migration accessibility, integrity during the migration process, reliability, and the stability of the pervasive cloud system after task migration. The energy consumption and costs of the two models were also investigated. Our results suggest that CPN with context-sensing task migration can minimize energy consumption while preserving good overall performance.
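The paper's models are expressed as Colored Petri Nets; the plain-Python sketch below is not a CPN and only contrasts the two policies the abstract names, a context-blind policy versus a context-sensing one. The context attributes, thresholds, and cost formula are assumptions.

```python
# Minimal, CPN-free sketch contrasting the two migration policies from the
# abstract. All attributes, thresholds, and costs are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Context:
    battery_pct: float      # remaining battery on the mobile device
    bandwidth_mbps: float   # current link quality to the cloud
    task_mb: float          # size of the task state to transfer

def migrate_always(ctx: Context) -> bool:
    """Context-blind policy: offload every task to the cloud."""
    return True

def migrate_with_context(ctx: Context) -> bool:
    """Context-sensing policy: offload only when the transfer is cheap
    (good bandwidth, small task state) or the device is low on battery."""
    transfer_cost = ctx.task_mb / max(ctx.bandwidth_mbps, 0.1)
    return ctx.battery_pct < 20 or transfer_cost < 2.0

if __name__ == "__main__":
    ctx = Context(battery_pct=55, bandwidth_mbps=1.0, task_mb=40)
    print("always:", migrate_always(ctx),
          "context-aware:", migrate_with_context(ctx))
```

In the paper this kind of guard would sit on a CPN transition, so that the token carrying the task only fires the migration transition when the context conditions hold; the simulation then compares energy and cost across the two nets.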
Abstract: Graphs are widely used for modeling complicated data such as social networks, chemical compounds, protein interactions, and the semantic web. To effectively understand and utilize any collection of graphs, a graph database that efficiently supports elementary querying mechanisms is crucially required. For example, subgraph and supergraph queries are important types of graph queries which have many applications in practice. A primary challenge in computing the answers of graph queries is that pair-wise comparisons of graphs are usually hard problems. Relational database management systems (RDBMSs) have repeatedly been shown to be able to efficiently host different types of data such as complex objects and XML data. RDBMSs derive much of their performance from sophisticated optimizer components which make use of physical properties that are specific to the relational model, such as sortedness, proper join ordering, and powerful indexing mechanisms. In this article, we study the problem of indexing and querying graph databases using the relational infrastructure. We present a purely relational framework for processing graph queries. This framework relies on building a layer of graph-features knowledge which captures metadata and summary features of the underlying graph database. We describe different querying mechanisms which make use of this layer to achieve scalable performance for processing graph queries. Finally, we conduct an extensive set of experiments on real and synthetic datasets to demonstrate the efficiency and the scalability of our techniques.
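The abstract does not spell out the relational schema or operators of the framework, so the following is a small sketch of the general filter-then-verify idea behind feature-based graph indexing, with a hypothetical schema (an edge table plus a per-graph feature-count summary). For a subgraph query, any stored graph that lacks one of the query's features can be pruned with plain SQL before an expensive subgraph-isomorphism check.

```python
# Minimal sketch of the "filter" step of a relational graph-query framework.
# Schema, feature choice (edge-label counts), and data are assumptions.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE edge(gid INTEGER, src INTEGER, dst INTEGER, label TEXT);
    CREATE TABLE feature(gid INTEGER, feat TEXT, cnt INTEGER);
    CREATE INDEX idx_feat ON feature(feat, cnt);
""")
# Two tiny database graphs summarized by edge-label counts.
con.executemany("INSERT INTO feature VALUES (?, ?, ?)",
                [(1, "C-C", 3), (1, "C-O", 1), (2, "C-C", 1)])

# Feature requirements extracted from the query graph (assumed precomputed).
query_feats = [("C-C", 2), ("C-O", 1)]

# Keep only graphs containing every query feature with at least the required
# multiplicity; only these survivors go on to subgraph-isomorphism verification.
sql = """
    SELECT gid FROM feature
    WHERE (feat = ? AND cnt >= ?) OR (feat = ? AND cnt >= ?)
    GROUP BY gid HAVING COUNT(*) = 2
"""
params = [p for pair in query_feats for p in pair]
candidates = [row[0] for row in con.execute(sql, params)]
print("candidate graphs to verify:", candidates)   # -> [1]
```

Because the filter is ordinary SQL over indexed tables, it benefits from exactly the optimizer machinery (indexes, join ordering, sortedness) that the abstract credits for RDBMS performance.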
Abstract: In general, the problem of verifying whether a structured business process is compliant with a given set of regulations is NP-hard. The present paper focuses on identifying a tractable subset of this problem, namely verifying whether a structured business process is compliant with a single global obligation. Global obligations are those whose validity spans the entire execution of a business process. We identify two types of obligations: achievement and maintenance. In the present paper we first define an abstract framework capable of modeling the problem, and then define procedures and algorithms for checking the compliance of a structured business process with respect to a single global obligation. We show that the proposed algorithms run in polynomial time.
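The abstract distinguishes achievement obligations (the condition must hold at some point of the execution) from maintenance obligations (it must hold throughout). The sketch below is not the paper's algorithm for structured processes; it only checks both kinds of obligation over a single linear trace to make the distinction concrete, whereas the paper's contribution is doing this in polynomial time over all executions of a structured process.

```python
# Minimal sketch: checking a single global obligation over one execution trace.
# A trace is a list of state-label sets; the obligation is a state predicate.
# Both checks run in time linear in the trace length.
from typing import Callable, Iterable, Set

State = Set[str]
Obligation = Callable[[State], bool]

def achieved(trace: Iterable[State], obliged: Obligation) -> bool:
    """Achievement obligation: the condition holds in at least one state."""
    return any(obliged(state) for state in trace)

def maintained(trace: Iterable[State], obliged: Obligation) -> bool:
    """Maintenance obligation: the condition holds in every state."""
    return all(obliged(state) for state in trace)

if __name__ == "__main__":
    trace = [{"order_received"},
             {"order_received", "invoice_sent"},
             {"archived"}]
    invoice_present = lambda s: "invoice_sent" in s
    print(achieved(trace, invoice_present))    # True: holds in the second state
    print(maintained(trace, invoice_present))  # False: fails in the first and last
```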
Abstract: It is attractive to formulate problems in computer vision and related fields in terms of probabilistic estimation where the probability models are defined over graphs, such as grammars. The graphical structures, and the state variables defined over them, give a rich knowledge representation which can describe the complex structures of objects and images. The probability distributions defined over the graphs capture the statistical variability of these structures. These probability models can be learnt from training data with limited amounts of supervision. But learning these models suffers from the difficulty of evaluating the normalization constant, or partition function, of the probability distributions, which can be extremely computationally demanding. This paper shows that by placing bounds on the normalization constant we can obtain computationally tractable approximations. Surprisingly, for certain choices of loss functions, we obtain many of the standard max-margin criteria used in support vector machines (SVMs) and hence we reduce the learning to standard machine learning methods. We show that many machine learning methods can be obtained in this way as approximations to probabilistic methods, including multi-class max-margin, ordinal regression, max-margin Markov networks and parsers, multiple-instance learning, and latent SVM. We illustrate this work with computer vision applications including image labeling, object detection and localization, and motion estimation. We speculate that better results can be obtained by using better bounds and approximations.
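The key step in the abstract is that replacing the log partition function by a tractable bound turns maximum-likelihood learning into familiar max-margin objectives. The numeric sketch below illustrates that substitution for a plain multi-class model rather than the paper's general graphical-model setting; the scores and margin values are made up for illustration.

```python
# Minimal numeric sketch of the bound behind the abstract: for scores s(y),
# the log partition function log Z = logsumexp(s) is lower-bounded by
# max_y s(y). Substituting the max gives a tractable surrogate for the
# negative log-likelihood, and adding a margin term inside the max yields
# the multi-class max-margin (hinge) loss used by SVMs.
import numpy as np

scores = np.array([2.0, 0.5, -1.0])   # s(y) for three candidate labels
true_y = 1                            # index of the ground-truth label

log_Z = np.logaddexp.reduce(scores)   # exact normalization constant
nll = log_Z - scores[true_y]          # negative log-likelihood of the true label

# Bound: logsumexp(s) >= max(s), giving a tractable surrogate objective.
surrogate = scores.max() - scores[true_y]

# Adding a margin Delta(y, y') inside the max gives the multi-class hinge loss.
margin = np.array([0.0 if y == true_y else 1.0 for y in range(len(scores))])
hinge = np.max(scores + margin) - scores[true_y]

print(f"NLL={nll:.3f}  max-surrogate={surrogate:.3f}  max-margin hinge={hinge:.3f}")
```

The same substitution applied to structured models (where the max ranges over exponentially many labelings but can often be computed by dynamic programming) is what connects the probabilistic formulation to max-margin Markov networks, parsers, and latent SVMs in the abstract.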