Cardinality matrix problems are the underlying structure of several real-world problems such as rostering, sports scheduling, and timetabling. These are hard computational problems given their inherent combinatorial structure. Constraint-based approaches have been shown to outperform other approaches for solving these problems. In this paper we propose the cardinality matrix constraint, a specialized global constraint for cardinality matrix problems. The cardinality matrix constraint takes advantage of the intrinsic structure of cardinality matrix problems. It uses a global cardinality constraint per row and per column and one cardinality (0,1)-matrix constraint per symbol. This latter constraint corresponds to solving a special case of a network flow problem, the transportation problem, which effectively captures the interactions between rows, columns, and symbols of cardinality matrix problems. Our results show that the cardinality matrix constraint outperforms standard constraint-based formulations of cardinality matrix problems.
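As a concrete illustration of the per-symbol (0,1)-matrix constraint, the sketch below checks feasibility for one symbol by solving the corresponding transportation problem as a max-flow. The function name, its arguments, and the use of networkx are illustrative assumptions, not the propagation algorithm of the paper.

```python
# Minimal feasibility check for the per-symbol (0,1)-matrix constraint,
# modeled as a transportation problem and solved with max-flow.
import networkx as nx

def symbol_feasible(open_cells, row_demand, col_demand):
    """open_cells: set of (r, c) where the symbol may still be placed.
    row_demand[r] / col_demand[c]: how many occurrences of the symbol
    each row / column still requires. Feasible iff a 0/1 placement exists."""
    total = sum(row_demand.values())
    if total != sum(col_demand.values()):
        return False
    G = nx.DiGraph()
    for r, d in row_demand.items():
        G.add_edge("s", ("row", r), capacity=d)   # source -> row supply
    for c, d in col_demand.items():
        G.add_edge(("col", c), "t", capacity=d)   # column demand -> sink
    for (r, c) in open_cells:
        G.add_edge(("row", r), ("col", c), capacity=1)  # at most one per cell
    flow, _ = nx.maximum_flow(G, "s", "t")
    return flow == total

# Example: a 2x2 block where the symbol must appear once per row and column.
print(symbol_feasible({(0, 0), (0, 1), (1, 0), (1, 1)},
                      {0: 1, 1: 1}, {0: 1, 1: 1}))  # True
```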
In recent years we have seen an increasing interest in combining CSP and LP based techniques for solving hard computational problems. While considerable progress has been made in the integration of these techniques for solving problems that exhibit a mixture of ...
We introduce a graph coloring challenge benchmark based on the problem of completing Latin squares. We show how the hardness of the instances can be finely controlled by varying the fraction of precolored squares. We compare three complete (exact) solution strategies on this benchmark: (1) a Constraint Satisfaction (CSP) based approach, (2) a hybrid Linear Programming / CSP approach, and (3) a Boolean Satisfiability (SAT) approach. None of these methods uniformly dominates the others on this domain. The CSP and hybrid approaches lead to more compact encodings. The hybrid approach uses a randomized rounding LP based approximation that considerably boosts the performance of the CSP strategy on hard instances. However, the SAT-based approach can solve some of the hardest problem instances, provided the SAT encoding remains manageable. We discuss the various tradeoffs between these approaches in detail.
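To make the benchmark concrete, here is a minimal backtracking completer for partial Latin squares, the CSP view of the underlying coloring problem. The representation (0 marks an empty cell) and the most-constrained-cell heuristic are illustrative assumptions, not the encodings compared in the paper.

```python
# Complete a partial Latin square by backtracking; 0 marks an empty cell.
def complete_latin(square):
    n = len(square)
    def candidates(r, c):
        used = set(square[r]) | {square[i][c] for i in range(n)}
        return [v for v in range(1, n + 1) if v not in used]
    empties = [(r, c) for r in range(n) for c in range(n) if square[r][c] == 0]
    if not empties:
        return True
    # Branch on the most constrained cell (fewest remaining values).
    r, c = min(empties, key=lambda rc: len(candidates(*rc)))
    for v in candidates(r, c):
        square[r][c] = v
        if complete_latin(square):
            return True
        square[r][c] = 0              # undo and try the next value
    return False

s = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
print(complete_latin(s), s)           # True, with s completed in place
```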
We study the runtime distributions of backtrack procedures for propositional satisfiability and constraint satisfaction. Such procedures often exhibit a large variability in performance. Our study reveals some intriguing properties of such distributions: They are often characterized by very long tails or "heavy tails". We will show that these distributions are best characterized by a general class of distributions that can have infinite moments (i.e., an infinite mean, variance, etc.). Such nonstandard distributions have recently been observed in areas as diverse as economics, statistical physics, and geophysics. They are closely related to fractal phenomena, whose study was introduced by Mandelbrot. We also show how random restarts can effectively eliminate heavy-tailed behavior. Furthermore, for harder problem instances, we observe long tails on the left-hand side of the distribution, which is indicative of a non-negligible fraction of relatively short, successful runs. A rapid restart strategy eliminates heavy-tailed behavior and takes advantage of short runs, significantly reducing expected solution time. We demonstrate speedups of up to two orders of magnitude on SAT and CSP encodings of hard problems in planning, scheduling, and circuit synthesis.
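The effect of rapid restarts can be illustrated with a small simulation, assuming (for illustration only) Pareto-distributed run times and a fixed restart cutoff:

```python
# Simulating restarts on a heavy-tailed runtime distribution.
import random

def pareto_runtime(alpha=0.8, xm=1.0):
    return xm / (random.random() ** (1.0 / alpha))   # inverse-CDF sampling

def with_restarts(cutoff, alpha=0.8):
    total = 0.0
    while True:
        t = pareto_runtime(alpha)
        if t <= cutoff:
            return total + t                         # run finished in time
        total += cutoff                              # abandon run, restart

random.seed(0)
n = 100_000
print("no restarts, mean:", sum(pareto_runtime() for _ in range(n)) / n)
print("cutoff=10,  mean:", sum(with_restarts(10) for _ in range(n)) / n)
```

With alpha below 1 the unrestarted distribution has infinite mean, so its empirical mean is large and unstable, while the restarted mean stays small and stable.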
Combinatorial search methods often exhibit a large variability in performance. We study the cost profiles of combinatorial search procedures. Our study reveals some intriguing properties of such cost profiles. The distributions are often characterized by very long tails or “heavy tails”. We will show that these distributions are best characterized by a general class of distributions that have no moments (i.e., an infinite mean, variance, etc.). Such non-standard distributions have recently been observed in areas as diverse as economics, statistical physics, and geophysics. They are closely related to fractal phenomena, whose study was introduced by Mandelbrot. We believe this is the first finding of these distributions in a purely computational setting. We also show how random restarts can effectively eliminate heavy-tailed behavior, thereby dramatically improving the overall performance of a search procedure.
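A standard diagnostic for such tails, sketched below on simulated costs, is that the survival function of a Pareto-type distribution is a straight line of slope -alpha on a log-log plot. The quantile-based slope fit here is a deliberate simplification of proper tail-index estimation.

```python
# Estimate the tail index alpha from the log-log survival function.
import math, random

random.seed(1)
costs = sorted(1.0 / (random.random() ** (1.0 / 0.7)) for _ in range(50_000))
n = len(costs)
# Survival probability at quantile q is (1 - q); take log-log points.
points = [(math.log(costs[int(q * n)]), math.log(1.0 - q))
          for q in (0.90, 0.99, 0.999)]
(x0, y0), (x1, y1) = points[0], points[-1]
print("estimated tail index alpha ~", -(y1 - y0) / (x1 - x0))  # near 0.7
```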
New methods to generate hard random problem instances have driven progress on algorithms for deduction and constraint satisfaction. Recently Achlioptas et al. (AAAI 2000) introduced a new generator based on Latin squares that creates only satisfiable problems, and so can be used to accurately test incomplete (one-sided) solvers. We investigate how this and other generators are biased away from the uniform distribution of satisfiable problems and show how they can be improved by imposing a balance condition. More generally, we show that the generator is one member of a family of related models that generate distributions ranging from ones that are everywhere tractable to ones that exhibit a sharp hardness threshold. We also discuss the critical role of the problem encoding in the performance of both systematic and local search solvers.
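The sketch below shows the basic "Latin square with holes" recipe behind such generators: build a full Latin square, then remove entries, so every instance is satisfiable by construction. Building the square by permuting a cyclic square is a simplification (it does not sample Latin squares uniformly), and the equal-holes-per-row rule only gestures at the balance condition discussed above.

```python
# Generate a satisfiable Latin-square-completion instance by punching holes.
import random

def random_latin(n):
    square = [[(r + c) % n + 1 for c in range(n)] for r in range(n)]  # cyclic
    random.shuffle(square)                       # permute rows
    cols = list(range(n)); random.shuffle(cols)  # permute columns
    return [[row[c] for c in cols] for row in square]

def punch_holes(square, holes_per_row):
    n = len(square)
    for r in range(n):
        for c in random.sample(range(n), holes_per_row):
            square[r][c] = 0                     # 0 marks a hole
    return square

random.seed(2)
print(punch_holes(random_latin(5), 2))
```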
This paper describes our ongoing work on an interesting distributed constraint satisfaction problem (DCSP), SensorCSP, which is based on a system of wireless sensors tracking multiple mobile nodes. We present some preliminary results showing that the source of combinatorial complexity in this problem is closely linked to the level of communication in the system. This DCSP lends itself naturally to two models: one in which agents are associated with the sensors, and one in which agents are associated with the mobile nodes. We show that these models are duals of each other, and discuss how they differ in the number of intra- and inter-agent constraints and how this might affect the cost of finding a distributed solution. We also suggest that a careful distinction must be made between explicit and implicit inter-agent constraints in this problem domain, as this might affect the communication costs and the scalability of a distributed solution.
There has been significant recent progress in reasoning and constraint processing methods. In areas such as planning and finite model-checking, current solution techniques can handle combinatorial problems with up to a million variables and five million constraints. The good scaling behavior of these methods appears to defy what one would expect based on a worst-case complexity analysis. In order to bridge this gap between theory and practice, we propose a new framework for studying the complexity of these techniques on practical problem instances. In particular, our approach incorporates general structural properties observed in practical problem instances into the formal complexity analysis. We introduce a notion of "backdoors", which are small sets of variables that capture the overall combinatorics of the problem instance. We provide empirical results showing the existence of such backdoors in real-world problems. We then present a series of complexity results that explain the good scaling behavior of current reasoning and constraint methods observed on practical problem instances.
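The notion can be made concrete with a brute-force sketch: a set B of variables is a (strong) backdoor with respect to a unit-propagation subsolver if every assignment to B lets unit propagation alone decide the formula. The code below is an illustrative toy, not the detection algorithms or subsolver definitions of the paper.

```python
# Brute-force search for a smallest strong backdoor w.r.t. unit propagation.
from itertools import combinations, product

def unit_propagate(clauses):
    clauses = [set(c) for c in clauses]
    while True:
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            return clauses               # residual clauses (possibly none)
        lit, new = units[0], []
        for c in clauses:
            if lit in c:
                continue                 # clause satisfied, drop it
            c = c - {-lit}
            if not c:
                return None              # empty clause: conflict
            new.append(c)
        clauses = new
        if not clauses:
            return []                    # all clauses satisfied

def is_backdoor(clauses, variables):
    for signs in product([1, -1], repeat=len(variables)):
        assigned = clauses + [[s * v] for s, v in zip(signs, variables)]
        if unit_propagate(assigned) not in (None, []):
            return False                 # UP did not decide this branch
    return True

def smallest_backdoor(clauses, num_vars):
    for k in range(num_vars + 1):
        for subset in combinations(range(1, num_vars + 1), k):
            if is_backdoor(clauses, subset):
                return subset

# (x1 or x2) and (-x1 or x3) and (-x2 or -x3): fixing x1 decides the rest.
print(smallest_backdoor([[1, 2], [-1, 3], [-2, -3]], 3))  # (1,)
```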
In recent years we have seen significant progress in the area of Boolean satisfiability (SAT) solving and its applications. As a new challenge, the community is now moving to investigate whether similar advances can be made in the use of Quantified Boolean Formulas (QBF). QBF provides a natural framework for capturing problem solving and planning in multiagent settings. However, in contrast to single-agent planning, which can be effectively formulated as SAT, we show that a QBF approach to planning in a multiagent setting leads to significant and unexpected computational difficulties. We identify as a key difficulty of the QBF approach the fact that QBF solvers often end up exploring a much larger search space than the natural search space of the original problem. This is in contrast to the experience with SAT approaches. We also show how one can alleviate these problems by introducing two special QBF formulations and a new QBF solution strategy. We present experiments that show the effectiveness of our approach in terms of a significant improvement in performance compared to earlier work in this area. Our work also provides a general methodology for formulating adversarial scenarios in QBF.
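For concreteness, the "natural search space" of a QBF is the tree explored by the following brute-force evaluator (exists = OR over branches, forall = AND). This is a toy for intuition only; real QBF solvers add propagation and learning, and the paper's special formulations are not reproduced here.

```python
# Tiny recursive QBF evaluator over a quantifier prefix and CNF clauses.
def eval_qbf(prefix, clauses, assignment=()):
    if not prefix:
        assigned = dict(assignment)      # leaf: check all clauses
        return all(any(assigned.get(abs(l)) == (l > 0) for l in c)
                   for c in clauses)
    (q, var), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, clauses, assignment + ((var, val),))
                for val in (True, False))
    return any(branches) if q == "exists" else all(branches)

# forall x1 exists x2 : (x1 or x2) and (-x1 or -x2)  -> True (x2 = not x1)
prefix = [("forall", 1), ("exists", 2)]
clauses = [[1, 2], [-1, -2]]
print(eval_qbf(prefix, clauses))
```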
A major difficulty in evaluating incomplete local search style algorithms for constraint satisfaction problems is the need for a source of hard problem instances that are guaranteed to be satisfiable. A standard approach to evaluate incomplete search methods has been to use a general problem generator and a complete search method to filter out the unsatisfiable instances. Unfortunately, this approach cannot be used to create problem instances that are beyond the reach of complete search methods. So far, it has proven to be surprisingly difficult to develop a direct generator for satisfiable instances only. In this paper, we propose a generator that only outputs satisfiable problem instances. We also show how one can finely control the hardness of the satisfiable instances by establishing a connection between problem hardness and a new kind of phase transition phenomenon in the space of problem instances. Finally, we use our problem distribution to show the easy-hard-easy pattern in search complexity for local search procedures, analogous to the previously reported pattern for complete search methods.
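A typical incomplete method that such a generator is designed to evaluate is min-conflicts local search, sketched here on the classic n-queens problem rather than on the paper's instance distribution:

```python
# Min-conflicts local search on n-queens (one queen per column).
import random

def min_conflicts_queens(n, max_steps=100_000):
    rows = [random.randrange(n) for _ in range(n)]   # rows[c] = queen row
    def conflicts(c, r):
        return sum(1 for c2 in range(n) if c2 != c and
                   (rows[c2] == r or abs(rows[c2] - r) == abs(c2 - c)))
    for _ in range(max_steps):
        bad = [c for c in range(n) if conflicts(c, rows[c]) > 0]
        if not bad:
            return rows                              # solution found
        c = random.choice(bad)                       # pick a conflicted queen
        rows[c] = min(range(n), key=lambda r: conflicts(c, r))
    return None                                      # incomplete: may fail

random.seed(3)
print(min_conflicts_queens(20))
```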
Recent progress on search and reasoning procedures has been driven by experimentation on computationally hard problem instances. Hard random problem distributions are an important source of such instances. Challenge problems from the area of finite algebra have also ...
Unpredictability in the running time of complete search procedures can often be explained by the phenomenon of "heavy-tailed cost distributions", meaning that at any time during the experiment there is a non-negligible probability of hitting a problem that requires exponentially more time to solve than any that has been encountered before. We present a general method for introducing controlled randomization into complete search algorithms. The "boosted" search methods provably eliminate heavy tails to the right of the median. Furthermore, they can take advantage of heavy tails to the left of the median (that is, a non-negligible chance of very short runs) to dramatically shorten the solution time. We demonstrate speedups of several orders of magnitude for state-of-the-art complete search procedures running on hard, real-world problems.
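A widely used concrete restart schedule for this style of boosting is the universal sequence of Luby et al., shown below as background; the paper's own cutoff policy is not reproduced here.

```python
# The Luby et al. universal restart sequence: 1,1,2,1,1,2,4,1,1,2,1,1,2,4,8,...
# Successive runs use cutoffs luby(1), luby(2), ... times a base cutoff.
def luby(i):
    """Return the i-th term (1-based) of the Luby sequence."""
    k = 1
    while (1 << k) - 1 < i:              # smallest k with i <= 2^k - 1
        k += 1
    if (1 << k) - 1 == i:                # i ends a block: term is 2^(k-1)
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)  # otherwise recurse into the prefix

print([luby(i) for i in range(1, 16)])   # [1,1,2,1,1,2,4,1,1,2,1,1,2,4,8]
```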
Much progress has been made in terms of boosting the effectiveness of backtrack style search methods. In addition, during the last decade, a much better understanding of problem hardness, typical case complexity, and backtrack search behavior has been obtained. One example of a recent insight into backtrack search concerns so-called heavy-tailed behavior in randomized versions of backtrack search. Such heavy tails explain the large variance in runtime often observed in practice. However, heavy-tailed behavior certainly does not occur on all instances. This has led to a need for a more precise characterization of when heavy-tailedness does and does not occur in backtrack search. In this paper, we provide such a characterization. We identify different statistical regimes of the tail of the runtime distributions of randomized backtrack search methods and show how they are correlated with the "sophistication" of the search procedure combined with the inherent hardness of the instances. We also show that the runtime distribution regime is highly correlated with the distribution of the depth of inconsistent subtrees discovered during the search. In particular, we show that an exponential distribution of the depth of inconsistent subtrees, combined with a search space that grows exponentially with the depth of the inconsistent subtrees, implies heavy-tailed behavior.
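The final implication can be made explicit under simplified assumptions (treating the depth as continuous for brevity): if the depth D of an inconsistent subtree satisfies P(D > d) = (1-p)^d and refuting a subtree of depth d costs b^d, then the cost C = b^D has a power-law (Pareto) tail:

```latex
\[
  P(C > x) \;=\; P\!\left(b^{D} > x\right)
           \;=\; P\!\left(D > \log_b x\right)
           \;=\; (1-p)^{\log_b x}
           \;=\; x^{-\alpha},
  \qquad \alpha = \frac{-\ln(1-p)}{\ln b}.
\]
```

For -ln(1-p) < ln b, i.e. alpha < 1, even the mean of C is infinite, which is exactly the heavy-tailed regime.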
Recently, it has been found that the cost distributions of randomized backtrack search in combinatorial domains are often heavy-tailed. Such heavy-tailed distributions explain the high variability observed when using backtrack-style procedures. A good understanding of this phenomenon can lead to better search techniques. For example, restart strategies provide a good mechanism for eliminating the heavy-tailed behavior and boosting the overall search performance. Several state-of-the-art SAT solvers now incorporate such restart mechanisms. The study of heavy-tailed phenomena in combinatorial search has so far been largely based on empirical data. We introduce several abstract tree search models, and show formally how heavy-tailed cost distributions can arise in backtrack search. We also discuss how these insights may facilitate the development of better combinatorial search methods.
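One such abstract model can be simulated in a few lines (the parameters here are assumptions for illustration, not the paper's exact models): with geometric refutation depth and cost 2^D, the mean is infinite for p < 1/2, and the empirical mean keeps drifting upward as the sample grows.

```python
# Simulate an abstract tree-search model with geometric depth and cost 2^D.
import random

def run_cost(p=0.3):
    d = 0
    while random.random() > p:    # each level, stop with probability p
        d += 1
    return 2 ** d                 # refutation cost grows exponentially in depth

random.seed(4)
for n in (10**3, 10**4, 10**5, 10**6):
    print(n, sum(run_cost() for _ in range(n)) / n)  # mean does not converge
```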
We introduce SensorDCSP, a naturally distributed benchmark based on a real-world application that arises in the context of networked distributed systems. In order to study the performance of Distributed CSP (DisCSP) algorithms in a truly distributed setting, we use a discrete-event network simulator, which allows us to model the impact of different network traffic conditions on the performance of the algorithms. We consider two complete DisCSP algorithms: asynchronous backtracking (ABT) and asynchronous weak-commitment search (AWC). In our study of different network traffic distributions, we found that random delays, in some cases combined with a dynamic decentralized restart strategy, can improve the performance of DisCSP algorithms. More interestingly, we also found that the active introduction of message delays by agents can improve performance and robustness while reducing the overall network load. Finally, our work confirms that AWC performs better than ABT on satisfiable instances. On unsatisfiable instances, however, the performance of AWC is considerably worse than that of ABT.
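The message-delay mechanism can be pictured with a toy discrete-event queue (a hypothetical interface, not the simulator used in the experiments): messages are delivered in arrival-time order after an added random delay.

```python
# Toy discrete-event message layer with random (exponential) delivery delays.
import heapq, random

class DelayedNetwork:
    def __init__(self, mean_delay=1.0):
        self.now, self.mean_delay, self.queue = 0.0, mean_delay, []
    def send(self, sender, receiver, payload):
        arrival = self.now + random.expovariate(1.0 / self.mean_delay)
        heapq.heappush(self.queue, (arrival, sender, receiver, payload))
    def step(self):
        # Advance the clock to the next arrival and deliver that message.
        self.now, sender, receiver, payload = heapq.heappop(self.queue)
        return sender, receiver, payload

random.seed(5)
net = DelayedNetwork()
net.send("agent1", "agent2", ("ok?", 42))
net.send("agent2", "agent1", ("nogood", 7))
while net.queue:
    print(net.step())
```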
We describe research and results centering on the construction and use of Bayesian models that can predict the run time of problem solvers. Our efforts are motivated by observations of high variance in the time required to solve instances for several challenging problems. The methods have application to the decision-theoretic control of hard search and reasoning algorithms. We illustrate the approach with a focus on the task of predicting run time for general and domain-specific solvers on a hard class of structured constraint satisfaction problems. We review the use of learned models to predict the ultimate length of a trial, based on observing the behavior of the search algorithm during an early phase of a problem session. Finally, we discuss how we can employ the models to inform dynamic run-time decisions.
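A minimal, non-Bayesian stand-in for the predictive idea is sketched below on synthetic data: learn from early-phase observations whether a run will be long, and restart when the predicted probability is high. The feature names, the data, and the use of logistic regression are illustrative assumptions, not the paper's models.

```python
# Predict "long run" from early-phase features; restart on high probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 2000
# Early-phase observations: e.g. backtracks and average subtree depth so far.
X = rng.normal(size=(n, 2))
# Synthetic ground truth: deep early subtrees make a long run more likely.
y = (X[:, 1] + 0.5 * rng.normal(size=n) > 0.5).astype(int)

model = LogisticRegression().fit(X[:1500], y[:1500])
print("held-out accuracy:", model.score(X[1500:], y[1500:]))
# A run-time controller could restart when P(long run) exceeds a threshold:
print("restart?", model.predict_proba(X[:3])[:, 1] > 0.8)
```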
This paper complements our earlier work, in which we formally introduced the notion of a backdoor and characterized the complexity of several algorithms designed to take advantage of backdoors, including restart strategies. In this paper, we focus on the connection between heavy tails and backdoors.