Post by Lars Ericson on Aug 3, 2015 11:34:06 GMT -8
This is kind of interesting. Running 11 threads on a Xeon E5-2620 with plenty of memory, I get these results for Clingo running HunterBig.lp:

Goal | Successful Thread | All Threads
  90 |               120 |        1160
  92 |               144 |        1424
  94 |               411 |        4287
  96 |               723 |        7758
  98 |              1739 |       18782
 100 |              3171 |       34359
The command to Clingo is: clingo --parallel-mode 11,compete hb.lp
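As an aside, the All Threads column above grows roughly exponentially in the goal value. A quick least-squares sketch in Python (data transcribed from the table; the fit is my own back-of-the-envelope check, not from the post) estimates the growth rate:

```python
import math

# Timings (seconds) transcribed from the table above:
# goal value vs. total time across all 11 threads.
goals = [90, 92, 94, 96, 98, 100]
secs = [1160, 1424, 4287, 7758, 18782, 34359]

# Least-squares fit of log(time) against goal: if time grows
# exponentially, log(time) is linear in the goal value.
n = len(goals)
mx = sum(goals) / n
my = sum(math.log(s) for s in secs) / n
slope = (sum((g - mx) * (math.log(s) - my) for g, s in zip(goals, secs))
         / sum((g - mx) ** 2 for g in goals))

growth_per_goal = math.exp(slope)  # multiplicative growth per goal point
print(f"time grows ~{growth_per_goal:.2f}x per goal point")
```

On these six data points the fitted growth is about 1.4x per goal point, i.e. solve time roughly doubles every two goal points.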
Post by Andrew Rose on Aug 7, 2015 7:22:09 GMT -8
What are the units?
Post by Lars Ericson on Aug 7, 2015 11:11:04 GMT -8
Seconds, sorry. Like 34,000 seconds. Longer than your average gaming clock. This leads to the topic of my P-Complete post in this Implementation section. There is work being done to parallelize Clingo on GPU. However, Kaminski is saying the core algorithm in Clasp is not amenable to GPU-style threads with relatively low dependency between threads, because it is P-complete. There wasn't much talk about complexity in the Coursera course. It is nice to see that we can mechanically reduce a GDL into a Clingo LP and search for a model (a move leading to goal 100), and nice to know that the reduction results in a P-complete problem.

If you wanted to be academic about it, you could then ask what subsets or patterns of GDL usage lead to sub-P-complete problems in LP, what GDL simplifications such as factoring are not happening in the Clingo solver, whether those could be adapted for use inside Clingo, whether that would lead to faster solutions in some cases, and so on. There's enough work there for multiple PhD theses in the intersection of GGP, ASP, complexity theory and computational symbolic logic.

I didn't see much talk on the Coursera course boards about the complexity of decision procedures in logic sublanguages. For example, the work of Cantone, Schwartz, Davis and Omodeo in Computational Logic and Set Theory: Applying Formalized Logic to Analysis, Set Theory for Computing, and Computable Set Theory: Volume 1. Or Decision Procedures: An Algorithmic Point of View by Kroening and Strichman. Or the article by Melvin Robert Krom, "A decision procedure for a class of formulas of first order predicate calculus." Which is to say, there is a lot of hard-core work in the literature on decidable sublanguages, done by somewhat more hard-core logicians, i.e. people who would subscribe to the Journal of Symbolic Logic as a matter of course. Somebody might/should bite my head off, but the Stanford Logic Group doesn't really seem to have that kind of pedigree, i.e. training and a chain of thesis advisors and grand-thesis-advisors for the faculty that leads back to people like Stephen Kleene, Alonzo Church, Jacques Herbrand, or more recent names like Dana Scott, Martin Davis, Peter Andrews and Jack Schwartz. Davis, Andrews, Scott and Kleene were students of Church. But just because you weren't born there doesn't mean you can't sample the cuisine.
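Krom's class is a concrete case of a tractable sublanguage: propositional Krom formulas are clauses with at most two literals, i.e. 2-SAT, decidable in linear time via the implication graph, in contrast to general SAT. A minimal sketch (the encoding and names are mine, not from any of the cited texts):

```python
def solve_2sat(n_vars, clauses):
    """Decide satisfiability of a propositional Krom (2-SAT) formula.

    clauses: list of pairs (a, b), where a literal is +i or -i for
    variable i (1-based).  Returns a satisfying assignment as a dict,
    or None if unsatisfiable.  Uses the classic implication-graph /
    strongly-connected-components method, which runs in linear time.
    """
    def node(lit):  # literal +i -> 2*(i-1), literal -i -> 2*(i-1)+1
        return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)

    size = 2 * n_vars
    adj = [[] for _ in range(size)]
    radj = [[] for _ in range(size)]
    for a, b in clauses:
        # (a or b) yields implications (not a -> b) and (not b -> a).
        adj[node(-a)].append(node(b)); radj[node(b)].append(node(-a))
        adj[node(-b)].append(node(a)); radj[node(a)].append(node(-b))

    # Kosaraju: iterative DFS for finish order, then label SCCs on the
    # reverse graph; components come out in topological order (sources first).
    order, seen = [], [False] * size
    for s in range(size):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(adj[v]):
                stack.append((v, i + 1))
                w = adj[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)

    comp, c = [-1] * size, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            for w in radj[stack.pop()]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1

    assign = {}
    for i in range(1, n_vars + 1):
        if comp[node(i)] == comp[node(-i)]:
            return None  # x and not-x in one SCC: unsatisfiable
        # A literal is true iff its component comes later in topological order.
        assign[i] = comp[node(i)] > comp[node(-i)]
    return assign

# (x1 or x2), (not x1 or x2), (not x2 or x1) forces both variables true.
print(solve_2sat(2, [(1, 2), (-1, 2), (-2, 1)]))  # {1: True, 2: True}
print(solve_2sat(1, [(1, 1), (-1, -1)]))          # None
```

The point being: restrict the clause shape and a problem that is NP-complete in general drops to linear time, which is exactly the kind of sublanguage question one could ask of GDL and ASP.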
Post by Steve Draper on Aug 8, 2015 5:41:45 GMT -8
In general the problems must be (capable of being) NP-complete. You can see this readily from the fact that you can represent puzzles like finding a Hamiltonian cycle, which is itself an NP-complete problem. It may be that solving via Clingo means you wind up with something P-complete by nature of the added transformation step, but since (in general) the overlying problem is NP-complete, that will dominate.
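To make the reduction concrete: here is a naive Hamiltonian-cycle check in Python (an illustration of my own, not anyone's solver). Its factorial-time search over vertex orderings is the brute-force face of the NP-completeness being invoked here:

```python
from itertools import permutations

def has_hamiltonian_cycle(n, edges):
    """Naive Hamiltonian-cycle test for an undirected graph on vertices
    0..n-1.  Tries every vertex ordering, so it runs in O(n!) time."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    # Fix vertex 0 first to avoid re-checking rotations of the same cycle.
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm
        if all(cycle[(i + 1) % n] in adj[cycle[i]] for i in range(n)):
            return True
    return False

# A 4-cycle has a Hamiltonian cycle; drop one edge and it does not.
print(has_hamiltonian_cycle(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
print(has_hamiltonian_cycle(4, [(0, 1), (1, 2), (2, 3)]))          # False
```

Any GDL game that embeds a graph puzzle like this inherits at least that hardness, whatever the translation to ASP does on top.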
Post by Lars Ericson on Aug 8, 2015 6:00:07 GMT -8
You want to be able to play any game you want, and the more interesting games will have higher complexity for a general-purpose solver. The goal for GGP would be to recognize sublanguages of GDL that have faster solvers. This reduces to finding sublanguages of ASP that have faster solvers, but constraining first by sublanguages of GDL might lead to more focused sublanguages of interest in ASP. In any event, by doing the GDL-to-ASP translation, you get for free all of the research on ASP solvers embedded in Clingo. Not every game will be hard, although you can express hard problems as games.
Although, to be honest, many of those "decidable" sublanguages in logic have double-exponential-time decision procedures, so you need a D-Wave and a cup of coffee when you run one.