Format of the Competition
There will be three main classes of problems on which solvers will compete (summarized in the sketch after the list):
- [MGS] (Modeling, Grounding, Solving)
- Benchmarks consist of a problem statement, a set of instances given as sets of ground facts, and the names of the predicates (and their arguments) encoding solutions. The overall performance (including grounding and solving) will be measured.
- [SCore] (Solver, Core language)
- Benchmarks are ground programs in the format common to lparse and dlv; aggregates are not allowed. Instances are further classified into two subgroups: normal and disjunctive. The performance of solvers on ground programs will be measured.
- [SLparse] (Solver, Lparse language)
- Benchmarks are ground programs in the lparse format; aggregates are allowed. The performance of solvers on ground programs in lparse format will be measured.
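As a schematic summary (not part of the official rules), the distinctions among the three classes can be restated in code; the class and field names below are ours, and the string values merely restate the descriptions above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CompetitionClass:
    """Illustrative summary of one competition class (field names are ours)."""
    name: str
    input_format: str   # what the benchmarks look like
    measured: str       # what the reported performance covers

# The three classes, restated from the descriptions above.
CLASSES = [
    CompetitionClass(
        "MGS",
        input_format="problem statement + instances as sets of ground facts "
                     "+ names of the solution predicates",
        measured="overall performance, including grounding and solving",
    ),
    CompetitionClass(
        "SCore",
        input_format="ground programs in the common lparse/dlv format, "
                     "no aggregates (normal and disjunctive subgroups)",
        measured="solver performance on ground programs",
    ),
    CompetitionClass(
        "SLparse",
        input_format="ground programs in lparse format, aggregates allowed",
        measured="solver performance on ground programs in lparse format",
    ),
]
```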
We will only consider decision problems in this first edition of the competition. Thus, every solver must indicate whether the benchmark instance at hand has an answer set (SAT) or not (UNSAT).
For each SAT instance, each solver must output a certificate (that is, an answer set). Each solver's output must conform to the following formats (a format checker is sketched after the list):
- SAT - if an answer set exists
Answer Set: atom1 atom2 ... atomN
The output is one line containing the keywords `Answer Set:` and the names of the atoms in the answer set. Each atom's name is preceded by a single space. Spaces must not occur within atom names.
- UNSAT - if no answer set exists
No Answer Set
The output is one line containing the keywords `No Answer Set`.
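A minimal sketch of a checker for these two output formats, assuming the relevant output is a single line and that `Answer Set:` followed by no atoms denotes an empty answer set; the function name, regular expression, and example atoms are ours, not part of any official tool.

```python
import re

# "Answer Set:" followed by zero or more atoms, each preceded by a single space;
# spaces must not occur within atom names.
ANSWER_RE = re.compile(r"^Answer Set:(?: [^ ]+)*$")

def classify_output(line: str) -> str:
    """Return 'SAT', 'UNSAT', or 'MALFORMED' for one line of solver output."""
    line = line.rstrip("\n")
    if line == "No Answer Set":
        return "UNSAT"
    if ANSWER_RE.fullmatch(line):
        return "SAT"
    return "MALFORMED"

# Examples (atom names are hypothetical):
assert classify_output("Answer Set: p(1) q(2,3) r\n") == "SAT"
assert classify_output("Answer Set:\n") == "SAT"        # empty answer set (our reading)
assert classify_output("No Answer Set\n") == "UNSAT"
assert classify_output("Answer  Set: p\n") == "MALFORMED"
```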
We will report the number of instances solved within the allocated time (which we will fix at 500-1000 seconds) and use it as the primary measure of performance. Actual running time will be used only as a tie breaker. We will check the correctness of solutions; solvers discovered to have bugs will be disqualified.
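A small sketch of how a ranking could be computed under these rules, assuming the tie breaker is the total running time on solved instances and using a hypothetical 600-second limit (the actual limit will be fixed somewhere between 500 and 1000 seconds); the data layout and names are ours.

```python
from typing import Dict, List, Tuple

def rank_solvers(results: Dict[str, List[float]],
                 time_limit: float) -> List[Tuple[str, int, float]]:
    """Rank solvers by instances solved within the time limit (primary),
    breaking ties by total running time on solved instances (our assumption).

    `results` maps a solver name to the running times, in seconds, of its
    correctly solved instances.
    """
    table = []
    for solver, times in results.items():
        solved = [t for t in times if t <= time_limit]
        table.append((solver, len(solved), sum(solved)))
    # More instances solved is better; on a tie, less total time is better.
    table.sort(key=lambda row: (-row[1], row[2]))
    return table

# Hypothetical example: both solvers finish two instances within 600 seconds,
# so the tie is broken by total running time.
print(rank_solvers({"solverA": [12.0, 598.0, 700.0],
                    "solverB": [5.0, 40.0]}, 600.0))
```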