Competition results
This page presents the results of the First Answer Set Programming System Competition. Systems were placed primarily by the number of instances solved within the allocated time and space; run times served as tiebreakers. Since each system was allowed to enter each competition category with up to three variants, each represented by a call script, we allocated only one place per system: a system's place is determined by its best-performing variant.
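To make the ranking rule concrete, here is a minimal Python sketch. The data layout (per-system variant results as solved-instance counts and total run times) is an assumption made for illustration, not the competition's actual data format.

def best_variant(variants):
    """Pick a system's best-performing variant (call script):
    more solved instances wins; lower total run time breaks ties."""
    return max(variants.values(), key=lambda r: (r["solved"], -r["time"]))

def placement(systems):
    """Order systems by their best variant, as described above.
    `systems` maps a system name to {call_script: {"solved": int, "time": float}}
    (a hypothetical layout)."""
    key = lambda s: (best_variant(systems[s])["solved"],
                     -best_variant(systems[s])["time"])
    return sorted(systems, key=key, reverse=True)

# Example with made-up numbers: systemA solves more instances, so it places first
# even though systemB's best variant is faster overall.
print(placement({
    "systemA": {"script1": {"solved": 42, "time": 950.0}},
    "systemB": {"script1": {"solved": 40, "time": 700.0},
                "script2": {"solved": 41, "time": 820.0}},
}))  # -> ['systemA', 'systemB']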
Solvers
Participating solvers and corresponding call scripts (² used in both SCore and SCore-∨; ³ used in SCore-∨ only). See Format of the Competition for more details on competition types.
Solver | Affiliation | MGS | SCore | SLparse |
asper | Angers | | ASPeR-call-script ASPeRS30-call-script ASPeRS20-call-script | |
assat | Hong Kong | script.assat.normal | | script.assat.lparse-output |
clasp | Potsdam | clasp_cmp_score clasp_cmp_score2 clasp_score_def | clasp_cmp_score_glp clasp_cmp_score_glp2 clasp_score_glp_def | clasp_cmp_slparse clasp_cmp_slparse2 clasp_slparse_def |
cmodels | Texas | default scriptAtomreasonLp scriptEloopLp | defaultGlparse.sh scriptAtomreasonGlparse scriptEloopGlparse disjGlparseDefault³ disjGparseEloop³ disjGparseVerMin³ | groundedDefault scriptAtomreasonGr scriptEloopGr |
dlv | Vienna/Rende | dlv-contest-special dlv-contest | dlv-contest-special² dlv-contest² | |
gnt | Helsinki | gnt dencode+gnt dencode_bc+gnt | gnt_score² dencode+gnt_score² dencode_bc+gnt_score² | gnt_slparse dencode+gnt_slparse dencode_bc+gnt_slparse |
lp2sat | Helsinki | | lp2sat+minisat wf+lp2sat+minisat lp2sat+siege | |
nomore | Potsdam | nomore-default nomore-localprop nomore-D | nomore-default-SCore nomore-localprop-SCore nomore-D-SCore | nomore-default-slparse nomore-localprop-slparse nomore-D-slparse |
pbmodels | Kentucky | pbmodels-minisat+-MGS pbmodels-pueblo-MGS pbmodels-wsatcc-MGS | pbmodels-minisat+-SCore pbmodels-pueblo-SCore pbmodels-wsatcc-SCore | pbmodels-minisat+-SLparse pbmodels-pueblo-SLparse pbmodels-wsatcc-SLparse |
smodels | Helsinki | smodels smodels_rs smodels_rsn | smodels_score smodels_rs_score smodels_rsn_score | smodels_slparse smodels_rs_slparse smodels_rsn_slparse |
Benchmarks
The ASP community contributed significantly to the competition by providing numerous benchmarks. The page Submitted Benchmarks gives an overview, listing the respective benchmark classes, the accompanying number of instances, and the names of the contributors.
For the competition, benchmarks were selected by a fixed yet random scheme, described by the following algorithm.
The available benchmark classes are predefined in Classes . Instances that have already been run are in Used ; those not yet run are in Fresh .
As an invariant, we have Used ∩ Fresh = ∅.
For a benchmark class C in Classes , we access its used and fresh instances via Used[C] and Fresh[C] , respectively.
The maximum number max of instances per class aims at having approximately 100 benchmark instances overall in the evaluation, that is, |Classes| ⋅ max ≈ 100 (if feasible); for instance, 25 classes give max = 4.
A benchmark instance in Used[C] is considered suitable, that is, it is in Suitable(Used[C]) , if some call script (see above) was able to solve it and at most three call scripts solved it in less than one second.
Function Select(n, Fresh[C]) randomly picks n fresh benchmark instances from class C if available, that is, if n ≤ |Fresh[C]| , in order to end up with max suitable instances of class C .
Procedure Run(ToRun) runs all call scripts on the benchmark instances scheduled in ToRun .
Once ToRun remains empty, no further fresh benchmark instances are selectable and the algorithm terminates.
If a benchmark class yields fewer than max suitable instances, the algorithm needs to be re-invoked with an increased max value for the other benchmark classes; in this sense, the algorithm is "only" semiautomatic. A sketch of the scheme follows the pseudocode below.
Input:
Classes --- set of benchmark classes
Used --- set of benchmark instances run already
Fresh --- set of benchmark instances not run so far
max --- maximum number of suitable benchmark instances per benchmark class
Repeat
ToRun = {}
ForEach (C in Classes)
done = |Suitable(Used[C])|
If(done < max) ToRun = ToRun + Select(max - done, Fresh[C])
EndForEach
Run(ToRun)                  // run all call scripts on the selected instances
Used = Used + ToRun         // selected instances are now used ...
Fresh = Fresh - ToRun       // ... preserving Used ∩ Fresh = ∅
Until(ToRun = {})
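For concreteness, the following Python sketch implements the scheme above. The function names, the `results` bookkeeping, and `run_all` (standing in for invoking all call scripts) are assumptions made for illustration, not the competition's actual infrastructure.

import random

def is_suitable(results):
    """An instance is suitable if some call script solved it and at most
    three call scripts solved it in under one second. `results` maps
    call scripts to run times in seconds (None meaning unsolved)."""
    times = [t for t in results.values() if t is not None]
    return len(times) >= 1 and sum(1 for t in times if t < 1.0) <= 3

def select_benchmarks(classes, used, fresh, results, max_per_class, run_all):
    """Semiautomatic selection: top up every class with fresh instances
    until it has max_per_class suitable ones or runs out of fresh ones.
    `used`/`fresh` map classes to instance sets; Used ∩ Fresh = ∅ holds."""
    while True:
        to_run = []
        for c in classes:
            done = sum(1 for i in used[c] if is_suitable(results[i]))
            need = max_per_class - done
            if need > 0 and fresh[c]:
                picked = random.sample(sorted(fresh[c]), min(need, len(fresh[c])))
                to_run.extend(picked)
                for i in picked:      # move instances, preserving the invariant
                    fresh[c].remove(i)
                    used[c].add(i)
        if not to_run:                # nothing left to select: terminate
            return
        run_all(to_run)               # fills results[i] for every i in to_run

If some class cannot reach max_per_class suitable instances, the loop ends with that class under-filled; as noted above, a re-invocation with a larger max for the remaining classes is then needed.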
Competition Winners
| MGS | SCore | SCore-∨ | SLparse |
FIRST PLACE WINNER | dlv | clasp | dlv | clasp |
SECOND PLACE WINNER | pbmodels | smodels | cmodels | pbmodels |
THIRD PLACE WINNER | clasp | cmodels | gnt | smodels |