Summary of all benchmark classes in the SCore-∨ competition
This section presents the results for the benchmark classes of the First
Answer Set Programming System Competition.
Each single run was limited to 600 seconds of execution time and 448 MB of RAM.
For the sake of completeness, we provide similar data for all benchmarks used in
each competition class. The tables give the number of instances selected from
each class ("#") and break the results down into satisfiable and unsatisfiable
instances. The min, max, and avg columns report run times in seconds.
Results for SAT and UNSAT instances
| Benchmark Class | # | Solved | % | SAT | % | UNSAT | % | min | max | avg |
|---|---|---|---|---|---|---|---|---|---|---|
| Grammar Based Information Extraction | 15 | 120/120 | 100.00 | 64/64 | 100.00 | 56/56 | 100.00 | 0.62 | 7.86 | 5.03 |
| Disjunctive Loops | 3 | 21/24 | 87.50 | 0/0 | n/a | 21/24 | 87.50 | 0.44 | 522.63 | 95.24 |
| Strategic Companies | 15 | 88/120 | 73.33 | 88/120 | 73.33 | 0/0 | n/a | 0.35 | 523.40 | 71.22 |
| Mutex | 7 | 18/56 | 32.14 | 0/0 | n/a | 18/56 | 32.14 | 0.03 | 259.97 | 37.41 |
| Random Quantified Boolean Formulas | 15 | 35/120 | 29.17 | 0/0 | n/a | 35/120 | 29.17 | 0.11 | 290.99 | 44.41 |
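As a quick cross-check of the table above, the percentage columns follow directly from the solved/total run counts. A minimal sketch (the row data is transcribed from the table; the helper name `solved_pct` is ours, not part of the competition tooling):

```python
# Solved/total run counts transcribed from the "Results for SAT and UNSAT
# instances" table above.
rows = {
    "Grammar Based Information Extraction": (120, 120),
    "Disjunctive Loops": (21, 24),
    "Strategic Companies": (88, 120),
    "Mutex": (18, 56),
    "Random Quantified Boolean Formulas": (35, 120),
}

def solved_pct(solved: int, total: int) -> float:
    """Share of runs solved, rounded to two decimals as in the tables."""
    return round(100.0 * solved / total, 2)

for name, (solved, total) in rows.items():
    print(f"{name}: {solved_pct(solved, total):.2f}%")
```

Running this reproduces the "%" column of the table (e.g. 88/120 solved Strategic Companies runs yield 73.33%).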
Results for SAT instances only
| Benchmark Class | # | Solved | % | min | max | avg |
|---|---|---|---|---|---|---|
| Grammar Based Information Extraction | 8 | 64/64 | 100.00 | 1.19 | 7.86 | 5.35 |
| Strategic Companies | 15 | 88/120 | 73.33 | 0.35 | 523.40 | 71.22 |
Results for UNSAT instances only
| Benchmark Class | # | Solved | % | min | max | avg |
|---|---|---|---|---|---|---|
| Grammar Based Information Extraction | 7 | 56/56 | 100.00 | 0.62 | 6.88 | 4.67 |
| Disjunctive Loops | 3 | 21/24 | 87.50 | 0.44 | 522.63 | 95.24 |
| Mutex | 7 | 18/56 | 32.14 | 0.03 | 259.97 | 37.41 |
| Random Quantified Boolean Formulas | 15 | 35/120 | 29.17 | 0.11 | 290.99 | 44.41 |