Argumentation Extensions Enumeration as a Constraint Satisfaction Problem: a Performance Overview
1. Argumentation Extensions Enumeration as a Constraint Satisfaction Problem: a Performance Overview
Mauro Vallati, Federico Cerutti, Massimiliano Giacomin
DARe-2014 — Tuesday 19th August, 2014
2. Implementations for Enumerating Preferred Extensions
Two main approaches:
1. Ad-hoc:
– NAD-Alg [Nofal et al., 2014];
2. Reduction of enumerating preferred extensions into:
– ASP: AspartixM [Dvorák et al., 2011];
– CSP: CONArg2 [Bistarelli et al., 2014];
– SAT (+ maximisation process): PrefSAT [Cerutti et al., 2013].
4. Background
Definition
Given an AF Γ = ⟨A, R⟩, with R ⊆ A × A:
- a set S ⊆ A is conflict-free if ∄ a, b ∈ S s.t. a → b;
- an argument a ∈ A is acceptable with respect to a set S ⊆ A if ∀ b ∈ A s.t. b → a, ∃ c ∈ S s.t. c → b;
- a set S ⊆ A is admissible if S is conflict-free and every element of S is acceptable with respect to S;
- a set S ⊆ A is a complete extension, i.e. S ∈ E_CO(Γ), iff S is admissible and ∀ a ∈ A s.t. a is acceptable w.r.t. S, a ∈ S;
- a set S ⊆ A is a preferred extension, i.e. S ∈ E_PR(Γ), iff S is a maximal (w.r.t. set inclusion) complete set.
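The definitions above can be transcribed almost literally into code. The following brute-force sketch (our own illustrative code, exponential in |A|, so only suited to tiny AFs) checks every subset of A against each definition in turn:

```python
from itertools import combinations

def preferred_extensions(A, R):
    """Enumerate the preferred extensions of the AF (A, R) by brute force."""
    attacks = set(R)

    def conflict_free(S):
        return not any((a, b) in attacks for a in S for b in S)

    def acceptable(a, S):
        # every attacker of a is counter-attacked by some c in S
        return all(any((c, b) in attacks for c in S)
                   for b in A if (b, a) in attacks)

    def admissible(S):
        return conflict_free(S) and all(acceptable(a, S) for a in S)

    def complete(S):
        return admissible(S) and all(a in S for a in A if acceptable(a, S))

    completes = [set(S) for n in range(len(A) + 1)
                 for S in combinations(A, n) if complete(set(S))]
    # preferred = maximal (w.r.t. set inclusion) complete sets
    return [S for S in completes if not any(S < T for T in completes)]

# Classic example: a and b attack each other, both attack c
A = ['a', 'b', 'c']
R = [('a', 'b'), ('b', 'a'), ('a', 'c'), ('b', 'c')]
print(preferred_extensions(A, R))  # [{'a'}, {'b'}]
```

The real systems compared in this deck avoid exactly this exponential subset scan; the sketch only makes the definitions concrete.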
5. Background
Definition
Let Γ = ⟨A, R⟩ be an AF: Lab : A → {in, out, undec} is a complete labelling iff ∀ a ∈ A:
- Lab(a) = in ⟺ ∀ b ∈ a⁻ (the attackers of a), Lab(b) = out;
- Lab(a) = out ⟺ ∃ b ∈ a⁻ : Lab(b) = in.
Let S ⊆ A be a conflict-free set: the corresponding labelling is Ext2Lab(S) ≜ Lab, where
- Lab(a) = in ⟺ a ∈ S;
- Lab(a) = out ⟺ ∃ b ∈ S s.t. b → a;
- Lab(a) = undec ⟺ a ∉ S ∧ ∄ b ∈ S s.t. b → a.
Proposition ([Caminada, 2006])
Given an AF Γ = ⟨A, R⟩, Lab is a complete (grounded, preferred) labelling of Γ if and only if there is a complete (grounded, preferred) extension S of Γ such that Lab = Ext2Lab(S).
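The three cases of Ext2Lab translate directly into code; a minimal Python sketch (function and variable names are ours):

```python
def ext2lab(A, R, S):
    """Map a conflict-free set S of the AF (A, R) to its labelling.

    Transcribes the three cases of Ext2Lab: in if a is in S, out if S
    attacks a, undec otherwise.
    """
    attacks = set(R)
    lab = {}
    for a in A:
        if a in S:
            lab[a] = 'in'
        elif any((b, a) in attacks for b in S):
            lab[a] = 'out'
        else:
            lab[a] = 'undec'
    return lab

# Chain a -> b -> c with the grounded extension {a, c}
print(ext2lab(['a', 'b', 'c'], [('a', 'b'), ('b', 'c')], {'a', 'c'}))
# {'a': 'in', 'b': 'out', 'c': 'in'}
```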
8. Answer Set Programming
– Answer Set Programming is a recent problem solving approach;
– It has roots in KR, logic programming, and nonmonotonic
reasoning;
– The idea: instead of proving a statement, represent solutions as models (Answer Sets)!
– A normal logic program P is a finite set of rules of the form:
a ← b1, …, bm, not c1, …, not cn
where a, bi, cj are literals of the form p or ¬p (strong negation, also written as "-"), and p is a first-order atom from a classical FOL signature.
– An answer set is a set of ground atoms that are "collectively acceptable".
9. Constraint Satisfaction Programming
[Rossi et al., 2008]
Definition
A Constraint Satisfaction Problem (CSP) P is a triple P = ⟨X, D, C⟩ such that:
– X = ⟨x1, …, xn⟩ is a tuple of variables;
– D = ⟨D1, …, Dn⟩ is a tuple of domains such that ∀ i, xi ∈ Di;
– C = ⟨C1, …, Ct⟩ is a tuple of constraints, where ∀ j, Cj = ⟨R_Sj, Sj⟩, with scope Sj ⊆ {xi | xi is a variable} and relation R_Sj over the domains S^D_j = {Di | Di is a domain and xi ∈ Sj}.
Definition
A solution to the CSP P is an assignment A = ⟨a1, …, an⟩ where ∀ i, ai ∈ Di and ∀ j, R_Sj holds on the projection of A onto the scope Sj. If the set of solutions is empty, the CSP is unsatisfiable.
10. Propositional Satisfiability Problems
[Diagram: a SAT solver takes SAT problems Φ1, Φ2, Φ3 as input and answers SAT or UNSAT for each]
– A SAT problem instance is a formula in conjunctive normal form (CNF):
Φ = (u1 ∨ ¬u2 ∨ u3) ∧ (u1 ∨ u2) ∧ (¬u1 ∨ ¬u2 ∨ u3)
– A solver searches for a solution to the CNF, viz. a variable assignment satisfying the formula:
u1 = T, u2 = F, u3 = T
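The slide's example can be checked mechanically. A brute-force sketch of CNF satisfiability (our own encoding of literals; real solvers use CDCL with unit propagation rather than enumeration):

```python
from itertools import product

# The example CNF from the slide, as lists of (variable, polarity) literals
clauses = [[('u1', True), ('u2', False), ('u3', True)],
           [('u1', True), ('u2', True)],
           [('u1', False), ('u2', False), ('u3', True)]]

def satisfying_assignments(clauses):
    """Brute-force SAT: try all 2^n assignments over the clause variables."""
    variables = sorted({v for clause in clauses for v, _ in clause})
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        # a clause is satisfied when at least one literal matches
        if all(any(assignment[v] == pol for v, pol in clause)
               for clause in clauses):
            yield assignment

sats = list(satisfying_assignments(clauses))
# the slide's assignment u1 = T, u2 = F, u3 = T is among the solutions
assert {'u1': True, 'u2': False, 'u3': True} in sats
```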
13. AspartixM: [Dvorák et al., 2011]
– Expresses argumentation semantics in Answer Set Programming
(ASP);
– Tests for subset-maximality exploiting the metasp optimisation
frontend for the ASP-package gringo/claspD;
– Database of the form:
{arg(a) | a ∈ A} ∪ {defeat(a, b) | ⟨a, b⟩ ∈ R}
– Example of program for checking conflict-freeness:
cf = { in(X) ← not out(X), arg(X);
out(X) ← not in(X), arg(X);
← in(X), in(Y), defeat(X, Y) }.
14. CONArg2: [Bistarelli and Santini, 2012, Bistarelli et al., 2014]
Given an AF ⟨A, R⟩:
1. create a variable for each argument, whose domain is always {0, 1}: ∀ ai ∈ A, ∃ xi ∈ X such that Di = {0, 1};
2. describe constraints associated to the different definitions of Dung's argumentation framework, e.g. {a, b} ⊆ A is conflict-free iff ¬(x1 = 1 ∧ x2 = 1);
3. solve the CSP problem.
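Steps 1–2 can be mimicked directly in Python; an illustrative sketch of the 0/1 encoding (with brute-force enumeration standing in for a real CSP solver such as CONArg2's):

```python
from itertools import product

def conflict_free_sets(A, R):
    """Enumerate conflict-free sets via the 0/1 CSP encoding:
    one variable x_a in {0, 1} per argument, and for each attack (a, b)
    the constraint not (x_a = 1 and x_b = 1)."""
    for values in product([0, 1], repeat=len(A)):
        x = dict(zip(A, values))
        if all(not (x[a] == 1 and x[b] == 1) for (a, b) in R):
            # decode the 0/1 assignment back into a set of arguments
            yield {a for a in A if x[a] == 1}

print(list(conflict_free_sets(['a', 'b'], [('a', 'b')])))
# [set(), {'b'}, {'a'}]
```

{a, b} itself is correctly excluded, since the attack (a, b) violates the constraint.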
15. PrefSAT: [Cerutti et al., 2013]
Given an AF Γ = ⟨A, R⟩, Π_Γ is a boolean formula (complete labelling formula) such that each satisfying assignment of the formula corresponds to a complete labelling:
– k = |A|
– φ : {1, …, k} → A is a bijection (the inverse map is φ⁻¹)
– For each argument φ(i) we define three boolean variables:
– Ii, which is true when argument φ(i) is labelled in, false otherwise;
– Oi, which is true when argument φ(i) is labelled out, false otherwise;
– Ui, which is true when argument φ(i) is labelled undec, false otherwise;
– V(Γ) ≜ ⋃_{1≤i≤|A|} {Ii, Oi, Ui} (set of variables for the AF Γ)
16. PrefSAT: [Cerutti et al., 2013]
– Lab is a total function;
– If a is not attacked, Lab(a) = in;
– Lab(a) = in ⟺ ∀ b ∈ a⁻, Lab(b) = out;
– Lab(a) = out ⟺ ∃ b ∈ a⁻ : Lab(b) = in;
– Lab(a) = undec ⟺ ∀ b ∈ a⁻, Lab(b) ≠ in ∧ ∃ c ∈ a⁻ : Lab(c) = undec.
17. PrefSAT: [Cerutti et al., 2013]
⋀_{i ∈ {1,…,k}} [(Ii ∨ Oi ∨ Ui) ∧ (¬Ii ∨ ¬Oi) ∧ (¬Ii ∨ ¬Ui) ∧ (¬Oi ∨ ¬Ui)]
∧ ⋀_{i | φ(i)⁻ = ∅} (Ii ∧ ¬Oi ∧ ¬Ui)
∧ ⋀_{i | φ(i)⁻ ≠ ∅} [Ii ∨ (⋁_{j | φ(j)→φ(i)} ¬Oj)]
∧ ⋀_{i | φ(i)⁻ ≠ ∅} ⋀_{j | φ(j)→φ(i)} (¬Ii ∨ Oj)
∧ ⋀_{i | φ(i)⁻ ≠ ∅} ⋀_{j | φ(j)→φ(i)} (¬Ij ∨ Oi)
∧ ⋀_{i | φ(i)⁻ ≠ ∅} [¬Oi ∨ (⋁_{j | φ(j)→φ(i)} Ij)]
∧ ⋀_{i | φ(i)⁻ ≠ ∅} ⋀_{k | φ(k)→φ(i)} [Ui ∨ ¬Uk ∨ (⋁_{j | φ(j)→φ(i)} Ij)]
∧ ⋀_{i | φ(i)⁻ ≠ ∅} [(⋀_{j | φ(j)→φ(i)} (¬Ui ∨ ¬Ij)) ∧ (¬Ui ∨ (⋁_{j | φ(j)→φ(i)} Uj))]
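The complete labelling formula above (slide 16's conditions, clause by clause) can be generated programmatically. A sketch with our own variable representation, verified by brute force on a mutual attack, where the three complete labellings are {a: in, b: out}, {a: out, b: in}, and {a: undec, b: undec}:

```python
from itertools import product

def complete_labelling_clauses(A, R):
    """Generate CNF clauses of the complete labelling formula.

    A boolean variable is a pair (label, argument) with label in I/O/U;
    a clause is a list of (variable, polarity) literals.
    """
    att = {a: [b for (b, c) in R if c == a] for a in A}  # attackers
    I = lambda a: (('I', a), True); nI = lambda a: (('I', a), False)
    O = lambda a: (('O', a), True); nO = lambda a: (('O', a), False)
    U = lambda a: (('U', a), True); nU = lambda a: (('U', a), False)
    cnf = []
    for a in A:
        # exactly one of in / out / undec
        cnf += [[I(a), O(a), U(a)], [nI(a), nO(a)], [nI(a), nU(a)], [nO(a), nU(a)]]
        if not att[a]:
            cnf += [[I(a)], [nO(a)], [nU(a)]]              # unattacked => in
        else:
            cnf.append([I(a)] + [nO(b) for b in att[a]])   # all attackers out => in
            cnf += [[nI(a), O(b)] for b in att[a]]         # in => every attacker out
            cnf += [[nI(b), O(a)] for b in att[a]]         # an attacker in => out
            cnf.append([nO(a)] + [I(b) for b in att[a]])   # out => some attacker in
            cnf += [[U(a), nU(b)] + [I(c) for c in att[a]]
                    for b in att[a]]                       # undec attacker, none in => undec
            cnf += [[nU(a), nI(b)] for b in att[a]]        # undec => no attacker in
            cnf.append([nU(a)] + [U(b) for b in att[a]])   # undec => some attacker undec
    return cnf

def models(cnf):
    """Brute-force enumeration of the satisfying assignments of cnf."""
    variables = sorted({v for cl in cnf for v, _ in cl})
    for values in product([False, True], repeat=len(variables)):
        asg = dict(zip(variables, values))
        if all(any(asg[v] == pol for v, pol in cl) for cl in cnf):
            yield asg

# Mutual attack a <-> b: decode each model back into a labelling
A, R = ['a', 'b'], [('a', 'b'), ('b', 'a')]
labellings = [{a: next(l for l in 'IOU' if m[(l, a)]) for a in A}
              for m in models(complete_labelling_clauses(A, R))]
```

PrefSAT additionally runs a maximisation loop over such models to isolate the preferred (subset-maximal) labellings; that step is not sketched here.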
19. The Experimental hypothesis
There will be a strict ordering — under any configuration
— regarding the performance of the software measured in (1)
CPU-time needed to enumerate all the preferred extensions
given an AF and in (2) percentage of successful
enumeration. Such an ordering should see the ad-hoc
approach NAD-Alg as the best one, followed by PrefSAT,
CONArg2, and finally AspartixM.
20. Empirical Evaluation: the Experiment
– Randomly generated 720 AFs, divided into classes according to two dimensions:
– |A|: ranging from 25 to 225 with a step of 25;
– generation of the attack relations:
– fixing the probability p_att ∈ {0.25, 0.5, 0.75} that there is an attack for each ordered pair of arguments: 10 AFs forbidding self-attacks, 10 AFs allowing self-attacks;
– selecting randomly the number n_att of attacks: 20 AFs.
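The fixed-probability part of this generation protocol can be sketched as follows (parameter names, the seed argument, and argument naming are our own assumptions, not the authors' generator):

```python
import random

def random_af(n, p_att, allow_self_attacks=False, seed=None):
    """Generate a random AF: n arguments, and each ordered pair of
    arguments attacks with probability p_att; self-attacks optional."""
    rng = random.Random(seed)
    A = [f'a{i}' for i in range(n)]
    R = [(a, b) for a in A for b in A
         if (allow_self_attacks or a != b) and rng.random() < p_att]
    return A, R

# one AF from the smallest class: 25 arguments, p_att = 0.25, no self-attacks
A, R = random_af(25, 0.25, seed=0)
```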
21. Analysis Using the International Planning Competition (IPC) Score
– For each test case (in our case, each test AF) let T* be the best execution time among the compared systems (if no system produces the solution within the time limit, the test case is not considered valid and is ignored).
– For each valid case, each system gets a score of 1/(1 + log10(T/T*)), where T is its execution time, or a score of 0 if it fails in that case. Runtimes below 1 sec get by default the maximal score of 1.
– The (non-normalised) IPC score for a system is the sum of its scores over all the valid test cases. The normalised IPC score ranges from 0 to 100 and is defined as (IPC / # of valid cases) × 100.
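The scoring scheme translates directly into code; a small sketch with made-up runtimes (the data layout is our own assumption):

```python
import math

def ipc_scores(runtimes):
    """Normalised IPC score per system.

    `runtimes` maps system -> list of runtimes in seconds (None = failure),
    aligned by test case. Cases no system solves are dropped as invalid.
    """
    systems = list(runtimes)
    n_cases = len(next(iter(runtimes.values())))
    scores = {s: 0.0 for s in systems}
    valid = 0
    for i in range(n_cases):
        times = [runtimes[s][i] for s in systems if runtimes[s][i] is not None]
        if not times:
            continue  # invalid case: ignored
        valid += 1
        t_best = min(times)
        for s in systems:
            t = runtimes[s][i]
            if t is None:
                continue  # failure: score 0
            # runtimes below 1 sec get the maximal score of 1
            scores[s] += 1.0 if t < 1 else 1 / (1 + math.log10(t / t_best))
    return {s: 100 * scores[s] / valid for s in systems}

print(ipc_scores({'fast': [0.5, 2.0], 'slow': [5.0, None]}))
# {'fast': 100.0, 'slow': 25.0}
```

Here 'fast' scores 1 on both cases (sub-second, then the best time), while 'slow' scores 1/(1 + log10(10)) = 0.5 on the first case and 0 on its failure.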
22. IPC score w.r.t. number of arguments
[Plot: normalised IPC score (y axis, 0–100) w.r.t. the number of arguments (x axis, 25 to 225) for CONArg2, AspartixM, PrefSAT, and NAD-Alg]
Normalised IPC score (y axis) w.r.t. the number of arguments (x axis) for each considered system.
23. Average runtime w.r.t. number of arguments
Average CPU-Time
|A|:         25    50    75   100    125    150    175    200     225
CONArg2    0.25  0.27  0.65  2.15   5.48  14.98  73.78  86.62  187.11
AspartixM  0.18  0.67  1.44  3.26   6.02  15.70  27.99  87.46  117.18
PrefSAT    0.04  0.11  0.23  0.44   0.81   1.67   3.76   6.41   16.21
NAD-Alg    0.01  0.02  0.06  0.99  10.23  12.74  60.35  42.78   75.07
Average runtime for each of the considered solvers, according to the number of arguments of the AFs.
24. IPC score w.r.t. probability of attacks
[Plot: normalised IPC score (y axis, 0–100) w.r.t. the probability of attacks (x axis: 25, 50, 75, RAND) for CONArg2, AspartixM, PrefSAT, and NAD-Alg]
Normalised IPC score (y axis) w.r.t. the probability of attacks (x axis) for each considered system.
25. Average runtime w.r.t. probability of attacks
              % Solved                    Average CPU-Time
             25     50     75   RAND     25    50    75  RAND
CONArg2    97.8  100.0  100.0   97.2   87.4  11.0   7.1  59.6
AspartixM  98.3  100.0  100.0   98.9   56.5  14.7  10.0  34.0
PrefSAT   100.0  100.0  100.0  100.0    5.1   1.6   2.2   4.2
NAD-Alg   100.0  100.0  100.0   93.9   18.9   0.2   0.2  70.6
Percentage of solved AFs and average runtime for each of the considered solvers, according to the percentage of attacks.
27. Concluding Remarks
– First comparison of the state-of-the-art approaches which transform the preferred extension enumeration problem into a CSP (CONArg2), ASP (AspartixM) and SAT (PrefSAT) with the best argumentation-dedicated approach, NAD-Alg;
– Experimental hypothesis partially confirmed: in most cases the ordering is NAD-Alg, PrefSAT, CONArg2, and finally AspartixM. But there are several cases in which:
1. PrefSAT has been the best approach; it is also the only implementation that solved all the AFs considered in the experiment;
2. AspartixM performed significantly better than CONArg2, according to the Friedman statistical test, confirmed by a post-hoc analysis with the Wilcoxon signed-rank test with a Bonferroni correction applied.
28. Future Work
– Larger experimental evaluation;
– Exploitation of a white-box approach: looking at the design of
the solvers.
29. Acknowledgement
The authors would like to acknowledge the use of the University of
Huddersfield Queensgate Grid in carrying out this work.
31. References I
[Bistarelli et al., 2014] Bistarelli, S., Rossi, F., and Santini, F. (2014).
Enumerating Extensions on Random Abstract-AFs with ArgTools, Aspartix, ConArg2, and
Dung-O-Matic.
In Bulling, N., van der Torre, L., Villata, S., Jamroga, W., and Vasconcelos, W., editors,
Computational Logic in Multi-Agent Systems, volume 8624 of Lecture Notes in Computer
Science, pages 70–86. Springer International Publishing.
[Bistarelli and Santini, 2012] Bistarelli, S. and Santini, F. (2012).
Modeling and solving AFs with a constraint-based tool: ConArg.
In Modgil, S., Oren, N., and Toni, F., editors, Theory and Applications of Formal
Argumentation, volume 7132 of Lecture Notes in Computer Science, pages 99–116. Springer
Berlin Heidelberg.
[Caminada, 2006] Caminada, M. (2006).
On the issue of reinstatement in argumentation.
In Proceedings of JELIA 2006, pages 111–123.
[Cerutti et al., 2013] Cerutti, F., Dunne, P. E., Giacomin, M., and Vallati, M. (2013).
Computing preferred extensions in abstract argumentation: A SAT-based approach.
In Proceedings of Theory and Applications of Formal Argumentation (TAFA 2013), pages
176–193.
32. References II
[Dvorák et al., 2011] Dvorák, W., Gaggl, S. A., Wallner, J., and Woltran, S. (2011).
Making Use of Advances in Answer-Set Programming for Abstract Argumentation Systems.
In Proceedings of the 19th International Conference on Applications of Declarative
Programming and Knowledge Management (INAP 2011).
[Nofal et al., 2014] Nofal, S., Atkinson, K., and Dunne, P. E. (2014).
Algorithms for decision problems in argument systems under preferred semantics.
Artificial Intelligence, 207:23–51.
[Rossi et al., 2008] Rossi, F., van Beek, P., and Walsh, T. (2008).
Chapter 4: Constraint programming.
In van Harmelen, F., Lifschitz, V., and Porter, B., editors, Handbook of Knowledge Representation,
volume 3 of Foundations of Artificial Intelligence, pages 181–211. Elsevier.