On the Infeasibility of Obtaining Polynomial Kernels for Some Problems
1. On the Infeasibility of Obtaining Polynomial Kernels
Why some problems are incompressible
2. PROBLEM KERNELS
how, why, what, when
no polynomial kernels
Examples: satisfiability, vertex-disjoint cycles, disjoint factors
Workarounds: many kernels
4. There are 25 people at a professor's party.
The professor would like to locate a group of six people who are popular,
i.e., everyone is a friend of at least one of the six.
Being a busy man, he asks two of his students to find such a group.
The first grimaces and starts making a list of all (25 choose 6) possibilities.
The second knows that no one in the party has more than three friends.
Academicians tend to be lonely.
11. There is a graph on n vertices.
Is there a dominating set of size at most k?
(Every vertex is a neighbor of at least one of the k vertices.)
The problem is NP-complete,
but is trivial on "large" graphs of bounded degree,
as you can say NO whenever n > k(b + 1): each chosen vertex dominates only itself and at most b neighbors.
12. Pre-processing is a humble strategy for coping with hard problems,
almost universally employed.
(Mike Fellows)
13. We would like to talk about the “preprocessing complexity” of a problem.
To what extent may we simplify/compress a problem before we begin to
solve it?
Note that any compression routine has to run efficiently to look attractive.
14. This makes it unreasonable for any NP-complete problem to admit a
compression algorithm.
However, (some) NP-complete problems can be compressed.
We extend our notion (and notation) to understand them.
15. Notation
We denote a parameterized problem as a pair (Q, κ) consisting of a clas-
sical problem Q ⊆ {0, 1}∗ and a parameterization κ : {0, 1}∗ → N.
16. Data reduction involves pruning down a large input into an equivalent,
significantly smaller object (the Kernel), quickly. The size of the kernel
is what we will measure.
24. A kernelization procedure for a parameterized problem L
is a function f : {0, 1}* × N → {0, 1}* × N, such that
for all (x, k) with |x| = n:
(f(x), k′) ∈ L iff (x, k) ∈ L,
|f(x)| ≤ g(k) (the Size of the Kernel),
and f is polytime computable.
27. Theorem
A problem admits a kernel if, and only if, it is fixed-parameter tractable.
Definition
A parameterized problem L is fixed-parameter tractable if there exists an
algorithm that decides in f(k) · n^{O(1)} time whether (x, k) ∈ L, where
n := |x|, k := κ(x), and f is a computable function that does not depend
on n.
28. Theorem
A problem admits a kernel if, and only if, it is fixed-parameter tractable.
Given a kernel, an FPT algorithm is immediate (even brute force on the
kernel will lead to such an algorithm).
29. Theorem
A problem admits a kernel if, and only if, it is fixed-parameter tractable.
On the other hand, an FPT runtime of f(k) · n^c gives us an f(k)-sized kernel:
run the algorithm for n^{c+1} steps. If it stops, output a trivial YES- or
NO-instance accordingly. Otherwise the run was cut short, so
n^{c+1} < f(k) · n^c, i.e., n < f(k),
and the instance itself already has size bounded by f(k).
33. "Efficient" Kernelization
What is a reasonable notion of efficiency for kernelization?
The smaller, the better.
In particular, polynomial-sized kernels are better than exponential-sized
kernels.
34. The problem of finding a Dominating Set of size k on graphs where the
degree is bounded by b, parameterized by k, has a linear kernel. This is an
example of a polynomial-sized kernel.
35. PROBLEM KERNELS
how, why, what, when
no polynomial kernels
Examples: satisfiability, vertex-disjoint cycles, disjoint factors
Workarounds: many kernels
39. A composition algorithm A for a problem is designed to act as a fast
Boolean OR of multiple problem instances.
It receives as input a sequence of instances
x = (x1, . . . , xt) with xi ∈ {0, 1}* for i ∈ [t], such that
k1 = k2 = · · · = kt = k.
It produces as output a yes-instance if and only if at least one of the
instances in the sequence is a yes-instance, with a small parameter:
κ(A(x)) = k^{O(1)},
running in time polynomial in Σ_{i∈[t]} |xi|.
41. The Recipe for Hardness
Composition Algorithm + Polynomial Kernel
⇓
Distillation Algorithm
⇓
PH = Σ^p_3
Theorem. Let (P, κ) be a compositional parameterized problem such that
P is NP-complete. If P has a polynomial kernel, then P also has a
distillation algorithm.
43. Transformations
Let (P, κ) and (Q, γ) be parameterized problems.
We say that there is a polynomial parameter transformation from P to Q if
there exists a polynomial time computable function f : {0, 1}* → {0, 1}*,
and a polynomial p : N → N, such that, if f(x) = y, we have:
x ∈ P if and only if y ∈ Q,
and
γ(y) ≤ p(κ(x)).
44. Theorem: Suppose P is NP-complete, and Q ∈ NP. If f is a polynomial
time and parameter transformation from P to Q and Q has a polynomial
kernel, then P has a polynomial kernel.
45. [Diagram: an instance x of P is mapped by the PPT reduction f to
y = f(x), an instance of Q; kernelization yields K(y); the
NP-completeness reduction from Q back to P turns K(y) into an instance
z ∈ P.]
46. PROBLEM KERNELS
how, why, what, when
no polynomial kernels
Examples: satisfiability, vertex-disjoint cycles, disjoint factors
Workarounds: many kernels
47. Recall
A composition for a parameterized language (Q, κ) is required to
"merge" instances
x1, x2, . . . , xt
into a single instance x in polynomial time, such that κ(x) is polynomial
in k := κ(xi) for any i.
The output of the algorithm belongs to (Q, κ) if, and only if,
there exists at least one i ∈ [t] for which xi ∈ (Q, κ).
48. [Diagram, the General Framework For Composition: the instances
x1, x2, . . . , xt are the leaves of a complete binary tree; each internal
node applies an operation ρ to its two children, ρ(x1, x2), ρ(x3, x4),
. . . , ρ(xt−1, xt), and so on up to a single root ρ(a, b).]
51. Most composition algorithms can be stated in terms of a single
operation, ρ, that describes the dynamic programming over this complete
binary tree on t leaves.
52. Weighted Satisfiability
Given a CNF formula φ,
is there a satisfying assignment of weight at most k?
Parameter: b + k.
When the length of the longest clause is bounded by b,
there is an easy branching algorithm with runtime O(b^k · p(n)).
57. Input: α1, α2, . . . , αt,
with κ(αi) = b + k, and n := max_{i∈[t]} n_i.
61. If t > b^k, then solve every αi individually:
Total time = t · b^k · p(n) < t · t · p(n),
which is polynomial in the total input size, so the composition can simply
output a trivial yes- or no-instance.
If not, t ≤ b^k, and this gives us a bound on the number of instances.
62. Level 1: at the node joining consecutive inputs, store
(β1 ∨ b0) ∧ . . . ∧ (βn ∨ b0) ∧ (β′1 ∨ ¬b0) ∧ . . . ∧ (β′m ∨ ¬b0),
where the βj are the clauses of αi for some i and the β′j are the clauses
of αi+1.
64. At an internal node, take the conjunction of the formulas stored at the
child nodes.
65. Level j: every clause additionally picks up the fresh variable bj if
its node is a "left child", and ¬bj if it is a "right child":
(β1 ∨ b ∨ bj) ∧ . . . ∧ (βn ∨ b ∨ bj) ∧ (β′1 ∨ ¬b ∨ bj) ∧ . . . ∧ (β′m ∨ ¬b ∨ bj).
67. Adding a suffix to "control the weight":
α ∧ (c0 ∨ b0) ∧ (¬c0 ∨ ¬b0)
  ∧ (c1 ∨ b1) ∧ (¬c1 ∨ ¬b1)
  ∧ . . .
  ∧ (ci ∨ bi) ∧ (¬ci ∨ ¬bi)
  ∧ . . .
  ∧ (c_{l−1} ∨ b_{l−1}) ∧ (¬c_{l−1} ∨ ¬b_{l−1})
(each pair forces exactly one of ci, bi to be true).
68. Claim
The composed instance α has a satisfying assignment of weight 2k
⇐⇒
at least one of the input instances admits a satisfying assignment of
weight k.
80. Disjoint Factors
Let Lk be the alphabet consisting of the letters {1, 2, . . . , k}.
A factor of a word w1 · · · wr ∈ L*_k is a substring wi · · · wj, with
1 ≤ i < j ≤ r, which starts and ends with the same letter.
Example: 123235443513.
Disjoint factors do not overlap in the word.
Does the word have a factor for each of the k letters, mutually disjoint?
Parameter: k
84. A k! · O(n) algorithm is immediate.
A 2^k · p(n) algorithm can be obtained by dynamic programming.
Let t be the number of instances input to the composition algorithm.
Again, the non-trivial case is when t < 2^k.
Let w1, w2, . . . , wt be words over L*_k.
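The 2^k · p(n) dynamic program mentioned above might look like this (a sketch; words are sequences over {1, . . . , k}, and a bitmask records which letters still need a factor):

```python
from functools import lru_cache

def disjoint_factors(w, k):
    """Does w contain a factor for each letter of {1..k}, all mutually
    disjoint? Memoized DP over (position, set of letters still needed):
    O(2^k * n^2) states and transitions."""
    n = len(w)

    @lru_cache(maxsize=None)
    def can(i, todo):  # todo: bitmask of letters still needing a factor
        if todo == 0:
            return True
        if i >= n:
            return False
        # Option 1: no chosen factor starts at position i.
        if can(i + 1, todo):
            return True
        # Option 2: start the factor for letter c = w[i] here and end it
        # at a later occurrence j of the same letter.
        c = w[i]
        bit = 1 << (c - 1)
        if todo & bit:
            for j in range(i + 1, n):
                if w[j] == c and can(j + 1, todo & ~bit):
                    return True
        return False

    return can(0, (1 << k) - 1)
```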
85. Leaf Nodes
b0 w1 b0,   b0 w2 b0,   . . . ,   b0 w_{t−1} b0,   b0 wt b0
86. Level j
From children b_{j−1} u b_{j−1} and b_{j−1} v b_{j−1}, store
bj b_{j−1} u b_{j−1} v b_{j−1} bj
(the shared middle b_{j−1} is written only once).
87. Claim
The composed word has all the 2k disjoint factors
⇐⇒
at least one of the input instances has all the k disjoint factors.
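A sketch of this composition in code (an illustration following the leaf and level-j rules above; fresh letters are modelled as the strings "b0", "b1", . . . , and the input is padded to a power of two by repeating the last word):

```python
def compose(words, k):
    """OR-composition for Disjoint Factors (sketch). Each word over
    {1..k} is wrapped in a fresh letter b0; pairs are then repeatedly
    merged, wrapping level-j pairs in a fresh letter bj and writing the
    shared middle letter only once. At the root the two halves are simply
    concatenated, again sharing the middle letter."""
    words = list(words)
    while len(words) & (len(words) - 1):  # pad t to a power of two
        words.append(words[-1])
    level = [("b0",) + tuple(w) + ("b0",) for w in words]
    j = 1
    while len(level) > 2:
        bj = "b%d" % j
        level = [(bj,) + level[i] + level[i + 1][1:] + (bj,)
                 for i in range(0, len(level), 2)]
        j += 1
    if len(level) == 1:
        return level[0]
    return level[0] + level[1][1:]
```

With three occurrences of each wrapping letter in scope, the factor for that letter must use either the left pair or the right pair of occurrences, which is exactly how a single input word gets selected.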
88. Proof of Correctness
[Diagram, for input words p, q, r, s, w, x, y, z:
Leaves: b0 p b0, b0 q b0, b0 r b0, b0 s b0, b0 w b0, b0 x b0, b0 y b0, b0 z b0.
Level 1: b1 b0 p b0 q b0 b1, b1 b0 r b0 s b0 b1, b1 b0 w b0 x b0 b1, b1 b0 y b0 z b0 b1.
Level 2: b2 b1 b0 p b0 q b0 b1 b0 r b0 s b0 b1 b2 and b2 b1 b0 w b0 x b0 b1 b0 y b0 z b0 b1 b2.
Root: b2 b1 b0 p b0 q b0 b1 b0 r b0 s b0 b1 b2 b1 b0 w b0 x b0 b1 b0 y b0 z b0 b1 b2.]
96. Vertex-Disjoint Cycles
Input: G = (V, E)
Question: Are there k vertex-disjoint cycles?
Parameter: k.
Related problems:
FVS has an O(k²) kernel.
Edge-Disjoint Cycles has an O(k² log² k) kernel.
In contrast, Disjoint Factors transforms into Vertex-Disjoint Cycles in
polynomial time.
97. [Diagram: w = 1123343422. The ten positions of w form a path
v1, v2, . . . , v10, labelled with the letters 1, 1, 2, 3, 3, 4, 3, 4, 2, 2;
additionally, there is one vertex per letter 1, 2, 3, 4, adjacent to every
position carrying that letter.]
Disjoint Factors ppt-reduces to Disjoint Cycles.
98. Claim
w has all k disjoint factors ⇐⇒ Gw has k vertex-disjoint cycles.
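The transformation can be sketched as follows (a sketch, assuming the construction in the figure: a path on the positions of w, plus one extra vertex per letter joined to all occurrences of that letter):

```python
def word_graph(w, k):
    """PPT from Disjoint Factors to Vertex-Disjoint Cycles (sketch).
    Vertices: ('pos', i) for each position of w, forming a path, and
    ('letter', c) for each c in {1..k}, adjacent to every position where
    c occurs. A cycle through ('letter', c) consists of two occurrences
    of c plus the subpath between them, i.e. a factor; so G_w has k
    vertex-disjoint cycles iff w has all k disjoint factors."""
    adj = {("pos", i): set() for i in range(len(w))}
    for c in range(1, k + 1):
        adj[("letter", c)] = set()

    def add(u, v):
        adj[u].add(v)
        adj[v].add(u)

    for i in range(len(w) - 1):      # the path v1 - v2 - ... - vn
        add(("pos", i), ("pos", i + 1))
    for i, c in enumerate(w):        # letter vertex to its occurrences
        add(("letter", c), ("pos", i))
    return adj
```

Both the number of vertices and the parameter stay polynomial in the input, as a polynomial parameter transformation requires.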
99. PROBLEM KERNELS
how, why, what, when
no polynomial kernels
Examples: satisfiability, vertex-disjoint cycles, disjoint factors
Workarounds: many kernels
102. No polynomial kernel?
Look for the best exponential or subexponential kernels...
... or build many polynomial kernels.
103. Many polynomial kernels give us better algorithms:
brute force on a single kernel of size k² costs about
(k² choose k) ≈ 2^{k log k},
versus
p(n) · (ck choose k) = p(n) · 2^{O(k)}
with many kernels of size ck.
104. We say that a subdigraph T of a digraph D is an out-tree if T is an
oriented tree with only one vertex r of in-degree zero (called the root);
it is an out-branching if it is moreover spanning.
The Directed k-Leaf Out-Branching problem is to find an out-branching in a
given digraph with at least k leaves.
Parameter: k
109. Rooted k-Leaf Out-Tree admits a kernel of size O(k³).
k-Leaf Out-Tree does not admit a polynomial kernel (composition via
disjoint union).
However, it clearly admits n polynomial kernels: try all possible choices
for the root, and apply the kernel for Rooted k-Leaf Out-Tree.
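The "many kernels" workaround is then a one-line loop (a sketch; rooted_kernel stands for the assumed O(k³) kernelization of Rooted k-Leaf Out-Tree, and D is any iterable of vertices):

```python
def many_kernels(D, k, rooted_kernel):
    """n polynomial kernels for k-Leaf Out-Tree: one Rooted k-Leaf
    Out-Tree instance per choice of root, each shrunk to O(k^3) by the
    assumed rooted kernelization. The original instance is a YES iff at
    least one of the n kernels is a YES-instance."""
    return [rooted_kernel(D, k, root) for root in D]
```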
117. [Diagram, the deferred proof for k-Leaf Out-Tree:
Composition (produces an instance of k-Leaf Out-Branching): (D′, bmax + 1)
⇓ Kernelization (given by assumption): (D′′, k′)
⇓ NP-completeness reduction from k-Leaf Out-Branching to k-Leaf Out-Tree:
(D*, k*).]
118. A polynomial pseudo-kernelization K produces kernels whose size can
be bounded by
h(k) · n^{1−ε},
where k is the parameter of the problem and h is a polynomial.
concluding remarks
120. Analogous to the composition framework, there are algorithms (called
Linear OR) whose existence helps us rule out polynomial pseudo-kernels.
Most compositional algorithms can be extended to fit the definition of
Linear OR.
123. A strong kernel is one where the parameter of the kernelized instance
is at most the parameter of the original instance.
Mechanisms for ruling out strong kernels are simpler and rely on
self-reductions and the hypothesis that P ≠ NP.
Example: k-Path is not likely to admit a strong kernel.
124. There are no known ways of inferring that a problem is unlikely to
have a kernel of size k^c for some specific c.
Open Problems
125. Can lower bound theorems be proved under some "well-believed"
conjectures of parameterized complexity, for instance FPT ≠ XP, or
FPT ≠ W[t] for some t ∈ N+?
126. A lower bound framework for ruling out p(k) · f(l)-sized kernels for
problems with two parameters (k, l) would be useful.
127. "Many polynomial kernels" have been found only for the directed
out-branching problem. It would be interesting to apply the technique to
other problems that are not expected to have polynomial-sized kernels.