# Tetsunao Matsuta

1. (Title slide; December 4, 2015.)
2. (Outline: sections 1–4.)
3. (Outline recap: Section 1.)
6. Motivating example: in 1854, John Snow traced the source of the London cholera outbreak (616 deaths).
8. (Outline recap: Section 1.)
9. Notation. G: a graph. V(G): the vertex set of G. E(G): the edge set of G. (i, j) ∈ E(G): the edge between nodes i and j.
10. The spreading time along an edge follows the exponential distribution: for t ≥ 0, F(t) = 1 − e^{−λt}, where λ is the rate parameter of F.
11. Intuition: in a short interval ∆t, infection occurs with probability λ·∆t, so the probability of no infection up to time t is (1 − λ·∆t)^{t/∆t}, and lim_{∆t→0} (1 − λ·∆t)^{t/∆t} = e^{−λt}.
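As a numerical sanity check of this limit (a sketch; the values λ = 0.5 and t = 2 are arbitrary illustrative choices, not from the slides):

```python
import math

# The slide's limit: the chance of no infection over [0, t], split into
# t/dt slots that each stay uninfected with probability (1 - lambda*dt),
# approaches exp(-lambda*t) as dt -> 0.
def no_infection_prob(lam, t, dt):
    steps = round(t / dt)
    return (1.0 - lam * dt) ** steps

lam, t = 0.5, 2.0                       # arbitrary illustrative values
exact = math.exp(-lam * t)
approx = no_infection_prob(lam, t, 1e-6)
assert abs(approx - exact) < 1e-5
```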
12. {τ_{(i,j)}}_{(i,j)∈E}: i.i.d. spreading times with distribution F (susceptible-infected (SI) model). At time 0 only the source v1 is infected; once a node v is infected, it infects a susceptible neighbor v′ after the additional time τ_{(v,v′)}.† († Infected nodes stay infected, in contrast to the SIS model.)
13. S(G): the set of connected subgraphs of G. Gn: the subgraph formed by the first n infected nodes; under the SI model, Gn ∈ S(G) and the source satisfies v1 ∈ V(Gn).
14. ϕ : S(G) → V(G): an estimator of the source. Cn(ϕ, v1): the correct-detection probability of ϕ for source v1: Cn(ϕ, v1) = Σ_{Gn∈S(G)} Pn(Gn|v1) Pr{ϕ(Gn) = v1}, where Pn(Gn|v) is the probability that the infected subgraph equals Gn when v is the source and n nodes are infected.
15. When the source is uniformly distributed over V(Gn) [Shah and Zaman, 2011], the maximum-likelihood (ML) estimator given Gn ∈ S(G) is v̂ = argmax_{v∈V(Gn)} Pn(Gn|v). The question is how to compute Pn(Gn|v).
16. (Outline recap: Section 1.)
17. N(v): the set of neighbors of v in G. B(V): the boundary of a node set V: B(V) ≜ (∪_{v∈V} N(v)) \ V. Pn(v1): the set of permitted permutations of n nodes starting from v1: Pn(v1) ≜ { v^n = (v1, v2, …, vn) ∈ V^n : v_i ∈ B({v1, …, v_{i−1}}) }, where V^n = V × V × ⋯ × V (n times). Pn(v1, Gn): the permitted permutations whose nodes are exactly V(Gn): Pn(v1, Gn) ≜ { v^n ∈ Pn(v1) : {v1, …, vn} = V(Gn) }.
18. Example: N(1) = {2, 3, 4}, B({1, 2}) = {3, 4, 5, 6}, P2(2) = {(2, 1), (2, 5), (2, 6)}, P3(2) = {(2, 1, 4), (2, 1, 3), (2, 1, 5), (2, 1, 6), (2, 5, 1), (2, 5, 6), (2, 6, 1), (2, 6, 5)}, P3(2, Gn) = {(2, 5, 6), (2, 6, 5)}.
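The example above can be reproduced in code. This is a sketch that assumes an edge list consistent with the slide's values (edges 1–2, 1–3, 1–4, 2–5, 2–6; the full graph is not shown in the extracted text):

```python
from itertools import chain

# Assumed edge list, consistent with the example on this slide.
edges = {(1, 2), (1, 3), (1, 4), (2, 5), (2, 6)}
adj = {}
for i, j in edges:
    adj.setdefault(i, set()).add(j)
    adj.setdefault(j, set()).add(i)

def boundary(nodes):
    """B(V): neighbors of V that are not themselves in V."""
    return set(chain.from_iterable(adj[v] for v in nodes)) - set(nodes)

def permitted(v1, n):
    """P_n(v1): all infection orders of length n starting from v1."""
    seqs = [(v1,)]
    for _ in range(n - 1):
        seqs = [s + (u,) for s in seqs for u in boundary(s)]
    return set(seqs)

assert adj[1] == {2, 3, 4}                       # N(1)
assert boundary({1, 2}) == {3, 4, 5, 6}          # B({1,2})
assert permitted(2, 2) == {(2, 1), (2, 5), (2, 6)}
# P_3(2, G_n) for V(G_n) = {2, 5, 6}:
assert {s for s in permitted(2, 3) if set(s) == {2, 5, 6}} == {(2, 5, 6), (2, 6, 5)}
```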
19. First, assume that G is a regular tree (a tree in which every node has the same degree).
20. Vi: the i-th infected node. Pr{V1 = v1} = 1. For v2 ∈ B({v1}): Pr{V2 = v2 | V1 = v1} = Pr{τ_{(v1,v2)} = min_{v∈B({v1})} τ_{(v1,v)}} = 1/|B({v1})|.
21. More generally,† for v^{n−1} ∈ P_{n−1}(v1) and vn ∈ B({v1, …, v_{n−1}}): Pr{Vn = vn | V^{n−1} = v^{n−1}} = 1/|B({v1, …, v_{n−1}})|. († This uses the memorylessness of the exponential spreading time τ: Pr{τ > s + t | τ > s} = Pr{τ > t}.)
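The memorylessness property in the footnote can be checked empirically (a sketch; λ, s, and t are arbitrary illustrative values):

```python
import random

# Empirical check of memorylessness for an exponential spreading time tau:
# Pr{tau > s + t | tau > s} should match Pr{tau > t}.
random.seed(0)
lam, s, t, trials = 1.0, 0.7, 1.2, 200_000
samples = [random.expovariate(lam) for _ in range(trials)]
exceed_s = [x for x in samples if x > s]
cond = sum(x > s + t for x in exceed_s) / len(exceed_s)   # Pr{tau > s+t | tau > s}
uncond = sum(x > t for x in samples) / trials             # Pr{tau > t}
assert abs(cond - uncond) < 0.01
```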
22. δ(v): the degree of v in G. |B({v1})| = δ(v1). |B({v1, v2})| = |B({v1})| − 1 + δ(v2) − 1 = δ(v1) + (δ(v2) − 2). |B({v1, v2, v3})| = |B({v1, v2})| − 1 + δ(v3) − 1 = δ(v1) + (δ(v2) − 2) + (δ(v3) − 2). In general, |B({v1, …, vn})| = δ(v1) + Σ_{i=2}^{n} (δ(vi) − 2).
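A quick check of this boundary-size identity on a small assumed tree (the identity relies on G being a tree, so each newly infected node spends exactly one edge on the already-infected set):

```python
from itertools import chain

# Assumed small tree: node 1 has children 2, 3, 4; nodes 2 and 3 have leaves.
edges = {(1, 2), (1, 3), (1, 4), (2, 5), (2, 6), (3, 7), (3, 8)}
adj = {}
for i, j in edges:
    adj.setdefault(i, set()).add(j)
    adj.setdefault(j, set()).add(i)

def boundary(nodes):
    return set(chain.from_iterable(adj[v] for v in nodes)) - set(nodes)

def degree(v):
    return len(adj[v])

order = (1, 2, 3)  # a valid infection order: each node borders its predecessors
for k in range(1, len(order) + 1):
    infected = order[:k]
    # |B({v1,...,vk})| = delta(v1) + sum_{i=2}^{k} (delta(vi) - 2)
    predicted = degree(order[0]) + sum(degree(v) - 2 for v in infected[1:])
    assert len(boundary(infected)) == predicted
```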
23. Pn(Gn|v1) = Pr{Gn is generated from v1} = Σ_{v^n∈P(v1,Gn)} Pr{V^n = v^n} = Σ_{v^n∈P(v1,Gn)} Π_{k=2}^{n} 1/|B({v1, …, v_{k−1}})| = Σ_{v^n∈P(v1,Gn)} Π_{k=2}^{n} 1/(δ(v1) + Σ_{i=2}^{k} (δ(vi) − 2)) ≜ Σ_{v^n∈P(v1,Gn)} p(v^n).
24. On a δ-regular tree: argmax_{v∈V(Gn)} Pn(Gn|v) = argmax_{v∈V(Gn)} Σ_{v^n∈P(v,Gn)} Π_{k=2}^{n} 1/(δ + Σ_{i=2}^{k} (δ − 2)) = argmax_{v∈V(Gn)} |P(v, Gn)| ≜ argmax_{v∈V(Gn)} R(v, Gn). R(v, Gn) is the rumor centrality of v; argmax_v R(v, Gn) can be computed in O(n) time [Shah and Zaman, 2011].
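Rumor centrality can be illustrated by brute force. The sketch below counts |P(v, Gn)| directly on a small assumed star tree; note that this enumeration is exponential in n and merely stands in for the O(n) message-passing algorithm of [Shah and Zaman, 2011]:

```python
from itertools import permutations

# Assumed infected tree G_n: center 2 with leaves 1, 3, 4.
edges = {(1, 2), (2, 3), (2, 4)}
adj = {}
for i, j in edges:
    adj.setdefault(i, set()).add(j)
    adj.setdefault(j, set()).add(i)
nodes = sorted(adj)

def rumor_centrality(v):
    """R(v, G_n) = |P(v, G_n)|: number of infection orders starting at v."""
    count = 0
    for perm in permutations(nodes):
        if perm[0] != v:
            continue
        # every later node must neighbor an already-infected node
        if all(adj[perm[k]] & set(perm[:k]) for k in range(1, len(perm))):
            count += 1
    return count

scores = {v: rumor_centrality(v) for v in nodes}
assert max(scores, key=scores.get) == 2   # the tree's center wins
```

Here the center has R = 3! = 6 orders (any order of the leaves), while each leaf admits only 2, so the ML estimate is the center.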
25. Exact detection probability on a regular tree [Dong et al., 2013]: for every v1 ∈ V(G),
   Cn(ϕML, v1) = (1/2^{n−1}) C(n−1, ⌊(n−1)/2⌋) if δ = 2;
   Cn(ϕML, v1) = 1/4 + (3/4)(1/2^{⌊n/2⌋+1}) if δ = 3;
   Cn(ϕML, v1) = 1 − δ ((1/2) P_Pólya(n/2) + Σ_{x>n/2} P_Pólya(x)) if δ ≥ 4,
   where C(n, k) denotes the binomial coefficient, P_Pólya(x) = C(n−1, x) · 1^{(δ−2, x)} (δ−1)^{(δ−2, n−1−x)} / δ^{(δ−2, n−1)}, and x^{(a,b)} ≜ x(x + a)(x + 2a) ⋯ (x + (b−1)a).
26. (Plot: Cn(ϕML, v1) vs. n for δ = 2, 3, 4, 5.)
27. Asymptotics [Shah and Zaman, 2012]: if δ = 2, then Cn(ϕML, v1) = Θ(1/√n) for every v1 ∈ V(G); if δ ≥ 3, then lim_{n→∞} Cn(ϕML, v1) = δ I_{1/2}(1/(δ−2), (δ−1)/(δ−2)) − (δ − 1), where I_x(a, b) is the regularized incomplete beta function, I_x(a, b) ≜ (Γ(a+b)/(Γ(a)Γ(b))) ∫_0^x t^{a−1}(1−t)^{b−1} dt, and Γ(·) is the gamma function.
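The δ ≥ 3 limit can be evaluated numerically. The sketch below computes I_{1/2}(a, b) by quadrature (the substitution t = u^{1/a} removes the t^{a−1} singularity at 0, and the step count is an arbitrary choice); for δ = 3 the limit comes out to 1/4:

```python
import math

# lim_n Cn = delta * I_{1/2}(1/(delta-2), (delta-1)/(delta-2)) - (delta - 1)
def reg_inc_beta(x, a, b, steps=200_000):
    """I_x(a, b) via the midpoint rule after substituting t = u**(1/a)."""
    upper = x ** a
    h = upper / steps
    total = sum((1.0 - ((k + 0.5) * h) ** (1.0 / a)) ** (b - 1.0)
                for k in range(steps))
    integral = total * h / a
    return integral * math.gamma(a + b) / (math.gamma(a) * math.gamma(b))

def limit_correct_prob(delta):
    a = 1.0 / (delta - 2)
    b = (delta - 1.0) / (delta - 2)
    return delta * reg_inc_beta(0.5, a, b) - (delta - 1)

assert abs(limit_correct_prob(3) - 0.25) < 1e-6                  # delta = 3 gives 1/4
assert abs(limit_correct_prob(200) - (1 - math.log(2))) < 0.02   # approaches 1 - ln 2
```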
28. (Plot: lim_{n→∞} Cn vs. δ for δ up to 200; the values lie between roughly 0.300 and 0.308.) Moreover [Shah and Zaman, 2012], lim_{δ→∞} lim_{n→∞} Cn(ϕML, v1) = 1 − ln 2 ≈ 0.3069.
29. For a general (non-regular) tree [Shah and Zaman, 2011]: v̂ = argmax_{v∈V(Gn)} R(v, Gn) p(v^n_{BFS}(v)), where v^n_{BFS}(v) ∈ P(v, Gn) is the breadth-first-search (BFS) permutation from v in Gn and p(v^n) is as defined above.
30. For a general graph [Shah and Zaman, 2011]: v̂ = argmax_{v∈V(Gn)} R(v, T_{BFS}(v)) p(v^n_{BFS}(v)), where T_{BFS}(v) is the BFS tree rooted at v in Gn, v^n_{BFS}(v) ∈ P(v, Gn) is the BFS permutation from v, and p(v^n) is as defined above.
31. Experiment: small-world network. On a small-world network with 5000 nodes, a rumor spread to 400 nodes; the detection rate was about 2% [Shah and Zaman, 2011].
32. Experiment: scale-free network. On a scale-free network with 5000 nodes, a rumor spread to 400 nodes; the detection rate was about 5% [Shah and Zaman, 2011].
34. (Outline recap: Section 2.)
36. As before, assume that G is a regular tree.
37. Dn(d): the probability that the estimate is at distance d from the source v1: Dn(d) ≜ Pr{V̂ ∈ {v_1^{(d)}, v_2^{(d)}, …, v_{δ(δ−1)^{d−1}}^{(d)}}}, where V̂ is the (random) estimate and v_1^{(d)}, …, v_{δ(δ−1)^{d−1}}^{(d)} are the δ(δ−1)^{d−1} nodes at distance d (≥ 1) from v1. Note that Dn(0) = Cn(ϕML, v1).
38. Stirling numbers of the first kind. The unsigned Stirling number [k; l] satisfies the recurrence [k; l] = (k−1)[k−1; l] + [k−1; l−1] and expands the rising factorial x^{(k)} ≜ x(x+1)(x+2)⋯(x+k−1): x^{(k)} = Σ_{l=0}^{k} [k; l] x^l. The signed Stirling number s(k, l) ≜ (−1)^{k−l} [k; l] expands the falling factorial x_{(k)} ≜ x(x−1)(x−2)⋯(x−k+1): x_{(k)} = Σ_{l=0}^{k} s(k, l) x^l.
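These identities can be verified directly (a sketch; the recurrence and base cases follow the slide, with the standard conventions [0; 0] = 1 and [k; l] = 0 for l = 0 < k or l > k):

```python
from functools import lru_cache
from math import prod

# Unsigned Stirling numbers of the first kind via the slide's recurrence:
# [k; l] = (k-1)*[k-1; l] + [k-1; l-1].
@lru_cache(maxsize=None)
def stirling1_unsigned(k, l):
    if k == l:
        return 1
    if l == 0 or l > k:
        return 0
    return (k - 1) * stirling1_unsigned(k - 1, l) + stirling1_unsigned(k - 1, l - 1)

def s(k, l):
    """Signed Stirling number: s(k, l) = (-1)**(k-l) * [k; l]."""
    return (-1) ** (k - l) * stirling1_unsigned(k, l)

x, k = 7, 5
rising = prod(x + i for i in range(k))    # x(x+1)...(x+k-1)
falling = prod(x - i for i in range(k))   # x(x-1)...(x-k+1)
assert rising == sum(stirling1_unsigned(k, l) * x**l for l in range(k + 1))
assert falling == sum(s(k, l) * x**l for l in range(k + 1))
```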
39. Exact distance distribution for δ = 3 [Matsuta and Uyematsu, 2014]: for d ≥ 1, if n ≥ 3 is odd,
   Dn(d) = 3·2^{d−1} Σ_{k=d+1}^{(n+1)/2} (2/(k+1)) (C((n+3)/2, k+1)/C(n+1, k+1)) ((−1)^{d+k}/(k−1)!) Σ_{l=1}^{d} s(k, l);
   if n ≥ 2 is even,
   Dn(d) = 3·2^{d−1} Σ_{k=d+1}^{n/2+1} (2/(k+1)) ((C(n/2+1, k+1) + (n/(2(n+2))) C(n/2+1, k))/C(n+1, k+1)) ((−1)^{d+k}/(k−1)!) Σ_{l=1}^{d} s(k, l).
40. (Plot, δ = 3: Dn(d) vs. n for d = 0, 1, …, 5.)
41. Limiting distance distribution for δ = 3 [Matsuta and Uyematsu, 2014]: for d ≥ 2,
   lim_{n→∞} Dn(d) = 3·2^{d−1} (−1)^d Σ_{l=1}^{d} (−1)^l ((ln^l 2)/l!) (−2 + Σ_{m=0}^{l} (ln 2)^m/m!) + 1/4.
   (Plot: lim_{n→∞} Dn(d) for d = 0, 1, …, 6.)
42. (Plot, δ = 3: cumulative distance probability vs. d.) The estimate lies within distance 3 of the source with high probability: Σ_{d=0}^{3} lim_{n→∞} Dn(d) ≈ 0.9676.
43. Approximation for general δ ≥ 3 [Matsuta and Uyematsu, 2014]: for d ≥ 1, δ ≥ 3, and any m ∈ ℕ,
   0 ≤ lim_{n→∞} Dn(d) − f(δ, d, m) ≤ e² (3 + m) 2^{4−m},
   where f(δ, d, m) ≜ δ(δ−1)^{d−1} Σ_{k=d+1}^{m} p(δ, d, k) [ I_{1/2}(k − 1 + 1/(δ−2), (δ−1)/(δ−2)) − (δ−1) I_{1/2}(k − 1 + (δ−1)/(δ−2), 1/(δ−2)) ],
   p(δ, d, k) ≜ (2/(δ−2)^d) ((1/(δ−2))^{(k−1)} / (2/(δ−2))^{(k)}) ζ_{k−2}^{d−1}(1/(δ−2)),
   ζ_k^d(x) ≜ Σ_{1≤j1<j2<⋯<jd≤k} Π_{i=1}^{d} 1/(j_i + x),
   and x^{(k)} denotes the rising factorial.
44. Example: δ = 6, m = 35. Approximating lim_{n→∞} Dn(d) by f(δ, d, 35), the error satisfies lim_{n→∞} Dn(d) − f(δ, d, m) ≤ e²(3 + m)2^{4−m} ≈ 1.3075·10^{−7}. (Plot: the resulting distance probabilities for d = 0, 1, …, 6.)
45. (Plot, δ = 6: cumulative distance probability vs. d.) Σ_{d=0}^{3} lim_{n→∞} Dn(d) ≈ lim_{n→∞} Cn(ϕML, v1) + Σ_{d=1}^{3} f(6, d, 35) ≈ 0.9854.
46. (Outline recap: Section 2.)
48. (Outline recap: Section 3.)
49. [Dong et al., 2013]: exact detection probabilities on regular trees, derived via Pólya urn models.
50. SIR model [Zhu and Ying, 2013]: the susceptible-infected-recovered model (SI plus a recovered state); sample-path-based detection on regular trees.
51. [Prakash et al., 2012]: spotting (possibly multiple) culprits in epidemics via the MDL (minimum description length) principle.
52. [Wang et al., 2014]: Gn is observed L times (multiple independent observations). On regular trees, the detection probability tends to 1 as L → ∞; for L ≥ 2, it tends to 1 as δ → ∞.
53. [Luo et al., 2014]: identifying the infection source from limited observations; sample-path-based detection on regular trees (complexity O(n) vs. O(n³)).
54. (Outline recap: Section 4.)
55. Summary: analysis of rumor-source detection on regular trees.
56. References:
   [Dong et al., 2013] W. Dong, W. Zhang, and C. W. Tan, "Rooting out the rumor culprit from suspects," ISIT 2013, pp. 2671–2675, July 2013.
   [Kuba and Prodinger, 2010] M. Kuba and H. Prodinger, "A note on Stirling series," Integers, vol. 10, no. 4, pp. 393–406, 2010.
   [Luo et al., 2014] W. Luo, W. P. Tay, and M. Leng, "How to identify an infection source with limited observations," IEEE Journal of Selected Topics in Signal Processing, vol. 8, no. 4, pp. 586–597, Aug. 2014.
   [Matsuta and Uyematsu, 2014] T. Matsuta and T. Uyematsu, "Probability distributions of the distance between the rumor source and its estimation on regular trees," SITA 2014, pp. 605–610, Dec. 2014.
   [Prakash et al., 2012] B. A. Prakash, J. Vreeken, and C. Faloutsos, "Spotting culprits in epidemics: How many and which ones?," ICDM 2012, pp. 11–20, Dec. 2012.
   [Shah and Zaman, 2011] D. Shah and T. Zaman, "Rumors in a network: Who's the culprit?," IEEE Trans. Inform. Theory, vol. 57, no. 8, pp. 5163–5181, Aug. 2011.
   [Shah and Zaman, 2012] D. Shah and T. Zaman, "Rumor centrality: A universal source detector," SIGMETRICS Perform. Eval. Rev., vol. 40, no. 1, pp. 199–210, Jun. 2012.
   [Steyn, 1951] H. S. Steyn, "On discrete multivariate probability functions," Proc. Koninklijke Nederlandse Akademie van Wetenschappen, Ser. A, vol. 54, pp. 23–30, 1951.
   [Wang et al., 2014] Z. Wang, W. Dong, W. Zhang, and C. W. Tan, "Rumor source detection with multiple observations: Fundamental limits and algorithms," ACM SIGMETRICS 2014, pp. 1–13, June 2014.
   [Zhu and Ying, 2013] K. Zhu and L. Ying, "Information source detection in the SIR model: A sample path based approach," ITA 2013, pp. 1–9, Feb. 2013.