9. RICC
An invitation to the 4th RICC workshop
@Okinawa, 2014/3/27 (Thu) – 28 (Fri)
The 15th "Sakura no Yube" (Sakura Evening) in Sapporo
Hiroki Kashiwazaki (柏崎礼生)
(Background: Tawaraya Sotatsu, "Fujin Raijin-zu" (Wind God and Thunder God), c. 1624)
RICC: 地域間インタークラウド分科会 (Regional Inter-Cloud Committee)
'90s – '00s: the Internet
2001.9.11: September 11 attacks
live migration of VM between distributed areas
広域分散仮想化環境 (a widely distributed virtualization environment)
[Figure: VMs migrating between TOYAMA, TOKYO, and OSAKA sites ("before Migration" → "after Migration"), each site doing "Copy to DR-sites"; "Inter-Cloud Lightning" (cf. 雲内放電, intra-cloud discharge)]
Disasters that motivate DR: 2003.8.14: the Northeast blackout of 2003; 2011.3.11: the aftermath of the 2011 Tohoku earthquake and tsunami in Japan
Data-center locations: Gunma (群馬), Ishikari (石狩)
The real-time and active-active features appear to be just a simple "shared storage".
Live migration is also possible between DR sites
(it requires a common subnet and a fat pipe for the memory copy, of course).
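A back-of-the-envelope sketch of why the migration path needs a fat pipe: the time to push a VM's RAM across the link scales with memory size over usable bandwidth. The numbers below (8 GiB of guest RAM, a 1 Gbps link, 70% link efficiency) are illustrative assumptions, not figures from the slides.

```python
def copy_seconds(mem_gib: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Seconds to copy mem_gib of guest RAM over a link_gbps pipe at the given efficiency."""
    bits = mem_gib * 1024**3 * 8          # guest RAM in bits
    return bits / (link_gbps * 1e9 * efficiency)

# 8 GiB of guest RAM over a 1 Gbps link takes about a minute and a half,
# before counting re-copies of pages dirtied during the migration.
print(round(copy_seconds(8, 1)))  # -> 98
```

A 10 Gbps pipe cuts this to roughly ten seconds, which is why the slides keep stressing the "fat pipe" requirement.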
DR: Disaster Recovery; BCP: Business Continuity Plan
1978: Sun Information Systems (an early commercial hot-site vendor)
Distcloud
Gunma prefecture, Ishikari city
"Two is enough, isn't it?" (2つで十分ですよ?)
University of the Ryukyus (琉球大学)
(pace: 82 pages / 5 min)
Global VM migration is also available by sharing "storage space" among VM host machines.
Real-time availability makes it possible; the actual data copy follows.
(The VM operator needs a virtually common Ethernet segment and a fat pipe for the memory copy.)
[Figure: backend (core servers): a file is divided into blocks stored across the core servers]
[Chart: iozone "random write" throughput (Kbytes/sec) over file size and record size in 2^n KBytes]
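The backend figure (a file divided into blocks held by core servers) can be sketched as below. The block size, server names, and function names are illustrative assumptions; the slides do not specify them.

```python
# Sketch of the backend layout: a file is divided into fixed-size blocks,
# and each block is assigned to a core server by hashing its content.
import hashlib

BLOCK_SIZE = 4096  # bytes (assumed; the slides give no block size)
CORE_SERVERS = ["toyama", "tokyo", "osaka"]  # illustrative site names

def split_into_blocks(data: bytes) -> list[bytes]:
    """Divide a file's bytes into BLOCK_SIZE chunks (last one may be short)."""
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def home_server(block: bytes) -> str:
    """Pick a core server for a block by hashing its content."""
    digest = int(hashlib.sha256(block).hexdigest(), 16)
    return CORE_SERVERS[digest % len(CORE_SERVERS)]

blocks = split_into_blocks(b"x" * 10_000)
print(len(blocks))                       # 10,000 bytes -> 3 blocks
print({home_server(b) for b in blocks})  # each block lands on some core server
```

Content hashing is only one possible placement policy; the point is that hosts see one file while the core servers each hold individual blocks.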
POSIX-compliant; exported over NFS, CIFS, and iSCSI
[Figure: the proposed method, shown with the same before/after-migration diagram of the TOYAMA, TOKYO, and OSAKA sites as above]
[Figure: a VM writes a block with redundancy = 3; the counter r steps 2 → 1 → 0 as ACKs arrive (labels e = 0, etc.)]
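The write-path figure (r counting down toward 0 as ACKs arrive, redundancy = 3) can be read as the following sketch. The list-based "replicas" and all names here are assumptions for illustration, not the actual implementation.

```python
# Sketch of the figure's write path: a block is written with redundancy = 3;
# the writer counts outstanding replicas down r = 2 -> 1 -> 0 as each
# replica ACKs, and the write is considered durable at r = 0.
REDUNDANCY = 3

def write_block(block: bytes, replicas: list) -> list[int]:
    """Store the block on each replica; return the r values observed per ACK."""
    observed = []
    r = REDUNDANCY
    for replica in replicas[:REDUNDANCY]:
        replica.append(block)   # the replica stores the block and ACKs
        r -= 1                  # one fewer outstanding copy
        observed.append(r)
    return observed             # [2, 1, 0]: write complete at r = 0

servers = [[], [], []]
print(write_block(b"data", servers))   # -> [2, 1, 0]
```

Waiting for all three ACKs gives the strong consistency the "shared storage" view needs; a weaker design could return at r = 1 and finish the last copy asynchronously.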
Hiroshima University (広島大学), redundancy = 3
[Charts: iozone throughput (MB/sec) vs. file size (10MB – 10GB); panels: fwrite, fread, record rewrite, backward read; legend: conventional Exage/Storage (従来方式) vs. wide-area Exage/Storage (広域対応); path: SINET4, Kanazawa University, EXAGE L3VPN]
SC2013: originally planned for 2013/11/17 – 22, @Colorado Convention Center
We have been developing a widely distributed cluster storage system and evaluating it along with various applications. The main advantage of our storage is its very fast random I/O performance, even though it provides a POSIX-compatible file system interface on top of the distributed cluster storage.
The production demo (本番): live migration of VM between distributed areas, interface protocol NFS, over SINET4 / Hiroshima University / EXAGE L3VPN
[Figure: an external client and the Hypervisor share the storage; each block write is hashed and replicated with consistent metadata and per-replica ACKs (state labels r = 2, 1, 0, −1 and e = 0, 1, 2)]
[Chart: read and write throughput before and after migration vs. file size (10MB – 10GB)]
Requirement: high random R/W performance
[Chart: iozone "reread" throughput (Kbytes/sec) over file size and record size in 2^n KBytes]
Shinji Shimojo (下條真司) @Osaka Univ., NICT: "That's not interesting, is it!" (面白くないよね!)
So the production demo runs over an international link: RTT = 244 ms, 1 Gbps
Kanazawa University (金沢大学)
iozone -aceI
problems
[Chart: iozone "stride read" throughput (MB/sec) over file size and record size in 2^n Kbytes]
[Figure: testbed: CloudStack 4.0.0 + XenServer 6.0.2 at each site, on top of the distributed storage; Kitami Institute of Technology and the University of the Ryukyus, with the longest (最長) path over SINET (National Institute of Informatics, 国立情報学研究所)]
[Charts: iozone throughput (MB/sec) vs. file size (10MB – 10GB); panels: read, random read, write, rewrite]
'80s – '90s: mainframes and "hot sites"
POS (point of sales): real-time processing on shared storage
RTT ≒ 50 ms, RTT > 100 ms
Kitami Institute of Technology (北見工大)
When a DC goes down (DCダウン時の)…
This year marks the turnaround point (今年は折り返し)
10. live migration of VM between distributed areas
42. Confidential
50. iozone -aceI
a: full automatic mode
c: include close() in the timing calculations
e: include flush (fsync, fflush) in the timing calculations
I: use Direct I/O if possible for all file operations
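The flags above can be wrapped in a small helper that builds the benchmark command line. The -n/-g minimum/maximum file-size options exist in iozone's automatic mode, but the size bounds, mount-point path, and helper name are illustrative assumptions; the command is only printed here, never executed.

```python
import shlex

def iozone_cmd(testfile: str, min_size: str = "10m", max_size: str = "10g") -> list[str]:
    # -a: full automatic mode; -c: time close(); -e: time fsync/fflush;
    # -I: Direct I/O, so the page cache does not mask network latency.
    # -n/-g bound the automatic run to the file sizes shown in the charts.
    return ["iozone", "-aceI", "-n", min_size, "-g", max_size, "-f", testfile]

print(shlex.join(iozone_cmd("/mnt/distcloud/iozone.tmp")))
# -> iozone -aceI -n 10m -g 10g -f /mnt/distcloud/iozone.tmp
```

Pointing -f at a file on the storage under test is what makes the run measure the distributed backend rather than a local disk.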
57. We have been developing a widely distributed cluster storage system and evaluating it along with various applications. The main advantage of our storage is its very fast random I/O performance, even though it provides a POSIX-compatible file system interface on top of the distributed cluster storage.