VIRTUALIZATION IN 4-4 1-4 DATA CENTER NETWORK 
Presentation by: Ankita Mahajan
AGENDA 
• Introduction 
• Previous work 
• Proposed Plan 
• Experimental setup 
• Results 
• Conclusion
INTRODUCTION 
• Data center network 
• Traditional architecture 
• Agility 
• Virtualization 
• 4-4 1-4 Data center network 
Large clusters of servers, interconnected by network switches, concurrently provide a large number of different services for different client organizations.
Design Goals: 
• Availability and Fault tolerance 
• Scalability 
• Throughput 
• Economies of scale 
• Load balancing 
• Low Opex 
A number of virtual servers are 
consolidated onto a single physical server. 
Advantages: 
• Each customer gets his own VM 
• Virtualization provides Agility 
Fig 1. Traditional Data Center 
• In case of hardware failure a VM can be cloned and migrated to a different server. 
• Synchronized VM images instead of redundant servers 
• Easier to test, upgrade and move virtual servers across locations 
• Virtual devices in a DCN 
• Reduced Capex and Opex. 
Fig 3: 4-replicated 4 1-4 Data Center
4-4 1-4 ARCHITECTURE 
• 4-4 1-4 is a location-based forwarding architecture for DCNs that exploits the IP hierarchy. 
• Packets are forwarded by masking bits of the destination IP address. 
• No routing or forwarding tables are maintained at the switches. 
• No convergence overhead. 
• Uses statically assigned, location-based IP addresses for all network nodes.
A 3-Level 4-4 1-4 Data Center Network
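The mask-based, table-free forwarding described above can be sketched in a few lines. The 2-bit-per-level address layout below is an assumption for illustration only; the actual 4-4 1-4 bit allocation is defined in the referenced paper [1]:

```python
# Hypothetical sketch of mask-based forwarding in a 4-4 1-4 style network.
# Assumption: each hierarchy level owns a 2-bit field of the destination
# address (4 children per switch), with level 0 in the least-significant bits.

def forward(switch_level: int, dest_ip: int) -> int:
    """Return the output-port index by masking destination IP address bits.

    No routing table is consulted; the port is read directly from the
    address field belonging to this switch's level.
    """
    shift = 2 * switch_level          # bit position of this level's field
    return (dest_ip >> shift) & 0b11  # extract the 2-bit child index

# Destination with child index 2 at level 1 and index 1 at level 0:
dest = (2 << 2) | 1
print(forward(1, dest))   # level-1 switch sends out port 2 (subtree 2)
print(forward(0, dest))   # level-0 switch sends out port 1 (leaf host 1)
```

Because the port is a pure function of the destination address bits, there is no table lookup, no table update, and hence no convergence overhead, as the slide claims.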
LOCATION-IP BASED ROUTING
MOTIVATION FOR THIS WORK 
4-4 1-4 delivers strong performance guarantees in a traditional (non-virtualized) setting, owing to location-based static IP address allocation for all network elements. 
Agility is essential in current data centers, run by cloud service providers, to reduce cost by increasing infrastructure utilization. 
Server virtualization provides the required agility. 
Whether the 4-4 1-4 network still delivers these performance guarantees in a virtualized setting, suited to modern data centers, is the major motivation for this work.
PROBLEM STATEMENT 
How to virtualize the 4-4 1-4 data center network with the following constraints: 
• Use static IP allocation along with dynamic VMs. 
• No modification of network elements or end hosts. 
Design Goals: To design a virtualized data center using the 4-4 1-4 topology that is 
• Agile 
• Scalable and robust 
• Minimal in overhead incurred due to virtualization 
• Minimum in end-to-end latency and maximum in throughput 
• Suitable for all kinds of data center usage scenarios: 
• Compute Intensive: HPC 
• Data Intensive: Video and File Streaming 
• Balanced: Geographic Information System
PROPOSED SOLUTION 
• Separation of Location-IP and VM-IP 
• Tunneling at source 
• Directory structure 
• Query process 
• Directory Update mechanism 
Packet tunneled through physical network using location-IP header 
Packet sending at a server running a type-1 hypervisor
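The separation of VM-IP from Location-IP, with tunneling at the source hypervisor, can be sketched as follows. The directory entries, addresses, and packet layout here are hypothetical illustrations; the slide does not specify the proposal's on-wire format:

```python
# Sketch (with assumed data layout) of source tunneling: the VM-IP namespace
# is decoupled from the statically assigned Location-IP namespace, and the
# directory maps each VM's IP to the Location-IP of its current host.

directory = {
    "10.0.0.5": "172.16.1.2",   # VM-IP -> Location-IP (hypothetical entries)
    "10.0.0.9": "172.16.3.7",
}

def encapsulate(payload: bytes, src_vm: str, dst_vm: str) -> dict:
    """Build an outer packet addressed by Location-IP (IP-in-IP style).

    The hypervisor queries the directory once, then the physical network
    forwards purely on the location-based outer header.
    """
    dst_loc = directory[dst_vm]
    inner = {"src": src_vm, "dst": dst_vm, "data": payload}
    return {"outer_dst": dst_loc, "inner": inner}

pkt = encapsulate(b"hello", "10.0.0.9", "10.0.0.5")
print(pkt["outer_dst"])   # outer header carries the Location-IP, so
                          # switches need no knowledge of VM addresses
```

When a VM migrates, only its directory entry changes; the static Location-IP addressing of the physical network is untouched, which is what lets static IP allocation coexist with dynamic VMs.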
PROPOSED SOLUTION 
Directory structure 
Physical Machines = 2^16. 
Virtual Machines = 2^17 (2 VMs/PM) up to 2^20 (16 VMs/PM). 
Directory Servers (DS) = 64. 
Update Servers (US) = 16. 
Hence, one DS per 1024 PMs and one US per (4 × 1024) PMs. 
This implies 64 DSs for a minimum of 131,072 VMs.
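The sizing above can be sanity-checked with simple arithmetic. The ratios are inferred from the slide's numbers, not taken from an official formula:

```python
# Directory sizing check, using only the figures stated on the slide.
PMS = 2 ** 16                 # physical machines
DIR_SERVERS = 64              # directory servers (DS)
UPDATE_SERVERS = 16           # update servers (US)

pms_per_ds = PMS // DIR_SERVERS       # 65536 / 64 = 1024 PMs per DS
pms_per_us = PMS // UPDATE_SERVERS    # 65536 / 16 = 4096 = 4 * 1024 PMs per US

min_vms = 2 * PMS             # 2 VMs/PM  -> 2^17 = 131072 VMs minimum
max_vms = 16 * PMS            # 16 VMs/PM -> 2^20 = 1048576 VMs maximum

print(pms_per_ds, pms_per_us, min_vms, max_vms)
```

This confirms the slide's claim: 64 DSs serve a minimum of 131,072 VMs, with one DS per 1024 PMs and one US per 4096 PMs.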
PROPOSED SOLUTION 
Data Structure of Directory
EXPERIMENTAL SETUP 
Simulation environment: an extension of NS2/Greencloud: 
• Packet-level simulation of DCN communications, unlike CloudSim, MDCSim, etc. 
• DCN entities are modeled as C++ and OTcl objects. 
DCN Workloads: categories of experiments 
• Computation Intensive Workload (CIW): the servers are considerably loaded, but inter-server communication is negligible. 
• Data Intensive Workload (DIW): huge inter-server data transfers, but negligible load at the computing servers. 
• Balanced Workload (BW): communication links and computing servers are proportionally loaded.
EXPERIMENTAL SETUP 
• In CIW and BW, tasks are scheduled in a round-robin fashion by the Data Center object onto VMs on servers fulfilling the task's resource requirements. 
• A task is sent to the allocated VM by the DC object through core switches. Output is returned to the same core switch, which then forwards it to the DC object. 
• In DIW and BW, intra-DCN communication (data transfer) is modelled by 1:1:1 TCP flows between servers. 
S: Source and destination within the same Level-0 
D: Source and destination in different Level-0s but the same Level-1 
R: Random selection of source and destination pairs inside Level-1.
SIMULATION PARAMETERS
NAM SCREEN SNAPSHOT 
64-SERVER DCN
PERFORMANCE METRICS 
• Average packet delay 
• Network Throughput 
• End-to-end aggregate/data throughput 
• Average hop count 
• Packet drop rate 
• Normalized Routing overhead
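As a sketch, the first, fourth, and fifth metrics above could be computed from per-packet trace records like the ones below. The record fields are assumptions for illustration; a real NS2 trace file has its own format:

```python
# Computing average packet delay, average hop count, and packet drop rate
# from hypothetical per-packet records (fields assumed, not NS2's format).
packets = [
    {"sent": 0.00, "recv": 0.02, "hops": 4},
    {"sent": 0.01, "recv": None, "hops": None},   # dropped packet
    {"sent": 0.02, "recv": 0.05, "hops": 6},
]

delivered = [p for p in packets if p["recv"] is not None]
avg_delay = sum(p["recv"] - p["sent"] for p in delivered) / len(delivered)
avg_hops = sum(p["hops"] for p in delivered) / len(delivered)
drop_rate = 1 - len(delivered) / len(packets)

print(round(avg_delay, 3), avg_hops, round(drop_rate, 2))
```

Network and end-to-end aggregate throughput are obtained analogously, by dividing delivered bytes by the measurement interval.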
RESULTS: AVERAGE HOP COUNT
RESULTS: COMPUTE INTENSIVE WORKLOAD 
• DVR vs LocR in 16 servers: 50% less delay and higher throughput 
• 16 vs 64 servers: almost the same. 
• Routing overhead in DVR increases with the number of servers.
RESULTS: DATA INTENSIVE WORKLOAD 
• Average packet delay: 
   • DVR vs LocR: Less delay in LocR 
   • 16 vs 64: Delay reduces by 54% 
• Network throughput: 
   • DVR vs LocR: More in LocR 
   • 16 vs 64: Increases by 54% 
• End-to-end aggregate throughput: 
   • DVR vs LocR: More in LocR 
   • 16 vs 64: Increases by 53%
RESULTS: BALANCED WORKLOAD 
• Average packet delay: 
   • DVR vs LocR: Less delay in LocR 
   • 16 vs 64: Delay reduces by 42% 
• Network throughput: 
   • DVR vs LocR: More in LocR 
   • 16 vs 64: Increases by 42% 
• End-to-end aggregate throughput: 
   • DVR vs LocR: More in LocR 
   • 16 vs 64: Increases by 41%
CONCLUSION 
Creation of a packet-level simulation prototype in NS2/Greencloud for the 4-4 1-4 DCN. 
Modelling of compute-intensive, data-intensive and balanced workloads. 
We conclude that our framework for virtualization in the 4-4 1-4 DCN has the following significance: 
• Routing overhead: no convergence overhead in location-based routing 
• Networking loops: the network is free of routing loops 
• Faster hop-by-hop forwarding: the per-packet, per-hop mask operation is faster than table lookup and update operations. 
• Efficiency: Location-IP based routing delivers two to ten times more throughput than DVR with the same traffic and the same topology. 
• Scalability: in DIW and BW, performance increases by 50% when the number of servers is increased four times.
LIMITATION 
4-4 1-4 is highly scalable for data-intensive and balanced-workload data centers, but only moderately scalable for compute-heavy data centers. 
In computation-intensive workloads, the performance of the 4-4 1-4 DCN with location-based routing either remains the same or improves marginally.
FUTURE WORK 
• Simulation test-bed is ready 
• Trace-driven workload 
• Dynamic VM migration 
• Optimum task scheduling for 4-4 1-4 
• Energy consumption
REFERENCES 
1. A. Kumar, S. V. Rao, and D. Goswami, "4-4, 1-4: Architecture for Data Center Network Based on IP Address Hierarchy for Efficient Routing," in Parallel and Distributed Computing (ISPDC), 2012 11th International Symposium on, 2012, pp. 235-242. 
2. D. Chisnall, The Definitive Guide to the Xen Hypervisor, 1st ed. Upper Saddle River, NJ, USA: Prentice Hall Press, 2007. 
3. D. Kliazovich, P. Bouvry, and S. Khan, "GreenCloud: a packet-level simulator of energy-aware cloud computing data centers," The Journal of Supercomputing, pp. 1-21, 2010. Available: http://dx.doi.org/10.1007/s11227-010-0504-1 
4. "The Network Simulator NS-2," http://www.isi.edu/nsnam/ns/.
THANK YOU
There are mysteries in the universe 
We were never meant to solve, 
But who we are, and why we are here, 
Are not one of them. 
Those answers we carry inside.

Virtualization in 4-4 1-4 Data Center Network.

  • 1. VIRTUALIZATION IN 4-4 1-4 DATA CENTER NETWORK P R E S E N T A T I O N B Y : A N K I T A M A H A J A N
  • 2. Introduction Previous work Proposed Plan Experimental setup Results Conclusion A G E N D A
  • 3. INTRODUCTION • Data center network • Traditional architecture • Agility • Virtualization • 4-4 1-4 Data center network Large clusters of servers interconnected by network switches, concurrently provide large number of different services for different client-organizations. Design Goals: • Availability and Fault tolerance • Scalability • Throughput • Economies of scale • Load balancing • Low Opex A number of virtual servers are consolidated onto a single physical server. Advantages: • Each customer gets his own VM • Virtualization provides Agility Fig 1. Traditional Data Center • In case of hardware failure VM can be cloned & migrated to diff server. • Synchronized Fig 3: 4-replicated 4 1-4 Data Center VM images instead of redundant servers • Easier to test, upgrade and move virtual servers across locations • Virtual devices in a DCN • Reduced Capex and Opex.
  • 4. 4-4 1-4 ARCHITECTURE • 4-4 1-4 is a location based forwarding architecture for DCN which utilizes IP-hierarchy. • Forwarding of packets is done by masking the destination IP address bits. • No routing or forwarding table maintained at switches • No convergence overhead. • Uses statically assigned, location based IP addresses for all network nodes. A 3-Level 4-4 1-4 Data Center Network
  • 6. MOTIVATION FOR THIS WORK 4-4 1-4 delivers great performance guarantees in traditional (non-virtualized) setting, due to location based static IP address allocation to all network elements. Agility is essential in current data-centers, run by cloud service providers, to reduces cost by increasing infrastructure utilization. Server Virtualization provides the required agility. Whether the 4-4 1-4 network delivers performance guarantees in a virtualized setting, suitable to modern Data Centers, is the major motivation for this work.
  • 7. PROBLEM STATEMENT How to virtualize the 4-4 1-4 data center network with the following constraints: • Use static IP allocation along with dynamic VMs. • No modification of network elements or end hosts. Design Goals: To design a virtualized data center using 4-4 1-4 topology, that is • Agile • Scalable and Robust • Minimize overhead incurred due to virtualization. • Minimum end-to-end Latency and maximum Throughput • Suitable for all kinds of data center usage scenarios: • Compute Intensive: HPC • Data Intensive: Video and File Streaming • Balanced: Geographic Information System
  • 8. PROPOSED SOLUTION • Separation of Location-IP and VM-IP • Tunneling at source • Directory structure • Query process • Directory Update mechanism Packet tunneled through physical network using location-IP header Packet sending at a server running a type-1 hypervisor
  • 9. PROPOSED SOLUTION • Separation of Location-IP and VM-IP • Tunneling at source • Directory structure • Query process • Directory Update mechanism Directory structure Physical Machines = 2^16. Virtual Machines = 2^17 (2 VMs/PM). Virtual Machines = 2^20 (16 VMs/PM). Directory Servers = 64. Number of Update Servers = 16. Hence, one DS per 1024 PMs and one US per (4 * 1024) PMs. This implies 64 DSs, for minimum 131072 VMs
  • 10. PROPOSED SOLUTION • Separation of Location-IP and VM-IP • Tunneling at source • Directory structure • Query process • Directory Update mechanism Data Structure of Directory
  • 11. EXPERIMENTAL SET UP Simulation environment: Extension of NS2/Greencloud: • Packet-level simulation of communications in DCN, unlike CloudSim, MDCSim, etc. • DCN Entities are modeled as C++ and OTCL objects. DCN Workloads: Categories of Experiments setup • Computation Intensive Workload (CIW): The servers are considerably loaded, but negligible inter server communication. • Data Intensive Workload (DIW): Huge inter server data transfers, but negligible load at the computing servers • Balanced Workload (BW): Communication links and computing servers are proportionally loaded.
  • 12. EXPERIMENTAL SET UP • In CIW and BW, tasks are scheduled in a Round Robin fashion by the Data Center object, onto VMs on servers fulfilling task resource requirement. • A task is sent to allocated VM by DCobject through core switches. Output is returned to the same core switch, which then forwards it to the DCobject. • In DIW and BW, intra-DCN comm or data-transfer is modelled by 1:1:1 TCP flows between servers. S: Source and Destination within same Level-0 D: Source and Destination are in different Level-0 but same Level-1 R: Random selection of Source and Destination pairs inside Level- 1.
  • 14. NAM SCREEN SNAPSHOT: 64-SERVER DCN
  • 15. PERFORMANCE METRICS • Average Packet Delay • Network Throughput • End-to-End Aggregate/Data Throughput • Average Hop Count • Packet Drop Rate • Normalized Routing Overhead
  • 17. RESULTS: COMPUTE INTENSIVE WORKLOAD • DVR vs LocR in 16 servers: 50% less delay and more throughput. • 16 vs 64 servers: almost the same. • Routing overhead in DVR increases with the number of servers.
  • 18. RESULTS: DATA INTENSIVE WORKLOAD • Average Packet Delay — DVR vs LocR: less delay in LocR; 16 vs 64 servers: delay reduces by 54%. • Network Throughput — DVR vs LocR: more in LocR; 16 vs 64 servers: increases by 54%. • End-to-End Aggregate Throughput — DVR vs LocR: more in LocR; 16 vs 64 servers: increases by 53%.
  • 19. RESULTS: BALANCED WORKLOAD • Average Packet Delay — DVR vs LocR: less delay in LocR; 16 vs 64 servers: delay reduces by 42%. • Network Throughput — DVR vs LocR: more in LocR; 16 vs 64 servers: increases by 42%. • End-to-End Aggregate Throughput — DVR vs LocR: more in LocR; 16 vs 64 servers: increases by 41%.
  • 20. CONCLUSION Creation of a packet-level simulation prototype in NS2/Greencloud for the 4-4 1-4 DCN. Modelling of compute-intensive, data-intensive and balanced workloads. We conclude that our framework for virtualization in the 4-4 1-4 DCN has the following significance: • Routing overhead: no convergence overhead in location-based routing. • Networking loops: the network is free from forwarding loops. • Faster hop-by-hop forwarding: the per-packet, per-hop mask operation is faster than a table lookup and update operation. • Efficiency: Location-IP based routing delivers two to ten times more throughput than DVR with the same traffic and the same topology. • Scalability: in DIW and BW the performance increases by 50% when the number of servers is increased four times.
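The "mask operation instead of table lookup" claim can be illustrated with a toy forwarding decision. The exact 4-4 1-4 bit layout is not reproduced here; this sketch only assumes that a level-k switch compares the destination address against its own prefix under a per-level mask, forwarding down on a match and up otherwise, with no forwarding table consulted.

```python
# Illustrative per-level masks (assumed layout: one octet per hierarchy level).
LEVEL_MASK = {2: 0xFF000000, 1: 0xFFFF0000, 0: 0xFFFFFF00}

def next_direction(switch_prefix, level, dst_addr):
    """Mask-based forwarding decision at a level-`level` switch:
    'down' toward the subtree on a prefix match, else 'up' toward the core."""
    mask = LEVEL_MASK[level]
    return "down" if (dst_addr & mask) == (switch_prefix & mask) else "up"

def ip(a, b, c, d):
    """Pack four octets into a 32-bit address."""
    return (a << 24) | (b << 16) | (c << 8) | d

# A level-1 switch with prefix 10.1.0.0 sees two destinations:
sw = ip(10, 1, 0, 0)
assert next_direction(sw, 1, ip(10, 1, 3, 7)) == "down"  # same subtree
assert next_direction(sw, 1, ip(10, 2, 3, 7)) == "up"    # different subtree
```

Each hop costs one AND and one compare, independent of network size, which is why no routing table or convergence protocol is needed.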
  • 21. LIMITATION 4-4 1-4 is highly scalable in data-intensive and balanced-workload data centers, but only moderately so for heavy-computing data centers. In computation-intensive workloads, the performance of the 4-4 1-4 DCN with location-based routing either remains the same or increases marginally.
  • 22. FUTURE WORK Simulation test-bed is ready • Trace-driven workload • Dynamic VM migration • Optimum task scheduling for 4-4 1-4 • Energy consumption
  • 23. REFERENCES 1. A. Kumar, S. V. Rao, and D. Goswami, "4-4, 1-4: Architecture for Data Center Network Based on IP Address Hierarchy for Efficient Routing," in Parallel and Distributed Computing (ISPDC), 2012 11th International Symposium on, 2012, pp. 235-242. 2. D. Chisnall, The Definitive Guide to the Xen Hypervisor, 1st ed. Upper Saddle River, NJ, USA: Prentice Hall Press, 2007. 3. D. Kliazovich, P. Bouvry, and S. Khan, "GreenCloud: a packet-level simulator of energy-aware cloud computing data centers," The Journal of Supercomputing, pp. 1-21, 2010, 10.1007/s11227-010-0504-1. Available: http://dx.doi.org/10.1007/s11227-010-0504-1 4. "The Network Simulator NS-2," http://www.isi.edu/nsnam/ns/.
  • 25. There are mysteries in the universe, We were never meant to solve, But who we are, and why we are here, Are not one of them. Those answers we carry inside.

Editor's Notes

  1. In virtualization, one physical system presents itself as two or more independent instances of the same system.
  2. In GreenCloud, whenever a data message has to be transmitted between simulator entities, a packet structure with its protocol headers is allocated in memory and all the associated protocol processing is performed. On the contrary, CloudSim and MDCSim are event-based simulators: they avoid building and processing small simulation objects (like packets) individually and instead capture only the effect of object interaction. Such a method reduces simulation time considerably and improves scalability, but lacks simulation accuracy.
  3. Average packet delay: the delay of a packet is the actual time it took to reach its destination node from its source node. Average packet delay is computed as (Σ_{p∈P} pd_p) / N, where P is the set of all packets, pd_p is the delay of the p-th packet, and N is the total number of packets. Network Throughput: defined as the end-to-end bits transmitted in the network per unit of time, computed as (Σ_{p∈P} ps_p) / (Σ_{p∈P} pd_p), where ps_p is the size of the p-th packet in bits. End-to-End Aggregate/Data Throughput: defined as the useful data bits transmitted in the network per unit of time, computed the same way with ps_p taken as the size of the data payload in the p-th packet. Packet drop rate: ratio of the number of packets dropped to the total number of packets sent. Average hop count: average number of hops taken by packets to reach their destination from their source. Normalized Routing overhead: number of routing packets per data packet, computed as the ratio of the number of routing packets to the number of data packets.
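The metric definitions above, written out for a list of per-packet records. The record layout `(size_bits, payload_bits, delay_seconds, dropped)` is an illustrative assumption; only delivered packets enter the delay and throughput sums.

```python
def avg_packet_delay(pkts):
    """Mean delay over delivered packets, in seconds."""
    delivered = [p for p in pkts if not p[3]]
    return sum(p[2] for p in delivered) / len(delivered)

def network_throughput(pkts):
    """End-to-end bits transmitted per unit time: total bits / total delay."""
    delivered = [p for p in pkts if not p[3]]
    return sum(p[0] for p in delivered) / sum(p[2] for p in delivered)

def aggregate_data_throughput(pkts):
    """Same, but counting only the useful payload bits."""
    delivered = [p for p in pkts if not p[3]]
    return sum(p[1] for p in delivered) / sum(p[2] for p in delivered)

def packet_drop_rate(pkts):
    """Dropped packets over all packets sent."""
    return sum(1 for p in pkts if p[3]) / len(pkts)

pkts = [(12000, 11600, 0.002, False),
        (12000, 11600, 0.004, False),
        (12000, 11600, 0.004, True)]
assert abs(avg_packet_delay(pkts) - 0.003) < 1e-9
assert abs(packet_drop_rate(pkts) - 1 / 3) < 1e-9
```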
  4. Theoretical network limit, rough estimate (based on the Mathis et al. formula): rate < (MSS/RTT) * (C/sqrt(Loss)), with C = 1. Network limit (MSS: 1540 bytes, RTT: 0.4 ms, Loss: 10^-8, i.e. 10^-6 %): 305970.51 Mbit/sec. Bandwidth-delay product and buffer size: BDP(1000 Mbit/sec, 0.4 ms) = 0.05 MByte, so the TCP buffer required to reach 1000 Mbps with an RTT of 0.4 ms is >= 49.2 KByte, and the maximum throughput with a TCP window of 32 KByte and an RTT of 0.4 ms is <= 666.67 Mbit/sec.
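The two estimates above can be reproduced directly. With MSS = 1540 bytes, RTT = 0.4 ms, and Loss = 10^-8, this implementation of the Mathis formula gives roughly 308,000 Mbit/s (close to, but not exactly, the slide's 305,970.51, which presumably used slightly different constants), and the BDP works out to 50,000 bytes, matching the slide's ~0.05 MB / ~49.2 KB buffer figure.

```python
from math import sqrt

def mathis_rate_bps(mss_bytes, rtt_s, loss, c=1.0):
    """Mathis et al. TCP throughput limit: rate < (MSS/RTT) * (C/sqrt(loss))."""
    return (mss_bytes * 8 / rtt_s) * (c / sqrt(loss))

def bdp_bytes(rate_bps, rtt_s):
    """Bandwidth-delay product: the TCP buffer needed to keep the pipe full."""
    return rate_bps * rtt_s / 8

limit = mathis_rate_bps(1540, 0.0004, 1e-8)  # theoretical limit in bit/s
buf = bdp_bytes(1e9, 0.0004)                 # 1 Gbit/s link, 0.4 ms RTT

assert round(limit / 1e6) == 308_000  # ~308,000 Mbit/s
assert buf == 50_000.0                # 50 kB, i.e. ~0.05 MB of TCP buffer
```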
  5. We have created the test-bed for simulations of the virtualized 4-4 1-4 data center network in NS2/Greencloud. The next step would be to run tests for dynamic virtual machine migration and server provisioning in this network. We have modelled the different kinds of workloads possible for a data center network, but trace-driven workload generation on this network would yield the most relevant and practical metrics. The performance of multicast groups in this network is also an unexplored domain. The VM scheduling policy plays a very important role in the power consumption and performance of a network; a scheduling algorithm optimal for the 4-4 1-4 network is needed for better computation-centric workload performance.