
Software Load Balancer for OpenFlow Compliant SDN architecture

1,484 views


Download this presentation and view it in Microsoft PowerPoint; the animation effects make it difficult to follow on SlideShare.

REFERENCE:
R. Wang, D. Butnariu, and J. Rexford, "OpenFlow-based server load balancing gone wild," in Hot-ICE, 2011.

Published in: Science
  • @Pritesh Ranjan Thanks again for helping. Regards.
  • @Muhammad Johar Jaafar Yeah, the links were to my local system on which I had presented. I will have to search for those videos and will email them to you once I get those.
  • Hi, I came across your presentation and it seems interesting. I would like to know more, and I am keen to watch the video on slide 32, but the link is already dead. Perhaps you can share it, or a link to it? I wish you could email me at whitefloux90[at]gmail[dot]com
  • @Elahe Moradi Hey Elahe, if you are just starting with SDN then do check out Dr. Nick Feamster's MOOC on Coursera; it will give you a formal grounding in SDN and OpenFlow. The base for my undergraduate project was: R. Wang, D. Butnariu, and J. Rexford, "OpenFlow-based server load balancing gone wild," in Hot-ICE, 2011. I implemented the algorithms and future steps suggested in this paper. A few more papers gave me the thought process behind SDN, OpenFlow, and some research work on load balancing in an SDN environment:
    A) Composing SDN, C. Monsanto, J. Rexford, and others
    B) OpenFlow Based Load Balancing, Hardeep Uppal and Dane Brandon, networking project report at the University of Washington
    C) Aster*x: Load-Balancing as a Network Primitive, by Nikhil Handigol and others
    D) Improving Network Management with SDN, Nick Feamster and others, IEEE Communications Magazine, Feb 2013
    E) Logically Centralized? State Distribution Trade-offs in SDN
    Do let me know if you have specific doubts. It has been a while since I checked the latest news in SDN and OpenFlow, but I might still be able to help. :P I check my email more regularly: priteshranjan01@gmail.com


  1. Enhancing Load Balancer for OpenFlow Compliant SDN Architecture. MIT College of Engineering. By: Pritesh Ranjan, Pankaj Pande, Ramesh Oswal, Zainab Qurani
  2. Contents: Introduction to SDN • Project idea • Load balancing methods • Our approach • Controller selection • Environment setup • Reactive/proactive approach • Partitioning algorithm • Transitioning algorithm • Questions
  3. Introduction to SDN (layer diagram): applications run on a network operating system that programs the packet-forwarding hardware. 1. Open interface to hardware (Southbound API); 2. Operating system (controller platforms); 3. Open API for business applications (Northbound API).
  4. Project Idea (diagram: a load balancer implemented on a switch).
  5. The Plan: an OpenFlow-compatible controller running a load-balancing module.
  6. Our Approach: for weighted load balancing, a partitioning algorithm; for non-uniform traffic patterns, a load redistribution algorithm; to ensure connection persistence, a transitioning algorithm.
  7. Controller Selection
  8. Coming Up Next: Environment Setup. Set up a network with 10 hosts on 1 switch: time required? Set up a network with 100 hosts on 5 switches: time required? Tired/bored? Solution: Mininet.
  9. Mininet, using the inbuilt wrapper "mn" (diagram: hosts h1 with srcip=10.0.0.10 and h2 with srcip=10.0.0.20 on switch s1; rule table: source subnet 10.0.0.10 forwards to port 2 at priority Low, default goes to the controller).
  10. Create a custom network with your own scripts.
  11. Controller
  12. Controller Design. Goal of the application: redirect traffic destined for the "Service IP" to one of the backend replica servers according to the assigned weighted load. Network design decisions: distributed or centralized? Centralized. Flow-based or aggregated? Both, microflow and wildcard rules. Reactive or proactive? Proactive.
  13. Rules/Flow Entries: each entry carries a match (exact or wildcard), an action, and statistics. Microflow rule example: srcip=10.0.23.23, action output port 2, statistic no. of packets = 10. Wildcard rule examples: srcip=10.0.0.0/10 at priority Low, action output port 4, statistic no. of bytes; srcip=10.0.0.0/10 at priority High, action send to controller, statistic no. of received packets.
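To make the microflow-versus-wildcard distinction concrete, here is a minimal sketch of how a switch could pick the best matching entry. This is illustrative only, not the POX or OpenFlow API: the table layout, rule names, and numeric priority encoding are our assumptions.

```python
def lookup(rules, addr_bits):
    """Return the action of the best matching flow entry.

    `rules` maps a binary prefix (trailing '*' means wildcard) to an
    (action, priority) pair. Among all matching entries, the highest
    priority wins, with ties broken by prefix length, so an exact
    microflow rule beats any wildcard covering the same address.
    """
    best_key, best_action = None, None
    for prefix, (action, priority) in rules.items():
        if prefix.endswith("*"):
            matched = addr_bits.startswith(prefix[:-1])
        else:
            matched = addr_bits == prefix          # exact microflow rule
        if matched:
            key = (priority, len(prefix.rstrip("*")))
            if best_key is None or key > best_key:
                best_key, best_action = key, action
    return best_action
```

For example, with a default rule, one wildcard, and one microflow entry, the microflow entry wins for its exact address while other traffic falls through to the wildcard or the default.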
  14. Reactive Approach (diagram: controller, switch, and load balancer with replicas R1 to R4; rule table built on demand: source A forwards to R4 at priority Medium, source B to R2 at priority High). Drawback: high setup time.
  15. Proactive Approach: the controller configures the switch table in advance (table generated: 10.0.0.0/11 to R4 at Medium, 10.32.0.0/11 to R2 at High, 10.64.0.0/11 to R1 at Low, ..., 10.224.0.0/11 to R4 at Medium). Drawback: wildcard rules are expensive.
  16. Implementation Details. Aim: reduce initial setup time; give servers load in proportion to their assigned weights; keep the number of wildcard rules minimal. Approach: proactively install wildcard rules for smaller sub-subnets; assign each server some subnets according to its weighted load; apply a minimization technique. Coming up next: the partitioning algorithm.
  17. Partitioning Algorithm: deciding the number of subnets. Raw weights (alpha): R1 = 2, R2 = 3, R3 = 1; total alpha = 6. Nearest power of two: 8. Normalization factor = 8/6 = 1.333. Weighted loads: R1 = 1.333 * 2 = 2.666, rounded to 3; R2 = 1.333 * 3 = 3.999, rounded to 4; R3 = 1.333 * 1 = 1.333, rounded to 1. Number of subnets = 3 + 4 + 1 = 8, so partition the subnet into 8 subgroups.
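The normalization step above can be sketched in a few lines of Python. The function and variable names are ours, not from the project code:

```python
import math

def weighted_loads(alphas):
    """Scale raw server weights (alpha) so they sum to the nearest
    power of two, giving each server its number of subnets."""
    total = sum(alphas.values())
    n_subnets = 2 ** math.ceil(math.log2(total))  # nearest power of two >= total
    factor = n_subnets / total                    # normalization factor
    # rounding as on the slide; pathological weights could make the sum
    # drift off n_subnets, which a real implementation would correct
    return {s: round(a * factor) for s, a in alphas.items()}, n_subnets
```

On the slide's example, `weighted_loads({"R1": 2, "R2": 3, "R3": 1})` yields weighted loads 3, 4, and 1 over 8 subnets.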
  18. Partitioning Algorithm (diagram): the company network 10.0.0.0/8 splits into 10.0.0.0/9 and 10.128.0.0/9, then into four /10s, and finally into eight /11 subnets (10.0.0.0/11, 10.32.0.0/11, 10.64.0.0/11, 10.96.0.0/11, 10.128.0.0/11, 10.160.0.0/11, 10.192.0.0/11, 10.224.0.0/11); R1 (weighted load 3), R2 (weighted load 4), and R3 (weighted load 1) each receive their share.
  19. Partitioning Algorithm (contd.), as a binary prefix tree from /8 down to /11: 000* to R1, 001* to R1, 010* to R1, 011* to R2, 100* to R2, 101* to R2, 110* to R2, 111* to R3. Number of wildcard rules = 8.
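The prefix assignment itself amounts to handing out consecutive fixed-width binary prefixes in proportion to the weighted loads. A sketch with our own naming (relies on Python 3.7+ preserving dict insertion order):

```python
def assign_prefixes(loads, n_bits):
    """Map 2**n_bits fixed-width binary prefixes to servers, giving
    each server as many consecutive prefixes as its weighted load."""
    prefixes = [format(i, "0{}b".format(n_bits)) + "*" for i in range(2 ** n_bits)]
    rules, i = {}, 0
    for server, load in loads.items():
        for _ in range(load):          # one wildcard rule per subnet share
            rules[prefixes[i]] = server
            i += 1
    return rules
```

With loads {R1: 3, R2: 4, R3: 1} and 3 bits this reproduces the slide's table: 000*, 001*, 010* to R1; 011* through 110* to R2; 111* to R3.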
  20. Partitioning Algorithm, analysis. Benefit: reduced initial setup time and minimal involvement of the controller. Limitation: too many wildcard rules, and it assumes a uniform client traffic pattern. Improvements: a minimization technique (coming next) and dynamic load redistribution (coming soon).
  21. Minimization Technique: swap the rules 011* and 111*, so that R2 holds 100*, 101*, 110*, and 111*, and R3 holds 011*.
  22. Minimization Technique (contd.): after the swap, R2's prefixes 100*, 101*, 110*, 111* aggregate to 1*, and R1's 000*, 001* aggregate to 00*, leaving 00* to R1, 010* to R1, 011* to R3, 1* to R2. Number of wildcard rules = 4.
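The aggregation step can be sketched as repeatedly merging sibling prefixes that point at the same server. This is a simplified stand-in for the slide's minimization technique, and it assumes the swap that makes siblings mergeable has already been applied:

```python
def minimize(rules):
    """Repeatedly merge sibling prefixes (identical except the last
    bit, same server) into one shorter wildcard rule."""
    rules = dict(rules)
    changed = True
    while changed:
        changed = False
        for p in list(rules):
            if p not in rules or len(p) < 2:   # removed earlier, or already "*"
                continue
            bits = p[:-1]                      # strip the trailing '*'
            sibling = bits[:-1] + ("1" if bits[-1] == "0" else "0") + "*"
            if rules.get(sibling) == rules[p]:
                server = rules.pop(p)
                rules.pop(sibling)
                rules[bits[:-1] + "*"] = server  # merged, one bit shorter
                changed = True
    return rules
```

Starting from the swapped table, this collapses eight rules down to the four on the slide.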
  23. Transitioning Algorithm: the load shift operation. Situation: server R1 needs to be taken down for maintenance. Goal: traffic of R1 (old) should be allocated to R2 (new). Conditions: ongoing connections should continue with the old server (R1); new connections should be forwarded to the new server (R2). Solution: R1 is taken down only once all of its connections have expired.
  24. Transitioning Algorithm (diagram): subnets A and B, servers R1 (old) and R2 (new); rule table: A to R1 at Low, B to R2 at Low. "Server R1 is to be taken down; shift its load to R2." "OK, let me check the connections for SYN."
  25. Transitioning Algorithm (contd.): the controller 1. adds a new high-priority flow entry sending subnet A to the controller and 2. modifies the old flow entry so A forwards to R2. The rule table becomes: A to R2 at Low, B to R2 at Low, A to Controller at High.
  26. Transitioning Algorithm (contd.): a packet from 10.0.0.1 in subnet A arrives with the SYN flag NOT set, so it belongs to an ongoing connection; the controller adds the microflow rule 10.0.0.1 to R1 at Highest priority.
  27. Transitioning Algorithm (contd.): a packet from 10.100.0.1 in subnet A arrives with the SYN flag SET, so it opens a new connection; the controller adds the microflow rule 10.100.0.1 to R2 at Highest priority.
  28. Transitioning Algorithm (contd.): microflow entries are deleted after their idle timeout; once all entries pointing at R1 have expired, R1 can be taken down.
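The controller-side decision on slides 26 and 27 boils down to checking the SYN flag. A sketch with our own naming; a real controller would install these entries as OpenFlow flow modifications carrying an idle timeout:

```python
def on_packet_in(microflows, src_ip, syn_set, old_server, new_server):
    """During a transition, pin a source either to the old server
    (SYN not set, so mid-connection) or to the new server (fresh SYN),
    via an exact-match microflow rule at highest priority."""
    target = new_server if syn_set else old_server
    microflows[src_ip] = (target, "Highest")
    return target
```

Once every microflow entry pointing at the old server has idled out, the old server carries no connections and can be taken down.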
  29. Uniform client traffic pattern (diagram): servers R1 and R2 each have weighted load 2x, with rules 00* to R1, 01* to R1, 10* to R2, 11* to R2. Each subnet carries the same number of connections, so each server gets a number of connections proportional to its weighted load.
  30. Non-uniform client traffic pattern (diagram): with the same rules but the subnets carrying 2x, x, x, and 0 requests, load gets unequally distributed among the servers.
  31. Load Redistribution Algorithm (diagram): R1 carries 3x (subnet 00* with 2x and 01* with x) while R2 carries only x. Read the switch statistics, find the over- and underloaded servers, and shift appropriate load from the overloaded to the underloaded server: reassigning 01* to R2 leaves each server with 2x.
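A greedy sketch of the redistribution step, in our own formulation of the slide's "read statistics, find over/underloaded, shift load" loop; a real controller would perform each reassignment through the transitioning algorithm so ongoing connections survive the move:

```python
def redistribute(rules, subnet_load, capacity):
    """Move prefixes from the most overloaded server to the most
    underloaded one until no server exceeds its target share.

    `rules` maps prefix -> server, `subnet_load` maps prefix ->
    observed connection count, `capacity` maps server -> target
    share (weighted load times x)."""
    def load(server):
        return sum(c for p, c in subnet_load.items() if rules[p] == server)
    while True:
        over = max(capacity, key=lambda s: load(s) - capacity[s])
        if load(over) <= capacity[over]:
            return rules                       # nobody is overloaded
        under = min(capacity, key=lambda s: load(s) - capacity[s])
        excess = load(over) - capacity[over]
        # pick the busiest prefix of the overloaded server that still
        # fits within its excess, so we never overshoot the target
        movable = [p for p in rules
                   if rules[p] == over and 0 < subnet_load[p] <= excess]
        if not movable:
            return rules                       # nothing small enough to shift
        rules[max(movable, key=lambda p: subnet_load[p])] = under
```

On the slide's scenario (loads 2x, x, x, 0 and equal targets of 2x) this moves exactly the 01* prefix from R1 to R2.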
  32. Project Demo: videos of topology creation, the partitioning algorithm (videos 1 and 2), the transitioning algorithm, and the load redistribution algorithm.
  33. Questions..??
  34. THANK YOU….!!!
  35. Extra slides: topology.
  36. Comparative Study. Hardware load balancer: methods IP sticky, round robin, cookie sticky, weighted load; layers 4 and 7; server health monitoring via PING, HTTP GET, and ARP; fast; costly. POX software load balancer: methods IP sticky and random; layer 4; health monitoring via ARP; slow; free. Our solution: methods IP sticky, persistent, and weighted load; layers 4 and 7; health monitoring via PING; slow; free.
  37. (Diagram) Switch s1 with hosts h1 (srcip=10.0.0.1) and h2 (srcip=10.0.0.2); rule table: 10.0.0.1 forwards to port 2 at Low, default to controller, default to switch.
  38. Partitioning Algorithm (flow table): the controller configures the switch; table generated (8 wildcard rules): 10.0.0.0/11 to R1 at Low, 10.32.0.0/11 to R1 at Low, 10.64.0.0/11 to R1 at Low, ..., 10.224.0.0/11 to R4 at Medium.
  39. Partitioning Algorithm (cont.), prefix tree from /8 down to /11: 000* to R1, 001* to R1, 010* to R1, 011* to R2, 100* to R2, 101* to R2, 110* to R2, 111* to R3; number of wildcard rules = 8. Achieved benefit: reduced initial setup time. Drawback: a very large number of rules installed. Improvement: minimization techniques.
  40. Minimization Technique (cont.): swap the rules 011* and 111*, so that R2 holds 100*, 101*, 110*, and 111*, and R3 holds 011*.
  41. Minimization Technique (cont.): after the swap the rules aggregate to 00* to R1, 010* to R1, 011* to R3, 1* to R2. Number of wildcard rules = 4.
  42. Minimization Technique (flow table): the controller configures the switch; table generated: 10.0.0.0/11 to R1 at Low, 10.64.0.0/11 to R1 at Low, 10.96.0.0/11 to R2 at Low, 10.128.0.0/9 to R3 at Low.
  43. Scapy: a Python framework for crafting and transmitting arbitrary packets. Scapy also performs well on many specific tasks that most other tools cannot handle, such as sending invalid frames or injecting your own 802.11 frames.
  44. ARP (diagram: a broadcast reaching replicas R1 to R5): source MAC 00:00:00:00:00:01, destination MAC ff:ff:ff:ff:ff:ff, source IP 10.24.24.24, destination IP 10.0.0.2.
  45. TCP/HTTP (diagram: requests reaching replicas R1 to R5): source IPs 10.24.24.24 and 10.134.4.2, destination IP 10.0.0.2, source port random, destination port 80, protocol TCP.
