4. DATACENTER = METAFABRIC ARCHITECTURE PORTFOLIO
Flexible building blocks
Switching: Simple switching fabrics
Routing: Universal data center gateways
SDN: Simple and flexible SDN capabilities
Data Center Security: Adaptive security to counter data center threats
Management: Smart automation and orchestration tools
Solutions & Services: Reference architectures and professional services
9. JUNIPER ARCHITECTURES
Juniper Architectures
Open Architectures
MC-LAG
…
QFX5100
Virtual Chassis (up to 10 members)
QFabric (up to 128 members)
IP Fabric (L3 Fabric)
Virtual Chassis Fabric (up to 20 members)
Juniper Architecture benefits: single point of management and control; purpose-built and turnkey
Open Architecture benefits: flexible deployment scenarios; open choice of technologies and protocols
One Architecture Does Not Fit All,
QFX5100 enables Choices!
11. DEPLOYMENT SCENARIO DETAILS
Juniper Architectures
Attribute              QFX3000-M/G     VCF
Control Plane          Centralized     Centralized
Technology             QFabric         Virtual Chassis Fabric
Latency                3 µs / 5 µs     1.5 µs
Storage Convergence    Yes             Yes
1G Copper              768 / 6,144     1,536
1G Fiber               768 / 6,144     1,536
10G Copper             768 / 6,144     1,536
10G Fiber              768 / 6,144     1,536
MAC Addresses          1,536,000       288,000
ARP Entries            20,000          48,000
VLANs                  4K              4K
VXLAN L2 Gateway       No              Yes

Comparison:
- Control Plane: VCF (in-band) vs. QFabric (out-of-band)
- QFabric for large scale only
- ISSU on VCF
12. OPEN ARCHITECTURE SCENARIOS
Juniper Architectures
Open Architectures
Roles: Core, Distribution, Access
– QFX5100 or EX9214 (core/distribution) with EX4300-VC (access)
– QFX5100 or EX9214 (core/distribution) with QFX5100 (access)
– EX9214 (core) with VCF or QFX3000-M (distribution/access)
– QFX5100 or EX9214 (core) with QFX5100 (access)
QFX5100 when possible, otherwise EX9214 if required for scale
13. DEPLOYMENT SCENARIO DETAILS
Open Architectures with the QFX5100 Spine
Attribute              QFX5100-96S     QFX5100-96S     QFX5100-24Q     QFX5100-24Q
                       + EX4300-VC     + QFX5100-48    + QFX5100-48    + QFX5100-24Q
Control Plane          Distributed     Distributed     Distributed     Distributed
Technology             MC-LAG          MC-LAG          MC-LAG          MC-LAG
Latency                2 µs            2 µs            2 µs            2 µs
Storage Convergence    No              No              No              No
1G Copper              4,608           No              No              No
1G Fiber               3,072           No              No              No
10G Copper             No              4,608           896             No
10G Fiber              No              4,608           896             1,536
MAC Addresses          288,000         288,000         288,000         288,000
ARP Entries            48,000          48,000          48,000          48,000
VLANs                  4K              4K              4K              4K

Comparison:
- Low latency overall
- Port density varies (<4,608)
- QFX5100-96S + EX4300-VC for 1G deployment
14. DEPLOYMENT SCENARIO DETAILS
Open Architectures with the EX9214 Spine

Attribute              EX9214          EX9200          EX9214
                       + QFX5100-48    + QFX3000-M     + EX4300-VC
Control Plane          Distributed     Distributed     Distributed
Technology             MC-LAG          MC-LAG          MC-LAG
Latency                19 µs           19 µs           19 µs
Storage Convergence    No              Yes             No
1G Copper              15,360          4,032           15,360
1G Fiber               15,360          4,032           No
10G Copper             3,840           4,032           No
10G Fiber              3,840           4,032           No
MAC Addresses          1,000,000       1,000,000       1,000,000
ARP Entries            256,000         256,000         256,000
VLANs                  24K             24K             24K

Comparison:
- High port density, higher logical scale
- Higher latency overall
- EX9214 + EX4300-VC for 1G deployment
- EX9200 + QFX3000-M for storage convergence
- EX9214: 240 10G ports at line rate
16. VIRTUAL CHASSIS FABRIC
What and Why
Switching building blocks: EX4300, QFX3500, QFX3600, QFX5100
APIs
Network Director
Single Point of Management
Full Layer 2 and Layer 3
ECMP
Transit FCoE
Topology Independent ISSU
Plug and Play Provisioning
4 spines and 16 leaves
VXLAN L2 Gateway
Virtual Chassis Fabric
17. SERVER AND STORAGE CONNECTIVITY
Any Ethernet Media, High Resiliency,
Flexible deployment
10/100/1000M Copper
10/100/1000M Fiber
10G Copper
10G Fiber
10G or 40G Fabric
Any-port connectivity
In-Service Software Upgrade
n-Way multi-homing
Active-Active paths
Single Point of Management
FCoE Transit
iSCSI / NFS / CIFS
Lossless Ethernet / DCB
Hardware SDN support
QFX5100 QFX5100 QFX5100 QFX5100
Server Storage
18. 1/10/40GE – ALL IN ONE FABRIC
10GbE POD: spine QFX5100-24Q; leaves QFX5100-48S, QFX5100-24Q, QFX3500 & QFX3600
1/10/40GbE POD: spine QFX5100-24Q; leaves QFX5100-48S, QFX5100-24Q, QFX3500 & QFX3600, EX4300
1GbE POD: spine QFX5100-48S or QFX5100-96S; leaves EX4300
10/40GbE spine and 1/10/40GbE leaf nodes
19. 2 OR 4 SPINE NODE DEPLOYMENTS
2 spine nodes: QFX5100-24Q spines; 18 racks of QFX5100-48S leaves, each with 2 x 40G uplinks
• 18 x 10GbE racks
• 936 x 10GbE ports, 6:1 oversubscription
4 spine nodes: QFX5100-24Q spines; 16 racks of QFX5100-96S leaves, each with 8 x 40G uplinks
• 16 x 10GbE racks
• 1,536 x 10GbE ports, 3:1 oversubscription
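The oversubscription (OS) ratios above follow directly from access versus uplink bandwidth on each leaf:

```text
QFX5100-48S leaf: 48 x 10G access = 480 Gbps down; 2 x 40G uplinks =  80 Gbps up; 480/80  = 6:1
QFX5100-96S leaf: 96 x 10G access = 960 Gbps down; 8 x 40G uplinks = 320 Gbps up; 960/320 = 3:1
```

The port counts follow the same way: 16 leaves x 96 ports = 1,536 x 10GbE in the 4-spine design.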
20. VCF INTEGRATED CONTROL PLANE
Integrated Routing Engine (RE) – Inline Control Plane
• Dual REs (routing engines) with backups
• Distributed in-band control plane
• VCCPD running on all members
• Automatic fabric topology discovery
• Loop-free fabric forwarding path construction
• Control traffic protection for converged fabric
Master / Backup
21. VCF INTEGRATED DATA PLANE
Intelligent spine and leaf nodes: federated state, distributed forwarding
• All fabric links active-active
• Traffic load-balanced across all links
• 1.8 µs inter-rack latency
• In-rack switching with 550 ns in-rack latency
• 16-way server multi-homing
Master RE / Backup RE
22. VCF DEPLOYMENT METHODS
Auto-provisioned
• Plug and Play
• Pre-provision Spine Switches using single CLI
• Remaining switches will join VCF automatically as a line card
Pre-provisioned
• No ambiguity of member role
• All switches will be pre-provisioned into VCF
Non-provisioned
• Flexible
• Configure VCP ports; regular VC master election then happens automatically
{set | delete} virtual-chassis {pre-provisioned | auto-provisioned}
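As a minimal sketch of the auto-provisioned method, only the spine members are declared by serial number, and the leaves then join as line cards. The member IDs and serial numbers below are placeholders, and the exact statements can vary by Junos release:

```
{master:0}[edit]
user@vcf# set virtual-chassis auto-provisioned
user@vcf# set virtual-chassis member 0 role routing-engine
user@vcf# set virtual-chassis member 0 serial-number TA371XXXXXXX
user@vcf# set virtual-chassis member 1 role routing-engine
user@vcf# set virtual-chassis member 1 serial-number TA371YYYYYYY
user@vcf# commit
```

Pre-provisioned mode uses the same hierarchy with `pre-provisioned` and every member listed, which removes any ambiguity about member roles.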
25. Virtual Chassis Fabric versus others

Attribute              Other        Juniper VCF
10GbE scale            1000+        1500+
Local forwarding       No           Yes
Intra-rack latency     1.7 µs       0.550 µs
Inter-rack latency     2.4 µs       1.8 µs
ISSU                   No           Yes
Server multi-homing    2-way        16-way
Overlay gateway        No           Yes
Segmentation           VR           VR and MPLS
Power per switch       1000 W       <200 W
26. Virtual Chassis Fabric versus others (2)

Attribute              Other         Juniper QFX5100
ISSU                   Only spine    Yes – spine and leaf
VXLAN L2 Gateway       Only leaves   Yes – spine and leaf
NSX Control Plane      Only leaves   Yes – spine and leaf
Single point of mgmt   No            Network Director
Flexible topologies    Only MLAG     VC, MC-LAG, VCF, QFabric

Juniper Advantage: ISSU, Network Director, overlay, architecture choices
33. MX: UNIVERSAL SDN GATEWAY
Industry-leading L2-L3 LAN-WAN-overlay gateway
- Standards-based, multivendor solutions
- Highly scalable, virtualized, multitenant connectivity
- Built on a proven track record in major DC and SP deployments
- Extensible, future-proof platform capabilities
Any-to-any gateway: the Universal SDN Gateway terminates WAN (MPLS, IP), L2 (EVPN, VPLS), and L3 (L3VPN, NG-MVPN) services
Gateways connect PODs: VMware NSX-based (VXLAN), Contrail SDN-based (MPLS, VXLAN), and legacy VLAN-based
34. USG COMPARISONS
USG (Universal SDN Gateway)

Layer 2 USG: provides SDN-to-non-SDN translation, same IP subnet
- Platforms: QFX5100 ✔, EX9200/MX ✔
- Use case: NSX or Contrail talks Layer 2 to non-SDN VMs, bare metal, and L4-7 services

Layer 3 USG: provides SDN-to-non-SDN translation, different IP subnet
- Platforms: EX9200/MX ✔
- Use case: NSX or Contrail talks Layer 3 to non-SDN VMs, bare metal, L4-7 services, and the Internet

SDN USG: provides SDN-to-SDN translation, same or different IP subnet, same or different overlay
- Platforms: EX9200/MX ✔
- Use case: NSX or Contrail talks to other PODs of NSX or Contrail

WAN USG: provides SDN-to-WAN translation, same or different IP subnet
- Platforms: EX9200/MX ✔
- Use case: NSX or Contrail talks to other remote locations (branch, DCI)

Competitive coverage: x86 appliance ✔ ✔; competing ToRs ✔; competing chassis ✔
35. CONTRAIL
EXTENDING ADVANCED NETWORKING INTO THE VIRTUAL WORLD
CONTRAIL CONTROLLER
Physical Network
(no changes)
Analytics
Configuration Control
VM VM VM VM
vRouter
Physical Host
with Hypervisor
VM VM VM VM
vRouter
Physical Host
with Hypervisor
Gateway
WAN,
Internet
Simple, open and agile
Virtual network overlay
Developer momentum
OpenContrail community
36. VXLAN
• Virtual eXtensible Local Area Network (VXLAN)
• L2 connections within an IP overlay
  – Unicast & multicast
• Allows flat DC design without boundaries
• Simple and elastic network
• Options to run with and without an SDN controller

Diagram notes:
- IP overlay connections are established between the VXLAN endpoints of a tenant, across the WAN and overlay environment (ToR, management station)
- Gateway between overlay LANs: one end of the VXLAN tunnels
- vDS: the hypervisor / distributed virtual switch is the other end of the VXLAN tunnels
- Fully meshed unicast tunnels carry known L2 unicast traffic
- PIM-signaled multicast tunnels carry L2 BUM traffic
37. ETHERNET VPN (EVPN)
A new standards-based protocol to interconnect L2 domains
- Juniper is leading the multi-vendor, industry-wide initiative
- Improves network efficiency
- Ideally suited for data center interconnectivity
- Allows L2 multi-tenancy in an IP fabric DC
Diagram: EVPN routers exchange state over BGP across the WAN, with LAG attachment toward each LAN
38. WHY EVPN
Where is EVPN Applicable:
– DC Interconnect – allowing L2 stretch between data centers over WAN
– For multi-tenancy in DC with VxLAN or MPLS as transport
– Next generation L2VPN technology that replaces VPLS
Which customers will be interested in EVPN :
– Data Center Builders – SPs, Enterprises, Content providers
– These customers use the MX as a DC WAN edge router
– These customers use MX as a PE router for L2 business services
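As a hedged sketch, an EVPN instance on an MX pairs a BGP session carrying EVPN signaling with an `instance-type evpn` routing instance; the names, addresses, and community values below are illustrative placeholders, and the exact knobs vary by Junos release:

```
routing-instances {
    EVPN-100 {
        instance-type evpn;
        vlan-id 100;
        interface ge-0/0/1.100;
        route-distinguisher 10.0.0.1:100;
        vrf-target target:65000:100;
        protocols {
            evpn;
        }
    }
}
protocols {
    bgp {
        group dci {
            type internal;
            local-address 10.0.0.1;
            family evpn signaling;
            neighbor 10.0.0.2;
        }
    }
}
```

MAC addresses learned locally in the data plane are then advertised to the remote site as BGP EVPN routes, which is the control-plane learning the use-case slide describes.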
39. USE CASE: EVPN FOR DATA CENTER INTERCONNECT
Site 1: VLAN 1 / MAC1, VLAN 2 / MAC2; Site 2: VLAN 1 / MAC11, VLAN 2 / MAC22
BGP control-plane-based learning on the WAN; data plane learning on the LAN
EVPN cloud across the WAN; VXLAN cloud and legacy L2 cloud at the sites; MX Series at each edge
Data Center Site 1 – Data Center Interconnect – Data Center Site 2
Benefits:
• Seamless interconnect between DCs: L2 stretch between DCs
• Seamless workload migration: VM mobility across DCs
• Wide applicability: interconnects native L2 and overlay technologies
41. SMART DATA CENTER SECURITY
RAPID THREAT IDENTIFICATION AND PREVENTION
SRX Series firewall (physical): leading high-end firewall with proven data center scale
Firefly (virtual): virtual host and perimeter security; smart groups for automatic policy control; optimized for performance
Spotlight: Data Center Global Attacker Database
43. JUNOS SPACE
SMART NETWORK MANAGEMENT FROM A SINGLE PANE OF GLASS
Physical and virtual networks, managed through APIs
Visualize: physical and virtual visualization
Analyze: smart and proactive networks
Control: lifecycle and workflow automation
Editor's Notes
Navigating Data Center Architectures
Hey guys, and welcome to Navigating Data Center Architectures. My name is Doug Hanks. I'm a data center architect in the Campus and Data Center Business Unit, and today we're going to go over three topics. Number one, we're going to talk about some of the overall architectures that we have available in the data center, then we're going to go into a Virtual Chassis Fabric deep dive, and then we're going to start touching on IP fabrics.
Data center investments are all about the pursuit of business agility.
In today’s data center, agility means new apps that are driving IT transformation.
Technology is transitioning as well, with:
Virtualization (application and network)
Cloud (public, private and hybrid)
SDN
Re-hash of categories in the MetaFabric architecture portfolio.
MetaFabric
So at a high level, if you take a look at what we're doing with our MetaFabric strategy, we're building the foundation on both Juniper silicon and merchant silicon. We're going to have different options, whether it's building trees with IP Fabrics, our Juniper architectures in terms of QFabric and Virtual Chassis Fabric, or the universal SDN gateways; we're going to give the customer a lot of different options on how they want to solve their particular problem. Then you start moving into network virtualization. We support OpenFlow, we support overlay networks, whether it's VMware NSX or OpenContrail, and we have Juniper Firefly, which can perform NFV. With that comes service chaining, deep packet inspection, NAT, firewall, things like that. To control all of this we interoperate with OpenStack, CloudStack, vSphere, the whole nine yards. And obviously we have our own management tools as well with Junos Space: Network Director, Security Director, Service Now. Really, the combination of all these different things is MetaFabric. As we roll out our different MetaFabric solutions, such as MetaFabric 1.0 and 1.1, these are targeted solutions for enterprise IT, while for example MetaFabric 2.0 is a set of solutions that we're targeting toward our telco customers with virtual hosting. We have an entire roadmap that I'll share with you around MetaFabric, what the solutions are, and what our use cases focus on. But in general, these are the tenets, these are the principles that we abide by, and when we start talking about MetaFabric we give the customer plenty of different options to solve their particular problem. We're not locking them into any particular vendor, and we're very open about who we partner and work with, so it's about doing one thing with many different options.
The MetaFabric architecture includes three pillars that enable the network to innovate at the speed of applications. This enables better troubleshooting, capacity planning, network optimization, and security threat mitigation, PLUS it helps accelerate the adoption of cloud, mobility and Big Data applications. The three pillars are:
Simple: Network and security that’s simple to acquire, deploy, use, integrate and scale. From simplification in our devices and network architectures to operational simplicity with automation and orchestration, we continue to build simple solutions, especially as they extend into physical and virtual environments. This ultimately results in better network performance, reliability, and reduced operating costs.
Open: An approach that extends across our devices, how we work with technology partners in the ecosystem, and also open communities that give our customers the flexibility they need to integrate with any heterogeneous data center environment, support any application, any policy, any SDN protocol—and do so without disruption or fear of vendor lock in.
Smart: Using network intelligence and analytics to drive insight and turn raw data into knowledge that can be acted upon. Customers benefit from a flexible and adaptable data center network.
Junos Space Network Director enables users to focus on their respective roles: Build, Deploy, Monitor, Troubleshoot, Report. This saves time through better troubleshooting, capacity planning, network optimization, and security threat mitigation, AND it helps accelerate the adoption of cloud, mobility and Big Data applications.
Juniper Architectures
So at a very high level, we have two different types of architectures that we can offer our customers. The first one is Juniper architectures, and the second one is open architectures. Starting at the very top of the Juniper architectures, what we have, number one, is Virtual Chassis. We should all know that it's a single point of management for a given set of switches; we support this up to 10 members today, and we typically see it in the access layer. The next one is QFabric, where we can scale all the way up to over 6,000 ports and create a single point of management, and it's an Ethernet fabric. And the newest one is a combination of both of these. We've improved upon Virtual Chassis in terms of scale: we used to support 10 members with Virtual Chassis, and now we go all the way up to 20 with Virtual Chassis Fabric, and as of September 2014 we'll actually support up to 32 members as well. So we have more physical switches in a Virtual Chassis Fabric, and we also support more topologies with Virtual Chassis Fabric, because traditionally Virtual Chassis is connected in a ring or braided-ring topology, so it's generally access layer only, whereas Virtual Chassis Fabric is both the spine and the leaf, or the core and the access, combined into one. Very similar to QFabric. Now, looking at Juniper architectures, one of the key benefits is that you get a single point of management and control no matter which option you go with, and part of that is that it's purpose-built and very turnkey. Whether you're adding new switches to a Juniper architecture, it's going to be a plug-and-play feature set, and as you enable features across a Juniper architecture, you don't have to worry about all the backend protocols. We take care of that for you.
So the entire solution is going to be plug and play. Now, for the other customers that want a more open architecture because they want to integrate additional third-party switches and components, we have several different options. One of those options is MC-LAG, or multi-chassis LAG, and we see this in varying parts of the network, whether it's the core, distribution, or access. There's also an IP fabric, or a Clos fabric, which is just a very simple Layer 3 network, top to bottom, and the benefit is that it's a very flexible deployment: it works with both Juniper boxes and other third-party switches, and you really have a choice in terms of whether you want to use an IP fabric, MC-LAG, or even just a standalone box. You can choose which protocols you want to run: there's OSPF, IS-IS, BGP, so again you give the customer more options on how they want to operate their network. So the key message here is that one architecture does not fit all, and the QFX5100 really enables you to sell both Juniper architectures and open architectures to your customer.
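As an illustrative sketch of the IP fabric option, each leaf can run EBGP to the spines with a private ASN per device and per-flow load balancing across the uplinks. The ASNs, addresses, and policy name below are made-up examples, not values from the deck:

```
protocols {
    bgp {
        group fabric {
            type external;
            neighbor 172.16.0.0 peer-as 65001;   /* spine 1 */
            neighbor 172.16.0.2 peer-as 65002;   /* spine 2 */
        }
    }
}
routing-options {
    autonomous-system 65010;                     /* this leaf's ASN */
    forwarding-table {
        export load-balance;                     /* install all ECMP next hops */
    }
}
policy-options {
    policy-statement load-balance {
        then {
            load-balance per-packet;             /* per-flow hashing despite the name */
        }
    }
}
```

The same hierarchy works on any Junos platform, which is what makes this an "open" choice: swapping OSPF or IS-IS for BGP is a protocol decision, not an architecture change.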
Juniper Architecture Scenarios
So if we take a look at the Juniper architectures and we start taking a look at number one, what are the options? It’s going to be QFabric and Virtual Chassis Fabric. And what we’ve done is we’ve overlaid this on top of the roles in the network which is the core, distribution, and access, and we can see that QFabric and Virtual Chassis Fabric cover all three of these roles. So from that point of view, it’s going to be equal in terms of how do you position this in terms of the roles in the network.
Deployment Scenario Details
Now the next step is that we can double-click to get some additional information, and what we've done here is list a bunch of different attributes on the left-hand side, from how we handle the control plane all the way down to whether we support VXLAN Layer 2 gateway, for example, which is a key feature that you'll require if you want to run an overlay network such as NSX or Contrail. What you can see at a high level is that if you really need a lot of ports, the QFX3000, or QFabric, is going to be your product of choice, because the QFX3000-G goes all the way up to 6,000 ports in a single Ethernet fabric. But on the other hand, if you need a faster switch with lower latency, and if you need to support an overlay network in the future, which is going to be fairly common as companies transition to overlay networks, you want to lead with Virtual Chassis Fabric and the QFX5100. What's really interesting is the number of ARP entries, which is simply a mapping of Layer 3 addresses to MAC addresses. As you have more VMs come up on your network, this is really the first table to be exhausted, and even though you can support more MAC addresses, you're typically not going to run a Virtual Chassis Fabric in a Layer 2-only mode; you're going to have Layer 3 functionality on there. For example, you'll be running different VLANs, and those VLANs will have Layer 3 interfaces, which are RVIs, and in an L2/L3 mode that requires ARP entries. You only have 20,000 with the QFabric solution, but with Virtual Chassis Fabric we more than double that and go to 48,000, so overall we can support more VMs in an L2/L3 mode, which is actually very important.
The other comparison is on the control plane side: with QFabric, the control plane is out-of-band, so it requires some additional optics, cabling, and switches to support that out-of-band control plane, whereas with Virtual Chassis Fabric the control plane is in-band. As you connect the spines to the different leaves with 40G optics or DAC cables, those same 40G ports are used as the control plane for Virtual Chassis Fabric, so it's more user-friendly. And at the end of the day, we really want you to position Virtual Chassis Fabric as much as possible, except when you have the use case of going up to 6,000 ports with QFabric or the QFX3000-G. Another key note is that in August and September, with the D10 release, we will support ISSU on Virtual Chassis Fabric as well. That stems from the ability to create multiple VMs on the QFX5100 to perform the ISSU between the different routing engines. We can support ISSU today on a standalone QFX5100; it's just some additional development work that we have to get done for the 14.1 D10 release here in September to support it on Virtual Chassis Fabric. So after these SE Summits are done and over with, we'll have ISSU both standalone and on Virtual Chassis Fabric, and we will not have that same functionality on QFabric. So if you need ISSU, the only option is Virtual Chassis Fabric.
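Operationally, the ISSU described above is a single operational-mode command; the package path below is a placeholder, and image names differ by release:

```
{master:0}
user@vcf> request system software in-service-upgrade /var/tmp/<junos-install-package>.tgz
```

Behind that one command, each QFX5100 spawns a second Junos VM running the new release, performs a graceful routing engine switchover with nonstop routing and bridging, and upgrades members one at a time so the data plane keeps forwarding.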
Open Architecture Scenarios
Now, the other one is the open architectures. What we're going to take a look at here is, again, those roles in the network, mapping the technologies and products that fit from an open architecture perspective. There are a lot of different options, so what we've done is aggregate some of the most common options that we see with our customers. The key is that the core and distribution typically get collapsed into a single device, and it really depends on the overall port density required. What we typically see is that we can put a QFX5100 in the core and distribution, or if that's not enough scale, we can go to the EX9214, a chassis-based solution as opposed to a fixed-configuration switch like the 5100. So depending on the scale, it's the 5100 vs. the EX9200. In the access layer, we see two things: for 1G deployments, the EX4300, which we can run as a Virtual Chassis to get better management capabilities, and for 10G deployments, the QFX5100. The last option, number 4 here, is that we can use Virtual Chassis Fabric in the distribution and the access, and tie that back up into a different core device such as the EX9200 or the QFX5100. So whether they want a POD architecture or a top-of-rack switch architecture, we can go both ways with our open architecture scenarios. But the key message here is that we generally lead with the QFX5100 when possible; otherwise we use the EX9200 if we need the scale.
Deployment Scenario Details
So again, we'll do a little double-click here. Number one, we want to look at all the different options for our open architectures, and we'll make the assumption that we're using the QFX5100 for the spine. We've got four different options. In the first two options we're using the 96S, which is the 96 x 10G switch. In the first option we pair it with the EX4300 to get 1G access; using the QFX5100-96S with the EX4300, we can get up to 4,608 ports of 1G. In the second option, we've replaced the 4300 in the access with the QFX5100-48, which can be either the 48S for fiber or the 48T for copper, and we get the same port density as well. The next option uses the 5100-24Q in the spine, and in the access layer we can use either the 48S or the 48T for 10G access, depending on whether they need copper or fiber, and we can go up to about 900 ports with these two switches. And again, this assumes we're using MC-LAG between the spine and the leaf. The last option is if they need 40G access at the servers: we can use the 24Q, or if they need a little more 10G, we can use breakout cables on the 24Q and increase the overall 10G count to over 1,500. So a lot of different options here.
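The spine pairs in these scenarios run MC-LAG, which on Junos combines an `mc-ae` bundle with ICCP peering between the two spines. A minimal sketch from one spine's side, with made-up interface names, addresses, and IDs (the interchassis link and the second peer's mirror-image config are omitted):

```
interfaces {
    ae0 {
        aggregated-ether-options {
            lacp {
                active;
                system-id 00:01:02:03:04:05;    /* must match on both peers */
                admin-key 1;
            }
            mc-ae {
                mc-ae-id 1;
                chassis-id 0;                   /* peer uses chassis-id 1 */
                mode active-active;
                status-control active;
            }
        }
        unit 0 {
            family ethernet-switching {
                vlan {
                    members v100;
                }
            }
        }
    }
}
protocols {
    iccp {
        local-ip-addr 10.0.0.1;
        peer 10.0.0.2 {
            redundancy-group-id-list 1;
            liveness-detection {
                minimum-interval 1000;
            }
        }
    }
}
```

The leaf below sees the two spines as one LACP partner, which is what keeps both uplinks active-active with a distributed control plane.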
Deployment Scenario Details
Now, the next one is basically the exact same thing, but using the EX9200 instead of the QFX5100 in the spine. The access layer stays the same: the QFX5100 or the EX4300. And to mix things up, we offer another option where you can pair the EX9200 with PODs of QFX3000, the QFabric technology, and you can see what some of those attributes are. In sum, we get higher 1G and 10G capacities, and another thing to notice is that we're also increasing the capacity in terms of MAC addresses and ARP entries. Because the EX9200 is based on the MX technology, we can increase the number of VLANs from 4,000 to 24,000. So we get much higher logical scale and higher port density; the downside is that it's a bit slower in terms of latency, and we don't get any kind of storage convergence unless you go with the last option, which is the QFabric POD plus a 9200.
Virtual Chassis Fabric
So that concludes navigating the data center architectures. In our next session, we're going to do a deep dive into Virtual Chassis Fabric, which is one of our new data center architectures enabled by our QFX5100 product that was released back in November 2013, and since then we've been releasing new models. We initially released the 24Q and the 48S, and now we're focusing on releasing the 96S as well as the copper 10G version, which is the 48T. This family of switches enables a new architecture called Virtual Chassis Fabric, and today we're going to take a look at what problems we're solving and how it actually works.
Virtual Chassis Fabric
So again, let’s go over the what and the why. Our basic building blocks are our new set of switches, it’s the QFX5100 line, and not only does the new QFX5100 support Virtual Chassis Fabric, but we also back ported the Virtual Chassis Fabric functionality back to our original QFX3500 and 3600 switches as well as our new 1G switch, the EX4300. So let’s take a look at the topology that we’re going to enable with Virtual Chassis Fabric. So what we have here is that on the left-hand side what I’m showing is that we have four spines at the top, and the rule about Virtual Chassis Fabric is that the spines must be a QFX5100 device, and we can have any number of leaves. So one leaf all the way up to 16 leaves as of the D20 release of Junos. However, with the 14.1 release, D15, we’re actually going to support up to 28 leaves. So we’re going to increase that number. But, again, at the basics it’s going to be a spine and leaf topology, and this is going to be enabled as a Virtual Chassis Fabric, and let’s take a look at what that means. So it’s going to go above and beyond a regular Virtual Chassis because Virtual Chassis kind of stops at the access layer and it stops at 10 devices, and you’re typically limited in terms of the topology supported by our Virtual Chassis which is going to be a ring, a braided ring or what have you. But a Virtual Chassis Fabric is a spine and leaf architecture where you can actually scale out to a larger number of devices and you can build a small to medium data center using nothing but a Virtual Chassis Fabric, and if you need to go beyond the size and limitations of a single Virtual Chassis Fabric you can use it as a POD architecture, so have multiple versions of Virtual Chassis Fabric. 
So what I'm showing here is that we're plugging our servers into the leaves, and the leaves in turn plug into the spines, and we can now manage the entire thing through a single point of management. We have Junos Space with the Network Director application, so as of Network Director 1.6 we can manage Virtual Chassis Fabric, and of course Network Director can tie into your cloud management system such as OpenStack, CloudStack, and other tools as well.
Now let's take a look at some of the attributes of Virtual Chassis Fabric. I mentioned that it is a single point of management; you can have up to 4 spines and 16 leaves as of today, going up to 28 leaves shortly, and again it's a single point of management. We get full Layer 2 and Layer 3 with Virtual Chassis Fabric, so there are no limitations in terms of which type of network access can go where: any port can be turned on with any service, whether it's Layer 2, Layer 3, or multicast, universally within Virtual Chassis Fabric. One of the great things is that we've enhanced the older Virtual Chassis to support ECMP in the new Virtual Chassis Fabric, so now, even with Layer 2, we get full ECMP to take advantage of all the additional bandwidth going up to the spine. Another cool thing is that we support transit FCoE, so you can plug in not only your servers but also your storage and get a converged storage platform with lossless Ethernet, or Data Center Bridging. You still need an FC gateway, which is the QFX3500, but in terms of transit, Virtual Chassis Fabric can perform that function. We also have ISSU built into Virtual Chassis Fabric, because we leverage the virtualization capabilities of the new QFX5100: the switch boots Linux with KVM and runs an instance of Junos in a VM. As you upgrade the system with ISSU, we spawn another VM inside each and every QFX5100 and perform the ISSU with graceful routing engine switchover, nonstop routing, and nonstop bridging, upgrading one routing engine at a time. When it's all complete, the net result is that you've upgraded the software on all the different switches in the Virtual Chassis Fabric with no downtime in terms of traffic loss in the data plane.
Now, this is also plug-and-play provisioning. There's a mode called auto-provisioned, and what this allows you to do is that as you grow your network, you can take a new switch straight from the box, cable it up, power it on, and it'll use a combination of Juniper protocols and open protocols to identify the switch, discover it, push the configuration and the right software version down, and bring it into the Virtual Chassis Fabric automatically. So it's a true plug-and-play Ethernet fabric. And as your customers look forward to moving toward an overlay data center with VMware NSX or Juniper Contrail, we do support VXLAN L2 gateway as part of Virtual Chassis Fabric. When you need to plug a bare metal server into an overlay architecture, we can do that with Virtual Chassis Fabric, and the real benefit is that, not only can we dual-home a bare metal server into the Virtual Chassis Fabric, we can go beyond that: if you want to multi-home to four switches, or eight switches, or even more via LACP, we can do that. The other big benefit is that, as you span a bare metal server across the different leaves via LACP, we also have the distributed VTEP, or Virtual Tunnel Endpoint, performing the Ethernet-to-VXLAN translation at every point in the Virtual Chassis Fabric. You basically configure it once, because again, it's a single point of management, and that logic gets pushed down to every leaf and every spine. So it's truly plug and play, a single point of management, and it supports multi-homing in an overlay architecture, which you really just can't get with a bunch of individual switches.
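For the controller-less case, the VLAN-to-VNI mapping for a VXLAN L2 gateway can be sketched roughly as below; the VLAN name, VNI, multicast group, and loopback address are illustrative, an NSX- or Contrail-managed gateway is provisioned by the controller instead, and hardware and release support for this hierarchy varies:

```
vlans {
    v100 {
        vlan-id 100;
        vxlan {
            vni 100100;                     /* VNI the bare metal VLAN maps into */
            multicast-group 239.1.1.1;      /* PIM-signaled group for BUM traffic */
        }
    }
}
switch-options {
    vtep-source-interface lo0.0;            /* anchors the VTEP to the loopback */
}
```

Because the fabric is one logical switch, this single mapping makes every member a consistent part of the distributed VTEP.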
Server and Storage Connectivity
Now, let’s take a look at the connectivity options. I mentioned that yes, you can multi-home into the Virtual Chassis Fabric. What I’m showing here is the Virtual Chassis Fabric on the left: I have four spines, I’m showing four leaves, which are the QFX5100s, and I have a server multi-homed into the four leaves using LACP, as well as my storage multi-homed into the four leaves. And again, because this is a single point of management with a single control plane, I have no issue spanning this number of switches via LACP, because it’s logically one switch. Now, what’s really interesting is that we’re not limited to just the leaves; we can actually plug edge routers, firewalls, load balancers, or even servers directly into the spine as well, because we have this concept of a universal port in Virtual Chassis Fabric. That’s actually a bit different from QFabric, where you have dedicated nodes that perform different functions, such as routing vs. switching; that limitation is gone with Virtual Chassis Fabric, so again, any port can be used for any function, no limitation on that. Now, with the addition of the new switches in the QFX5100 product line, we give you the flexibility of any Ethernet media, whether it’s 10Meg, 100Meg, or 1G, copper or fiber. So for example, if you need a 10Meg/100Meg/1G solution, you can bring in the EX4300 to get that connectivity. If you need a 10G copper or fiber solution, you can bring in the QFX5100-48T or -48S. So whether it’s copper, fiber, 1G, or 10G, we don’t care; we’re going to give you any-to-any connectivity with a Virtual Chassis Fabric. It’s a single architecture regardless of what type of connectivity you need. We also mentioned that we have in-service software upgrade (ISSU), and it’s N-way multi-homing: instead of a pair of TORs it’s going to be a set of TORs, and everything is active-active.
There are no standby nodes in this architecture, and by design, traffic is going to be forwarded across every single link from the server going to the TORs, and the TORs going to the spine, everything is active.
Now, we did mention that we have FCoE transit, but in addition, we can leverage those same data center bridging and lossless Ethernet protocols to support other storage protocols as well, such as iSCSI, NFS, and CIFS. Again, we mentioned that we have hardware support for SDN, so we have the built-in VTEPs with OVSDB, as well as EVPN later on, with the RT2 release here in the third trimester.
1, 10, 40 GE – All In One Fabric
So again, I really want to reiterate: it’s one fabric, and it gives you a lot of different connectivity options. Here on the left, what I’m showing is a 10G POD where the spine is the QFX5100-24Q, and you’ve got a bunch of different options for the leaves: you can do a QFX5100-48S, a QFX5100-24Q, or you can go down to the older QFX3500 and QFX3600 to get that 10G connectivity. In the middle, if you want a combination of 1G, 10G, and 40G, you can use the same components and add the EX4300. Now, if you just want a 1G POD, on the right, we can use the QFX5100-48S as the spine and the EX4300 as the leaf. So we give you a lot of different options in this new architecture to get the right type of connectivity required for your customer.
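The media-to-platform mapping above is simple enough to capture in a lookup table. Here is a tiny illustrative sketch; the model names come from the talk, but the helper itself (names, structure) is purely hypothetical:

```python
# Hypothetical helper summarizing the access-layer choices described in the
# talk. The media-to-model mapping is from the presentation; the function is
# illustrative only.
ACCESS_OPTIONS = {
    "10/100M/1G copper": "EX4300",
    "10G copper":        "QFX5100-48T",
    "10G fiber":         "QFX5100-48S",
    "40G fiber":         "QFX5100-24Q",
}

def pick_leaf(media: str) -> str:
    """Return the switch model that provides the requested access media."""
    return ACCESS_OPTIONS[media]

print(pick_leaf("10G fiber"))  # QFX5100-48S
```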
2 or 4 Spine Node Deployments
Now, we also offer not only a four-spine but a two-spine deployment as well, so you’ve got the option of either two or four spines. Obviously, if you have two spines, you can actually squeeze out a few more leaves, because there’s an upper limit right now of 20 switches in total. Traditionally that’s going to be four spines and 16 leaves, for a total of 20 switches; with two spines, you can go up to 18 leaves, so you get two extra racks of connectivity. Of course, we already spoke about the four spines, and that’s what I’m showing here on the right, as opposed to two spines on the left. The drawback is that if you do go to an architecture with two spines, you’re going to double the amount of oversubscription: it’s going to be 6:1, as opposed to the more standard 3:1 oversubscription you get with four spines. So it’s really a balance of how many racks you need versus what level of oversubscription you want in your network.
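The 3:1 vs. 6:1 numbers above fall out of simple arithmetic. A minimal sketch, assuming QFX5100-48S leaves with 48 x 10G server-facing ports and one 40G uplink per spine (an assumption consistent with the figures in the talk, not stated explicitly):

```python
# Sketch of the oversubscription trade-off between 2-spine and 4-spine VCF
# deployments. Assumes each leaf has 48 x 10G downlinks and one 40G uplink
# to each spine; these port counts are assumptions matching the 3:1 / 6:1
# figures in the talk.
def oversubscription(downlink_gbps: float, uplink_gbps: float) -> float:
    """Ratio of server-facing bandwidth to fabric-facing bandwidth."""
    return downlink_gbps / uplink_gbps

def leaf_ratio(num_spines: int, downlinks: int = 48,
               downlink_speed: int = 10, uplink_speed: int = 40) -> float:
    # One uplink per spine, so uplink capacity scales with the spine count.
    return oversubscription(downlinks * downlink_speed,
                            num_spines * uplink_speed)

print(leaf_ratio(4))  # 3.0 -> the 3:1 ratio with four spines
print(leaf_ratio(2))  # 6.0 -> the 6:1 ratio with two spines
```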
VCF Integrated Control Plane
So let’s talk about how we integrate the control plane into Virtual Chassis Fabric, because traditionally with QFabric we have an external control plane that requires some additional switches, some directors, some cabling, and things like that. With Virtual Chassis Fabric, what we’ve done is put the control plane directly inside the spine, so you have a master and a backup routing engine in the spine itself. And we get this really nice concept of an in-line control plane, which rides across the 40G fabric links, as opposed to having an out-of-band control plane and dealing with external switches, directors, etc. So it’s more user friendly, requires less power, less space, and less cabling, and we have enough internal queues on these links to handle the in-line control plane and still have enough user-accessible queues for FCoE, network control, and other types of traffic as well.
So again, at a high level there are going to be two routing engines, a master and a backup, and we have the in-band control plane running the same protocols we know from the Virtual Chassis days: the VCCP daemon, or vccpd. It performs automatic fabric discovery to learn what the topology looks like, and it gives us a loop-free fabric because, number one, we know what the topology looks like, and we can put a time-to-live in the fabric header.
VCF Integrated Data Plane
Now, let’s take a look at the data plane. We have intelligent spines and leaves, and as we learn routes and MAC addresses in the control plane, we push that state down to every other switch in the Virtual Chassis Fabric, just as it would be pushed out to a line card. From the perspective of Junos and the master routing engine, every other switch in the Virtual Chassis Fabric really is treated as a line card. It’s a true single control plane, with one version of Junos running in the spine and all the other switches appearing as line cards to it. What you get is that, number one, all the links are active-active, and traffic is load-balanced equally across all links. And because we’re using the latest silicon from Broadcom, which is Trident 2, we have about 1.8 microseconds of latency between racks, or between leaves. If you look at the latency within a single rack, it’s going to be about 550-650 nanoseconds, which is very fast.
VCF Deployment Methods
So, again, auto-provision is full plug and play: you configure the spines and you kind of forget it. Pre-provision means you actually specify which switch has which role. And if you want to go beyond that and configure which Virtual Chassis ports go where and how the mastership election happens, you can do non-provisioned mode. So we give you the best of all worlds here.
Smart Trunks
Now, one really great feature of Virtual Chassis Fabric is that we took a lot of time to think about how traffic flows from point A to point B in the network. What I’m showing here is how traffic flows through the switches. L1 is our first leaf, and it connects to two spines, and on the right-hand side we have two leaves, L2 and L3. What I’m going to show is the traffic flow from left to right, and the assumption is that I have 30G of traffic coming in from the left and hitting the first leaf, L1. Because I have these trunks called T1 and T2, which are just pairs of cables, I get an equal split of traffic: 15G going north to S1, and 15G going south to S2. But notice that from S1 there’s only a single link going down to L3, while from S2 we have two links going to L3. So you actually lose some forwarding capacity from the perspective of S1, because it’s no longer able to forward the incoming 15G of traffic down a single 10G link; S1 can only transmit at 10G, since it’s got one link. S2 has two links, so it can easily transmit its original 15G. What you’re left with is 25G of throughput, because there happens to be a link down. So this is the before picture. To solve this problem, we introduced a concept called Smart Trunks. Smart Trunks look at the path end-to-end, figure out where the different hops in the network are and how much bandwidth each link has, and perform an end-to-end calculation of the total amount of bandwidth in the topology. So now my Smart Trunk is a combination of T1 and T2; we’re going to call it T12, just a concatenation of the two.
Now we can see that we have uneven capacity between S1 and S2, because S1 has one 10G link but S2 has two 10G links going to L3, so what we can do is unequal-cost load balancing on L1. L1 can send 10G of traffic to S1, because it already knows in advance that S1 has one 10G link going to L3, and it sends the remaining 20G of traffic to S2, because it already knows that S2 can handle 20G over its two 10G links. The result is that you transmit 30G on L1 and you receive the full 30G on L3 using Smart Trunks.
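The Smart Trunks calculation above can be sketched as a weighted split: each spine receives a share of the ingress traffic proportional to its end-to-end capacity toward the egress leaf. A minimal illustration (the names and the exact algorithm here are my assumptions; the real logic runs inside the fabric control plane):

```python
# Illustrative sketch of the Smart Trunks idea: instead of splitting traffic
# evenly across T1 and T2, the ingress leaf weights each spine by that
# spine's onward bandwidth to the destination leaf.
def smart_trunk_split(ingress_gbps: float, downstream_gbps: dict) -> dict:
    """Split ingress traffic per spine, weighted by end-to-end capacity."""
    total = sum(downstream_gbps.values())
    return {spine: ingress_gbps * capacity / total
            for spine, capacity in downstream_gbps.items()}

# S1 has one 10G link to L3, S2 has two 10G links:
split = smart_trunk_split(30, {"S1": 10, "S2": 20})
print(split)  # {'S1': 10.0, 'S2': 20.0} -> the full 30G is delivered
```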
Competition – Cisco
Now, another competitor we also hit is the Cisco FEX, specifically the Nexus 6K + 2K, and really on every single metric we beat the competition. When you look at the number of 10G ports, the latency, even the power and the segmentation, no matter how you look at it, we just absolutely beat the Cisco FEX with the 6K and 2K when you compare it to a Juniper Virtual Chassis Fabric. Really, the key thing here, which I think is great, is the ISSU support. So yes, there are more ports, there’s lower latency, all that great stuff, but I think ISSU is key, along with being able to support overlay networks: we can do that on Virtual Chassis Fabric, but the Cisco FEX can’t.
Competition – Arista
Now let’s compare the features as well, and the first thing to look at is ISSU. We can do this top to bottom, all the way from the spine down to the access layer; we support ISSU. If you take a look at the VXLAN L2 gateway, we can do that on both the spine and the leaf, whereas Arista can only do it on the leaves, and it’s the exact same story on the control plane side as well. If you need a single point of management, we have Network Director to take care of that for you, and if you need to re-use these same SKUs in other topologies or architectures, Arista is only going to give you MLAG, whereas Juniper supports Virtual Chassis, Virtual Chassis Fabric, and QFabric, so we give you more options in case you need to re-use these switches somewhere else in your network. So in summary, it’s ISSU, Network Director, support for overlay networking with VMware NSX and Juniper Contrail, and different architecture choices with Juniper.
L3 IP Fabric
Hey guys, welcome to our next session on Layer 3 IP Fabrics. What we’re going to be talking about today is number one, what is an IP Fabric and why do we need it, and what are some of the design options and the architecture of IP Fabrics?
Spine and Leaf
So at a high level, if we apply this concept to networking, what we have is a spine-and-leaf architecture, or a 3-stage Clos; we can use these terms interchangeably, as they mean the exact same thing. We have our spines up at the top and our leaves down at the bottom, and the benefit is that we can scale the number of leaves to get our scale. As traffic comes into the ingress leaf, it’s fanned out to the spine, which is the middle tier, and goes back out the other leaf, which is the egress. And again, you can scale horizontally with this method to increase the scale of your network.
Clos Requirements
Now, let’s look at the requirements for building a Clos fabric, or an IP Fabric. Obviously, you’re going to need some sort of routing protocol, and there’s some debate about which one is best, so I’ve compiled a list of requirements at a very basic level. You obviously need to advertise prefixes throughout the IP Fabric. You’re going to want some level of scale, and it’s going to be nice to have the ability to do some traffic engineering as your network gets larger, and to tag some of the traffic for troubleshooting. One of the most interesting requirements is multi-vendor stability, because typically, as you build out a large IP Fabric, you start at point A and slowly evolve and scale your network over time, and you’re typically not going to stick with the same vendor over a period of years. So as you make investments in an IP Fabric, you want to make sure that whatever design you come up with is very stable across a set of vendors. Now, the protocol options are OSPF, IS-IS, and BGP. Obviously OSPF, IS-IS, and BGP can all advertise prefixes, no problem there, but it gets interesting when you look at scale: with OSPF and IS-IS, it varies with the kind of CPU you have in the control plane and how many routers you can put into an area, and you start getting into area design, so it’s limited and a bit tricky. And when you look at traffic engineering, you can only do so much with the link-state protocols, and the traffic tagging is very minimal: just a single tag on a prefix, and you can’t really go beyond that.
Of course, with BGP you get a lot of scale, because everything is based on AS numbers. For traffic engineering, you’ve got the entire BGP toolset at your fingertips, whether it’s AS-path padding, local preference, MED values, what have you. And for traffic tagging, we can take full advantage of BGP communities and extended communities to tag prefixes with troubleshooting or origin information, just to help with the operational simplicity of your IP Fabric. And of course, one of the best examples in the world of multi-vendor stability is the Internet, which runs BGP. For those reasons, my recommendation is to use BGP as your control plane when building an IP Fabric.
Multi-Stage Clos BGP Overview
Now, let’s take a look at the control plane in some further depth. We look at the spine and leaf and how they’re connected together, and one option is to run IBGP with BFD for sub-second failure detection and put a route reflector on the spine. We actually encapsulate both the spine and the leaf together into the same AS number, and the combination of the two is our vSpine. Then we bring in the access tier and run EBGP, again with BFD for that sub-second convergence, and put it into a different AS number. So this is one example of how we can design this from the perspective of BGP. Of course, we can also put each switch (spine, leaf, and access) into its own AS number and just run EBGP everywhere; that’s a valid option, and we support it as well. So whether you want to do route reflection with add-path and IBGP, or just regular EBGP, we support both options with the QFX5100.
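For the "EBGP everywhere" option mentioned above, the bookkeeping amounts to assigning one AS per switch and peering every leaf with every spine. A hypothetical sketch (the device names and the private AS range starting at 64512 are my assumptions, not from the talk):

```python
# Hypothetical sketch of an "EBGP everywhere" addressing plan for a
# spine-and-leaf IP fabric: one private AS per switch, full spine-leaf
# peering. Names and AS numbers are illustrative assumptions.
def ebgp_as_plan(num_spines: int, num_leaves: int, base_as: int = 64512):
    """Assign one private AS per switch and enumerate spine-leaf peerings."""
    spines = {f"spine{i}": base_as + i for i in range(num_spines)}
    leaves = {f"leaf{i}": base_as + num_spines + i for i in range(num_leaves)}
    # Every leaf peers with every spine (but spines do not peer with spines).
    peerings = [(s, l) for s in spines for l in leaves]
    return spines, leaves, peerings

spines, leaves, peerings = ebgp_as_plan(4, 16)
print(len(peerings))  # 64 spine-leaf EBGP sessions in a 4-spine, 16-leaf fabric
```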
Multi-Stage Clos BGP Detail
So again, we go back to the detail and take a look at how BGP fits into the overall topology and architecture of a multi-stage Clos network. We have a vSpine running BGP route reflectors in the spine, the vSpine as a whole runs IBGP, and the access tier peers with each vSpine as a different AS number. And of course, each access switch will be in its own AS number as well.
MX: Universal SDN Gateway
So this is a nice pictorial view of what we covered in the last slide. What it shows is the right-most POD: a legacy VLAN-based POD, with a bunch of VLANs and switches put together in a very legacy type of environment.
The middle one is our Contrail-based POD, where our Contrail Controller is working with the MX.
And then, on the left-most side, is a VMware NSX-based POD, which is controlled by VMware NSX.
And the MX is essentially the Universal SDN Gateway. Why is it universal? Because it is connecting all these different technologies. The MX is a gateway that interconnects all these different SDN technologies and gives them one common WAN gateway into the IP/MPLS network.
OK, the critical part here is that we are standards-based, we are multi-vendor, and we are highly scalable. That is our biggest value proposition.
And then, on top of that, there is Juniper being the industry-leading WAN gateway, and the MX being a very flexible, future-proof platform; those are the added benefits.
Building upon the simple, open, and agile SDN solution we announced in September, we are announcing support for the VMware ESXi hypervisor.
The MetaFabric architecture is the ideal networking foundation for the emerging ecosystem of SDN controllers, protocols, and orchestration platforms.
This includes broad investment in technology ecosystem partnerships (including VMware, OpenFlow-based solutions, CloudStack, and others) as well as continued enhancements in our SDN portfolio with Contrail.
This strategy enables customers to simplify their pathways and choose the SDN strategy that best suits them.
Already, a simple, open and agile virtual network overlay solution:
Simple integration with existing physical network for investment protection
Open interfaces and OpenContrail source code enable customization
Agile connectivity for private, hybrid & public clouds
In the past month since we launched Contrail and announced OpenContrail, we’ve been able to deliver further on this promise and present customers with an even broader array of choices: Juniper Networks Contrail will support VMware ESX in the first half of 2014.
What is VxLAN
What is VxLAN? It stands for Virtual eXtensible Local Area Network. It provides L2 connections across an IP network and allows for a flat data center design. Just as a GRE tunnel carries IP over IP, a VxLAN tunnel carries MAC over UDP. So VxLAN is another tunneling protocol: an encapsulation protocol that helps to bridge two different hosts or two different data centers.
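To make the "MAC over UDP" point concrete, here is a minimal sketch of the 8-byte VXLAN header defined in RFC 7348: a flags byte with the I bit set, a 24-bit VNI, and reserved fields. This header is prepended to the original Ethernet frame and carried over UDP (destination port 4789) inside IP:

```python
import struct

# Minimal sketch of the VXLAN header per RFC 7348. The original Ethernet
# frame follows this 8-byte header inside a UDP datagram.
def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags (I bit set) + 24-bit VNI."""
    flags = 0x08000000          # first byte 0x08: 'VNI present' flag
    # The VNI sits in the top three bytes of the second 32-bit word;
    # the lowest byte is reserved, hence the shift by 8.
    return struct.pack("!II", flags, vni << 8)

hdr = vxlan_header(vni=5000)
print(len(hdr))    # 8 -> fixed header size
print(hdr.hex())   # 0800000000138800
```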
What is Ethernet VPN (EVPN)
Sachin Natu
So now let’s review what Ethernet VPN, or EVPN, is.
So, we will do the technology review first. What is it, really? It’s a new standards-based protocol for connecting L2 domains across different geographies. OK? And Juniper is actually leading this multi-vendor, industry-wide initiative.
So, look at this: Juniper proposed a solution about 3-4 years back called MAC VPNs. The idea was good, but people essentially did not adopt it until it became a standard across vendors. And now all the vendors (Cisco, Alcatel-Lucent, and others), along with many of our leading customers like AT&T, Bloomberg, and Verizon, are actively supporting this technology, and it is one of the hottest technologies in the IETF L2VPN working group.
And so, I mean, Juniper is leading this multi-vendor, industry-wide initiative. And what is it for? It essentially improves network efficiency. Exactly how? We’re going to go into the details later.
What we have learned over the last 10 years of operating Ethernet and VPLS, and whatever the problems of VPLS were, are fixed in this new technology. Additionally, some of the lessons we have learned from operating L3VPN are included in this new technology, and it essentially improves network efficiency. This technology is also ideally suited for data center interconnect. OK?
So data center interconnect has typically been [indiscernible] at the IP or VPLS level. But at those levels, you cannot really provide a solution for workload mobility from data center 1 to data center 2. Right?
So, in this new technology, some of the mechanics of that mobility are designed into the protocol itself. Additionally, this technology also becomes the control plane for a data center that has a pure IP fabric but allows L2 multi-tenancy. In this use case, think back to what you just learned in the last section on VxLAN: VxLAN becomes the data center transport technology, and EVPN becomes the data center control plane technology. So these are some of the things this new technology does.
Why EVPN
Now let’s see where this technology is applicable, OK? So, where does EVPN make sense?
There are clear-cut, predefined use cases where it makes sense. One is data center interconnect, where you’re allowing an L2 stretch between data centers across the WAN.
Then, within a data center (an IP or IP/MPLS data center), you’re carrying L2 tenants and allowing multi-tenancy inside the data center. And, as we talked about, this is where VxLAN or MPLS over GRE becomes the transport, and EVPN becomes the control plane technology.
And the third one is as a next-generation L2VPN technology that replaces VPLS. The service offering is E-LAN or E-LINE, as people offer today. Today we offer everything using VPLS, so the service remains the same, but now you’re using a much more efficient, mature technology called EVPN.
Now, which customers would be interested in an EVPN solution? It’s the same classes of customer who are naturally looking at these use cases: the customers that offer services matching these use cases are our natural customers. The first is data center builders, comprising service providers, enterprises, and content providers. Whoever is building a data center and wants to connect L2 or L3 tenants, connect them together or to the Internet, or needs a data center WAN gateway, is interested in this technology.
The second is service providers: again, like I said, people like Verizon or Telefonica or anybody else offering E-LAN or E-LINE services, who are now going to use EVPN on the PE as the transport technology for connecting L2 business services.
Use Case 1: EVPN For Data Center Interconnect
Well, it essentially helps to carve the [indiscernible] view into the various use cases where EVPN makes sense.
And the first use case is EVPN for Data Center Interconnect. Here, the benefit that EVPN offers is seamless interconnectivity between different data centers, basically adding an L2 stretch between them. An interesting thing to notice here is that in data center 1 you can have any overlay technology, say VxLAN, and in data center 2 you can have any other overlay technology, or even a simple legacy L2 cloud. EVPN, there in the middle, can offer a seamless interconnection.
Then seamless workload migration—particularly offering VM mobility across data centers via technologies like MAC mobility or VMTO, which are built into the protocol.
And then it’s obviously widely applicable, since it interconnects different types of native [indiscernible] overlay technologies. So that is use case number 1, where you are connecting data centers.
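The MAC mobility mechanism mentioned in this use case (standardized as part of EVPN in RFC 7432) can be sketched simply: each time a MAC address moves, the new MAC advertisement carries a higher sequence number, and remote PEs prefer the advertisement with the highest sequence. A hypothetical illustration of that selection rule, not the actual BGP route format:

```python
# Hedged sketch of EVPN MAC mobility (RFC 7432): when a VM moves between
# data centers, the PE at the new location re-advertises its MAC route with
# an incremented sequence number, and everyone converges on the highest one.
# Field names here are illustrative assumptions.
def best_mac_route(routes: list) -> dict:
    """Pick the winning MAC advertisement by highest mobility sequence number."""
    return max(routes, key=lambda r: r["seq"])

routes = [
    {"mac": "00:11:22:33:44:55", "nexthop": "dc1-pe", "seq": 3},
    {"mac": "00:11:22:33:44:55", "nexthop": "dc2-pe", "seq": 4},  # VM just moved
]
print(best_mac_route(routes)["nexthop"])  # dc2-pe -> traffic follows the VM
```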
Use case – highlighting security portfolio for rapid threat ID and prevention in the data center.
Junos Space Network Director enables smart, comprehensive and automated network management through a single pane of glass. Network Director enables network administrators to visualize, analyze and control their entire data center: physical and virtual, single and multiple sites. Three key elements:
Visualize:
Complete visualization of virtual and physical network along with graphical Virtual Machine tracing
Analyze:
Performance Analyzer – Provides real-time and trended monitoring of VMs, users, ports.
VM Analyzer – Real time physical and virtual topology view with vMotion activity tracking.
Fabric Analyzer – Monitor and analyze the health of any Juniper fabric system.
Control:
Automated provisioning with zero-touch provisioning – Simplifies network deployment without user intervention, reducing configuration errors caused by human error.
Bulk provisioning – Accelerates application delivery while protecting against configuration errors with profile-based, pre-validated configuration.
Orchestration via Network Director APIs – Open RESTful APIs that provide complete service abstraction (not just device or network abstraction) while integrating with third-party orchestration tools (like OpenStack and CloudStack) for accelerated service delivery.
VMware vCenter Integration – Physical and virtual network orchestration based on vMotion activity.
End – Thank You
And that concludes our IP Fabric discussion. Thank you.