14. Bandwidth and QoE management
Gold Subscriber (20 Mbps)
Silver Subscriber (10 Mbps)
Bronze Subscriber (5 Mbps)
PGW/GGSN VIPRION
Even if the subscriber is entitled to more bandwidth by the subscriber bandwidth policy, their P2P traffic is reduced to the configured value (512 kbps)
PER-SUBSCRIBER BANDWIDTH CONTROL
PER-SUBSCRIBER PER APPLICATION BANDWIDTH CONTROL
Gold Subscr total (20 Mbps)
Gold Subscr p2p (512 kbps)
PGW/GGSN VIPRION
PCRF
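The per-subscriber and per-application caps above can be modeled as nested token-bucket policers, a common way to implement such rate limits. The classes, rates, and `admit` helper below are an illustrative sketch under that assumption, not F5's implementation:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refills rate_bps tokens (bits) per second."""
    def __init__(self, rate_bps, burst_bits=None):
        self.rate = rate_bps
        self.capacity = burst_bits or rate_bps  # allow roughly 1 s of burst
        self.tokens = self.capacity
        self.last = time.monotonic()

    def allow(self, packet_bits):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False

# Gold subscriber: 20 Mbps total, with P2P capped at 512 kbps
gold_total = TokenBucket(20_000_000)
gold_p2p = TokenBucket(512_000)

def admit(packet_bits, app):
    # P2P traffic must pass both the application cap and the subscriber cap
    if app == "p2p" and not gold_p2p.allow(packet_bits):
        return False
    return gold_total.allow(packet_bits)
```

Draining the P2P bucket blocks further P2P packets while other traffic still passes under the 20 Mbps subscriber cap.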
15. DPI inspection for OTT Identification & Monetization
OTT MONETIZATION & FLEXIBLE CHARGING
Gold Subscr total (acct only)
OTT Service (acct + DSCP mark) PCRF
PGW/GGSN VIPRION
• Subscription models / bundles for OTT or specialized service
• Bundled into subscription for a lower fee
• OTT traffic excluded from volume bundle
• OTT traffic marked/tagged for differential treatment at radio layer
SPECIALIZED
SERVICE
(MNO BRAND)
F5’s Intelligent Services Platform can be deployed across an array of hardware or software choices designed to address different application needs, deployment scenarios, and resource requirements in the data center.
Hardware designed specifically for application delivery
Industry’s best performance—up to 20M L7 RPS and 640 Gbps throughput
Ultra reliable, long life components
Compliance options—FIPS, Common Criteria, NEBS
Always-on management
Hardware SSL offload and compression
This RQ implements enhancements to the PEM URL filtering feature developed in the Vancouver release. The key enhancement is the addition of a custom URL database that the operator can use to add custom URLs and categories.
The feature also provides the ability to leverage the Server Name Indication (SNI) in SSL traffic for URL categorization; this capability is available via an iRule hook.
Categorize URLs based on the server name indication information in the SSL connection.
Uses the rewrite engine at the backend.
Compressed content handling.
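As a rough illustration of SNI-based categorization against a custom URL database: for HTTPS traffic only the SNI hostname is visible, so the lookup is keyed on the hostname rather than a full URL. The database contents and suffix-matching policy here are hypothetical, not the PEM implementation:

```python
# Hypothetical custom URL database: maps domain suffixes to categories.
CUSTOM_DB = {
    "video.example.com": "streaming",
    "example.com": "general",
    "cdn.ott-service.net": "ott",
}

def categorize_sni(sni):
    """Longest-suffix match of the TLS SNI hostname against the custom DB."""
    labels = sni.lower().split(".")
    # Try the most specific suffix first: a.b.c, then b.c, then c
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in CUSTOM_DB:
            return CUSTOM_DB[suffix]
    return "uncategorized"
```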
Local DNS, in three flavors, is used by the SP to service end subscribers when their handsets or devices perform a DNS query. The LDNS is responsible for responding with an answer to a domain query. In resolver mode, F5 BIG-IP can traverse the internet's root name servers to retrieve the response and then cache it once found. The F5 implementation is very high performance, meaning that one F5 BIG-IP can replace tens or hundreds of traditional DNS servers. However, some customers may have existing infrastructure they wish to keep, at least initially. In that case F5 can load balance to those servers or, for even higher performance, act as a transparent cache. This latter solution has almost no operational impact. In all of these scenarios, F5 technology protects the infrastructure through its built-in ICSA-certified firewall.
In authoritative mode, the F5 BIG-IP acts as a nameserver that is authoritative for a given zone (domain). This is used, for example, for sites a customer might host for billing, device activity, etc. It may be combined with GSLB for high availability.
The infrastructure play for F5 DNS is where BIG-IP is used in the core of the network for connection setup and other functions. For example, in HSDPA networks, a connection is made to a packet gateway (GGSN) on behalf of the mobile device when it makes a data connection. The selection of that gateway can be performed intelligently by monitoring the health of the gateways and, through GSLB, responding only with the IP address of a gateway that is able to take the connection. Today this type of function may be performed through a combination of iRules and monitors.
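A toy sketch of the health-aware gateway selection idea: only gateways that pass their health monitor are eligible answers. The gateway table and the random selection policy are hypothetical; in BIG-IP this is done with monitors, GSLB, and iRules rather than Python:

```python
import random

# Hypothetical gateway health table; in production, health monitors
# update this state continuously.
GATEWAYS = {
    "10.0.0.1": {"healthy": True},
    "10.0.0.2": {"healthy": False},  # failed its health monitor
    "10.0.0.3": {"healthy": True},
}

def select_gateway():
    """Answer a gateway DNS query with the address of a healthy gateway only."""
    healthy = [ip for ip, state in GATEWAYS.items() if state["healthy"]]
    if not healthy:
        raise RuntimeError("no gateway available to take the connection")
    return random.choice(healthy)
```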
Clients now reach the DNS server only when the cached record for their request has expired or when the name has never been requested before. A high-performance cache allows existing DNS resolver servers to scale further because they receive less load. This is a low-impact installation scenario: the client and server are unchanged in behavior; only the scalability of DNS increases. And of course, this is combined with load balancing to the DNS servers and a built-in ICSA-certified firewall, consolidating resources while increasing security.
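The transparent-cache behavior can be sketched as a TTL-bounded cache in front of an upstream server: the upstream is contacted only on a miss (expired entry or never-seen name). The class and the `upstream` callback are illustrative, not BIG-IP internals:

```python
import time

class DnsCache:
    """TTL-bounded DNS answer cache, as a transparent cache would maintain."""
    def __init__(self):
        self._store = {}  # name -> (answer, expiry)

    def get(self, name):
        entry = self._store.get(name)
        if entry and entry[1] > time.monotonic():
            return entry[0]          # hit: upstream server is never contacted
        return None                  # miss: expired or never requested

    def put(self, name, answer, ttl):
        self._store[name] = (answer, time.monotonic() + ttl)

def resolve(cache, name, upstream):
    """Answer from cache when possible; otherwise ask upstream and cache it."""
    answer = cache.get(name)
    if answer is not None:
        return answer, "cache"
    answer, ttl = upstream(name)
    cache.put(name, answer, ttl)
    return answer, "upstream"
```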
This last step for LDNS eliminates the need for the DNS servers altogether: F5 provides full resolving out to the internet for all DNS requests, with high performance in combination with caching. This step consolidates infrastructure further by building all of the components into one box. Furthermore, high availability for this type of installation (or any of the footprints mentioned so far) can be achieved by deploying F5 DNS with HA enabled in a device pair.
To provide the best subscriber QoE, your ideal TCP stack would do a few things for you.
It would promote high goodput so that you are always maximizing the amount of data being pushed through your network that is relevant to your subscriber. The higher your goodput, the faster your subscribers get the information they want.
It would minimize buffer bloat so you can reduce congestion before it even starts. Buffer bloat occurs when far more traffic is queued than the network can handle, increasing the delay of data sent to your subscribers. Minimizing buffer bloat means less delay, which means faster performance for your subscribers.
And finally, it would keep fairness between your flows so that no single flow gets dropped. A dropped flow leads to the lowest form of subscriber QoE, so keeping all flows alive is the best scenario.
These three characteristics (high goodput, minimal buffer bloat, and flow fairness) would all work together to optimize your network and your subscribers' QoE.
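The goodput notion can be made concrete: goodput counts only bytes that are new and useful to the subscriber, while raw throughput also counts retransmissions and protocol overhead. A minimal sketch with hypothetical transfer numbers:

```python
def goodput_bps(payload_bytes, retransmitted_bytes, seconds):
    """Goodput: useful application bytes delivered per second (in bits/s),
    excluding retransmitted data the subscriber already received."""
    return (payload_bytes - retransmitted_bytes) * 8 / seconds

def throughput_bps(payload_bytes, header_bytes, seconds):
    """Raw throughput: every bit on the wire, protocol overhead included."""
    return (payload_bytes + header_bytes) * 8 / seconds

# Hypothetical transfer: 10 MB of payload in 10 s, of which 500 KB was
# retransmitted, plus 400 KB of TCP/IP header overhead.
g = goodput_bps(10_000_000, 500_000, 10)     # 7.6 Mbps of useful data
t = throughput_bps(10_000_000, 400_000, 10)  # 8.32 Mbps on the wire
```

The gap between the two numbers is bandwidth spent on overhead and retransmissions rather than on the subscriber's data.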
Woodside is an F5 proprietary congestion control routine.
Results are for preliminary code on simulated networks.
3G test used 4 Mbps bandwidth, 1% loss, 400 ms RTT, 128 KB router buffer
LTE test used 45 Mbps bandwidth, 0.1% loss, 20 ms RTT, 8 MB router buffer
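For context on those buffer sizes, each link's bandwidth-delay product (BDP) gives the amount of data the path itself can hold in flight; a router buffer far larger than the BDP is where buffer bloat can accumulate. A quick calculation from the test parameters above:

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes in flight on a fully utilized path."""
    return bandwidth_bps * rtt_s / 8

# Test parameters from the simulated networks above
bdp_3g = bdp_bytes(4_000_000, 0.400)    # 200,000 bytes (~195 KiB)
bdp_lte = bdp_bytes(45_000_000, 0.020)  # 112,500 bytes (~110 KiB)

# The 3G buffer (128 KiB) is below its BDP, while the LTE buffer (8 MiB)
# is roughly 75x its BDP, so an aggressive sender can queue far more data
# than the path needs -- the buffer-bloat condition.
lte_buffer_over_bdp = (8 * 1024 * 1024) / bdp_lte
```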