Slide 5
Centralized Computing
• What is Centralized Computing?
• Gathers data (push/pull) from remote sensors
• Analyzes the data
• Feeds the remote actuators to take action
• All of the above is done in one place.
• Challenges
• Real-time feedback (e.g., connected vehicles, smart cities)
• High latency
• Network bandwidth (e.g., surveillance)
Distributed Intelligence
Slide 6
IoT Networks – Distributed Computing/Intelligence
[Diagram: several Edge Clouds (Fog) perform near-real-time analytics and transactional/contextual analytics close to the devices; the Private/Public Cloud performs historical and trending analytics, plus Big Data and long-term data storage.]
Slide 7
Distributed Intelligence
• What is Distributed Intelligence?
• Real-time analytics is performed closer to the sensors/actuators using edge computing.
• Cloud computing continues to be used for transactional and business analytics.
• Advantages
• Scalability – supports millions of endpoints
• Minimal use of network bandwidth
• Low latency and jitter for critical closed-loop feedback applications
• Continuous operation even during network connectivity outages
Distributed computing scales well. How is this achieved?
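The edge/cloud split described above can be sketched in a few lines. A minimal illustration (all names, thresholds, and values here are hypothetical): the latency-critical closed loop runs locally at the fog node, and only a compact summary crosses the WAN to the cloud.

```python
# Sketch of distributed intelligence at an edge (fog) node:
# the latency-critical closed loop runs locally, while only a small
# summary is forwarded to the cloud for historical/trending analytics.
from statistics import mean

THRESHOLD = 80.0  # hypothetical alarm threshold for a sensor reading

def control_loop(readings):
    """Local closed loop: decide an actuation per reading, no cloud round trip."""
    actions = []
    for value in readings:
        if value > THRESHOLD:
            actions.append(("throttle", value))
    return actions

def summarize(readings):
    """Build the small record that is actually sent upstream."""
    return {"count": len(readings),
            "mean": mean(readings),
            "max": max(readings)}

readings = [72.5, 81.3, 79.9, 85.0]   # raw samples stay at the edge
actions = control_loop(readings)      # real-time actuation, done locally
summary = summarize(readings)         # only this crosses the network
```

Instead of shipping four raw samples, one three-field summary goes to the cloud; at millions of endpoints this is where the bandwidth and latency savings come from.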
Slide 8
Welcome to Fog Computing
• Fog Computing
• Edge computing using cloud technologies is fog computing.
• Flexibility of cloud computing
• COTS (Commercial Off-The-Shelf) computing hardware
• Agility – bring up new services easily as virtual machines or containers
• Scalability – add new instances of a service to absorb load
• Multi-tenancy – allow multiple tenants to share common infrastructure
But fog inherits challenges from the cloud and adds new ones…
Slide 9
Virtualization leads to lower performance
• Performance Challenges
• Virtualization layers (e.g., QEMU/KVM and the virtual switch) lead to lower throughput and higher latency.
• Virtual functions not utilizing the hardware accelerators for their functions
• Lack of standards and uniformity
• Lack of real-time extension support in the VMM (Virtual Machine Manager / hypervisor)
• Result
• High jitter
• Higher latency
• Lower throughput
[Diagram: packet processing and traffic flow — NIC → OS/VMM (QEMU/KVM) → software NIC (+TC) → vSwitch (overlay, switching, filtering) → virtual functions (VMs/containers).]
Slide 10
Challenges – Security
(How do tenants – service providers – secure their workloads and data?)
• Providers and Tenants
• Providers: infrastructure owners
• Tenants: service providers for end customers
• Security Challenges
• Trust
• Multiple sites and multiple providers
• How does a tenant ensure that its workloads are secure from rogue provider administrators and other tenants?
• How does a tenant ensure that there are no rogue compute nodes in the provider environment?
• Network Security
• How does a tenant ensure that the traffic sent among its VMs is not snooped by the providers?
• Secure Data Storage
• How do tenants keep their data secure in fog nodes?
• Secure Execution
• How do tenants ensure that data in memory is secured from sophisticated attackers?
• Key Security
• How do tenants ensure that keys are not exposed to other tenants and providers?
Slide 11
Solutions to mitigate security concerns
• Trust
• Keep the workload images encrypted in the fog image repository.
• Root of trust and chain of trust using TPM and TXT technologies
• Remote attestation of nodes, both for compliance and to provide confidence to tenants
• Open-source software (https://01.org/opencit)
• Secure Execution
• Protect the execution algorithms from runtime reverse engineering.
• Software Guard Extensions (https://en.wikipedia.org/wiki/Software_Guard_Extensions)
• Key Security
• Protect keys in both runtime memory and persistent storage.
• Remote key vault for on-demand key management/retrieval
• Use of a TPM to keep keys secure in untrusted environments
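The chain-of-trust mechanism behind TPM-based remote attestation can be sketched briefly. A TPM Platform Configuration Register (PCR) is never written directly: each boot component's measurement is folded in with an "extend" operation, new_PCR = SHA-256(old_PCR || hash(measurement)), so the final value commits to the entire boot sequence. The component names below are illustrative, not a real measured-boot log:

```python
# Sketch of the TPM "extend" operation that underlies root/chain of trust:
# each stage of the boot chain is measured (hashed) into a PCR, so the
# final PCR value attests to every component that ran before it.
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """new_pcr = SHA-256(old_pcr || SHA-256(measurement))"""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

pcr = b"\x00" * 32                       # PCRs start zeroed at power-on
for component in (b"firmware", b"bootloader", b"kernel", b"vmm"):
    pcr = pcr_extend(pcr, component)     # order matters: it is a chain, not a set

# A remote verifier recomputes the chain from known-good measurements and
# compares; any modified, swapped, or reordered component changes the result.
```

This is why attestation detects rogue compute nodes: a node cannot produce the expected PCR value unless it actually booted the expected software in the expected order.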
Slide 12
Solutions to reduce Virtualization Overheads & Increase Crypto performance
• Software-Optimized Virtual Switch
• OVS-DPDK (https://software.intel.com/en-us/articles/using-open-vswitch-with-dpdk-for-inter-vm-nfv-applications)
• Part of the OVS tree
• Improves virtual switch performance by 4 to 10x
• Use of hardware accelerators by virtual appliances to offload algorithms (secure communications and secure data storage)
• OPNFV DPACC – standardization of accelerator APIs
• AES-NI and Intel AVX instruction-level acceleration (https://software.intel.com/en-us/articles/intel-advanced-encryption-standard-instructions-aes-ni) – part of Linux distributions
• QuickAssist Technology to offload compression and public- and symmetric-key cryptography (https://01.org/packet-processing/intel%C2%AE-quickassist-technology-drivers-and-patches)
• FPGA for virtual switch offload and packet processing offload (https://newsroom.intel.com/press-kits/intel-acquisition-of-altera/)
[Diagram: packet processing and traffic flow — NIC with inline/fast-path accelerators (e.g., FPGA, iNIC) → software NIC (+TC) → vSwitch (overlay, switching, filtering) → virtual appliance, with a lookaside accelerator attached for offload.]
• Deterministic performance: using real-time KVM and Cache Allocation Technology (http://www.intel.com/content/www/us/en/communications/cache-allocation-technology-white-paper.html)
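AES-NI and AVX show up as CPU feature flags, so an orchestrator can check for them before placing a crypto-heavy virtual appliance on a node. A minimal Linux-only sketch (the `aes` and `avx` flag names are real x86 flags; the `/proc/cpuinfo` path is Linux-specific, and the code simply reports no flags elsewhere):

```python
# Sketch: detect instruction-level crypto/SIMD acceleration on Linux by
# parsing the "flags" line of /proc/cpuinfo. On systems without that
# file (non-Linux), an empty set is returned.
def cpu_flags(path="/proc/cpuinfo"):
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    # "flags : fpu vme ... aes avx ..." -> set of flag names
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

flags = cpu_flags()
has_aesni = "aes" in flags   # AES-NI instructions available
has_avx = "avx" in flags     # Intel AVX available
```

Cloud orchestration tools need exactly this kind of awareness (slide 13's point) to schedule workloads onto nodes whose accelerators they can actually use.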
Slide 13
Summary
• Distributed Intelligence is good for
• Scalability, real-time, and low-latency deployments
• Multiple providers and multiple actors lead to new security challenges
• No assumption that all infrastructure providers are trusted
• Encrypt everything (communications, storage, and even in-memory data)
• Keep the keys secure (from snapshots and from sophisticated attackers)
• Fog Computing – success factors
• Borrow the security technologies used in cloud computing.
• Isolate compute, bandwidth, and memory for multi-tenancy.
• Reduce the performance impact of encrypting everything by utilizing all available accelerators.
• Increase the awareness of accelerators in cloud orchestration tools.
• Achieve performance determinism using new technologies such as Cache Allocation Technology.
Editor's Notes
Place at the back of the deck.
Scalability. Large deployments of smart metering systems involve millions of endpoints, which makes the use of intelligent concentrators mandatory. A digital factory represents a few tens of thousands of sensors and actuators. A smart city with parking lot management, road traffic control, and environmental monitoring over a very large territory brings its own deployment complexity. The centralized approach is not sufficient to handle this increasing volume of end devices and their geographical specificities. Data is most relevant, or safest, if it is processed close to the edge of the network.
Network resource preservation. The volume of data generated by all types of sensors has a direct impact on the network bandwidth needed to carry this newly created information (we may call it "little data", as opposed to "Big Data"). Some remote locations are only connected over wired or wireless links with limited bandwidth (2G/3G/4G, ADSL, or satellite). Distributed processing helps relieve the constraints on the network by sending to the cloud or operations center only the necessary information, and by doing most of the data processing, such as video analytics, at the remote site much closer to the data's source.
Closed-loop control. Low latency is required to create stable behavior in real-time systems. The large delays found in many multi-hop networks and overloaded cloud server farms prove to be unacceptable, and the local, high-performance nature of distributed intelligence can minimize latency and timing jitter. Many critical applications such as industrial automation, in-flight control systems, electrical tele-protection systems, medical applications, or in-vehicle networking have very tight requirements in terms of latency and jitter. Only local processing can satisfy the most stringent requirements. Very often, this is combined with advanced networking technologies like deterministic networking, where delivery of packets within a bounded time is guaranteed.
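The latency and jitter this note refers to can be quantified simply. One common simplification (looser than RFC 3550's smoothed interarrival jitter) is to report jitter as the mean absolute difference between consecutive delay samples. A small sketch with made-up numbers contrasting a local edge path with a multi-hop cloud path:

```python
# Sketch: latency vs. jitter for a control path, with jitter taken as the
# mean absolute difference between consecutive latency samples (a
# simplification of RFC 3550's interarrival jitter).
from statistics import mean

def jitter(latencies_ms):
    return mean(abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:]))

edge_path  = [2.0, 2.1, 1.9, 2.0, 2.1]        # local hop: low, stable delay
cloud_path = [40.0, 95.0, 55.0, 120.0, 60.0]  # multi-hop WAN + loaded servers

edge_jitter = jitter(edge_path)
cloud_jitter = jitter(cloud_path)
```

A closed-loop controller can tolerate a small constant delay far better than a variable one; it is the jitter on the cloud path, not just its mean latency, that destabilizes the loop.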
Resilience. It is of the utmost importance that mission-critical processes keep running even when communication with the operations center is unavailable. An architecture based on distributed processing is not only recommended but is often the only valid solution.
Clustering. Moving from individual devices to clusters. For example, a connected vehicle has many sensors and actuators, but they are seen from the outside as a single unit (vehicle) that communicates with other vehicles or the terrestrial infrastructure.
Fog computing has its advantages due to its edge location, and is therefore able to support applications (e.g., gaming, augmented reality, real-time video stream processing) with low latency requirements. This edge location can also provide rich network context information, such as local network conditions, traffic statistics, and client status information, which fog applications can use to offer context-aware optimization. Another interesting characteristic is location awareness: not only can the geo-distributed fog node infer its own location, but it can also track end-user devices to support mobility, which may be a game-changing factor for location-based services and applications. Furthermore, the interplay between fog nodes, and between fog and cloud, becomes important, since a fog node can easily obtain a local overview while global coverage can only be achieved at a higher layer.