This document provides an introduction to Eclipse Zenoh, an open source project that unifies data in motion, data at rest, and computations in a distributed system. Zenoh elegantly blends traditional publish-subscribe with geo-distributed storage, queries, and computations. The presentation will demonstrate Zenoh's advantages for enabling typical edge computing scenarios and simplifying large-scale distributed applications through real-world use cases. It will also provide an overview of Zenoh's architecture, performance, and APIs.
1. Angelo Corsaro, PhD
Chief Technology Officer
Advanced Technology Office
angelo@adlink-labs.tech
The Edge Data Fabric
2. Abstract
Zenoh is a rapidly growing Eclipse project that unifies data in motion,
data at rest and computations. It elegantly blends traditional pub/sub
with geo-distributed storage, queries and computations, while
retaining a level of time and space efficiency that is well beyond any of
the mainstream stacks. This presentation will provide an introduction to
Eclipse Zenoh along with a crisp explanation of the challenges that
motivated the creation of this project. We will go through a series of
real-world use cases that demonstrate the advantages brought by
Zenoh in enabling and optimising typical edge scenarios and in
simplifying the development of distributed applications at any scale.
3. Speaker Bio
Angelo Corsaro, Ph.D. is Chief Technology Officer (CTO) at ADLINK Technology Inc., where he
looks after corporate technology strategy and innovation, and leads the Advanced Technology
Office and the Software and Technology Business Unit.
Angelo is a world-leading expert in edge/fog computing and a well-known researcher in the area of
high-performance and large-scale distributed systems. Angelo has over 100 publications in
refereed journals, conferences, workshops, and magazines, and has co-authored over ten
international standards.
Specialties: Fog/Edge Computing, Industrial and Consumer Internet of Things, Innovation and
Innovation Management, Product Strategy, Open Source, High Performance Computing, Large
Scale Mission/Business Critical Distributed Systems, Real-Time Systems, Software Patterns,
Functional Programming Languages
6. Moving and Resting
Technologies for dealing with
data in motion and data at rest
have historically belonged to
different families.
Publish/subscribe is today the
leading paradigm for dealing
with data in motion.
Databases (SQL and NoSQL)
are the leading paradigm for
dealing with data at rest.
7. Pushing and Pulling
Technologies for dealing
with data in motion and
data at rest also differ
along another dimension:
data in motion is pushed
to interested parties, while
data at rest is pulled when
needed.
9. Decentralisation
The increasing availability of
storage and compute capabilities on
devices is creating new
opportunities for computing
and storing data much
closer to where it is produced.
Existing technologies for data in
motion and data at rest fall short
in supporting this scenario.
More importantly, they fail to provide
unified data management.
11. Robotics
Robotics applications are quickly
evolving to require swarm
coordination, Internet-Scale
management and teleoperation
Robots are increasingly operating in
swarms and over constantly
expanding geographical regions
12. Computation Offloading
Next generation robotics (and
autonomous driving) applications
need to leverage surrounding
infrastructure to offload
computations and facilitate
coordination
13. Key Differences
• Many vs. One
• Moving vs. Fixed
• Geo-Distributed vs. Geo-localised
• Collaborative vs. Stand-Alone
• Internet Scale vs. LAN Scale
• Open Environment vs. Closed Environment
• Distributed Computing vs. Cloud Computing
15. Smart Home Today
Data produced locally is sent to the cloud
where it is processed and stored
The core of the application logic runs on the
cloud.
Most if not all of the interactions with devices
that are close to you are through the cloud
This leads to several problems, including
energy waste, loss of availability when
connectivity fails, and privacy concerns…
16. Exploiting Locality
Ideally we would want communication to be local
whenever possible, and computations to be placed
closer to data sources.
We would also want most of the data to be kept
in our house, yet still accessible from anywhere,
provided we have the rights to do so.
Some data could still be processed or stored on the
cloud, but that should be a choice, not the only
option.
17. Managing a Residence
Let’s assume for a moment that we want to exploit data and computation
locality at each house, yet we would like to easily monitor or query any
kind of data for which we have the rights. How can we do that?
18. Traditional Approach #1
Replicate all data on the cloud
and use that as the single location
from which to access information
about the houses.
The drawbacks of this solution
are that all data is duplicated,
energy is wasted sending data
across to the cloud, and privacy is
again at risk…
19. Traditional Approach #2
Data is kept in the house, and
whenever it needs to be accessed,
the house of interest is addressed directly.
The drawback of this solution
is that there is no location
transparency. What if I want to
keep some of the data on an
edge server? Or even in the
cloud?
20. Wouldn’t it be nice if…
We could keep data where it
makes sense and retrieve it when
needed in a location-transparent
manner, just by naming the data?
Wouldn’t it be nice if we could
provision application logic
wherever it made sense on this
computing fabric?
22. Technological Gap
The ecosystem of data-plane technologies
available today is unable to cover
the needs of these large-scale
distributed systems: they either
cannot work at the proper scale, e.g.
DDS, or inherently depend on
broker technologies, e.g. MQTT and AMQP.
Additionally, none of these technologies
helps with geo-distributed
data at rest.
24. Unifies data in motion, data in use, data at
rest and computations.
It carefully blends traditional pub/sub with
distributed queries, while retaining a level of
time and space efficiency that is well beyond
any of the mainstream stacks.
It provides built-in support for geo-distributed
storage and distributed computations.
25. zenoh provides a high-level API for pub/sub and
distributed queries, data representation
transcoding, and an implementation of geo-distributed
storage and distributed computed values.
zenoh.net implements a networking layer capable of running
above a Data Link, Network or Transport layer. This
protocol provides primitives for efficient pub/sub
and distributed queries. It supports fragmentation
and ordered reliable delivery.
[Diagram: the zenoh layer sits on top of zenoh.net, which in turn runs above the Data Link, Network or Transport layer of the stack.]
28. Brokered Communication
Routers and peers can
help broker
communication
between clients, as
well as between
clients and a mesh of
peers.
31. Naming Data
Following the tradition of Named Data Networking protocols, data is
named by a sequence of byte arrays, called a key, such as:
/home/kitchen/sensors/temp
/home/kitchen/sensors/C202
Data interests and intents are expressed by means of key expressions
containing wildcards, such as:
/home/*/sensors/temp
/home/**/C202
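The wildcard semantics sketched above can be modelled in a few lines of plain Python. This is an illustrative sketch, not zenoh's actual matching code: it assumes `*` matches exactly one path chunk and `**` matches any number of chunks, as the examples on this slide suggest.

```python
# Hypothetical sketch of zenoh-style key-expression matching:
# '*' matches a single path chunk, '**' matches any number of chunks.
def key_matches(key_expr: str, key: str) -> bool:
    """Return True if the concrete key matches the key expression."""
    def match(expr_chunks, key_chunks):
        if not expr_chunks:
            return not key_chunks
        head, rest = expr_chunks[0], expr_chunks[1:]
        if head == "**":
            # '**' may absorb zero or more chunks
            return any(match(rest, key_chunks[i:])
                       for i in range(len(key_chunks) + 1))
        if not key_chunks:
            return False
        if head == "*" or head == key_chunks[0]:
            return match(rest, key_chunks[1:])
        return False
    return match(key_expr.strip("/").split("/"),
                 key.strip("/").split("/"))

print(key_matches("/home/*/sensors/temp", "/home/kitchen/sensors/temp"))  # True
print(key_matches("/home/**/C202", "/home/kitchen/sensors/C202"))         # True
print(key_matches("/home/*/sensors/temp", "/home/kitchen/sensors/hum"))   # False
```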
32. Selecting Data
zenoh uses selectors to define data sets. A selector is composed of a key
expression and, optionally, a predicate, a projection and a set of
properties:
/myhome/*/sensor/temp?value>25
/mycar/dynamics?speed>25#acceleration
The key expression is used to route the query, while the predicate, properties,
projection, etc., are interpreted only by the entity that executes the query. zenoh also
provides different policies to control query consolidation, completeness
and, potentially, quorums.
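The split between the routing part and the locally interpreted parts can be illustrated with a small parser. This follows only the examples shown above (`?` introduces the predicate, `#` the projection); it is not zenoh's real selector grammar.

```python
# Hypothetical parser for the selector syntax sketched on this slide:
# key expression first, '?' introduces a predicate, '#' a projection.
def parse_selector(selector: str) -> dict:
    key_expr, _, tail = selector.partition("?")
    predicate, _, projection = tail.partition("#")
    return {
        "key_expr": key_expr,              # used to route the query
        "predicate": predicate or None,    # interpreted by the queried entity
        "projection": projection or None,  # interpreted by the queried entity
    }

print(parse_selector("/myhome/*/sensor/temp?value>25"))
print(parse_selector("/mycar/dynamics?speed>25#acceleration"))
```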
33. Primitives: Entities
Resource. A named datum, in other terms a (key, value) pair
(e.g. (/home/kitchen/sensor/temp, 21.5), (/home/kitchen/sensor/hum, 0.67))
Publisher. A spring of values for a key expression
(e.g. /home/kitchen/sensor/temp)
Subscriber. A sink of values for a key expression
(e.g. /home/kitchen/sensor/temp, /home/kitchen/sensor/*)
Queryable. A well of values for a key expression
(e.g. /home/kitchen/sensor/*, /home/**)
34. Primitives: Operations
open/close — Opens/closes a zenoh.net session.
scout — Looks for zenoh entities; the kinds of relevant nodes, e.g. peers,
routers, etc., are specified by a bit-mask.
declare/undeclare — Declares/undeclares a resource, publisher, subscriber or
queryable. Declarations are used for discovery and various optimisations. For
subscribers, the declare primitive registers a user-provided callback that will
be triggered when data is available. For queryables, the declare primitive registers
a user-provided callback triggered whenever a query needs to be answered.
35. Primitives: Operations
write — Writes data for a key expression.
query — Issues a distributed query and returns a stream of results. The
query target, coverage and consolidation depend on policies.
pull — Pulls data for a pull subscriber.
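The interplay of these primitives can be mimicked with a toy in-process model. Every name here (`ToySession`, `declare_subscriber`, etc.) is illustrative, not the zenoh.net API, and shell-style `fnmatch` stands in for zenoh's key-expression matching.

```python
# Toy, in-process model of declare/write/query: a subscriber callback is
# triggered on matching writes, and a queryable handler answers queries.
from fnmatch import fnmatch

class ToySession:
    def __init__(self):
        self.subscribers = []   # (key_expr, callback)
        self.queryables = []    # (key_expr, handler)

    def declare_subscriber(self, key_expr, callback):
        self.subscribers.append((key_expr, callback))

    def declare_queryable(self, key_expr, handler):
        self.queryables.append((key_expr, handler))

    def write(self, key, value):
        # Push the sample to every matching subscriber callback
        for expr, cb in self.subscribers:
            if fnmatch(key, expr):
                cb(key, value)

    def query(self, selector):
        # Pull answers from every matching queryable
        return [h(selector) for expr, h in self.queryables
                if fnmatch(selector, expr)]

s = ToySession()
seen = []
s.declare_subscriber("/home/*/sensors/temp", lambda k, v: seen.append((k, v)))
s.declare_queryable("/home/kitchen/sensors/temp", lambda sel: 21.5)

s.write("/home/kitchen/sensors/temp", 21.5)   # triggers the subscriber
print(seen)                                   # [('/home/kitchen/sensors/temp', 21.5)]
print(s.query("/home/kitchen/sensors/temp"))  # [21.5]
```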
36. Storage
A storage is defined by:
Selector. Defines the set of
resource keys that this
storage stores
(e.g. /myhome/status/**)
Back-end. Defines the storage
technology used
zenoh storages can be created via the
administration API anywhere on the network,
and back-ends are dynamically loaded plugins.
zenoh storages automatically align their
initial state, but can also be bound to
existing databases.
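The selector/back-end split can be sketched in plain Python. This is only a conceptual model (`ToyStorage` and its methods are invented names, and a dict plays the back-end role); real zenoh back-ends are dynamically loaded plugins.

```python
# Hypothetical sketch of a storage: a selector picks which keys it captures,
# and a pluggable back-end (here a plain dict) holds the values.
from fnmatch import fnmatch

class ToyStorage:
    def __init__(self, selector, backend=None):
        self.selector = selector          # e.g. "/myhome/status/*"
        self.backend = backend if backend is not None else {}

    def on_sample(self, key, value):
        # Subscriber side: capture matching writes into the back-end
        if fnmatch(key, self.selector):
            self.backend[key] = value

    def on_query(self, key_expr):
        # Queryable side: answer queries out of the back-end
        return {k: v for k, v in self.backend.items() if fnmatch(k, key_expr)}

st = ToyStorage("/myhome/status/*")
st.on_sample("/myhome/status/door", "closed")
st.on_sample("/otherhome/status/door", "open")   # outside the selector: ignored
print(st.on_query("/myhome/status/*"))           # {'/myhome/status/door': 'closed'}
```

Binding to an existing database would amount to passing a different `backend` object with the same mapping interface.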
37. Eval
An eval is defined by:
Selector. Defines the set of
resource keys that will trigger
this computation
(e.g. /myhome/energy-cons)
Implementation. The user
code implementing the
computation
45. Protocol Summary Highlights
The most wire-, power- and memory-efficient protocol in the market for providing
connectivity to extremely constrained targets
Supports push and pull pub/sub along with distributed queries
Resource keys are represented as integers on the wire; these integers
are local to a session => good for wire efficiency
Support for peer-to-peer and routed communication.
Support for zero-copy.
Ordered reliable data delivery and fragmentation.
Minimal wire overhead for user data is 4-6 bytes
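The session-local integer mapping mentioned above can be illustrated with a tiny interner. The numbering scheme here is illustrative, not zenoh's actual wire format.

```python
# Sketch of the wire optimisation: a resource key is mapped to a
# session-local integer once, at declaration time, so later samples can
# carry a small integer instead of the full key string.
class KeyInterner:
    def __init__(self):
        self.key_to_id = {}
        self.id_to_key = {}

    def declare(self, key: str) -> int:
        # First declaration assigns the next session-local integer id
        if key not in self.key_to_id:
            rid = len(self.key_to_id) + 1
            self.key_to_id[key] = rid
            self.id_to_key[rid] = key
        return self.key_to_id[key]

    def resolve(self, rid: int) -> str:
        return self.id_to_key[rid]

interner = KeyInterner()
rid = interner.declare("/home/kitchen/sensors/temp")
print(rid)                    # 1
print(interner.resolve(rid))  # /home/kitchen/sensors/temp
# After declaration, a sample travels as (1, payload) rather than
# ("/home/kitchen/sensors/temp", payload), saving bytes on every message.
```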
55. Greetings
from zenoh import Zenoh
# Get a zenoh session
zs = Zenoh({'peer': 'tcp/eu.zenoh.io:7447'})
z = zs.workspace()
# play around
z.put("/demo/eu/greet/italian", "Ciao!")
57. Getting Greetings
from zenoh import Zenoh, ChangeKind

# Define the listener
def listener(change):
    print("{} : {} (encoding: {}, timestamp: {})"
          .format(change.path,
                  "DELETED" if change.kind == ChangeKind.DELETE
                  else change.value.get_content(),
                  "none" if change.kind == ChangeKind.DELETE
                  else change.value.encoding_descr(),
                  change.timestamp))

z.subscribe("/demo/**/greet/*", listener)
58. Finding out Greetings
# How do people greet in the EU?
z.get("/demo/eu/**/greet")

# How about Americans?
z.get("/demo/us-*/**/greet")

# Just get me all you know about greetings…
z.get("/demo/**/greet")
62. Greeting of the Day
Imagine you want a greeting of the day: each time
somebody queries it, it generates a random quote, a
daily quote, etc.
We could do that with an eval; here is how:

def quote_eval(request):
    return make_a_cute_quote(request)

z.register_eval("/demo/*/greet/*/daily", quote_eval)
66. ROS2 and zenoh
ROS2-based robots can leverage zenoh
in two ways: (1) by leveraging a ROS2
RMW for zenoh, or (2) by leveraging the
zenoh-bridge-dds, which transparently
moves R2X communication over zenoh.
The latter case does not require any
change to your robot, not even a
recompile or re-link.
Zenoh also supports full interoperability
with ROS2, in the sense that you can
read/write data from/into ROS2 via the native
zenoh API.
68. Internet Scale Robotics
Zenoh enables mesh peer-
to-peer communication when
useful, routed communication
when necessary, and in general
enables efficient Internet-scale
communication.
Additionally, it does not require
any changes to your existing
ROS2 systems.
72. zenoh is an innovative and performant
protocol that solves some of the problems
at the very core of IoT and Edge Computing.
Its open architecture makes it easy to add
both new storage back-ends and new
protocols that are routed and integrated into
the zenoh world.
If you like zenoh, star our repo and start
hacking some code!