HKG18-318 - OpenAMP Workshop
OpenAMP Discussion - Linaro2018HK
© Copyright 2018 Xilinx

Agenda
o SPDX Short License Identifiers
o Coding Guidelines
o API Standardisation
o Coprocessor Image Format
o OpenAMP and Containers

OpenAMP License

SPDX Short License Identifiers
● Why use the SPDX license list short identifiers in the open source implementation?
○ Easy to use and machine-readable
○ Concise, now-standardized format
○ https://spdx.org/sites/cpstandard/files/pages/files/using_spdx_license_list_short_identifiers.pdf
● Example
○ Use this in the *.c and *.h files of the OpenAMP open source implementation:
■ /* SPDX-License-Identifier: BSD-3-Clause */
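For illustration, a minimal sketch of a source-file header carrying the identifier (the file name and function below are hypothetical, not taken from the OpenAMP sources):

/* SPDX-License-Identifier: BSD-3-Clause */
/*
 * remoteproc_example.c - hypothetical OpenAMP source file
 *
 * The single SPDX tag above replaces the full BSD-3-Clause license text,
 * keeping the header short and machine-readable.
 */

#include <stddef.h>

/* Placeholder function; real sources carry their normal content below the tag. */
int example_init(void)
{
    return 0;
}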

Coding Guidelines
● Linux coding style
● Safety Certification Consideration
● MISRA C coding guidelines for style and naming conventions
● Unit testing
○ exercising parameter boundaries and interfaces
○ code coverage
■ 100% branch coverage

API Standardisation
With a standardised flow and APIs, a developer can run the same application in different environments, e.g. Linux kernel, Linux userspace, bare metal and RTOS.
● RPMsg
○ Standardised APIs
○ One common implementation
● Virtio config
○ Standardised APIs
○ One common implementation
○ Supports more than the RPMsg virtio driver in the future
● Remoteproc
○ Standardised APIs
○ One common implementation
○ Remoteproc drivers are platform dependent
● Libmetal
○ Standardised APIs
○ OS/platform dependent implementation

Libraries Layers
[Layer diagram: the Application sits on RPMsg; RPMsg sits on Virtio (master/slave) and uses memory I/O; Virtio Config Ops and Remoteproc (remoteproc drivers, da_to_va, kick, start/stop, ...) form the platform dependent implementation; libmetal provides the OS/platform dependent implementation (events/interrupts, registers/memories, input/output, device management, shared memory management, locks, and services such as structured buffers and resource sharing); underneath sit the OS (Linux / RTOS / bare metal) and the hardware (peripheral registers, interrupts, memories).]

Coprocessor Image Format
In order to launch an image on a coprocessor, the image includes the following:
• Binary
• Resources required to start the application
• Authentication
• Encryption
There is a use case where, once the master starts fetching the image, the image data arrives as a stream and the master's memory is very limited, so authentication needs to be done before the image data is decrypted.
• Today we have our own image format to solve this issue. How about standardising it in OpenAMP?

Coprocessor Images
Image Format                        Authentication          Encryption              Streaming
ELF                                 No out-of-box solution  No out-of-box solution  No
FIT                                 Yes                     No out-of-box solution  No
PE (Portable Executable)            Yes                     No out-of-box solution  No
OCI Image Specification             Yes                     No                      No
Tar ball (vendor specific format)   Yes                     Yes                     Yes

OpenAMP and Containers
A container is a self-contained execution environment that shares the kernel of the host system and is (optionally) isolated from other containers on the system.
The Open Container Initiative (OCI) runtime specification defines container life cycles. Open source container development platforms such as Docker provide solutions to distribute and orchestrate containers.
Containerisation is a very popular topic; can OpenAMP leverage container solutions?
● Deploy an application running on a coprocessor in a container?
● Deploy an application running on a coprocessor the same way a container is deployed?
○ Users should be able to deploy a coprocessor application in the same way as they deploy an application on a VM.
● Can we make use of solutions for data sharing/synchronisation between containers, or between the host and containers?
● Can we make use of container scheduling solutions to schedule applications on coprocessors?
○ e.g. on a Xilinx Zynq UltraScale+ MPSoC, Linux running on the Cortex-A53s has two Cortex-R5 coprocessors
● How about an OpenAMP runtime?

API Standardisation Backup Slides

Rpmsg APIs
RPMsg APIs let the user transmit data with RPMsg (the Remote Processor Messaging framework).
● Based on the Linux kernel implementation, with additions

Rpmsg APIs (1 / 2)

int rpmsg_send(struct rpmsg_endpoint *ept, void *data, int len);
    Send data over the RPMsg endpoint to its target.

int rpmsg_sendto(struct rpmsg_endpoint *ept, void *data, int len, uint32_t dst);
    Send data to the specified RPMsg endpoint address.

int rpmsg_send_offchannel(struct rpmsg_endpoint *ept, void *data, int len, uint32_t src, uint32_t dst);
    Send data to the specified RPMsg endpoint address using the specified source endpoint address.

int rpmsg_trysend(struct rpmsg_endpoint *ept, void *data, int len);
    Send data over the RPMsg endpoint to its target; if the endpoint has no buffer available, it returns instead of blocking until a buffer becomes available.

int rpmsg_trysendto(struct rpmsg_endpoint *ept, void *data, int len, uint32_t dst);
    Send data to the specified RPMsg endpoint address; it returns instead of blocking until a buffer becomes available.

int rpmsg_trysend_offchannel(struct rpmsg_endpoint *ept, void *data, int len, uint32_t src, uint32_t dst);
    Send data to the specified RPMsg endpoint address using the specified source endpoint address; it returns instead of blocking until a buffer becomes available.

int (*rpmsg_callback)(struct rpmsg_endpoint *ept, void *data, int len, uint32_t src, void *priv);
    User defined RPMsg callback.

void (*rpmsg_endpoint_destroy_callback)(struct rpmsg_endpoint *ept, void *priv);
    User defined RPMsg endpoint destroy callback.

Rpmsg APIs (2 / 2)

struct rpmsg_endpoint *rpmsg_create_ept(struct rpmsg_virtio *rpmsgv, const char *ept_name, uint32_t src, uint32_t dest, rpmsg_callback cb, rpmsg_destroy_callback destroy_cb, void *priv);
    Create an RPMsg endpoint from the virtio device.
    The API is different from the Linux kernel implementation.

void rpmsg_destroy_endpoint(struct rpmsg_endpoint *ept);
    Destroy an RPMsg endpoint.
    The API is different from the Linux kernel implementation.

struct rpmsg_virtio *rpmsg_vdev_init(struct virtio_dev *vdev, void *shm, int len);
    Initialize the RPMsg virtio queues and shared buffers; the address of shm can be ANY, in which case the function gets shared memory from the system shared memory pools.
    If the vdev has the RPMsg name service feature, this API creates a name service endpoint. On the slave side, this API does not return until driver-ready is set by the master side.
    Not in the Linux kernel.
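Taken together, a hedged usage sketch of the proposed RPMsg APIs (the header name, the RPMSG_ADDR_ANY constant, the endpoint name and the already-initialized vdev/shared-memory arguments are illustrative assumptions, not part of the proposal above):

#include <stdint.h>
#include <stdio.h>
#include <openamp/rpmsg.h>    /* assumed to declare the proposed APIs above */

/* Callback invoked when data arrives on the endpoint. */
static int my_rpmsg_cb(struct rpmsg_endpoint *ept, void *data, int len,
                       uint32_t src, void *priv)
{
    printf("received %d bytes from 0x%x\n", len, (unsigned int)src);
    return 0;
}

/* Callback invoked when the endpoint is destroyed. */
static void my_ept_destroy_cb(struct rpmsg_endpoint *ept, void *priv)
{
    /* Release any per-endpoint state here. */
}

/* vdev and the shared memory pool are assumed to be set up by platform code. */
void rpmsg_demo(struct virtio_dev *vdev, void *shm_pool, int shm_len)
{
    struct rpmsg_virtio *rvdev;
    struct rpmsg_endpoint *ept;
    const char msg[] = "hello remote";

    /* Bind the RPMsg layer to the virtio device and shared memory. */
    rvdev = rpmsg_vdev_init(vdev, shm_pool, shm_len);

    /* Create an endpoint; addresses are left to the framework (ANY). */
    ept = rpmsg_create_ept(rvdev, "demo-channel", RPMSG_ADDR_ANY, RPMSG_ADDR_ANY,
                           my_rpmsg_cb, my_ept_destroy_cb, NULL);

    /* Blocking send; rpmsg_trysend() would return instead of waiting for a free buffer. */
    rpmsg_send(ept, (void *)msg, sizeof(msg));

    rpmsg_destroy_endpoint(ept);
}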

Virtio Config Ops APIs
The virtio config operations APIs are used to configure the virtio device (virtio 1.0 specification).
● For every configuration change, the other side should be notified.
● A mailbox type of notification, carrying the vdev id and which field changed.

void (*get)(struct virtio_device *vdev, unsigned offset, void *buf, unsigned len);
    Read the value of a configuration field.

void (*set)(struct virtio_device *vdev, unsigned offset, void *buf, unsigned len);
    Write the value of a configuration field.

uint32_t (*generation)(struct virtio_device *vdev);
    Read the configuration generation counter.

u8 (*get_status)(struct virtio_device *vdev);
    Read the virtio status byte.

void (*set_status)(struct virtio_device *vdev, u8 status);
    Write the virtio status byte.

void (*reset)(struct virtio_device *vdev);
    Reset the virtio device (used by the virtio front end).

int (*find_vqs)(struct virtio_device *, unsigned nvqs, struct virtqueue *vqs[], vq_callback_t *callbacks[], const char * const names[], const bool *ctx, struct irq_affinity *desc);
    Find virtqueues and instantiate them.

void (*del_vqs)(struct virtio_device *);
    Free the virtqueues found by find_vqs().

uint64_t (*get_features)(struct virtio_device *vdev);
    Get the array of feature bits for this device.

int (*finalize_features)(struct virtio_device *vdev);
    Confirm which device features will be used.
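A minimal sketch of how the get/set pair could be backed by a config space kept in shared memory, with a kick so the other side learns about the change (the struct virtio_device fields below are placeholders for illustration, not the real OpenAMP definition):

#include <stdint.h>
#include <string.h>

/* Minimal, illustrative device type; the real struct virtio_device lives in
 * the OpenAMP/virtio headers and differs in detail. */
struct virtio_device {
    void *cfg;                                   /* config space in shared memory */
    unsigned int cfg_len;
    void (*notify)(struct virtio_device *vdev);  /* mailbox kick to the other side */
};

/* get(): read a configuration field out of the shared-memory config space. */
static void demo_cfg_get(struct virtio_device *vdev, unsigned int offset,
                         void *buf, unsigned int len)
{
    memcpy(buf, (char *)vdev->cfg + offset, len);
}

/* set(): write a configuration field, then notify the other side so it can
 * re-read the changed field (the mailbox message would carry the vdev id and
 * the changed field, as described above). */
static void demo_cfg_set(struct virtio_device *vdev, unsigned int offset,
                         void *buf, unsigned int len)
{
    memcpy((char *)vdev->cfg + offset, buf, len);
    if (vdev->notify)
        vdev->notify(vdev);
}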

Virtio Device Configuration Space (1/2)
• In OpenAMP, the virtio device configuration space is in shared memory
– It is in the resource table today

id                                   Virtio device id
notifyid                             Notification id (unique to the remoteproc)
dfeatures                            Virtio device features supported by the firmware
gfeatures                            Host writes back the negotiated features that are supported by both sides
config_len                           The length of the config space right after the vrings
status                               The host indicates its virtio progress here
num_of_vrings                        Number of vrings
vrings resource                      Resource description for the vrings
virtio driver related config space   Virtio driver defined configuration
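For reference, the vdev entry that carries these fields in the resource table looks roughly like the Linux remoteproc layout below (shown only as an illustration; the field comments paraphrase the table above):

#include <stdint.h>

/* Vring description that follows the vdev header in the resource table. */
struct fw_rsc_vdev_vring {
    uint32_t da;         /* device address of the vring */
    uint32_t align;      /* vring alignment */
    uint32_t num;        /* number of descriptors */
    uint32_t notifyid;   /* notification id */
    uint32_t reserved;
} __attribute__((packed));

/* Vdev entry; the virtio driver defined config space of config_len bytes
 * sits right after the vrings. */
struct fw_rsc_vdev {
    uint32_t id;             /* virtio device id */
    uint32_t notifyid;       /* unique to the remoteproc */
    uint32_t dfeatures;      /* device features supported by the firmware */
    uint32_t gfeatures;      /* negotiated features written back by the host */
    uint32_t config_len;     /* length of the config space after the vrings */
    uint8_t status;          /* host indicates its virtio progress here */
    uint8_t num_of_vrings;
    uint8_t reserved[2];
    struct fw_rsc_vdev_vring vring[0];
} __attribute__((packed));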

Virtio Device Configuration Space (2/2)
• The generation counter information is missing
• Shall we add the generation counter to the virtio config space?
– It is used by the device to indicate that there is a change in the configuration
– We can keep the generation counter in the vdev resource space, reserved for future use
• Enables support for more than the RPMsg virtio
• Virtio does not let the remote report its status, so it is difficult to restart the connection if the remote restarts.
– Shall we work with the virtio specification to introduce a way for the virtio backend to report its status?

Remoteproc APIs
The remoteproc APIs provide:
• Resource table handling
• Remote life cycle management
• Remote notification
• Virtio config operations

Remoteproc Resource Table Handling
• The order of the resources in the resource table is the order in which the remoteproc handles them.
• The resource table can be provided by the application image
– The remoteproc parses the resource table before loading the firmware
• The resource table is optional for an application image
• The resource table should be parsed before the remote application starts
• If the resource table is in the application image, it is parsed before the image data is loaded into the target memory
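As an illustration of the ordering rule (a hedged sketch that follows the Linux resource-table header layout; the two-entry table and its contents are assumptions):

#include <stdint.h>

/* Resource table header: version, entry count, then the entry offsets.
 * Entries are handled in the order they appear, so placing the memory
 * carveout before the vdev means the carveout is mapped before the vdev
 * (and its vrings) is set up. */
struct my_resource_table {
    uint32_t ver;
    uint32_t num;         /* 2 entries */
    uint32_t reserved[2];
    uint32_t offset[2];   /* offset[0] -> carveout, offset[1] -> vdev */
    /* entry 0: memory carveout for the firmware, handled first */
    /* entry 1: vdev (RPMsg virtio device), handled second */
} __attribute__((packed));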

Remoteproc Operations APIs

struct remoteproc_ops rproc_ops = {
    .init = user_rproc_init,                     /* mandatory */
    .remove = user_rproc_remove,                 /* mandatory */
    .mmap = user_rproc_mmap,                     /* optional, but used in both life cycle management and parsing the rsc */
    .handle_rsc = user_rproc_handle_vendor_rsc,  /* optional, required if there is a vendor specific resource in the rsc */
    .start = user_rproc_start,                   /* optional, required for life cycle management */
    .stop = user_rproc_stop,                     /* optional, required for life cycle management */
    .shutdown = user_rproc_shutdown,             /* optional, required for life cycle management */
    .kick = user_rproc_kick,                     /* optional, required for communication */
};

struct remoteproc *(*user_remoteproc_init)(void *priv, struct remoteproc_ops *ops);
    User defined remoteproc initialization.

void (*user_remoteproc_remove)(struct remoteproc *rproc);
    User defined removal of remoteproc resources.

void *(*user_remoteproc_mmap)(struct remoteproc *rproc, metal_phys_addr_t pa, metal_phys_addr_t da, size_t size, unsigned int attribute, struct metal_io_region **io);
    User defined memory map operation; it is used when parsing the resource table and when parsing the remote application image.

int (*user_remoteproc_handle_rsc)(struct remoteproc *rproc, void *rsc, size_t len);
    User defined vendor specific resource handling.

int (*user_remoteproc_start)(struct remoteproc *rproc);
    User defined remote processor start function.

int (*user_remoteproc_stop)(struct remoteproc *rproc);
    User defined remote processor stop function; the processor's resources are not released.

int (*user_remoteproc_shutdown)(struct remoteproc *rproc);
    User defined remote processor offline function; the processor is shut down and its resources are released.

void (*user_remoteproc_kick)(struct remoteproc *rproc, int id);
    User defined kick operation.

Remoteproc Operations APIs Examples

struct remoteproc_ops rproc_ops = {
    .init = user_rproc_init,
    .remove = user_rproc_remove,
    .mmap = user_rproc_mmap,
    .handle_rsc = user_rproc_handle_vendor_rsc,
    .start = user_rproc_start,
    .stop = user_rproc_stop,
    .shutdown = user_rproc_shutdown,
    .kick = user_rproc_kick,
};
    Remoteproc operations definition in case both LCM and communication are required.

struct remoteproc_ops rproc_ops = {
    .init = user_rproc_init,
    .remove = user_rproc_remove,
    .mmap = user_rproc_mmap,
    .handle_rsc = user_rproc_handle_vendor_rsc,
    .start = user_rproc_start,
    .stop = user_rproc_stop,
    .shutdown = user_rproc_shutdown,
};
    Remoteproc operations definition in case only LCM is used.

struct remoteproc_ops rproc_ops = {
    .init = user_rproc_init,
    .remove = user_rproc_remove,
    .mmap = user_rproc_mmap,
    .handle_rsc = user_rproc_handle_vendor_rsc,  /* optional */
    .kick = user_rproc_kick,
};
    Remoteproc operations definition in case only communication is used.

Remoteproc Core APIs (1 / 3)

struct virtio_dev *rproc_virtio_create(struct remoteproc *rproc);
    Create a remoteproc virtio device.

int handle_rsc_table(struct remoteproc *rproc, struct resource_table *rsc_table, int len);
    Handle the resource table.

int remoteproc_load(struct remoteproc *rproc, void *fw, struct image_store_ops *store_ops);
    Load the firmware for the remote processor system.

int remoteproc_start(struct remoteproc *rproc);
    Start the remote processor.

int remoteproc_stop(struct remoteproc *rproc);
    Stop the remote processor: the processor is halted, but its resources such as memory are not released.

int remoteproc_shutdown(struct remoteproc *rproc);
    Shut down the remote processor and release its resources.

int remoteproc_suspend(struct remoteproc *rproc);
    Suspend the remote processor system.

int remoteproc_resume(struct remoteproc *rproc);
    Resume the remote processor system.

struct remoteproc *remoteproc_init(struct remoteproc_ops *ops, void *priv);
    Initialize a remote processor system instance.

void remoteproc_remove(struct remoteproc *rproc);
    Remove a remoteproc instance and its resources.
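Putting the core APIs together, a hedged life-cycle sketch (the header name, firmware handle and image-store ops are assumptions; error handling is omitted for brevity):

#include <openamp/remoteproc.h>   /* assumed to declare the core APIs above */

extern struct remoteproc_ops rproc_ops;      /* user defined ops, see the previous slides */
extern struct image_store_ops my_store_ops;  /* hypothetical image store backend */

void rproc_lifecycle_demo(void *fw_image)
{
    struct remoteproc *rproc;

    /* Create the instance with the platform specific operations. */
    rproc = remoteproc_init(&rproc_ops, NULL);

    /* Load the firmware; the resource table is parsed before the image data
     * is loaded into the target memory. */
    remoteproc_load(rproc, fw_image, &my_store_ops);

    /* Run the remote processor. */
    remoteproc_start(rproc);

    /* ... communicate over RPMsg while the remote is running ... */

    /* Halt without releasing resources, then fully shut down. */
    remoteproc_stop(rproc);
    remoteproc_shutdown(rproc);

    /* Tear down the instance. */
    remoteproc_remove(rproc);
}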

Libmetal APIs
Libmetal provides an abstraction layer for device operations and I/O regions, plus other helpers such as lists, locks and so on.
● The libmetal APIs are used by the rpmsg and remoteproc implementations in the OpenAMP code base to request and operate on shared memories.

Libmetal APIs used in the OpenAMP library (1 / 3)

int metal_register_generic_device(struct metal_device *device);
    Statically register a generic libmetal device. Subsequent calls to metal_device_open() look up this list of pre-registered devices on the "generic" bus.

int metal_device_open(const char *bus_name, const char *dev_name, struct metal_device **device);
    Open a libmetal device by name.

void metal_device_close(struct metal_device *device);
    Close a libmetal device.

struct metal_io_region *metal_device_io_region(struct metal_device *device, unsigned index);
    Get an I/O region accessor for a device region.

Libmetal APIs used in the OpenAMP library (2 / 3)

#define METAL_IO_DECLARE(struct metal_io_region *io, void *virt, const metal_phys_addr_t *physmap, size_t size, unsigned page_shift, unsigned int mem_flags, const struct metal_io_ops *ops)
    Open a libmetal I/O region.

int metal_io_init(struct metal_io_region *io, void *virt, const metal_phys_addr_t *physmap, size_t size, unsigned page_shift, unsigned int mem_flags, const struct metal_io_ops *ops);
    Open a libmetal I/O region dynamically.

void metal_io_finish(struct metal_io_region *io);
    Close a libmetal I/O region.

size_t metal_io_region_size(struct metal_io_region *io);
    Get the size of a libmetal I/O region.

void *metal_io_virt(struct metal_io_region *io, unsigned long offset);
unsigned long metal_io_virt_to_offset(struct metal_io_region *io, void *virt);
metal_phys_addr_t metal_io_phys(struct metal_io_region *io, unsigned long offset);
unsigned long metal_io_phys_to_offset(struct metal_io_region *io, metal_phys_addr_t phys);
void *metal_io_phys_to_virt(struct metal_io_region *io, metal_phys_addr_t phys);
metal_phys_addr_t metal_io_virt_to_phys(struct metal_io_region *io, void *virt);
    Conversion between virtual address, physical address, and offset.

uint64_t metal_io_read8/16/32/64(struct metal_io_region *io, unsigned long offset);
void metal_io_write8/16/32/64(struct metal_io_region *io, unsigned long offset, uint64_t value);
    Metal I/O read/write.

int metal_io_block_read(struct metal_io_region *io, unsigned long offset, void *restrict dst, int len);
int metal_io_block_write(struct metal_io_region *io, unsigned long offset, const void *restrict src, int len);
int metal_io_block_set(struct metal_io_region *io, unsigned long offset, unsigned char value, int len);
    Metal I/O region block read/write/set.
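A short usage sketch of the device and I/O APIs above (the bus and device names are assumptions; metal_init() is assumed to have been called during platform initialization):

#include <stdint.h>
#include <metal/device.h>
#include <metal/io.h>

int libmetal_io_demo(void)
{
    struct metal_device *dev;
    struct metal_io_region *io;
    uint32_t val;
    int ret;

    /* Look up a pre-registered device on the "generic" bus. */
    ret = metal_device_open("generic", "ipi-dev", &dev);
    if (ret)
        return ret;

    /* Get an accessor for the device's first memory region. */
    io = metal_device_io_region(dev, 0);
    if (!io) {
        metal_device_close(dev);
        return -1;
    }

    /* Read and write registers through the I/O region accessors. */
    val = metal_io_read32(io, 0x0);
    metal_io_write32(io, 0x4, val | 0x1);

    metal_device_close(dev);
    return 0;
}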

Libmetal APIs used in the OpenAMP library (3 / 3)

struct metal_generic_shmem {
    const char *name;           /**< name of the shared memory */
    struct metal_io_region io;  /**< I/O region of the shared memory */
    struct metal_list node;     /**< list node */
};
int metal_shmem_register_generic(struct metal_generic_shmem *shmem);
    Statically register a generic shared memory. Subsequent calls to metal_shmem_open() look up this list of pre-registered regions.

int metal_shmem_open(const char *name, size_t size, struct metal_io_region **io);
    Open a shared memory with the specified name.
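A brief sketch of the shared-memory API above (the region name and size are assumptions; the region is expected to have been registered beforehand, e.g. via metal_shmem_register_generic(), or to be provided by the OS layer):

#include <metal/io.h>
#include <metal/shmem.h>

int shmem_demo(void)
{
    struct metal_io_region *shm_io;
    int ret;

    /* Open the shared memory region by name and get its I/O accessor. */
    ret = metal_shmem_open("rproc-shm", 0x40000, &shm_io);
    if (ret)
        return ret;

    /* Zero the first 256 bytes of the region through the I/O accessor. */
    metal_io_block_set(shm_io, 0, 0, 256);
    return 0;
}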