8. Function Trigger
public static class SimpleExampleWithOutput
{
[FunctionName("CopyQueueMessage")]
public static void Run(
[QueueTrigger("myqueue-items-source")] string myQueueItem,
[Queue("myqueue-items-destination")] out string myQueueItemCopy,
ILogger log)
{
// Business logic goes here.
}
}
Output Binding
{
"generatedBy": "Microsoft.NET.Sdk.Functions-1.0.0.0",
"configurationSource": "attributes",
"bindings": [
{
"type": "queueTrigger",
"queueName": "%input-queue-name%",
"name": "myQueueItem"
}
],
"disabled": false,
"scriptFile": "..binFunctionApp1.dll",
"entryPoint": "FunctionApp1.QueueTrigger.Run"
}
function.json
9. Sounds great, but I need…
automated testing
to run on-premises
custom dependencies
custom hardware
automated deployment
real-time monitoring
sub-second latency
network isolation
complex workflows
long-running processes
identity management
secure credentials storage
versioning strategy
state management
to run in a VNET
10. Agenda
• Hosting Options
• Premium
• KEDA
• Monitoring and Diagnostics
• Application Insights
• Security
• MSI and KeyVault Integration
• Deployment
• Azure DevOps
• Workflows and State
• Durable Functions and Entities
12. • Serverless scale with bigger,
configurable instances
• Up to 4 cores and 12 GB of memory
• Cold start controls
• Min plan size
• Pre-Warmed instances
• VNET connectivity
• Longer run duration
• ~25 minutes
• Predictable billing
• Max plan size
19. • Secure inbound HTTP access to your App
to one subnet in a VNET
• Allow secure outbound calls to resources
in a VNET
• Dependencies that you add can be
insecure
[Diagram: Internet → HTTP Front-ends → Functions Runtime, with the Functions Runtime connected into a Virtual Network (VNET)]
20. Virtual Network (VNET)
• Leaving the multi-tenant world
• Your entire app is contained within a VNET
• Organizational controls over ingress / egress
• Limited scaling speed
[Diagram: Internet → HTTP Front-ends → Functions Runtime, all contained within the VNET]
21. Orchestrates containerized workloads and
services.
Provides a clean interface for managing
distributed systems across many nodes,
including replication, scaling, and state
management.
24. When to
consider KEDA
Run functions on-premises / Intelligent edge
Run functions alongside existing Kubernetes
investments or requirements
Run functions on a different platform or
cloud
Run functions with full control and
management of scale and compute
27. Spot the vulnerability!
module.exports = function (context, payload) {
if (payload.action != "opened") {
context.done();
return;
}
var comment = { "body": "Thank you for your contribution! We will get to it shortly." };
if (payload.pull_request) {
var pr = payload.pull_request;
context.log(pr.user.login, " submitted PR#", pr.number, ": ", pr.title);
SendGitHubRequest(pr.comments_url, comment, context); // posting a comment
}
context.done();
};
function SendGitHubRequest(url, requestBody, context) {
var request = require('request');
var githubCred = 'Basic ' + 'mattchenderson:8e254ed4';
request({
url: url,
method: 'POST',
headers: {
'User-Agent': 'mattchenderson',
'Authorization': githubCred
},
json: requestBody
}, function (error, response, body) {
if (error) {
context.log(error);
} else {
context.log(response.statusCode, body);
}
});
}
28. Secrets management
const msRestAzure = require('ms-rest-azure');
const KeyVault = require('azure-keyvault');
const vaultUri = process.env['GITHUB_SECRET_URI'];
// Value looks like: 'https://foo.vault.azure.net/secrets/gh'
//... Getting the event
msRestAzure.loginWithAppServiceMSI({
resource: 'https://vault.azure.net'
}).then(function (credentials) {
const keyVaultClient = new KeyVault.KeyVaultClient(credentials);
return keyVaultClient.getSecret(vaultUri);
}).then(function (secret) {
var githubHeader = 'Basic ' + secret.value;
//... Call GitHub
});
29. Managed identities for Azure Functions
Keep credentials out of code
Auto-managed identity in Azure AD
for Azure resource
Use local token endpoint to get
access tokens from Azure AD
Direct authentication with services,
or retrieve creds from Azure Key
Vault
[Diagram: your code in Azure Functions → local token service → credentials (injected and rolled by Azure) → Azure service (e.g., ARM, Key Vault)]
30. Gets secrets out of App Settings
and into secrets management
Leverages the managed identity
of your function app
Versions required for initial
preview (goal of auto-rotation)
@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/mysecretversion)
[Diagram: app instances each see the setting Foo resolved to mysecret, while the stored configuration holds only Foo: reference]
33. • GA of Functions Build task
• Easily add Functions to a CI/CD pipeline
• New streamlined CLI command
• az functionapp devops-pipeline create
• Automatically configures DevOps to build with new commits to your version control
• Configures GitHub or Azure Repos automatically
aka.ms/functions-azure-devops
37. // calls functions in sequence
public static async Task<object> Run(DurableOrchestrationContext ctx)
{
try
{
var x = await ctx.CallFunctionAsync("F1");
var y = await ctx.CallFunctionAsync("F2", x);
return await ctx.CallFunctionAsync("F3", y);
}
catch (Exception)
{
// global error handling/compensation goes here
throw;
}
}
Orchestrator Function
Activity Functions
38. public static async Task<object> Run(DurableOrchestrationContext context)
{
try
{
var x = await context.CallActivityAsync<object>("F1");
var y = await context.CallActivityAsync<object>("F2", x);
var z = await context.CallActivityAsync<object>("F3", y);
return await context.CallActivityAsync<object>("F4", z);
}
catch (Exception)
{
// Error handling or compensation goes here.
throw;
}
}
39. // An HTTP-triggered function starts a new orchestrator function instance.
public static async Task<HttpResponseMessage> Run(
HttpRequestMessage req,
DurableOrchestrationClient starter,
string functionName,
ILogger log)
{
// The function name comes from the request URL.
// The function input comes from the request content.
dynamic eventData = await req.Content.ReadAsAsync<object>();
string instanceId = await starter.StartNewAsync(functionName, eventData);
log.LogInformation($"Started orchestration with ID = '{instanceId}'.");
return starter.CreateCheckStatusResponse(req, instanceId);
}
40. public static async Task Run(DurableOrchestrationContext context)
{
int jobId = context.GetInput<int>();
int pollingInterval = GetPollingInterval();
DateTime expiryTime = GetExpiryTime();
while (context.CurrentUtcDateTime < expiryTime)
{
var jobStatus = await context.CallActivityAsync<string>("GetJobStatus", jobId);
if (jobStatus == "Completed")
{
// Perform an action when a condition is met.
await context.CallActivityAsync("SendAlert", machineId);
break;
}
// Orchestration sleeps until this time.
var nextCheck = context.CurrentUtcDateTime.AddSeconds(pollingInterval);
await context.CreateTimer(nextCheck, CancellationToken.None);
}
// Perform more work here, or let the orchestration end.
}
42. Orchestrator Function:
var outputs = new List<string>();
outputs.Add(await context.CallActivityAsync<string>("Hello", "MDC"));
return outputs;
[Diagram: the Orchestrator Function schedules the Activity Function, which returns "Hello MDC!"; the orchestrator's final output is ["Hello MDC!"], and each step is recorded in the Execution History]
History Table:
Orchestrator Started
Execution Started
Task Scheduled, Hello, "MDC"
Orchestrator Completed
Task Completed, "Hello MDC!"
Orchestrator Started
Execution Completed, ["Hello MDC!"]
Orchestrator Completed
43. public static async Task Counter([EntityTrigger(EntityClassName = "Counter")] IDurableEntityContext ctx)
{
int currentValue = ctx.GetState<int>();
int operand = ctx.GetInput<int>();
switch (ctx.OperationName)
{
case "add":
currentValue += operand;
break;
case "subtract":
currentValue -= operand;
break;
case "reset":
await SendResetNotificationAsync();
currentValue = 0;
break;
}
ctx.SetState(currentValue);
}
44. • Entities process one operation at a time
• An entity will be automatically created if it
does not yet exist
• Operations can be non-deterministic
• Entity functions can perform external calls
(preferably with async APIs)
• Entities can invoke other entities, but only
one-way communication
Developing entity functions
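The "one operation at a time" rule behaves like an actor mailbox. Here is a minimal plain-JavaScript sketch of that idea (this is an illustration, not the Durable Entities runtime; `EntityMailbox` is a made-up name): operations queue up and run strictly in arrival order, even when callers fire them concurrently.

```javascript
// Actor-style mailbox sketch: operations on one entity run one at a
// time, in order, even when callers send them concurrently.
class EntityMailbox {
  constructor(initialState) {
    this.state = initialState;
    this.tail = Promise.resolve(); // all operations chain off this tail
  }
  send(operation) {
    // Each operation waits for the previous one, so state changes are serial.
    this.tail = this.tail.then(async () => {
      this.state = await operation(this.state);
    });
    return this.tail;
  }
}

const counter = new EntityMailbox(0);
counter.send(async (n) => n + 5); // "add" 5
counter.send(async (n) => n - 2); // "subtract" 2
counter.send(async (n) => n).then(() => console.log(counter.state)); // 3
```

Because every operation chains on the previous operation's promise, two concurrent `send` calls can never interleave their reads and writes of `state`.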
47. Event-driven programming model with Kubernetes - KEDA
Dependency injection support for .NET
Extension bundles
Durable Functions stateful patterns
Streamlined Azure DevOps experience
New Serverless Library experience
Premium Functions hosting option
Support for PowerShell Core 6
https://aka.ms/FunctionsBuild2019
48. Title | Speakers | Code | Time
Serverless web apps with Blazor, Azure Functions, and Azure Storage | Jeff Hollan | THR2003 | Monday, May 6, 4:30 PM - 4:50 PM
Closing the key gaps of serverless with Azure Functions | Alex Karcher, Jeff Hollan | BRK3042 | Tuesday, May 7, 10:00 AM - 11:00 AM
6 things you need to know about serverless | Colby Tresness | THR3009 | Tuesday, May 7, 2:00 PM - 2:20 PM
Bring serverless apps to life with Azure SignalR Service | Anthony Chu | THR3008 | Tuesday, May 7, 4:00 PM - 4:20 PM
Where should I host my code? Choosing between Kubernetes, Containers, and Serverless | Jeff Hollan | THR2005 | Wednesday, May 8, 10:00 AM - 10:20 AM
Event-driven design patterns to enhance existing applications using Azure Functions | Daria Grigoriu, Eduardo Laureano | BRK3041 | Wednesday, May 8, 2:00 PM - 3:00 PM
The good, the bad and the ugly of Serverless | Burke Holland, Cecil Phillip | CFS2025 | Wednesday, May 8, 3:30 PM - 4:30 PM
Mixing Stateful and Serverless – workflow, orchestration, and actors | Matthew Henderson | THR3011 | Wednesday, May 8, 4:00 PM - 4:20 PM
Abstraction of servers, infrastructure, OS config
“Functions as a Service”
OS/framework patching
No need to manage infrastructure
Event-driven scale
Triggered by events within Azure or third-party services
React in near real time to events and triggers
Scales quickly, within seconds, and effectively limitlessly*
No scale configuration required
Sub-second billing
Pay only for the time your code is running
The time it takes to ready an instance when no instance yet exists
Varies greatly based on a number of factors, like language and number of files
Today for C# in v1 it is generally around 3-4 seconds, with 1-2 seconds in the on-deck release
Few angles of attack
Pre-warmed “workers”
Zipped artifacts without extract (Zip Deploy)
Keep alive (keep warm longer)
Users can help mitigate by:
Using Zip Deploy if possible (paired with something like funcpack for Node)
Use C# Class Libraries over .csx for large functions
If push comes to shove, a “pinger” can keep warm
Demo: a function with a blob container full of files I want to encrypt; it hasn't been called in the last twenty minutes
Call the Premium and Consumption versions
-- K8s is reactive: container CPU/memory
-- Fx is event-driven: ex. queue depth, Kafka stream length
-- partnered w/Red Hat
-- scale from 0-1000s of instances
-- containers consume events directly from source; no decoupling w/HTTP
-- routing through HTTP == data and context loss
-- extensible
-- Azure Functions are containerizable
-- deploy in-cloud or on-prem
-- Fx run integrated w/OpenShift [[WHAT IS THIS]]
-- adds event sources to K8s
-- we see most of our executions come from non-HTTP sources (HTTP only 30%)
-- Kafka, Azure Queues, Azure Service Bus, RabbitMQ, HTTP, and Azure Event Grid / Cloud Events. More triggers will continue to be added in the future including Azure Event Hubs, Storage, Cosmos DB, and Durable Functions.
-- runs alongside Virtual Kubelet and AKS Virtual Nodes [[ WHAT ARE THESE ]]
-- open source
-- webinar! May 28: aka.ms/keda-webinar
-- when to consider KEDA
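The queue-depth scaling idea in the notes above comes down to simple arithmetic: pick a target number of messages per replica and scale to match, capped at a maximum. A sketch of that calculation (the numbers are illustrative, not KEDA's actual defaults):

```javascript
// Event-driven scaling sketch: derive a replica count from queue depth
// rather than container CPU/memory. All numbers are illustrative.
function desiredReplicas(queueLength, messagesPerReplica, maxReplicas) {
  if (queueLength === 0) return 0; // scale to zero when idle
  const wanted = Math.ceil(queueLength / messagesPerReplica);
  return Math.min(wanted, maxReplicas); // respect the upper bound
}

console.log(desiredReplicas(0, 10, 100));    // 0
console.log(desiredReplicas(45, 10, 100));   // 5
console.log(desiredReplicas(5000, 10, 100)); // 100
```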
Imagine a scenario where I have to take the output of a Function and use it as the input to call another Function. I’ll have to coordinate the chaining manually.
If I have a function that takes some sort of event and then parallelizes it into multiple Functions, I can still do that, but how will I know when all of the Functions have finished executing so I can aggregate the results and move on?
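That fan-out/fan-in question can be sketched in plain JavaScript: `Promise.all` is the signal that every parallel branch has finished, so the aggregation step runs exactly once (`processItem` is a hypothetical worker standing in for a parallel function execution).

```javascript
// Fan-out/fan-in sketch: fan a batch of work items out to parallel
// workers, then aggregate once every one of them has completed.
async function processItem(item) {
  // Hypothetical worker; stands in for one parallel function execution.
  return item * 2;
}

async function fanOutFanIn(items) {
  // Fan out: start all workers in parallel.
  const pending = items.map(processItem);
  // Fan in: Promise.all resolves only when every worker is done.
  const results = await Promise.all(pending);
  // Aggregate the results and move on.
  return results.reduce((sum, r) => sum + r, 0);
}

fanOutFanIn([1, 2, 3]).then(console.log); // 12
```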
What if I had to listen on multiple events and aggregate their outcomes to determine which specific job or function to run in my application?
What if I wanted to do some kind of extended monitoring on an endpoint? For example, if I were to monitor the temperature of a remote machine and take action x if the temperature were lower than a certain threshold, else do y or run job y.
What if I have an API or endpoint that runs for a long time? I know Functions are short-lived, but sometimes you put some serious load on them. Could there be a mechanism to provide status of the execution back to the client so they're not left hanging?
And lastly, what if I wanted to get some sort of human interaction in there? For example, if I do some sort of 2FA in the middle of my function execution, I also don't want to wait forever, because sometimes people take a long time to reply, especially when the texts are automated.
Today I'm going to talk about some of these problems: how you can approach them in regular FaaS, and how they can be simplified with Durable Functions.
In the function chaining pattern, a sequence of functions executes in a specific order. In this pattern, the output of one function is applied to the input of another function.
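As a minimal sketch of the same chaining idea in plain JavaScript (`f1`, `f2`, and `f3` are hypothetical stand-ins for activity functions):

```javascript
// Function chaining sketch: the output of each step is the input to the next.
const f1 = async (x) => x + 1;
const f2 = async (x) => x * 2;
const f3 = async (x) => `result:${x}`;

async function chain(input) {
  const a = await f1(input);
  const b = await f2(a);
  return f3(b);
}

chain(1).then(console.log); // result:4
```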
The async HTTP APIs pattern addresses the problem of coordinating the state of long-running operations with external clients. A common way to implement this pattern is by having an HTTP call trigger the long-running action. Then, redirect the client to a status endpoint that the client polls to learn when the operation is finished.
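A toy version of that status-endpoint contract, with an in-memory job store standing in for the durable runtime (the 202 response and the `/runtime/status/{id}` URL shape mirror the pattern, but everything here, names included, is illustrative):

```javascript
// Async HTTP API sketch: starting a job returns 202 plus a status URL;
// the client polls that URL until the long-running work finishes.
const jobs = new Map();
let nextId = 0;

function startJob(work) {
  const id = String(nextId++);
  jobs.set(id, { status: 'Running', output: null });
  // Kick off the long-running work without waiting for it.
  work().then((output) => jobs.set(id, { status: 'Completed', output }));
  // A real HTTP trigger would return 202 with this status location.
  return { statusCode: 202, statusUrl: `/runtime/status/${id}`, id };
}

function getStatus(id) {
  return jobs.get(id); // the client polls this endpoint
}
```

A client would call `startJob`, get back 202 immediately, and poll `getStatus(id)` until `status` flips from `Running` to `Completed`.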
The monitor pattern refers to a flexible, recurring process in a workflow. An example is polling until specific conditions are met. You can use a regular timer trigger to address a basic scenario, such as a periodic cleanup job, but its interval is static and managing instance lifetimes becomes complex. You can use Durable Functions to create flexible recurrence intervals, manage task lifetimes, and create multiple monitor processes from a single orchestration.
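The same monitor loop in plain JavaScript, assuming a hypothetical `pollStatus` callback; a real orchestration would use durable timers rather than `setTimeout` so the process can unload between checks:

```javascript
// Monitor pattern sketch: poll a job until it completes or the deadline
// passes, sleeping between checks. pollStatus is a hypothetical stand-in.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function monitor(pollStatus, { intervalMs, deadline }) {
  while (Date.now() < deadline) {
    const status = await pollStatus();
    if (status === 'Completed') return 'alerted'; // condition met
    await sleep(intervalMs); // sleep until the next check
  }
  return 'expired'; // deadline passed without completion
}

let calls = 0;
monitor(async () => (++calls >= 3 ? 'Completed' : 'Running'),
        { intervalMs: 5, deadline: Date.now() + 1000 })
  .then(console.log); // alerted
```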
Many automated processes involve some kind of human interaction. Involving humans in an automated process is tricky because people aren't as highly available and as responsive as cloud services. An automated process might allow for this by using timeouts and compensation logic.
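The timeout-plus-compensation idea can be sketched with `Promise.race`: whichever settles first, the human's response or the timer, decides the path (the approval promise here is simulated).

```javascript
// Human-interaction sketch: wait for an approval, but fall back to
// compensation logic if no one responds within the timeout.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((resolve) => {
    timer = setTimeout(() => resolve({ timedOut: true }), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Simulated approval that arrives after 10 ms.
const approval = new Promise((resolve) =>
  setTimeout(() => resolve({ timedOut: false, approved: true }), 10));

withTimeout(approval, 1000).then((result) => {
  if (result.timedOut) {
    console.log('escalate'); // compensation path
  } else {
    console.log('approved'); // human responded in time
  }
});
```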
Grasping how orchestrators use execution history to replay and rebuild their local state is key to understanding how Durable Functions works, so let’s walk through the execution of a simple orchestrator function.
The light blue box at the top of the slide is the orchestrator’s code.
“SayHello” is our activity function, which returns “Hello” combined with whatever input you give it.
Our execution history is currently empty. As we start to record events, they’ll show up here. (Indicate area.)
1. A request is made to the orchestrator function.
2. The orchestrator starts and begins executing until it's asked to await some async work. In this case, we want to call an activity function.
3. The orchestrator checks the execution history for a record of the activity function.
4. There's no record of the activity function being called or completed, so the orchestrator schedules that work.
5. While the orchestrator waits on work to complete, it shuts down.
6. The scheduled activity function runs.
7. A record of this is added to the execution history. In this case we produced output, so that's stored.
8. Now the orchestrator has more work to do. It restarts and executes its code **from the beginning** to build up its local state.
9. As before, the orchestrator executes until it reaches an await.
10. The orchestrator checks the execution history. This time there's a record of the async work being done.
11. The activity function's stored output is passed back to the orchestrator. In this case, the value is added to a list of strings.
12. The orchestrator continues executing. In a more complex orchestrator with multiple await calls, the checkpoint, schedule, and replay steps would repeat for each one. This orchestrator runs to completion and returns its output.
13. And we're done!
- How does the framework know to wake up
The sixth pattern is about aggregating event data over a period of time into a single, addressable entity. In this pattern, the data being aggregated may come from multiple sources, may be delivered in batches, or may be scattered over long periods of time. The aggregator might need to take action on event data as it arrives, and external clients may need to query the aggregated data.
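A minimal in-memory sketch of that aggregator idea (an illustration, not the Durable Entities API): events from multiple sources, possibly in batches, fold into one addressable object that clients can query.

```javascript
// Aggregator pattern sketch: events arrive singly or in batches from
// multiple sources and are folded into one queryable entity.
class Aggregator {
  constructor() {
    this.total = 0;
    this.count = 0;
  }
  // Process one event at a time, as an entity would.
  add(value) {
    this.total += value;
    this.count += 1;
  }
  addBatch(values) {
    for (const v of values) this.add(v);
  }
  // External clients query the aggregated state.
  query() {
    return { total: this.total, count: this.count };
  }
}

const agg = new Aggregator();
agg.addBatch([3, 4]); // batch from one source
agg.add(5);           // single event from another source
console.log(agg.query()); // { total: 12, count: 3 }
```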