Azure Automation wants you to automate everything, everywhere. Hybrid Workers allow Azure Automation to reach new places within your infrastructure, allowing for more automation and less complexity. Learn how to deploy Hybrid Workers, balance automation workloads across groups of workers, trigger jobs via webhooks, monitor jobs, remove scheduled tasks and much more.
■Limited to specifying which Azure region
■Cannot be connected to Azure virtual networks
■No control over IP address
■Limited control over makeup of Azure worker
Azure Worker Limitations
■Runbooks running within your DC
■Uses OMS
■Supports script, workflow and graphical runbooks
■No inbound firewall requirements
Hybrid Workers
■Collections of workers
■Runbooks are executed against groups
■Ideal for providing HA
■Share “run as” permissions
Hybrid Worker Groups
■Module Deployment
■Execution context
■No simple file or event triggers
■No prioritization of workers in a group
■Documentation
■Troubleshooting can be a challenge
Hybrid Worker Limitations
■Start jobs from HTTPS requests
■Ideal for application and 3rd party integration
■Great for starting jobs if Azure cmdlets are not installed
■Runbooks may need modifications to run from webhooks
Webhooks
Hi everyone, my name is Kieran Jacobsen, and today I will be talking to you about Microsoft Azure Automation and running automation tasks within your data centre using Hybrid Workers.
I wanted to start by saying how excited and honoured I am to be talking with you all today, this is an amazing community event and we have some astounding speakers and sessions today. I really hope that you all are enjoying the sessions as much as I have been.
Now I want to thank our sponsors, and in particular I want to make a special shout out to my employer, Readify. I just want to quickly talk about Readify. Readify is a company full of amazing, brilliant people who work with our clients to deliver outstanding software with velocity and uncommon sense. We provide a number of services including Software Development, BI, Cloud, Office 365 and SharePoint Consulting. We also have an amazing Managed Services Team.
Readify is currently hiring; we have a number of roles available here in Melbourne and in other cities. If you are interested in what Readify can do to help you, or are interested in working for Readify, please feel free to come up and see me afterwards.
So just quickly, a bit about me. My name is Kieran Jacobsen. I moved from Brisbane to Melbourne about 18 months ago when I joined Readify, where I am the Technical Lead for Infrastructure and Security within the Managed Services team. In a few weeks I will have been in the industry for 10 years, specialising in infrastructure, security and automation, with a focus on the Microsoft product stack. I am particularly interested in the automation of infrastructure security operations.
I am the maintainer of the PoshSecurity blog, where I write about a variety of topics, and also the maintainer of a number of open source PowerShell modules.
So the plan for today's presentation is to start with an introduction to Azure Automation, some basic concepts and the limitations of the Azure worker. We will then look at Hybrid Workers, Hybrid Worker Groups and cover off their limitations.
I am going to finish off looking at webhooks, including a more real-world demonstration. We will look at a demo where we have a webhook triggering a job on a hybrid worker. That job will create an Active Directory user and notify a Slack channel of the user's details.
Let’s take a look at Azure Automation.
Now Azure Automation has been around for 2 years now, and you would think that at this point it would be in fairly common use. From my experience, and I was even a late adopter, there hasn't been wide adoption of Azure Automation.
Microsoft's goal was to provide a managed service for automation and scripting, with a big focus on simplifying the management of cloud systems. Microsoft focused heavily in the early days of Azure Automation on public cloud oriented tasks; this is unfortunately where a number of, in my opinion, poor design decisions crept in.
Azure Automation lives and breathes PowerShell. When it was first released, Automation only supported workflows, setting back its adoption considerably. Things changed last year with support for PowerShell scripts being introduced, along with a graphical runbook editor.
The reason I am a massive fan of Azure Automation is its availability. Azure is an extremely available platform, and Automation is built upon that availability. In more traditional automation or job scheduling platforms, one of the major risks was if the platform itself went down. The loss of our automation systems, even for a few minutes, could be utterly catastrophic, and even with clustering, HA and DR, it is always a scary risk. Moving our automation off to the cloud reduces these risks, they are not eliminated, just dramatically reduced.
Now there are just a few things we need to understand when we start looking at Azure Automation.
We start by creating an Azure Automation account, an account contains everything to complete your automation goals, everything you need to make it happen. You can have more than one Automation account, and I myself use separate accounts to segregate different environments. Consider having a production and development account at a minimum.
Runbooks are our automation processes or procedures; runbooks are the tasks we want to execute in a repeatable fashion. A great example of a runbook would be a process, a script, that creates a virtual machine in Azure. The script accepts a name for a virtual machine, and creates storage, disks and network connectivity, and even installs extensions. Another example might be a runbook that creates a user account, specifying display properties and configuring Exchange settings, Skype settings and even third party services.
Assets are reusable components that are shared across runbooks. Assets could be variables, or schedules specifying when our runbooks are to be run, they could be PowerShell modules, credentials, certificates for authentication, or connections. Variables can be strings, Boolean values, integers or datetime values.
The next thing you need to understand are jobs. When we start a runbook, specifying parameters if required, the result will be a job. Jobs are executed by workers, and jobs have a lifecycle or state. Jobs can either be new, queued, running, failed, stopped, suspended, or completed.
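As a sketch of that lifecycle, assuming the AzureRM.Automation module, an authenticated session, and my poshsecurity-aa names standing in for your own, starting a runbook and watching its job reach a terminal state looks something like this:

```powershell
# Start the runbook; the result is a job object with a JobId.
$job = Start-AzureRmAutomationRunbook -Name 'Get-MyFirstRunBook' `
    -ResourceGroupName 'poshsecurity-aa' -AutomationAccountName 'poshsecurity-aa'

# Poll until the job leaves the new/queued/running states.
do
{
    Start-Sleep -Seconds 5
    $job = Get-AzureRmAutomationJob -Id $job.JobId `
        -ResourceGroupName 'poshsecurity-aa' -AutomationAccountName 'poshsecurity-aa'
}
until ($job.Status -in 'Completed', 'Failed', 'Stopped', 'Suspended')

$job.Status
```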
The last element is workers. We often overlook what system is actually executing our automation tasks, and this is a folly that is often repeated in enterprise environments. Originally there was one type of worker: one that was fully managed by Azure, hosted independently and separate from our other Azure resources. If you are familiar with VSTS build and release, then think of the hosted agent.
So to begin with, let's take a look at Azure Automation. Switching to the Azure portal, we can see here I have an account; there are a number of items within the account, including runbooks, assets, hybrid worker groups, DSC configurations, DSC node configurations and DSC nodes. We can see some job statistics, including how many have completed, failed or were suspended.
Down at the bottom we can see that the source control of choice is GitHub, and we can see some details around the configuration of source control integration. Right now, Azure Automation only supports GitHub; there is a UserVoice item asking for VSTS support, so hopefully that is coming soon.
Drilling into runbooks, we can see some of those I will be running tonight. Notice the authoring status. When we create a runbook or make changes to it, they will not take effect until a runbook is published. This allows you to safely make changes whilst keeping your production environment functioning.
Whilst we are here, let's run our first runbook! Let's select Get-MyFirstRunBook; all that this one does is return a hello world message. Let's hit start. You can see I have the option to execute this on Azure or a hybrid worker; for now let's just run it on Azure. Whilst we wait for the job to complete, we will see it move through a few states: queued, running and then hopefully completed. I can also view a list of all of the jobs by selecting the jobs tile under details. Here I can see jobs as they are running, as well as go back and view previous jobs.
Now we can see that this has completed, and we can see the output of the runbook.
Now you might be starting to wonder, well, what is wrong with the Azure worker? It looks ok to me!
The issues arise from the fact that we get very limited control over the makeup of the worker and where it is connected.
When we create our Automation Account, we can select which Azure region it is to be created in. Note that right now, only 5 regions are available, East US 2, South Central US, Japan East, Southeast Asia and West Europe. That doesn’t sound like a huge amount of choice. Your choice of region will control where your Azure workers are located, so choose well.
There currently isn’t an option to attach a worker to a virtual network, so we don’t get any of the connectivity options offered, nor is there any option to specify a static or reserved IP address. There is a chance each job will be executed from a different worker, with a different IP address.
Now you might not think this is a problem, but consider this integration scenario. We need to integrate with a partner organisation who provides our HR system, and they restrict access to their API to authorised IP addresses. How do we achieve this? Without vnet connectivity or reserved IP addressing, our options look limited. Perhaps we could whitelist that specific Azure region's IP addresses? Microsoft does in fact provide a file containing all of the IPv4 subnets allocated to Azure. I have written a PowerShell module that assists with working with this file, allowing a specific region's addresses to be extracted. There is only one problem: the number of addresses is significant. Consider the Southeast Asia region, one of the smaller regions where Azure Automation is available; it consists of 67 separate ranges, with a total of about 239 thousand addresses! Somehow, I doubt whitelisting every IP address is going to be feasible.
For those of you wondering, there are over 1700 subnets allocated to Azure, or approximately 5.8 million distinct IP addresses.
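As a rough sketch of working with that file directly, assuming it has already been downloaded from the Microsoft Download Center, and that 'asiasoutheast' is the region name used inside the XML, extracting one region's subnets looks something like:

```powershell
# Load the Azure datacenter IP ranges file (saved locally as PublicIPs.xml).
[xml] $ranges = Get-Content -Path '.\PublicIPs.xml'

# Pull out a single region by name.
$region = $ranges.AzurePublicIpAddresses.Region |
    Where-Object { $_.Name -eq 'asiasoutheast' }

$region.IpRange.Subnet    # one CIDR range per entry
$region.IpRange.Count     # the 67 separate ranges mentioned above
```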
What about the makeup of an Azure worker? Microsoft controls almost everything here, from Operating System, patching and even PowerShell version. The only control we get is the ability to specify additional PowerShell modules, and from experience, that can be quite problematic. If you need something more complex than a PowerShell module, then you are out of luck.
Overall, these limitations can make it hard for Azure Automation to be adopted into larger, more complex enterprise environments.
Enter Hybrid Workers.
Hybrid workers allow us to develop more advanced runbooks than we could previously, allowing for runbooks to access resources within your network, integrate with 3rd party frameworks, and give us finer grained control over the execution environment. They solve many of the limitations with the Azure worker.
To make use of hybrid workers, you will need to implement the Operations Management Suite. Now I haven't tested whether hybrid workers will function if you are using OMS via the SCOM connector; however, I have read of this being possible. For my production environments, and even this presentation, they are directly attached. You will also need to install and configure the OMS Automation solution as well.
Hybrid workers support all three runbook types, and most importantly you don't need to open any inbound firewall ports; instead the worker agents connect out to Azure over HTTPS and monitor for jobs that they need to perform. I have taken a peek at the internals, and all of this is achieved via Azure Service Bus. I really do wish I could hook PowerShell into custom Azure Service Bus instances as well; if anyone has any neat solutions, please let me know.
Now Microsoft's documentation here refers a lot to resources within your local data centre; however, I see hybrid workers as being just as useful in IaaS situations as they are on premises.
Let’s take a look at hybrid workers.
So let's run our first job on a hybrid worker. For tonight's demonstrations, I have two Windows Server 2012 R2 servers; they are domain controllers for a domain called CORE.
Firstly, I am going to show you the OMS console. In the OMS console, you can see that I have the Automation solution added, and it is configured for my Azure Automation account, poshsecurity-aa.
Let's go back to the Azure portal. Whilst I have my hybrid workers already configured and running, if you wanted to set your own up, there are two values you need, and we get both of these from the Key icon here. We need to take a note of one of the access keys, and then the URL endpoint for our Azure Automation account. Adding a hybrid worker is as simple as calling Add-HybridRunbookWorker, specifying these two values and the name of the group to add the worker to. We will talk about the groups in a minute.
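The registration step itself looks roughly like this; the module path, version placeholder and parameter names here are from memory, so double check them against the documentation before running it:

```powershell
# Run on the machine that will become the hybrid worker, after the
# OMS agent and the Automation solution have been deployed to it.
cd "$env:ProgramFiles\Microsoft Monitoring Agent\Agent\AzureAutomation\<version>\HybridRegistration"
Import-Module .\HybridRegistration.psd1

# EndPoint and Token come from the Keys blade of the Automation account.
Add-HybridRunbookWorker -GroupName 'DomainControllers' `
    -EndPoint 'https://<your-account-endpoint>.azure-automation.net/accounts/<account-id>' `
    -Token '<primary access key>'
```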
Let's take a look at our group. If I go into Hybrid Worker Groups, we can see a single group. Digging into that, we can see there are two hybrid workers, DC01 and DC02.
Now on to running our first hybrid job. I am going to run a job called Get-Hostname. This runbook simply outputs the hostname of the worker it is running on. If we hit start on this runbook, we will be asked once again where we want to run this job; let's select hybrid and then our DomainControllers group. Now this is going to be queued up and then executed; once it is completed, let's look at the output. As you can see, the hostname of one of our workers, DC01 or DC02, is displayed.
Hybrid Worker Groups are collections of workers, a little bit like a server farm, that can complete our automation activities. There is no reason why we couldn't have multiple groups, each configured or placed in different places on our network. You might have one group set up that has access to your internal HR systems; another group might sit near your web server farm to perform activities there.
When a job is created, one, and only one, worker in the group that job has been assigned to will complete it. Don't think of groups as load balancing; whilst they will to an extent distribute the jobs, this isn't so much designed for load balancing and more designed for high availability. Now just to note, the failover isn't as smooth and as seamless as it could be. If a worker does fail, it may take some time for everything to work itself out. The main driver for worker groups is to ensure that we always have a worker available to complete our automation tasks. Workers in a group do not need to be in the same data centre; they could represent geographically dispersed systems at multiple locations for availability.
Workers run jobs under the same execution context, also called a run as account. No matter what runbook job is sent to the group, they are all executed as the same account.
This time, why don't we start a bunch of jobs and see what happens. I have some PowerShell code here that will spin up five jobs, and then read the output back for us.
for ($a = 1; $a -le 5; $a++)
{
    "Starting Job $a"
    $null = Start-AzureRmAutomationRunbook -Name 'Get-Hostname' -RunOn 'DomainControllers' -ResourceGroupName 'poshsecurity-aa' -AutomationAccountName 'poshsecurity-aa'
}

$Jobs = Get-AzureRmAutomationJob -RunbookName 'Get-Hostname' -ResourceGroupName 'poshsecurity-aa' -AutomationAccountName 'poshsecurity-aa' | Select-Object -First 5

foreach ($Job in $Jobs)
{
    (Get-AzureRmAutomationJobOutput -Id $Job.JobId -ResourceGroupName 'poshsecurity-aa' -AutomationAccountName 'poshsecurity-aa').Text
}
We should see that some ran on DC01 and others ran on DC02. Pretty neat, eh?
Now let's take a look at changing the account that these runbook jobs are running as.
So I have another runbook, Get-RunningUser, which will simply return as output the user account that we are running as. Let's run it and see what it returns. So let's select to run on the hybrid worker. And we can see that it returns that the runbook was running as NT AUTHORITY\SYSTEM.
Now before we change the account jobs will be run as, we need to ensure we have a credential asset defined with the appropriate settings. If I go to assets, and then credentials, you can see I have one called AutomationAccount. These are domain credentials that we want to use to run our jobs.
Now if I go back into the group settings, then select "hybrid worker group settings", as you can see, we have the run as selected as "default". Let's select custom; next we will be asked to select a credential, so select the AutomationAccount.
I am going to save, go back to runbooks, and run Get-RunningUser. And if we look at the output, we see that the account is core\azureautomation, which is the user it was configured for.
Who here is sick of all the jumping around in the portal yet? I know I am.
Unfortunately, all this comes with some limitations. Now most of these might not be a show stopper for you, they might not even be an issue, but it is still best that you are aware of them.
Modules are not automatically deployed to hybrid workers. Unlike with Azure workers, modules installed as assets will not be deployed automatically. Either script the prerequisite module install or use DSC; if you have come this far, why not use Azure Automation DSC?
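For the scripted approach, a one-liner per worker is often enough, assuming PowerShell 5's PowerShellGet is available on the worker and with SomeModule standing in for whatever your runbooks actually import:

```powershell
# Run once on each hybrid worker; SomeModule is a placeholder name.
Install-Module -Name SomeModule -Scope AllUsers -Force
```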
Execution context, as I mentioned earlier, is tied to the worker group. Now for most people, you probably don’t care about executing one runbook as a different user account than another. Thankfully there are some easy solutions to this one.
Now I for one would like to see file close triggers, and I would love it if the story of triggering from event logs was much simpler. You certainly can trigger jobs from Windows events, but it is a lot of work.
One thing that would be nice to see is weighting or prioritization within the worker groups. It would be nice to be able to say, run the runbooks here on this worker, unless it is dead. Each hybrid worker in a group has the same chance to perform the job as the others. Whilst this might not cause issues to most people, there are probably situations where this could be an issue.
Now documentation has been a big limitation, but has greatly improved over the past few weeks.
Azure Automation doesn’t have a strong logging model when it comes to the hybrid workers, there are some diagnostic logs and traces on each worker, but they are more oriented to assist Microsoft support than for administrators on their own. I am hoping that perhaps OMS may start to fill in the gaps here.
These days, one of the most common bits of automation buzzword and lingo is webhooks, and Azure Automation provides us with a method of triggering runbook execution from a single HTTPS request. Webhooks are great for 3rd party integration with things like VSTS, GitHub, Slack, SharePoint, Pushover etc.
You might want to trigger off jobs based on monitoring alerts, or specific events within a Slack chat room. Webhooks are very effective.
One thing I am looking to use webhooks for is providing a method to trigger automation without the overheads. Users don't need to have the Azure cmdlets installed, nor do they need to have large, complex scripts locally available. With a webhook, all they need is PowerShell and the webhook URL.
I have also made use of webhooks with SharePoint, allowing users to launch automation tasks from the click of a button or a workflow.
Now webhooks are not perfect. They don't integrate with the normal parameter mechanism that exists for runbooks. You are going to need to modify how your runbook is written to accept a specific webhook parameter object. This object will contain a bunch of information, including the name of the webhook that triggered the job, the request headers and the request body. You don't get any native support for handling objects in the webhook request; if you want to send data to the hook, you will need to convert it there and back. I recommend sending any data as a JSON formatted body and then converting back from JSON in your runbook.
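A minimal sketch of that pattern, with hypothetical property names, and with the two fragments living in the calling script and the runbook respectively:

```powershell
# --- Calling script: send the inputs as a JSON body ---
$body = @{ FirstName = 'Jane'; LastName = 'Citizen' } | ConvertTo-Json
Invoke-RestMethod -Uri $webhookUrl -Method Post -Body $body

# --- Runbook: webhooks hand the runbook a single WebhookData object ---
param ([object] $WebhookData)
$inputs = $WebhookData.RequestBody | ConvertFrom-Json
"Creating user $($inputs.FirstName) $($inputs.LastName)"
```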
I don’t want to scare you off web hooks, they are extremely powerful, and extremely useful in our automation life cycle.
So I am going to show you two demos on integrating with web hooks. We will start by creating our own webhook and calling it from PowerShell.
Firstly we go to the Runbook that we want to run, and select webhook. We then customize the settings, entering a name, expiry and make sure you copy the URL!!!
Now this runbook doesn't need parameters, but don't forget to select run on a hybrid worker. Then we hit create.
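If you prefer to avoid the portal, the same webhook can be created from PowerShell; the parameter names below are from the AzureRM module as I remember them, so verify them first. The URL is only returned at creation time, so capture it here:

```powershell
$webhook = New-AzureRmAutomationWebhook -Name 'Demo' -RunbookName 'Get-Hostname' `
    -IsEnabled $true -ExpiryTime (Get-Date).AddYears(1) -RunOn 'DomainControllers' `
    -ResourceGroupName 'poshsecurity-aa' -AutomationAccountName 'poshsecurity-aa' -Force

$webhook.WebhookURI   # store this somewhere safe; it cannot be retrieved later
```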
Now that it is created, we go to PowerShell and call Invoke-RestMethod with the URL we copied and the method POST. When this returns, it will give us the job ID for the job we just kicked off.
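That call is as simple as the following, with $webhookUrl holding the URL we copied earlier:

```powershell
# No body is needed for a parameterless runbook.
$response = Invoke-RestMethod -Uri $webhookUrl -Method Post
$response.JobIds      # an array containing the ID of the job that was queued
```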
And if I switch back to the Azure portal, we can see a job has been queued up and has been executed successfully.
Now let's look at a more interesting example. Let's take a look at it in the ISE. New-ADUser is a runbook that accepts data via a webhook, specifically a first name and last name. It will then create an Active Directory user based upon that information. After it creates the account, it is going to send me a message in Slack with the account's password.
I have also created a little function to kick the whole process off. So let's paste that into a PowerShell window, execute it, and then switch over to Slack.
And there we have the password, and if I look in AD, we can see the account has been created.
So some quick links. I will be posting up the slide deck on my site at poshsecurity.com.
The runbooks are all available up on github.
I have also included a good reference on hybrid workers and one on webhooks. The webhook article goes into a great amount of detail on setting up a webhook and handling objects being passed in.
The last link is for the Azure Automation Authoring Toolkit, often called the Azure Automation ISE plugin. This provides you with an ISE integrated environment for working on runbooks and assets. The toolkit can make working with Azure Automation much easier; now I only wish there was a similar plugin for Visual Studio Code.
So that is all for me today. I want to thank you all for listening to me today.
Does anyone have any questions?