Tag Archives: Automation

Subscribing to Teams Presence with Graph API using Power Platform

Since the support for getting Teams Presence via Microsoft Graph API came into public preview earlier this year, https://docs.microsoft.com/en-us/graph/api/presence-get?view=graph-rest-beta&tabs=http, many were seeing interesting scenarios for automating Microsoft 365 using this presence status. In fact many, myself included, created different solutions for setting light bulbs and similar to reflect the Teams presence: red lights if busy, green lights if available, and so on. I have this set up myself using Philips Hue lights, and have created a PowerApp using Flows to control my lights:

There has been one problem though: getting Teams Presence has only been possible by continuously querying the API to check if the status has changed. This can be scheduled of course, but running a query every minute or so seems like an excessive way to achieve the goal: being able to see if the Teams Presence has changed!

Until just a few days ago, that is, when I noticed that Teams Presence is now in preview for subscribing to change notifications in Microsoft Graph: https://docs.microsoft.com/en-us/graph/webhooks#supported-resources

In this blog post I will show how to create a subscription for change notifications for Teams Presence for a specified user, and the requirements for doing this. I will use Power Platform and Power Automate Flows to achieve this.

Let’s start with the requirements.

First of all, to be able to set up a change notification for a resource, you need the same API permission as the resource itself requires. For Teams Presence this is one of the following:

  • Presence.Read, for reading your own user’s presence status.
  • Presence.Read.All, for reading the presence status of all users in your organization.

Currently, in the beta endpoint, getting presence is only supported for delegated permissions using a work account, so you will have to do this with a logged-in user.

The other important requirement is that you need a webhook uri where the change notification can send a POST request for the resource that has changed. This webhook uri can be on your preferred platform; many are using Azure Functions for this, but it can also be a third-party platform. I will do it using Power Automate; another alternative is Azure Logic Apps.

This webhook uri needs to be able to:

  • Process the initial creation of the change notification subscription. Microsoft Graph expects a text response with a validation token for it to be successful.
  • Process every change to the resource that you subscribe to, in this example a Teams Users Presence Status.

We will get into the details on this, but for now, we will start exploring using Graph Explorer.

Explore Presence and Subscriptions with Graph Explorer

Go to the Microsoft Graph Docs site and find Graph Explorer there, or just go to https://aka.ms/ge, and log in with your work account.

Type the following GET query: https://graph.microsoft.com/beta/me/presence

You need to be a Teams user to run this, but if permissions are consented, you should be able to see your own presence status like my example below. If you are getting a permission error, make sure that one of the permissions Presence.Read or Presence.Read.All is consented. This may already have been done by your administrator; if not, you need to consent to them yourself.
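
For reference, a presence response contains an id plus availability and activity values, roughly like this trimmed sample I have put together (not a copy from my tenant):

{
  "id": "<userid>",
  "availability": "Available",
  "activity": "Available"
}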

Great, let’s check another user’s presence status. PS! The permissions don’t require admin consent, and using delegated permissions every user can check other users’ presence status, but you need Presence.Read.All to do that.

You need the other user’s id property, which you can look up first using for example: https://graph.microsoft.com/v1.0/users/<someotherusersUpn>?$select=id

And then query for the other user’s presence: https://graph.microsoft.com/beta/users/<userid>/presence

Another way to query for your own or other users’ presences, which is more aligned with the change notifications, is to use the /beta/communications/presences endpoint: https://graph.microsoft.com/beta/communications/presences/<userid>

If you can run all these queries successfully, then you are ready to proceed to change notifications subscriptions.

You can check if you have any active subscriptions for change notifications using a GET query to https://graph.microsoft.com/v1.0/subscriptions or https://graph.microsoft.com/beta/subscriptions, depending on whether you are working with preview resources. If you have no active subscriptions this should return a successful but empty response like below:
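
The empty response should look roughly like this:

{
  "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#subscriptions",
  "value": []
}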

The next part would be to create a subscription for change notifications for presence, but first we need to prepare the Flow for receiving the webhook uri.

Create a Power Automate Flow for Change Notification Webhook

Log in with your work account to flow.microsoft.com, and create a new flow of type automated and blank; skip the first part of using pre-selected triggers. Search for the Request trigger that is called “When a HTTP request is received”:

Next add an Initialize Variable action for setting the ClientState, which we will use later when creating the subscription. Set the variable to anything you want, give the Flow a name and then save. When saved, the HTTP POST URL will show the value you will later use as the webhook uri for the change notifications:

The next part is the most important one, and it requires a little knowledge of the creation process for change notification subscriptions. This is documented here https://docs.microsoft.com/en-us/graph/webhooks#managing-subscriptions, but the high-level steps are:

  1. When the Subscription is created, a POST request is sent with Content-Type text/plain and a query parameter that contains a validationToken. Microsoft Graph expects a 200 OK response and a text/plain body that contains the validationToken to be able to successfully create the subscription.
  2. Immediately after creation, and for every change notification, a POST request is sent with Content-Type application/json. Microsoft Graph expects a 202 Accepted response to this; otherwise it will keep retrying the same change notification. You should also check the ClientState for verification, and of course include your own business logic for what to do in the Flow when changes occur (see the sketch after this list).
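
To make these two steps concrete outside of the Flow designer, here is a rough, hypothetical sketch of the same webhook logic written as an Azure Functions PowerShell HTTP trigger (the Flow I build below does the equivalent; the clientState value is just the placeholder from my example):

using namespace System.Net
param($Request, $TriggerMetadata)

# The clientState you set when creating the subscription (placeholder value)
$expectedClientState = "MyGraphExplorerSecretClientState"

if ($Request.Query.validationToken) {
    # Step 1: subscription validation - echo the token back as text/plain with 200 OK
    Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
        StatusCode = [HttpStatusCode]::OK
        Headers    = @{ "Content-Type" = "text/plain" }
        Body       = $Request.Query.validationToken
    })
}
else {
    # Step 2: change notification - acknowledge with 202 Accepted, then process the payload
    Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
        StatusCode = [HttpStatusCode]::Accepted
    })
    foreach ($notification in $Request.Body.value) {
        if ($notification.clientState -eq $expectedClientState) {
            # Your own business logic goes here, e.g. reacting to
            # $notification.resourceData.availability or .activity
        }
    }
}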

So your Flow needs to address these requirements. There can be different ways to achieve this, but this is how I did it in my setup:

After the initialize variable, I created a Switch control action, using Content-Type from the HTTP Request trigger. In this way I can separate the logic in my flow for the initial creation and validation, and for the updates.

Validating the Subscription

If the Content-Type is text/plain; charset=utf-8, I will try to get the validation token sent by Microsoft Graph, and create a response that returns status 200 OK and the validation token back in a text/plain body.

I used the Compose data operation, and a custom expression that retrieves the validationToken from the query parameters. The expression I use is: @triggerOutputs()['queries']['validationToken'].

The Request Response action returns a status code of 200, sets the Content-Type to text/plain, and for Body selects the output from the Compose Validation Token action.

The above steps should be sufficient for creating the subscription. It is recommended to validate the ClientState as well; in my case I have done that in the next step.

Processing and Responding to Change Notifications

The change notifications will come with Content-Type “application/json; charset=utf-8”, so I add a case for that under my Switch. Microsoft Graph expects a 202 Accepted response, and it is recommended to respond early in your flow, so that errors in later actions or in your own business logic do not prevent this response.

In the Response action I just return 202 Accepted. I have then added a Parse Json action to make it easier to reference the values later in the flow:

You can generate the schema from an existing sample, from the Microsoft Graph docs, referred above, or you can just use the following sample from me here:

{
  "value": [
    {
      "subscriptionId": "<subscriptionid>",
      "clientState": "MyGraphExplorerSecretClientState",
      "changeType": "updated",
      "resource": "communications/presences('<userid>')",
      "subscriptionExpirationDateTime": "2020-07-11T20:04:40.9743268+00:00",
      "resourceData": {
        "@odata.type": "#Microsoft.Graph.presence",
        "@odata.id": "communications/presences('<userid>')",
        "id": null,
        "activity": "Away",
        "availability": "Away"
      },
      "tenantId": "<tenantid>"
    }
  ]
}

Now I can begin my business logic. As the value from the notification body is of type array, an Apply to each block will be needed; it will be added automatically if you select any of the attributes in the following actions. I will first verify that the clientState matches the variable I defined in the beginning of the flow. This is a safety step to ensure that only the Graph subscription I created can call this Flow.

In the next part I have added a new Switch control, where I retrieve the presence status that has changed.

And this is where you add your own business logic, so I will stop here. In my own example (see the PowerApp for Hue Lights above) I will control my lights based on presence updates, but that is a theme for an upcoming blog post.

With the Flow ready, we can now create the Subscription.

Creating Microsoft Graph Change Notification for Teams Presence

I will now create the Subscription in Graph Explorer, using a POST request to https://graph.microsoft.com/beta/subscriptions. In this POST request you will need a request body that contains the following:
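
As a sketch with placeholder values (the notificationUrl is the HTTP POST URL from the Flow trigger, and the clientState must match the variable set in the Flow), the body could look something like this:

{
  "changeType": "updated",
  "notificationUrl": "<HTTP POST URL from the Flow trigger>",
  "resource": "/communications/presences/<userid>",
  "expirationDateTime": "2020-07-11T20:00:00.0000000+00:00",
  "clientState": "MyGraphExplorerSecretClientState"
}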

Here is a quick explanation of the above values:

  • changeType: can be “updated”, “created”, “deleted”. Only “updated” is relevant for presence changes.
  • clientState: For verification in the webhook that the call is coming from the expected source, consider this a sensitive value.
  • notificationUrl: This is the webhook uri, and for my Flow this will be HTTP POST URL shown earlier.
  • resource: This is the user I want to get presence updates from. Note that you can get multiple user presences also using /communications/presences?$filter=id in ({id},{id}…) (https://docs.microsoft.com/en-us/graph/api/resources/webhooks?view=graph-rest-beta)
  • expirationDateTime: Every subscription only lasts for a specific time depending on the resource type, and needs to be renewed before expiry. Teams Presence subscriptions only last one hour, the same as chatMessage. See details for the different types of resources: https://docs.microsoft.com/en-us/graph/api/resources/subscription?view=graph-rest-beta#properties. If you specify an expiry time larger than supported, it will be reset to the one hour maximum.

Let’s run this query to create the subscription. If everything works, I get a 201 response confirming that the subscription is created. Note the expiration time, which is one hour from now:

This wouldn’t have worked if my Flow had failed, so let’s look into its run history. I see that I have 2 recent successful runs:

Let’s look into the first one. I see that the request is coming in as text/plain with a validationToken, which I then return to Microsoft Graph:

Let’s look into the second run. This returns the first presence update, I can see that the user is “away”:

Now, let’s test this. With my user I go to the Teams client; it should automatically change my presence from away to available or busy depending on my calendar, or I can set the status manually myself:

Checking the Flow runs again, I can indeed see that the Flow has been triggered via the Change Notification:

If I look into the details, the status update is Available:

If I bring up the Flow in a bigger picture I can indeed see that my logic ends up where the switch case checks for available status:

My Flow will now be triggered for every change as long as the subscription doesn’t expire.

Summary

In this blog post I have shown how you can create a Microsoft Graph subscription for change notifications for a selected user’s Teams Presence, using Power Automate to validate the subscription and process the changes.

There is more work to do, as I need some logic for renewing and managing my subscriptions. But that will be the topic for the next blog post, coming soon!

Thanks for reading, hope it has been useful 🙂

Remote Authentication and Controlling Philips Hue API using Postman

A while back I bought my first starter kit of Philips Hue lights and bridge, and since then I’ve bought several more bulbs, lamps, light strips and more. One of the things I quickly got interested in was how to use the very well developed Hue API (https://developers.meethue.com/develop/hue-api/). You need to register and log in with a Hue Developer Account, which is free, to be able to explore the API documentation and to follow the steps I describe in this blog post.

If you want a very quick look at the Hue API, you can start with this link, which does not require a Hue Developer Account: https://developers.meethue.com/develop/get-started-2/. If you are familiar with Postman for exploring APIs, which may have brought you to this blog post to begin with, you should read on, as I will describe and share some of the work I have been doing.

When I started exploring the API, I did what most people do: I searched online for tips and how-to’s. I found the following useful Postman collection on GitHub: https://github.com/moldedbits/philips-hue-postman. So I downloaded that collection, configured my Postman environment with the IP address of the bridge, and pushed the link button to create a username (all described in the get started link above), and I was good to go!

However, this only lets me explore the API locally, requiring me to run the commands from a local device. This is where I should let you know my real reason for wanting to use the Hue API, as I work with Microsoft and cloud solutions daily: I wanted to look into scenarios where I can manipulate my Hue lights based on events from the Microsoft platform. It could be updating the lights to reflect my presence in Teams, blinking the lights if some alert happens, and so on. I want to be able to use Azure Functions, Power Platform, PowerShell etc. for controlling my Hue API.

To be able to do that, I need to be able to run Hue API commands remotely, and luckily the Hue API supports that as well: https://developers.meethue.com/develop/hue-api/remote-api-quick-start-guide/.

Before I go into the details of how I set up remote authentication and control of my Hue API using Postman, I first want to share a proof and sneak peek of a coming blog post, showing that I have succeeded in what I wanted: I can now authenticate and control my lights using, for example, Power Platform:

It was really the foundation of what I learned using Postman to do remote authentication to the Hue API that enabled me to create the solution above, so let’s get into the details.

Create a New Remote Hue API app

The first thing you need to do after you have created a Hue Developer account is to create a new Remote Hue API app here: https://developers.meethue.com/my-apps/.

You need to specify the following required fields:

  • App name: A display name for your remote app
  • Callback URL: This is needed for the Oauth2 consent and returning an authorization code. Specify https://app.getpostman.com/oauth2/callback here.
  • Application description: A short description for your remote app.
  • Optionally you can specify contact details.

After submitting, your new app is ready, and an AppId, ClientId and ClientSecret will be provided, for example:

You are now ready to start setting up your Postman client. If you don’t have Postman installed you can get it at https://www.postman.com/downloads/.

Create Postman Environment and Collection

In the Postman client I have saved all my variables and requests against the Hue API in an environment and collection. At the end of this blog post I will share a link to a GitHub repository where you can download these samples to start with yourself.

As I will be using several variables with my Postman queries against the Hue API, I’ve created an environment like the following:

Most of these variables are empty to begin with, but you can start by filling out:

  • ClientId: From the app registration at your Hue Developer Account.
  • ClientSecret: From the app registration.
  • AppId: From the app registration.
  • DeviceId: This is the id of the client you will be running queries from. It could be anything like the name of the computer, or the environment you are running from. I just chose my surname and postman.
  • DeviceName: Any display name for the device you run the queries from.

The rest of the variables will be empty for now, but they will be calculated and updated based on responses later.

Also, since collections and variables can be synced and shared with other users or devices, I keep the sensitive variables like client id and secret only in the current value column.

After creating my environment, I’ve created a collection with folders for my requests, for example like the following:

We can now proceed to start with Remote Authentication to Hue API.

Remote Authentication to Hue API

The Hue Remote API uses the OAuth2 authorization code flow to get an access token that has to be included in the HTTP Authorization header of every request to the API. We will use the app registration we created earlier to authenticate with the developer account and grant application access, and with the resulting authorization code we can request the access token to be used in the requests.

The documentation for Remote API is here: https://developers.meethue.com/develop/hue-api/remote-authentication/, and I will walk through the following requests in my Postman Collection:

Get Authorization Code

To get an authorization code you need to construct a URL like the following:

https://api.meethue.com/oauth2/auth?clientid=ClientId&response_type=code&state=anystring&appid=AppId&deviceid=DeviceId&devicename=DeviceName

You can use Postman to construct the Url like the following, but you cannot really run the query from here as the Postman browser does not support the response:

You can use the code view on the right to get a preview of the request, then copy it from there and paste it into your browser:

When you execute the authorization code url in your browser, you will be asked to log in with your Hue Developer account, if you aren’t already, and then you need to grant permission to your app registration:

After successfully granting permission, you will be redirected to the Callback Url you specified earlier, and from there you can get the authorization code:

Copy that code and paste it in the environment variable for AuthorizationCode as shown below:

Now that we have the Authorization Code, we can proceed requesting the Access Token. There are two methods for getting an access token:

  • Basic Authentication, which sends your client id and secret as a base64-encoded string in the authorization header.
  • Digest Authentication, which uses a challenge-response handshake that handles the credentials more securely.

See documentation for more details on whether to use basic or digest. I will show both in the following.

Get Access Token with Basic Authentication

To get an access token with basic authentication, create a POST request to the token endpoint, specifying the following query parameters:

  • code=authorization code you received in the previous step, using the environment variable
  • grant_type=authorization_code

In addition to that you need to set the Authorization header to type Basic Auth, and specify the ClientId and ClientSecret as shown below:

Postman has a great feature called tests that you can use to run JavaScript commands before (pre-request) or after queries. You can use this to save response details and update your environment variables. In my example I’m saving the response, which includes the access and refresh tokens, and updating my environment variables with that:
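
A sketch of the test script could look like this (assuming the response body uses the standard OAuth2 field names access_token and refresh_token, and that the environment variable names are the ones from my environment):

// Parse the JSON response from the token endpoint
var jsonData = pm.response.json();

// Save the tokens to the environment for use in later requests
pm.environment.set("AccessToken", jsonData.access_token);
pm.environment.set("RefreshToken", jsonData.refresh_token);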

If I now run this request (and you haven’t waited too long, letting the authorization code expire), you should get a successful response:

These tokens expire after the specified number of seconds. For the Access Token this is 168 hours = 7 days. The Refresh Token expires after 112 days.

This means that after 7 days, you will need to use the refresh token to get a new access token. And when you do that, you will get a new access token valid for 7 more days, and a new refresh token again valid for another 112 days. This means in theory that as long as you refresh the access token regularly, this can last “forever” for your client.

You can also verify that the environment variables for access token and refresh token have been updated by the JavaScript from above:

Refresh Token with Basic Authentication

If your access token has expired, you will get a response like the following:

To refresh the access token, you need to use the oauth2/refresh endpoint with the query parameter grant_type=refresh_token:

As when getting the access token with basic authentication, when refreshing you also need to specify the authorization header to use basic auth, with client id and secret as username and password:

In Headers, specify that the Content-Type is application/x-www-form-urlencoded:

And in the Body of the request, specify the environment variable for refresh token as form parameter like this:

Under Tests I have the same javascript as shown earlier, which lets me save the refreshed access token and refresh token to my environment again. When I run the request to refresh the access token, I should get a successful response like the following:

Now that I have an access token and refresh token, I’m all set for doing the necessary remote config for getting a username to use in the requests, but first I will show how to do Digest Authentication. If you want you can skip right to the Remote Configure Link Button and Create User section below.

Get Access Token with Digest Authentication

When you want to get an access token using Digest Authentication, you will need to start over again with getting an authorization code as described earlier. After that I request an access token with a POST request to the oauth2/token endpoint with the query parameters code=<authorization code> and grant_type=authorization_code. This is basically the same request as I used with basic authentication, but with one important difference: I will not specify the Authorization header to be basic auth.

When I run this request without an authorization header, I will get a 401 Unauthorized response like the following. In the response header, the WWW-Authenticate key contains the first part of the challenge-response needed for Digest Authentication, the value called Nonce:

As with previous requests, I run a JavaScript test that extracts the response; in this case it is a little more complicated, as I need to examine the response header and split it so that I can get the Nonce value. This is then saved to the Nonce environment variable like this:

Now that I have the Nonce I can proceed requesting an access token with Digest Authentication. This time I go to the request Headers and add an Authorization Header with a value as shown below:

This requires a little more explanation. The Authorization header consists of the following:

Digest username="{{ClientId}}", realm="oauth2_client@api.meethue.com", nonce="{{Nonce}}", uri="/oauth2/token", response="{{ResponseMD5Hash}}"

Here, Nonce is the value I retrieved from the WWW-Authenticate response header in the first request, and response is a calculated MD5 hash as explained in the documentation linked earlier from the Hue Remote API. But how did I get this value calculated? This time around I’m using a pre-request script to calculate that, based on the formula from the documentation (marked in the green comment below):
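
A sketch of such a pre-request script could look like this (a standard RFC 2617 digest calculation without qop; CryptoJS is available in the Postman sandbox, and the variable names are the ones from my environment):

// response = MD5( MD5(clientid:realm:clientsecret) : nonce : MD5(verb:path) )
var clientId = pm.environment.get("ClientId");
var clientSecret = pm.environment.get("ClientSecret");
var nonce = pm.environment.get("Nonce");
var realm = "oauth2_client@api.meethue.com";

var ha1 = CryptoJS.MD5(clientId + ":" + realm + ":" + clientSecret).toString();
var ha2 = CryptoJS.MD5("POST:/oauth2/token").toString();
var response = CryptoJS.MD5(ha1 + ":" + nonce + ":" + ha2).toString();

// Save the calculated hash for use in the Authorization header
pm.environment.set("ResponseMD5Hash", response);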

So this little pre-request script calculates and sets the ResponseMD5Hash environment variable, so that it is available when constructing the Authorization header for Digest authentication.

Before I proceed with requesting an access token using Digest Authentication, I need to put together a test script that does 2 things:

  1. Check if there is a WWW-Authenticate response header, and if so save the Nonce to environment variable.
  2. Check if there is a successful json response and if so save the access and refresh tokens to environment variables.

My script now looks like this:
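
In sketch form, it could look roughly like this (the exact header parsing may need adjusting to the actual response):

// 1. If there is a WWW-Authenticate header (401), extract and save the Nonce
var authHeader = pm.response.headers.get("WWW-Authenticate");
if (authHeader) {
    var nonce = authHeader.split('nonce="')[1].split('"')[0];
    pm.environment.set("Nonce", nonce);
}

// 2. If the response is a successful JSON response, save the tokens
if (pm.response.code === 200) {
    var jsonData = pm.response.json();
    pm.environment.set("AccessToken", jsonData.access_token);
    pm.environment.set("RefreshToken", jsonData.refresh_token);
}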

This means that I will need to run the request twice in succession. The first time I will get a 401 Unauthorized response, in which I will get the Nonce:

The second time I will get the 200 OK and a JSON response:

I now have an access token using Digest Authentication:

Refresh Token with Digest Authentication

When the Access Token expire, I will need to use the Refresh Token to get a new Access Token with Digest Authentication. Refreshing using Digest is based on the same challenge-response flow as getting the access token.

First I need to construct a POST request to the oauth2/refresh endpoint like below:

Then I have to add the header Content-Type: application/x-www-form-urlencoded, and the Authorization using Digest as shown below:

This Digest Authorization header is the same as when requesting the access token, with one important difference: the uri is “/oauth2/refresh“.

Next, in the Body I need to specify the refresh token:

In the Pre-request Script, make sure to use “/oauth2/refresh” when calculating the md5 hash:

And the Tests scripts are the same as with getting an access token:

Remember that refreshing the access token using Digest will follow the same pattern as getting the access token, you will need to run the refresh request twice in succession for getting a 401 Unauthorized and then 200 OK like shown above.

Get Access Token with Postman Oauth2 Authorization

With the collection of requests shown above, you should now be able to use the access token, and refresh it when needed, for as long as you need to. Before I proceed to configuration and accessing my lights, I will add one more method of getting an access token with Postman: using the built-in OAuth2 authorization method.

Let’s start by creating the following GET request; this is just a simple request to get configuration from my Hue Bridge:

Next, go to Authorization, select Oauth 2.0 and add Authorization data to Request Headers as shown below. Then click on the Get New Access Token button:

In the following dialog, give your token a name, and make sure that the grant type is Authorization Code. Select Authorize using browser, and note the Callback URL; you need to copy this URL. Construct an Auth URL using your environment variables like I have shown below, specify the Access Token URL and the ClientId and ClientSecret variables, type any string for state, and make sure that Client Authentication is Basic Auth header:

Now, before you click on Request Token, you need to go back to your Hue Developer Account and change the Callback URL to the one above:

When you have done that, you can click on Request Token, which will launch your default browser:

You need to be logged in with your Hue Developer account and grant permission:

After that you will get the following message when redirected:

You might have some issues with pop up blocker or redirects:

If everything works as expected, your Postman client will receive the callback and complete the authentication:

And you should receive your Access Token:

Note the little warning about partial support:

Click the Use Token button, which should fill in the Access Token field. Now click Send on the request:

This should now respond with some config details of your Hue Bridge, like the name and versions. If you go to Headers and show the hidden headers, you will see that the Authorization header has been set with Bearer and your Access Token.

So going forward, and for the rest of this blog post, you now have two options for using Access Token:

  1. Use the Access Token from environment variables that I have shown how you can retrieve with Basic or Digest Authentication, and Refresh when needed.
  2. Use the OAuth2 authorization for every request (or set it on the parent level of your collection of requests). When the Access Token expires, it won’t refresh automatically, so you need to request a new one.

Remote Configure Link Button and Create User

Now, for the next part, we will have to remotely configure the link button, so that the bridge will accept creating a new user that we can use to control the lights.

Enable Link Button

Users of Hue will remember that from time to time you need to push the link button on your bridge when adding new users. With the Remote API we can do this without having to run around the house 😉

Create a PUT request like the following. On Headers, specify Authorization with the value Bearer {{AccessToken}}, which will get the environment variable from the previously retrieved access token. PS! For all the following requests: always include this Authorization: Bearer {{AccessToken}} header!

Specify Content-Type: application/json:

Next, set the Body to {"linkbutton": true}:

And then Send the Request, you should get a success response like below:

You now have 30 seconds to add your user..

Add Whitelist Identifier

Next, create a POST request to the Bridge endpoint that specifies devicetype in the Body, as shown below, where devicetype is a named description of your client and app, for example:
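
For example, a body roughly like this (the value itself is just a descriptive name I chose):

{"devicetype": "hueremote#postman"}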

Run the request, but if 30 seconds have passed, you will get this response:

In that case, just run the PUT request again to enable the link button, and try again. When successful you should get a response like the following:
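
The success response contains the generated username, in a format roughly like this (the actual username is a long generated string):

[
  {
    "success": {
      "username": "<generated username>"
    }
  }
]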

This username has to be included in every Hue Remote API request later, so make sure to save it in your environment variables:

Get All Configuration

Your starting point for all requests is now https://api.meethue.com/bridge/{{username}}/ (and remember to always include the Authorization: Bearer {{AccessToken}} header).

If I want to list all configuration, I can run this GET request:

Which will provide me with all the configuration details of my Bridge:

We are now ready to start controlling the Lights and more!

Control Lights Remotely

I will now show you some sample requests for controlling lights via the Hue Remote API:

List All Lights

If I want to list all my lights, I can just run a GET request: https://api.meethue.com/bridge/{{username}}/lights/:

The response will show all lights, numbered from 1 to count of lights connected to your bridge. I’ve collapsed some details in the response above, but you can see from my light number 12 that the state is “on”, you can see the brightness, color and more.

PS! Every light is a new typed object, and not an array, which I find a bit unfortunate, so you need to be aware of that if you want to loop through your lights for example.

If you want to get a specific light, just add the light number at the end of the GET request URI, like this: https://api.meethue.com/bridge/{{username}}/lights/12

Turn a Light On/Off

You can control the state of a specific light by doing a PUT request to your specified light number, for example light number 12: https://api.meethue.com/bridge/{{username}}/lights/12/state

You need a request body that tells what the light state should be. For example, to turn the light off, just include a body with {"on": false}; the response will confirm success (and you should be able to visually confirm as well ;):

Likewise, to turn on again:

Controlling Light State and Colors

Now you can really start playing with your lights with custom states, you can change brightness, go from cold white to warm white, or maybe set a color scene for the millions of colors available if your light supports color. More info can be found here:

https://developers.meethue.com/develop/get-started-2/core-concepts/#controlling-light

Light color is primarily controlled by the CIE color space, where you specify an array of two values between 0 and 1, expressed as x and y.

Source: https://developers.meethue.com/develop/get-started-2/core-concepts/#controlling-light

Here is an example where I both control brightness (1-254) and a custom color:
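
For illustration, the body of such a PUT request to the light’s state endpoint could look roughly like this, setting around 80% brightness and a red-ish color (the xy values are just an example):

{"on": true, "bri": 200, "xy": [0.675, 0.322]}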

Now, you can just experiment and learn for yourself. A good tip is to use your Hue App on your mobile phone to set the color you like, and then use GET to get the light state color settings.

Summary and Github Repo

I hope you have learnt a lot about the Hue Remote API reading this; I certainly learnt a lot exploring and documenting it via Postman. This understanding has provided me with the knowledge and tools to look into automation and integration scenarios with the Microsoft solutions I use daily in my job. I’ve already been doing this with PowerShell, Azure Functions, Logic Apps and Power Automate, to name a few. There is a blog post coming soon on Power Platform and how I used it to control my lights based on Teams presence and more.

So whether you are a developer that loves to write code, or maybe you are more into the low-code, no-code solutions, I hope this has been helpful.

I promised I will share my Postman Collection and Requests for Hue Remote API, so here it is: https://github.com/JanVidarElven/HueRemoteAPI-Postman-Collections

Knock yourself (or your lights) out, and please feel free to comment or contribute 🙂

Schedule Tesla Heating with Microsoft Flow from Calendar @microsoftflow

My Skill colleague Pål-Andre Kjøniksen wrote a blog post on how he can control the heating in his Tesla Model S based on his Office 365 Calendar appointments and Microsoft Flow.

https://www.skill.no/blogg/samhandlingsbloggen/2017-01-25-microsoft-flow-styrer-varmen-i-teslaen/.

The blog post is in Norwegian, but it should be fairly simple to follow the steps, where the idea was that if he creates a Calendar appointment that starts with “Tesla:”, the heating will start up 20 minutes before that, making sure that the car is warm when he drives off.

Displaying Azure Automation Runbook Stats in OMS via Performance Collection and Operations Manager

Wouldn’t it be great to get some more information about your Azure Automation Runbooks in the Operations Management Suite Portal? That’s a rhetorical question, of course the answer will be yes!

While Azure Automation is a part of the suite of components in OMS, today you only get the following information from the Azure Automation blade:

The blade shows the number of runbooks and jobs from the one Automation Account you have configured. You can only configure one Automation Account at a time, and for getting more details you are directed to the Azure Portal.

I wanted to use my OMS-connected Operations Manager Management Group, and use a PowerShell script rule to get some more statistics for Azure Automation and display that in OMS Log Analytics as Performance Data. I will do this using the “Sample Management Pack – Wizard to Create PowerShell script Collection Rules” described in this blog article http://blogs.msdn.com/b/wei_out_there_with_system_center/archive/2015/09/29/oms-collecting-nrt-performance-data-from-an-opsmgr-powershell-script-collection-rule-created-from-a-wizard.aspx.

I will use the AzureRM PowerShell Module for the PowerShell commands that will connect to my Azure subscription and get the Azure Automation Runbooks data.

Getting Ready

Before I can create the PowerShell script rule for getting the Azure Automation data, I have to do some preparations first. This includes:

  1. Importing the “Sample Management Pack – Wizard to Create PowerShell script Collection Rules” to my Operations Manager environment.
    1. This can be downloaded from Technet Gallery at https://gallery.technet.microsoft.com/Sample-Management-Pack-e48040f7.
  2. Install the AzureRM PowerShell Module (at the chosen Target server for the PowerShell Script Rule).
    1. I chose to install it from the PowerShell Gallery using the instructions here: https://azure.microsoft.com/en-us/documentation/articles/powershell-install-configure/
    2. If you are running Windows Server 2012 R2, which I am, follow the instructions here to support the PowerShellGet module, https://www.powershellgallery.com/GettingStarted?section=Get%20Started.
  3. Choose Target for where to run the AzureRM commands from
    1. Regarding the AzureRM and where to install, I decided to use the SCOM Root Management Server Emulator. This server will then run the AzureRM commands against my Azure Subscription.
  4. Choose account for Run As Profile
    1. I also needed to think about the run as account the AzureRM commands will run under. As we will see later the PowerShell Script Rules will be set up with a Default Run As Profile.
    2. The Default Run As Profile for the RMS Emulator will be the Management Server Action Account, if I had chosen another Rule Target the Default Run As Profile would be the Local System Account.
    3. Alternatively, I could have created a custom Run As Profile with a user account that has permissions to execute the AzureRM cmdlets and connect to and read the required data from the Azure subscription, and configure the PowerShell Script rules to use that.
    4. I decided to go with the Management Server Action Account, in my case SKILL\scom_msaa. This account will execute the AzureRM PowerShell cmdlets, so I need to make sure that I can login to my Azure subscription using that account.
  5. Next, I started PowerShell ISE with “Run as different user”, specifying my scom_msaa account. I ran the commands below, as I wanted to save the password for the user I’m going to use to connect to the Azure subscription and get the Automation data. I also did a test import of the AzureRM modules I will need in the main script.

The commands above are here in full:


# Prepare to save encrypted password

# Verify that logged on as scom_msaa
whoami

# Get the password
$securepassword = Read-Host -AsSecureString -Prompt "Enter Azure AD account password:"

# Filepath for encrypted password file
$filepath = "C:\users\scom_msaa\AppData\encryptedazureadpassword.txt"

# Save password encrypted to file
ConvertFrom-SecureString -SecureString $securepassword | Out-File -FilePath $filepath

Import-Module "C:\Program Files\WindowsPowerShell\Modules\AzureRM"
Import-Module "C:\Program Files\WindowsPowerShell\Modules\AzureRM.Profile"
Import-Module "C:\Program Files\WindowsPowerShell\Modules\AzureRM.Automation"

At this point I’m ready for the next step, which is to create some PowerShell commands for the Script Rule in SCOM.

Creating the PowerShell Command Script for getting Azure Automation data

First I needed to think about what kind of Azure Automation and Runbook data I wanted to get from my Azure Subscription. I decided to get the following values:

  • Job Count Last Day
  • Job Count Last Month
  • Job Count This Month
  • Job Minutes This Month
  • Runbooks in New State
  • Runbooks in Published State
  • Runbooks in Edit State
  • PowerShell Workflow Runbooks
  • Graphical Runbooks
  • PowerShell Script Runbooks

I wanted to have the statistics for Runbooks Jobs to see the activity of the Runbooks. As I’m running the Free plan of Azure Automation, I’m restricted to 500 minutes a month, so it makes sense to count the accumulated job minutes for the month as well.

In addition to this I want some statistics for the number of Runbooks in the environment, separated on New, Published and Edit Runbooks, and the Runbook type separated on Workflow, Graphical and PowerShell Script.

The PowerShell Script Rule for getting these data will be using the AzureRM PowerShell Module, and specifically the cmdlets in AzureRM.Profile and AzureRM.Automation:

To log in and authenticate to Azure, I will use the encrypted password saved earlier, and create a Credential object for the login:

Initializing the script with date filters and setting default values for variables. I decided to create the script so that I can get data from all the Resource Groups I have Automation Accounts in. This way, If I have multiple Automation Accounts, I can get statistics combined for each of them:

Then, looping through each Resource Group, running the different commands to get the variable data. Since I potentially will loop through multiple Resource Groups and Automation Accounts, the variables will be using += to add to the previous loop value:

After setting each variable and exiting the loop, the $PropertyBag can be filled with the values for the different counters:

The complete script for getting this Azure Automation data via a SCOM PowerShell Script Rule to OMS is shown below:


# Debug file
$debuglog = $env:TEMP + "\powershell_perf_collect_AA_stats_debug.log"

Get-Date | Out-File $debuglog

"Who Am I:" | Out-File $debuglog -Append
whoami | Out-File $debuglog -Append

$ErrorActionPreference = "Stop"

Try {

If (!(Get-Module -Name AzureRM)) { Import-Module "C:\Program Files\WindowsPowerShell\Modules\AzureRM" }
If (!(Get-Module -Name AzureRM.Profile)) { Import-Module "C:\Program Files\WindowsPowerShell\Modules\AzureRM.Profile" }
If (!(Get-Module -Name AzureRM.Automation)) { Import-Module "C:\Program Files\WindowsPowerShell\Modules\AzureRM.Automation" }

# Get Cred for ARM
$filepath = "C:\users\scom_msaa\AppData\encryptedazureadpassword.txt"
$userName = "myAzureADAdminAccount"
$securePassword = ConvertTo-SecureString (Get-Content -Path $FilePath)
$cred = New-Object -TypeName System.Management.Automation.PSCredential ($userName, $securePassword)

# Log in and set active subscription
Login-AzureRmAccount -Credential $cred

$subscriptionid = "mysubscriptionID"

Set-AzureRmContext -SubscriptionId $subscriptionid

$API = New-Object -ComObject MOM.ScriptAPI

# Date filters (times in the past, so jobs after these times are counted)
$aftertime = $(Get-Date).AddHours(-1)
$afterdate_lastday = $(Get-Date).AddDays(-1)
$afterdate_lastmonth = $(Get-Date).AddDays(-30)
$afterdate_thismonth = $(Get-Date).AddDays(-($(Get-Date).Day) + 1)

$AutomationRGs = @("MyResourceGroupName1","MyResourceGroupName2")

$PropertyBags = @()

$newrunbooks = 0
$publishedrunbooks = 0
$editrunbooks = 0
$scriptrunbooks = 0
$graphrunbooks = 0
$powershellrunbooks = 0
$jobcountlastday = 0
$jobcountlastmonth = 0
$jobcountthismonth = 0
$jobminutesthismonth = 0

ForEach ($AutomationRG in $AutomationRGs) {

$rmautomationacct = Get-AzureRmAutomationAccount -ResourceGroupName $AutomationRG

$newrunbooks += (Get-AzureRmAutomationRunbook -AutomationAccountName $rmautomationacct.AutomationAccountName -ResourceGroupName $AutomationRG `
| Where {$_.State -eq "New"}).Count

$publishedrunbooks += (Get-AzureRmAutomationRunbook -AutomationAccountName $rmautomationacct.AutomationAccountName -ResourceGroupName $AutomationRG `
| Where {$_.State -eq "Published"}).Count

$editrunbooks += (Get-AzureRmAutomationRunbook -AutomationAccountName $rmautomationacct.AutomationAccountName -ResourceGroupName $AutomationRG `
| Where {$_.State -eq "Edit"}).Count

$scriptrunbooks += (Get-AzureRmAutomationRunbook -AutomationAccountName $rmautomationacct.AutomationAccountName -ResourceGroupName $AutomationRG `
| Where {$_.RunbookType -eq "Script"}).Count

$graphrunbooks += (Get-AzureRmAutomationRunbook -AutomationAccountName $rmautomationacct.AutomationAccountName -ResourceGroupName $AutomationRG `
| Where {$_.RunbookType -eq "Graph"}).Count

$powershellrunbooks += (Get-AzureRmAutomationRunbook -AutomationAccountName $rmautomationacct.AutomationAccountName -ResourceGroupName $AutomationRG `
| Where {$_.RunbookType -eq "PowerShell"}).Count

$jobcountlastday += (Get-AzureRmAutomationJob -AutomationAccountName $rmautomationacct.AutomationAccountName -ResourceGroupName $AutomationRG `
-StartTime $afterdate_lastday).Count

$jobcountlastmonth += (Get-AzureRmAutomationJob -AutomationAccountName $rmautomationacct.AutomationAccountName -ResourceGroupName $AutomationRG `
-StartTime $afterdate_lastmonth).Count

$jobcountthismonth += (Get-AzureRmAutomationJob -AutomationAccountName $rmautomationacct.AutomationAccountName -ResourceGroupName $AutomationRG `
-StartTime $afterdate_thismonth.ToLongDateString()).Count

$jobsthismonth = Get-AzureRmAutomationJob -AutomationAccountName $rmautomationacct.AutomationAccountName -ResourceGroupName $AutomationRG `
-StartTime $afterdate_thismonth.ToLongDateString() | Select-Object RunbookName, StartTime, EndTime, CreationTime, LastModifiedTime, `
@{Name="RunningTime";Expression={($_.EndTime - $_.StartTime).TotalMinutes}}, @{Name="Month";Expression={($_.EndTime).Month}}

$jobminutesthismonth += [int][Math]::Ceiling(($jobsthismonth | Measure-Object -Property RunningTime -Sum).Sum)

}

$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Job Count Last Day")
$PropertyBag.AddValue("Value", [UInt32]$jobcountlastday)
$PropertyBags += $PropertyBag

"Job Count Last Day:" | Out-File $debuglog -Append
$jobcountlastday | Out-File $debuglog -Append

$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Job Count Last Month")
$PropertyBag.AddValue("Value", [UInt32]$jobcountlastmonth)
$PropertyBags += $PropertyBag

"Job Count Last Month:" | Out-File $debuglog -Append
$jobcountlastmonth | Out-File $debuglog -Append

$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Job Count This Month")
$PropertyBag.AddValue("Value", [UInt32]$jobcountthismonth)
$PropertyBags += $PropertyBag

"Job Count This Month:" | Out-File $debuglog -Append
$jobcountthismonth | Out-File $debuglog -Append

$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Job Minutes This Month")
$PropertyBag.AddValue("Value", [UInt32]$jobminutesthismonth)
$PropertyBags += $PropertyBag

"Job Minutes This Month:" | Out-File $debuglog -Append
$jobminutesthismonth | Out-File $debuglog -Append

$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Runbooks in New State")
$PropertyBag.AddValue("Value", [UInt32]$newrunbooks)
$PropertyBags += $PropertyBag

"Runbooks in New State:" | Out-File $debuglog -Append
$newrunbooks | Out-File $debuglog -Append

$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Runbooks in Published State")
$PropertyBag.AddValue("Value", [UInt32]$publishedrunbooks)
$PropertyBags += $PropertyBag

"Runbooks in Published State:" | Out-File $debuglog -Append
$publishedrunbooks | Out-File $debuglog -Append

$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Runbooks in Edit State")
$PropertyBag.AddValue("Value", [UInt32]$editrunbooks)
$PropertyBags += $PropertyBag

"Runbooks in Edit State:" | Out-File $debuglog -Append
$editrunbooks | Out-File $debuglog -Append

$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "PowerShell Workflow Runbooks")
$PropertyBag.AddValue("Value", [UInt32]$scriptrunbooks)
$PropertyBags += $PropertyBag

"PowerShell Workflow Runbooks:" | Out-File $debuglog -Append
$scriptrunbooks | Out-File $debuglog -Append

$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Graphical Runbooks")
$PropertyBag.AddValue("Value", [UInt32]$graphrunbooks)
$PropertyBags += $PropertyBag

"Graphical Runbooks:" | Out-File $debuglog -Append
$graphrunbooks | Out-File $debuglog -Append

$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "PowerShell Script Runbooks")
$PropertyBag.AddValue("Value", [UInt32]$powershellrunbooks)
$PropertyBags += $PropertyBag

"PowerShell Script Runbooks:" | Out-File $debuglog -Append
$powershellrunbooks | Out-File $debuglog -Append

$PropertyBags

} Catch {

"Error Caught:" | Out-File $debuglog -Append
$($_.Exception.GetType().FullName) | Out-File $debuglog -Append
$($_.Exception.Message) | Out-File $debuglog -Append

}

PS! I have included debugging and logging in the script. Be aware though that setting $ErrorActionPreference = "Stop" will end the script if there are any errors, for example with the logging, so it might be an idea to remove the debug logging when you have confirmed that everything works.

In the next part I’m ready to create the PowerShell Script Rule.

Creating the PowerShell Script Rule

In the Operations Console, under Authoring, create a new PowerShell Script Rule as shown below:

  1. Select the PowerShell Script (Performance – OMS Bound) Rule. I have created a custom destination management pack for this script.
  2. Specifying a Rule name and Rule Category: Performance Collection. As mentioned earlier in this article the Rule target will be the Root Management Server Emulator:
  3. Selecting to run the script every 30 minutes, and at which time the interval will start from:
  4. Selecting a name for the script file and timeout, and entering the complete script as shown earlier:
  5. For the Performance Mapping information, the Object name must be in the \\FQDN\YourObjectName format. For FQDN I used the Target variable for PrincipalName, and for the Object Name AzureAutomationRunbookStats, adding "\\" at the start and "\" between: \\$Target/Host/Property[Type="MicrosoftWindowsLibrary7585010!Microsoft.Windows.Computer"]/PrincipalName$\AzureAutomationRunbookStats. I specified the Counter name as "Azure Automation Runbook Stats", and the Instance and Value are specified as $Data/Property(@Name='Instance')$ and $Data/Property(@Name='Value')$. These reflect the PropertyBag instance and value created in the PowerShell script:
  6. After finishing the Create Rule Wizard, two new rules are created, which you can find by scoping to the Root Management Server Emulator I chose as target. Both Rules must be enabled, as they are not enabled by default:

At this point we are finished configuring the SCOM side, and can wait some hours to see that data is actually coming into my OMS workspace.

Looking at Azure Automation Runbook Stats Performance Data in OMS

After a while I will start seeing Performance Data coming into OMS with the specified Object and Counter Name, and for the different instances and values.

In Log Search, I can specify Type=Perf ObjectName=AzureAutomationRunbookStats, and I will find the Results and Metrics for the specified time frame.

In the example above I’m highlighting the Job Minutes This Month counter, which will steadily increase each month. As we can see, the highest value was 107 minutes, and when the month changed to March we were back at 0 minutes. After a while, when the number of job minutes increases, it will be interesting to follow whether this counter gets close to 500 minutes.

This way, I can now look at Azure Automation Runbook stats as performance data, showing different scenarios like how many jobs and runbook job minutes there are over a time period. I can also look at what types of runbooks I have and what state they are in.

I can also create saved searches and alerts for my search criteria.

Creating OMS Alerts for Azure Automation Runbook Counters

There is one specific scenario for alerts I’m interested in, and that is when I’m approaching my monthly limit of 500 job minutes.

“Job Minutes This Month” is a counter that will get the sum of all job minutes for all runbook jobs across all automation accounts. In the classic Azure portal, you will have a usage overview like this:

With OMS I would get this information over a time period like this:

The search query for Job Minutes This Month as I have defined it via the PowerShell Script Rule in OMS is:

Type=Perf ObjectName=AzureAutomationRunbookStats InstanceName="Job Minutes This Month"

This would give me all results for the defined time period, but to generate an alert I want to look at the most recent results, for example for the last hour. In addition, I want to filter the results for my alert to when the number of job minutes is over the threshold of 450, which means I’m getting close to the limit of 500 free minutes per month. My query for this would be:

Type=Perf ObjectName=AzureAutomationRunbookStats InstanceName="Job Minutes This Month" AND TimeGenerated>NOW-1HOUR AND CounterValue > 450

Now, in my test environment, this will give me 0 results, because I’m currently at 37 minutes:

Let’s say, for the sake of testing an alert, I add a criterion to include 37 minutes as well, like this:

This time I have 2 results. Let’s create an alert for this, press the ALERT button:

For the alert I give it a name and base it on the search query. I want to check every 60 minutes, and generate alert when the number of results is greater than 1 so that I make sure the passing of the threshold is consistent and not just temporary.

For actions I want an email notification, so I type in a Subject and my recipients.

I Save the alert rule, and verify that it was successfully created.

Soon I get my first alert on email:

Now, that it works, I can remove the Alert and create a new one without the OR CounterValue=37, this I leave to you 😉

With that, this blog post is concluded. Thanks for reading; I hope this post on how to get more insights into your Azure Automation Runbook stats in OMS, getting data via NRT Performance Collection, has been useful 😉

Creating SCSM Incidents from OMS Alerts using Azure Automation – Part 2

This is the second part of a 2-part blog article that will show how you can create a new Service Manager Incident from an Azure Automation Runbook using a Hybrid Worker Group, and with OMS Alerts search for a condition and generate an alert which triggers this Azure Automation Runbook for creating an Incident in Service Manager via a Webhook and some contextual data for the Alert.

In Part 1 of the blog I prepared my Service Manager environment, and created Azure Automation Runbook and Assets to run via Hybrid Worker for generating incidents in Service Manager. In this second part of the blog I will configure my Operations Management Suite environment for OMS Alerting and Alert Remediation, and create an OMS Alert that will trigger this PowerShell Runbook.

Configuring OMS Alerting and Remediation

If you haven’t already for your OMS Workspace, you will need to enable the OMS Alerting and Alert Remediation under Settings and Preview Features. This is shown in the picture below:

Creating the OMS Alert

The next step is to create the OMS Alert. To do this I will need to do a Log Search with the criteria I want. For my example in this article, I will use an EventLog Search where I have previously added Azure AD Application Proxy Connector Event Log to OMS, and where I also have created a custom field for events where “The Connector was unable to connect to the service due to networking issues”.

The result of this Log Search is shown below, where I have 7 results in the last 7 days:

When I enabled OMS Alerting and Remediation under Settings, I can now see that I have a new Alert button at the bottom of the screen. I click on that to create my new OMS Alert.

I give the OMS Alert a descriptive name, using my current search query, and checking every 15 minutes for this alert. I can also specify a threshold over a specified time window; in this case I want the alert to trigger if there are more than 0 occurrences. If I want to, I can also send an email notification to specified recipient(s).

Since I want to generate a SCSM Incident when this OMS Alert triggers, I select to Enable Remediation and select my Create-SCSMIncident Runbook.

After saving the OMS Alert I get a successful confirmation, and a link to where I can see my configured Alerts:

While in Preview I can only create up to 10 Alerts; I can also remove them, but not edit existing ones for now:

That is all I need to configure in Operations Management Suite to get the OMS Alert to trigger. Now I need to go back to the Azure Portal and configure some changes for my PowerShell Runbook!

Configuring Azure Automation PowerShell Runbook for Webhook and Hybrid Worker Group

In the Azure Portal, under my Automation Account and the PowerShell Runbook I created for Create-SCSMIncident (see Part 1), a Webhook for OMS Alert Remediation will now have been created automatically. This Webhook has an expiry date of one year ahead of creation.

I now need to specify the Parameters for the Webhook, so that it runs on my Hybrid Worker group:

After I have specified the Hybrid Worker group, any OMS Alerts will now trigger this Runbook and run it in my local environment, in this case creating the SCSM incident as specified in the PowerShell Runbook. But I also want to have some contextual data in the incident, so I need to look at the Webhook data in the next step.

Configuring and using Webhook for contextual data in Runbook

Whenever the OMS Alert triggers the remediation Azure Automation Runbook via the Webhook, event information will be submitted from OMS to the Runbook via the WebhookData input parameter.

An example of this is shown in the image below, where the WebhookData Input Parameter contains event information formatted as JSON (JavaScript Object Notation):

So, I need to configure my PowerShell Runbook to process this WebhookData, and to use that information when creating the Incident.

Let’s first take a look at the WebhookData. If I copy the input from above to, for example, Visual Studio Code, I can see more clearly that the WebhookData consists of a WebhookName, RequestBody and RequestHeader. The values I’m looking for are in the RequestBody, under SearchResults:
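To give an idea of the shape of this data, here is a trimmed and partly made-up example of what the RequestBody can contain (only the fields used later are shown; the EventID value is a placeholder, and the actual payload has more properties):

{
  "SearchResults": {
    "value": [
      {
        "Source": "Microsoft AAD Application Proxy Connector",
        "EventID": 12345,
        "RenderedDescription": "The Connector was unable to connect to the service due to networking issues."
      }
    ]
  }
}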

I update my PowerShell Runbook so that I can process the WebhookData and get the WebhookName, WebhookHeaders and WebhookBody. When I have the WebhookBody, I can get the SearchResults and, by using ConvertFrom-Json, loop through the value array to get the fields I’m looking for, like this:

In this case I want the Source, EventID and RenderedDescription, which also correspond to the values from the Alert in OMS, as shown below. I then use these values for the Incident Title and Description in the PowerShell Runbook.

The complete Azure Automation PowerShell Runbook is shown below:

param (
    [object]$WebhookData
)

if ($WebhookData -ne $null) {

    # Get Webhook Data
    $WebhookName = $WebhookData.WebhookName
    $WebhookHeaders = $WebhookData.RequestHeader
    $WebhookBody = $WebhookData.RequestBody

    # Writing Webhook Data to verbose output
    Write-Verbose "Webhook name: $WebhookName"
    Write-Verbose "Webhook header:"
    Write-Verbose $WebhookHeaders
    Write-Verbose "Webhook body:"
    Write-Verbose $WebhookBody

    # Searching Webhook Data for Value Results
    $SearchResults = (ConvertFrom-Json $WebhookBody).SearchResults
    $SearchResultsValue = $SearchResults.value
    Foreach ($item in $SearchResultsValue)
    {
        # Getting Alert Source, EventID and RenderedDescription
        $AlertSource = $item.Source
        Write-Verbose "Alert Name: $AlertSource"
        $AlertEventId = $item.EventID
        Write-Verbose "Alert EventID: $AlertEventId"
        $AlertDescription = $item.RenderedDescription
        Write-Verbose "Alert Description: $AlertDescription"
    }

    # Setting Incident Title and Description based on OMS Alert
    $incident_title = "OMS Alert: " + $AlertSource
    $incident_desc = $AlertDescription
}
else
{
    # Setting Generic Incident Title and Description
    $incident_title = "Azure Automation Generated Alert"
    $incident_desc = "This Incident is generated from an Azure Automation Runbook via Hybrid Worker"
}

# Getting Assets for SCSM Management Server Name and Credentials
$scsm_mgmtserver = Get-AutomationVariable -Name "SCSMServerName"
$credential = Get-AutomationPSCredential -Name "SCSMAASvcAccount"

# Create Remote Session to SCSM Management Server
# (Specified credential must be in Remote Management Users local group and SCSM operator)
$session = New-PSSession -ComputerName $scsm_mgmtserver -Credential $credential

# Import module for Service Manager PowerShell CmdLets
$SMDIR = Invoke-Command -ScriptBlock {(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\System Center\2010\Service Manager\Setup').InstallDirectory} -Session $session
Invoke-Command -ScriptBlock { param($SMDIR) Set-Location -Path $SMDIR } -Args $SMDIR -Session $session
Import-Module .\Powershell\System.Center.Service.Manager.psd1 -PSSession $session

# Create Incident
Invoke-Command -ScriptBlock { param ($incident_title, $incident_desc)

    # Get Incident Class
    $IncidentClass = Get-SCSMClass -Name System.WorkItem.Incident

    # Get Prefix for Incident IDs
    $IncidentPrefix = (Get-SCSMClassInstance -Class (Get-SCSMClass -Name System.WorkItem.Incident.GeneralSetting)).PrefixForId

    # Set Incident Properties
    $Property = @{
        Id          = "$IncidentPrefix{0}"
        Title       = $incident_title
        Description = $incident_desc
        Urgency     = "System.WorkItem.TroubleTicket.UrgencyEnum.Medium"
        Source      = "SkillSCSM.IncidentSourceEnum.OMS"
        Impact      = "System.WorkItem.TroubleTicket.ImpactEnum.Medium"
        Status      = "IncidentStatusEnum.Active"
    }

    # Create the Incident
    New-SCSMClassInstance -Class $IncidentClass -Property $Property -PassThru

} -Args $incident_title, $incident_desc -Session $session

Remove-PSSession $session

After publishing the updated Runbook I’m ready for the OMS Alert to trigger.

When the OMS Alert triggers

The next time this OMS Alert triggers, I can verify that the Runbook has started and an Incident has been created. Since I also wanted an email notification, I received that as well:

In Operations Management Suite, I search for any OMS Alerts generated by using the query “Type=Alert SourceSystem=OMS”:

In Azure Automation, I can see that the Runbook has launched a job:

And most importantly, I can see that the Incident is created in Service Manager with the info I specified:

That concludes this two-part blog article on how to create SCSM Incidents from OMS Alerts. OMS Automation rocks!

Creating SCSM Incidents from OMS Alerts using Azure Automation – Part 1

There has been some great announcements recently for OMS Alerts in Public Preview (http://blogs.technet.com/b/momteam/archive/2015/12/02/announcing-the-oms-alerting-public-preview.aspx) and Webhooks support for Hybrid Worker Runbooks (https://azure.microsoft.com/en-us/updates/hybrid-worker-runbooks-support-webhooks/). This opens up for some scenarios I have been thinking about.

This 2-part blog will show how you can create a new Service Manager Incident from an Azure Automation Runbook using a Hybrid Worker Group, and how you can use OMS Alerts to search for a condition and generate an alert that triggers this Runbook via a Webhook, passing along contextual data from the Alert.

This is the first part of this blog post, so I will start by preparing the Service Manager environment, creating the Azure Automation Runbook, and testing the Incident creation via the Hybrid Worker.

Prepare the Service Manager Environment

First I want to prepare my Service Manager Environment for the Incident creation via Azure Automation PowerShell Runbooks. I decided to create a new Incident Source Enumeration for ‘Operations Management Suite’, and also to create a new account with permissions to create incidents in Service Manager to be used in the Runbooks.

To create the Source I edited the Library List for Incident Source like this:

To make it easier to refer to this Enumeration Value in PowerShell scripts, I define my own ID in the corresponding Management Pack XML:

And specifying the DisplayString for the ElementID for the Languages I want:

The next step is to prepare the account for the Runbook. As Azure Automation Runbooks on Hybrid Workers will run as Local System, I need to be able to run my commands as an account with permissions to Service Manager and to create Incidents.

I elected to create a new account in my local Active Directory, and give that account permissions on my Service Manager Management Server.

With the new account created, I added it to the Remote Management Users local group on the Service Manager Management Server:
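For reference, this can also be done from an elevated command prompt on the Management Server, using a hypothetical domain and account name:

net localgroup "Remote Management Users" MYDOMAIN\SCSMAASvcAccount /add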

Next I added this account to the Advanced Operators Role Group in Service Manager:

Adding the account to the Advanced Operators group is more permission than I need for this scenario, but it will allow me to use the same account for other work item scenarios in the future.

With the Service Manager Environment prepared, I can go to the next step, which is the PowerShell Runbook in Azure Automation.

Create an Azure Automation Runbook for creating SCSM Incidents

I created a new PowerShell script based Runbook in Azure Automation for creating Incidents. This Runbook uses a Credential Asset to run Remote PowerShell session commands against my Service Manager Management Server. The Credential Asset is the Active Directory account I created in the previous step:

I have also created a variable Asset for the SCSM Management Server Name to be used in the Runbook.
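If you prefer creating these Assets from PowerShell rather than the portal, a minimal sketch with the classic Azure module could look like this (the Automation account name and server name are made up; the asset names match the ones used in the Runbook below):

# Hypothetical Automation account name; asset names as used in the Runbook
$automationAccount = "MyAutomationAccount"
$svcAccount = Get-Credential    # the Active Directory service account created earlier
New-AzureAutomationCredential -AutomationAccountName $automationAccount -Name "SCSMAASvcAccount" -Value $svcAccount
New-AzureAutomationVariable -AutomationAccountName $automationAccount -Name "SCSMServerName" -Value "SCSMMS01" -Encrypted $false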

The PowerShell Runbook can then be created in Azure Automation, using my Automation Assets, and connecting to Service Manager for creating a new Incident as specified:

The complete PowerShell Runbook is shown below:

# Setting Generic Incident Title and Description
$incident_title = "Azure Automation Generated Alert"
$incident_desc = "This Incident is generated from an Azure Automation Runbook via Hybrid Worker"

# Getting Assets for SCSM Management Server Name and Credentials
$scsm_mgmtserver = Get-AutomationVariable -Name "SCSMServerName"
$credential = Get-AutomationPSCredential -Name "SCSMAASvcAccount"

# Create Remote Session to SCSM Management Server
# (Specified credential must be in Remote Management Users local group and SCSM operator)
$session = New-PSSession -ComputerName $scsm_mgmtserver -Credential $credential

# Import module for Service Manager PowerShell CmdLets
$SMDIR = Invoke-Command -ScriptBlock {(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\System Center\2010\Service Manager\Setup').InstallDirectory} -Session $session
Invoke-Command -ScriptBlock { param($SMDIR) Set-Location -Path $SMDIR } -Args $SMDIR -Session $session
Import-Module .\Powershell\System.Center.Service.Manager.psd1 -PSSession $session

# Create Incident
Invoke-Command -ScriptBlock { param ($incident_title, $incident_desc)

    # Get Incident Class
    $IncidentClass = Get-SCSMClass -Name System.WorkItem.Incident

    # Get Prefix for Incident IDs
    $IncidentPrefix = (Get-SCSMClassInstance -Class (Get-SCSMClass -Name System.WorkItem.Incident.GeneralSetting)).PrefixForId

    # Set Incident Properties
    $Property = @{
        Id          = "$IncidentPrefix{0}"
        Title       = $incident_title
        Description = $incident_desc
        Urgency     = "System.WorkItem.TroubleTicket.UrgencyEnum.Medium"
        Source      = "SkillSCSM.IncidentSourceEnum.OMS"
        Impact      = "System.WorkItem.TroubleTicket.ImpactEnum.Medium"
        Status      = "IncidentStatusEnum.Active"
    }

    # Create the Incident
    New-SCSMClassInstance -Class $IncidentClass -Property $Property -PassThru

} -Args $incident_title, $incident_desc -Session $session

Remove-PSSession $session

The script should be pretty straightforward to interpret. The most important part is that it must be run on a Hybrid Worker Group with servers that can connect via PowerShell Remoting to the specified Service Manager Management Server (a quick way to verify this is sketched below). The Incident that will be created uses a few variables for incident title and description (these will be updated with contextual data from OMS Alerts in part 2), and some fixed data for Urgency, Impact and Status, along with my custom Source for Operations Management Suite (ref. the Enumeration Value created in the first step).
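As a quick sanity check before running the Runbook, something like this from one of the Hybrid Worker servers can verify that PowerShell Remoting to the Management Server works (the server name is hypothetical; use the SCSM service account as credential):

$cred = Get-Credential    # the SCSM service account
Test-WSMan -ComputerName "SCSMMS01"
Invoke-Command -ComputerName "SCSMMS01" -Credential $cred -ScriptBlock { $env:COMPUTERNAME }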

After publishing this Runbook I’m ready to run it with a Hybrid Worker.

Testing the PowerShell Runbook with a Hybrid Worker

Now I can run my Azure Automation PowerShell Runbook. I select to run it on my previously defined Hybrid Worker Group.

The Runbook completes successfully, and the output shows the new incident details:

I can also see the Incident created in Service Manager:

That concludes this first part of this blog post. Stay tuned for how to create an OMS Alert and trigger this Runbook in part 2!

Shut Down Azure Servers for Earth Hour – 2015 Edition

One year ago, I published a blog article for shutting down, and later restart again, Azure Servers for one hour duration during Earth Hour 2014: https://systemcenterpoint.wordpress.com/2014/03/28/earth-hour-how-to-shut-down-and-restart-your-windows-azure-services-with-automation/.

In that article, I used three different Automation technologies to accomplish that:

  • Scheduled PowerShell script
  • System Center 2012 R2 Orchestrator
  • Service Management Automation in Windows Azure Pack

Today is Earth Hour 2015 (www.earthhour.org). While the Automation technologies referred to above can still be used for shutting down and restarting Azure Servers, I thought I should create an updated blog article using Azure Automation, which has been launched during the last year.

This new example is built on the following:

  1. An Azure SQL Database with a table for specifying which Cloud Services and VM Names that should be shut down during Earth Hour
  2. An Azure Automation Runbook which connects to the Azure SQL Database, reads the Servers specified and shuts them down one by one (or later starts them up one by one).
  3. Two Schedules, one that triggers when Earth Hour starts and one that triggers when Earth Hour ends, each calling the Runbook.

Creating an Azure SQL Database or a SQL Server is outside the scope of this article, but the table I have created is defined like this:

CREATE TABLE dbo.EarthHourServices
(
    ID int NOT NULL,
    CloudService varchar(50) NULL,
    VMName varchar(50) NULL,
    StayProvisioned bit NULL,
CONSTRAINT PK_ID PRIMARY KEY (ID)
)
GO

StayProvisioned is a boolean value where I can specify whether VMs should only be stopped, or stopped and deallocated, as illustrated by the small example below.
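To illustrate the difference with the classic Azure cmdlets (the Cloud Service and VM names here are made up):

# StayProvisioned = 1: the VM stops but stays allocated (keeps its IP addresses, compute is still billed)
Stop-AzureVM -ServiceName "MyCloudService" -Name "MyVM01" -StayProvisioned
# StayProvisioned = 0: the VM is stopped and deallocated (saves cost, but IP addresses may change on restart)
Stop-AzureVM -ServiceName "MyCloudService" -Name "MyVM01" -Force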

This table is then filled with values for the servers I want to stop.

The Azure Automation Runbook I want to create has some requirements:

  1. I need to create a PowerShell Credential Asset for the SQL Server username and password
  2. I need to be able to connect to my Azure Subscription. Previously I have been using the Connect-Azure solution (https://gallery.technet.microsoft.com/scriptcenter/Connect-to-an-Azure-f27a81bb) for connecting to my specified Azure Subscription. This still works and I’m using this method in this blog post, but it is now deprecated and you should use this guide instead: http://azure.microsoft.com/blog/2014/08/27/azure-automation-authenticating-to-azure-using-azure-active-directory/ (a small sketch of that approach is shown after this list).
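For reference, the Azure AD based approach from that guide boils down to something like this inside a runbook (the credential asset name is an example, and the account must be an Azure AD organizational account with access to the subscription):

# Hypothetical credential asset holding an Azure AD organizational account
$azureCred = Get-AutomationPSCredential -Name "AzureADAutomationAccount"
Add-AzureAccount -Credential $azureCred
Select-AzureSubscription -SubscriptionName "My Azure Subscription"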

This is the Runbook I have created:

workflow EarthHour_StartStopAzureServices
{
    param
    (
        # Fully-qualified name of the Azure DB server 
        [parameter(Mandatory=$true)] 
        [string] $SqlServerName,
        # Credentials for $SqlServerName stored as an Azure Automation credential asset
        [parameter(Mandatory=$true)] 
        [PSCredential] $SqlCredential,
        # Action, either Start or Stop for the specified Azure Services 
        [parameter(Mandatory=$true)] 
        [string] $Action
    )

    # Specify Azure Subscription Name
    $subName = 'My Azure Subscription'
    # Connect to Azure Subscription
    Connect-Azure `
        -AzureConnectionName $subName
    Select-AzureSubscription `
        -SubscriptionName $subName 

    inlinescript
    {

        # Setup credentials   
        $ServerName = $Using:SqlServerName
        $UserId = $Using:SqlCredential.UserName
        $Password = ($Using:SqlCredential).GetNetworkCredential().Password
        
        # Create connection to DB
        $Database = "SkillAutomationRepository"
        $DatabaseConnection = New-Object System.Data.SqlClient.SqlConnection
        $DatabaseConnection.ConnectionString = "Server = $ServerName; Database = $Database; User ID = $UserId; Password = $Password;"
        $DatabaseConnection.Open();

        # Get Table
        $DatabaseCommand = New-Object System.Data.SqlClient.SqlCommand
        $DatabaseCommand.Connection = $DatabaseConnection
        $DatabaseCommand.CommandText = "SELECT ID, CloudService, VMName, StayProvisioned FROM EarthHourServices"
        $DbResult = $DatabaseCommand.ExecuteReader()

        # Check if records are returned from SQL database table and loop through result set
        If ($DbResult.HasRows)
        {
            While($DbResult.Read())
            {
                # Get values from table
                $CloudService = $DbResult[1]
                $VMname = $DbResult[2]
                [bool]$StayProvisioned = $DbResult[3] 
 
                 # Check if we are starting or stopping the specified services
                If ($Using:Action -eq "Stop") {

                    Write-Output "Stopping: CloudService: $CloudService, VM Name: $VMname, Stay Provisioned: $StayProvisioned"
                
                    $vm = Get-AzureVM -ServiceName $CloudService -Name $VMname
                    
                    If ($vm.InstanceStatus -eq 'ReadyRole') {
                        If ($StayProvisioned -eq $true) {
                            Stop-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name -StayProvisioned
                        }
                        Else {
                            Stop-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name -Force
                        }
                    }
                                       
                }
                ElseIf ($Using:Action -eq "Start") {

                    Write-Output "Starting: CloudService: $CloudService, VM Name: $VMname, Stay Provisioned: $StayProvisioned"

                    $vm = Get-AzureVM -ServiceName $CloudService -Name $VMname
                    
                    If ($vm.InstanceStatus -eq 'StoppedDeallocated' -Or $vm.InstanceStatus -eq 'StoppedVM') {
                        Start-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name    
                    }
                     
                }
 
            }
        }

        # Close connection to DB
        $DatabaseConnection.Close() 
    }    

}

And these are my Schedules, which will run the Runbook when Earth Hour begins and ends. The Schedules specify the parameters I need to connect to Azure SQL, and the Action for either stopping or starting the VMs. A sketch of how the schedules can be created and the Runbook tested from PowerShell is shown below.
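The schedules themselves can also be created from PowerShell with the classic Azure module, and the Runbook can be tested with the same parameters before linking it to the schedules in the portal. This is only a rough sketch, assuming a recent enough module version; the Automation account, SQL server and credential asset names are made up, and credential-typed runbook parameters are passed by asset name:

$account = "MyAutomationAccount"

# One-time schedules around Earth Hour 2015 (local time)
New-AzureAutomationSchedule -AutomationAccountName $account -Name "EarthHour2015-Stop" -StartTime "2015-03-28 20:30" -OneTime
New-AzureAutomationSchedule -AutomationAccountName $account -Name "EarthHour2015-Start" -StartTime "2015-03-28 21:30" -OneTime

# Test the Runbook with the same parameters the schedules will use
$params = @{ SqlServerName = "myserver.database.windows.net"; SqlCredential = "SQLCredentialAsset"; Action = "Stop" }
Start-AzureAutomationRunbook -AutomationAccountName $account -Name "EarthHour_StartStopAzureServices" -Parameters $params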

Good luck with automating your Azure Servers, and remember to turn off the lights as well 🙂

Copy SMA Runbooks from one Server to another

Recently I decided to scrap my old Windows Azure Pack environment and create a new environment for Windows Azure Pack partly based in Microsoft Azure. As a part of this reconfiguration I have set up a new SMA server, and wanted to copy my existing SMA runbooks from the old Server to the new Server.

This little script did the trick for me, hope it can be useful for others as well.


# Specify old and new SMA servers
$OldSMAServer = "myOldSMAServer"
$NewSMAServer = "myNewSMAServer"

# Define export directory
$exportdir = 'C:\_Source\SMARunbookExport\'

# Get which SMA runbooks I want to export, filtered by my choice of tags
$sourcerunbooks = Get-SmaRunbook -WebServiceEndpoint https://$OldSMAServer | Where { $_.Tags -iin ('Azure','Email','Azure,EarthHour','EarthHour,Azure')}

# Loop through and export definitions to file, one for each runbook
foreach ($rb in $sourcerunbooks) {
    $exportrunbook = Get-SmaRunbookDefinition -Type Draft -WebServiceEndpoint https://$OldSMAServer -name $rb.RunbookName
    $exporttofile = $exportdir + $rb.RunbookName + '.txt'
    $exportrunbook.Content | Out-File $exporttofile
}

# Then loop through and import to new SMA server, keeping my tags
foreach ($rb in $sourcerunbooks) {
    $importfromfile = $exportdir + $rb.RunbookName + '.txt'
    Import-SmaRunbook -Path $importfromfile -WebServiceEndpoint https://$NewSMAServer -Tags $rb.Tags
}

# Check my new SMA server for existence of the imported SMA runbooks
Get-SmaRunbook -WebServiceEndpoint https://$NewSMAServer |  FT RunbookName, Tags



Hidden Network Adapters in Azure VM and unable to access network resources

I have some Azure VMs that I regularly stop (deallocate) and start using Azure Automation. The idea is to cut costs at night and during weekends, as these VMs are not used then anyway. I recently had a problem with one of these Virtual Machines: I was unable to browse or connect to network resources, could not connect to the domain to get Group Policy updates, and more.

When looking into it, I found that I had a lot of hidden Network Adapters in Device Manager. The cause is that every time a VM is shut down and deallocated, it will provision a new network adapter on the next start, while the old network adapter is kept hidden. Since I automate shut down and start every day, over time I end up with a lot of these, as shown below:

I found in some forums that the network browsing problem I had with the server could be related to this for Azure VMs. I don’t know the actual limit, or if it’s a fixed value, but the solution is to uninstall these hidden network adapters. Although it is easy to right-click and uninstall each network adapter, I wanted to create a PowerShell script to be more efficient. There are no native PowerShell cmdlets or commands that could help me with this, so after some research I ended up with a combination of these two solutions:

I then ended up with the following PowerShell script. The script first gets all hidden devices of type Microsoft Hyper-V Network Adapter and their InstanceId, and then uninstalls/removes each device with DevCon.exe. The script:

Set-Location C:\_Source\DeviceManagement
Import-Module .\Release\DeviceManagement.psd1 -Verbose

# List Hidden Devices
Get-Device -ControlOptions DIGCF_ALLCLASSES | Sort-Object -Property Name |
    Where-Object { ($_.IsPresent -eq $false) -and ($_.Name -like "Microsoft Hyper-V Network Adapter*") } |
    Format-Table Name, DriverVersion, DriverProvider, IsPresent, HasProblem, InstanceId -AutoSize

# Get Hidden Hyper-V Net Devices
$hiddenHypVNics = Get-Device -ControlOptions DIGCF_ALLCLASSES | Sort-Object -Property Name |
    Where-Object { ($_.IsPresent -eq $false) -and ($_.Name -like "Microsoft Hyper-V Network Adapter*") }

# Loop and remove with DevCon.exe
ForEach ($hiddenNic In $hiddenHypVNics) {
    $deviceid = "@" + $hiddenNic.InstanceId
    .\devcon.exe -r remove $deviceid
}

And after a while, all hidden network adapter devices were uninstalled. In the end I rebooted the VM, and after that everything was working on the network again!

Earth Hour – How to Shut Down and Restart your Windows Azure Services with Automation

This upcoming Saturday, the 29th of March 2014, Earth Hour will be observed between 8:30 PM and 9:30 PM in your local time zone. Organized by the WWF (www.earthhour.org), Earth Hour is an initiative where, amongst other activities, people are asked to turn off their lights for an hour.

I thought this also was an excellent opportunity to give some examples of how you can use automation technologies to shut down and restart Windows Azure Services. Let’s turn off the lights of our selected Windows Azure servers for an hour!

I will use three of my favorite automation technologies to accomplish this:

  1. Windows PowerShell
  2. System Center 2012 R2 Orchestrator
  3. Windows Azure Pack with SMA

I also wanted some kind of list of which services I would want to turn off, as not all of my services in Azure can be turned off. There are many ways to accomplish this; I chose to use a comma-separated text file. My CSV file includes the names of the Cloud Services in Azure I want to shut down (and later restart), and all VM roles inside the specified Cloud Services will be shut down. A small example of the file format is shown below.
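The scripts that follow only use a CloudService column, so a hypothetical file could be as simple as this (the service names are made up):

CloudService
skillcloudservice01
skillcloudservice02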

In the following, I will give examples for using each of the automation technologies to accomplish this.

Windows PowerShell

In my first example, I will use Windows PowerShell and create a scheduled PowerShell Job for the automation. This has a couple of requirements:

  1. I must download and install Windows Azure PowerShell.
  2. I must register my Azure Subscription so I can automate it from PowerShell.

For instructions for these prerequisites, see http://www.windowsazure.com/en-us/documentation/articles/install-configure-powershell.
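As a quick reference, connecting the machine that will run the scheduled jobs to the subscription can be done roughly like this with the Azure module (the file path and subscription name are examples; the publish settings file is one of the options described in the article above):

# Download a .publishsettings file for the subscription (opens a browser for sign-in)
Get-AzurePublishSettingsFile
# Import it and verify the subscription is registered
Import-AzurePublishSettingsFile "C:\_Source\MySubscription.publishsettings"
Get-AzureSubscription | Select-Object SubscriptionName
Select-AzureSubscription -SubscriptionName "my subscription name"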

First, I start PowerShell on the machine that will run my PowerShell Scheduled Jobs. I prefer PowerShell ISE for this, and as the job will automatically run on Saturday evening, I will use a server machine that will be online when the schedule kicks off.

The following commands create job triggers and options for the jobs:

# Job Trigger
$trigger_stop = New-JobTrigger -Once -At "03/29/2014 20:30"
$trigger_start = New-JobTrigger -Once -At "03/29/2014 21:30"

# Job Option
$option = New-ScheduledJobOption -RunElevated

After that, I specify some parameters, such as my Azure subscription name, and import the CSV file that contains the names of the Cloud Services I want to turn off and later start again:

# Azure Parameters
$az_subscr = "my subscription name"
$az_cloudservices = Import-Csv -Path C:\_Source\Script\EarthHour_CloudServices.csv -Encoding UTF8

Then I create some script strings, one for the stop job and one for the start job. It is out of scope for this blog article to explain the PowerShell commands used here in detail, but you will see that the CSV file is imported, and for each Cloud Service a command is run to stop or start the Azure Virtual Machines.

Another comment is that for the Stop-AzureVM command I chose to use the “-StayProvisioned” switch, which makes the server keep its internal IP address and the Cloud Service keep its public IP address. I could also use the “-Force” switch, which deallocates the VMs and Cloud Services. Deallocating saves costs, but also risks that the VMs and Cloud Services will be provisioned with different IP addresses when restarted.

# Job Script Strings
$scriptstring_stop = "Import-Module Azure `
Set-AzureSubscription -SubscriptionName '" + $az_subscr + "' `
Import-Csv -Path C:\_Source\Script\EarthHour_CloudServices.csv -Encoding UTF8 | ForEach-Object { Get-AzureVM -ServiceName `$_.CloudService | Where-Object {`$_.InstanceStatus -eq 'ReadyRole'} | ForEach-Object {Stop-AzureVM -ServiceName `$_.ServiceName -Name `$_.Name -StayProvisioned } } "

$scriptstring_start = "Import-Module Azure `
Set-AzureSubscription -SubscriptionName '" + $az_subscr + "' `
Import-Csv -Path C:\_Source\Script\EarthHour_CloudServices.csv -Encoding UTF8 | ForEach-Object { Get-AzureVM -ServiceName `$_.CloudService | Where-Object {`$_.InstanceStatus -eq 'StoppedDeallocated'} | ForEach-Object {Start-AzureVM -ServiceName `$_.ServiceName -Name `$_.Name } } "

# Define Job Script Blocks
$sb_stop = [scriptblock]::Create($scriptstring_stop)
$sb_start = [scriptblock]::Create($scriptstring_start)

To create the PowerShell Jobs, my script strings must be formatted as script blocks, as shown above. Next I specify the credentials that will run the jobs, and then register the jobs. I create two jobs, one each for the stop and the start automation, with the configured script blocks, triggers, options and credentials.

# Get Credentials
$jobcreds = Get-Credential

# Register Jobs
Register-ScheduledJob -Name Stop_EarthHourSkillAzure -ScriptBlock $sb_stop -Trigger $trigger_stop -ScheduledJobOption $option -Credential $jobcreds
Register-ScheduledJob -Name Start_EarthHourSkillAzure -ScriptBlock $sb_start -Trigger $trigger_start -ScheduledJobOption $option -Credential $jobcreds

And that’s it! The jobs are now registered and will run at the specified schedules.

If you want to test a job, you can do this with the command:

# Test Stop Job

Start-Job -DefinitionName Stop_EarthHourSkillAzure

You can also verify the jobs in Task Scheduler, under Task Scheduler Library\Microsoft\Windows\PowerShell\ScheduledJobs:

System Center 2012 R2 Orchestrator

Since its release with System Center 2012, Orchestrator has become a popular automation technology for data centers. With Integration Packs for Orchestrator, many tasks can be performed by creating orchestrated Runbooks that automate your IT processes. Such Integration Packs exist for the System Center products, Active Directory, Exchange Server, SharePoint Server and many more. For this task, I will use the Integration Pack for Windows Azure.

To be able to do this, the following requirements must be met:

  1. A System Center 2012 SP1 or R2 Orchestrator Server, with Azure Integration Pack installed.
  2. Create and configure an Azure Management Certificate to be used from Orchestrator.
  3. Create a Windows Azure Configuration in Orchestrator.
  4. Create the Runbooks and Schedules.

I will not go into detail on the first requirements here. The Azure Integration Pack can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=39622.

To create and upload an Azure Management Certificate, please follow the guidelines here: http://msdn.microsoft.com/en-us/library/windowsazure/gg551722.aspx.

Note that on the machine where you create the certificate, you will have the private key for the certificate in your store. You must then do two things (a small scripted sketch follows the list):

  1. Upload the .cer file (without the private key) to Azure. Note your subscription ID.
  2. Export the certificate to a .pfx file with password, and transfer this file to your Orchestrator Server.
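One possible way to script this part (just one approach among several; the guidelines linked above also cover using makecert.exe) could be the following, assuming the Windows Server 2012 R2 PKI cmdlets and hypothetical certificate and file names:

# Create a self-signed management certificate in the local machine store
$cert = New-SelfSignedCertificate -DnsName "OrchestratorAzureMgmt" -CertStoreLocation Cert:\LocalMachine\My

# Export the public part (.cer) to upload to Azure as a management certificate
Export-Certificate -Cert $cert -FilePath C:\Temp\OrchestratorAzureMgmt.cer

# Export the certificate with the private key (.pfx) to transfer to the Orchestrator Server
$pfxPassword = Read-Host -AsSecureString -Prompt "PFX password"
Export-PfxCertificate -Cert $cert -FilePath C:\Temp\OrchestratorAzureMgmt.pfx -Password $pfxPassword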

In Runbook Designer, create a new Windows Azure configuration, with .pfx file location and password, and the subscription ID. For example like this:

You are now ready to use the Azure activities in your Orchestrator Runbooks.

In Orchestrator it is recommended to create separate Runbooks that call upon each other, rather than creating one very big Runbook that does all the work. I have created the following Runbooks and Schedules:

  • A generic Recycle Azure VM Runbook that either starts, shuts down or restarts Azure VMs, based on input parameters for the Action to perform, Cloud Service, Deployment and VM instance names.
  • An Earth Hour Azure Runbook that reads the Cloud Services from the CSV file and gets the Deployment Name and VM Role Name, with an input parameter for Action. This Runbook calls the first, generic Runbook.
  • Two schedule-monitoring Runbooks that are left running and will kick off at ..
  • .. two specified Schedules, for Earth Hour Start and Stop.

Let’s have a look:

My first generic Recycle Azure VM Runbook looks like this:

This Runbook is really simple: based on the Action input parameter, either Start, Shutdown or Restart is sent to the specified VM instance, Deployment and Cloud Service. The input parameters for the Runbook are:

I also have an ActivityID parameter, as I also use this Runbook for Service Requests from Service Manager.

My parameters are then used in the activities, for example for Shutdown:

After this I have the Earth Hour Azure Runbook. This Runbook looks like this:

This Runbook takes one input parameter, which would be Start or Shutdown, for example:

First, I read the Cloud Services from my CSV file:

I don’t read from line 1 as it contains my header. Next I get my Azure deployments, where the input is from the text file:

The next activity I had to think about, as the Azure Integration Pack doesn’t really have an activity to get all VM role instances in a Deployment. But the Get Deployment activity returns a Configuration File XML, and I was able to use the Query XML activity with an XPath query to get the VM instance role names:

After that I do a Send Platform Event activity, really just for debugging, and finally I call my first Recycle Azure VM Runbook, with the parameters from my activities:

I don’t use ActivityID here as this Runbook is initiated from Orchestrator and not a Service Manager Service Request.

At this point I can start this Runbook manually, specify the Start or Shutdown action, and the Cloud Services from the CSV file will be processed.

To schedule these to run automatically, I will need to create some Schedules first:

These Schedules are created like this, where I specify the last Saturday of the month.

And with the hours 8:00-9:00 PM. (I will check against this schedule at 8:30 PM)

The other schedule for Start is configured the same way, only with Hours from 9:00-10:00 PM.

The last thing I have to do is to create two different Runbooks, which monitor time:

And:

These two Runbooks will check for the monitored time:

If the time is right, the Runbook checks against the Schedule, and if that also matches, it calls the Earth Hour Azure Runbook with the Start or Shutdown action.

The scheduling functionality in Orchestrator is a bit limited for running Runbooks only once, so in reality these Runbooks, if left running, will actually run on the last Saturday of every month.

Anyway, the last thing to do before the weekend is to kick off these two last Runbooks, and everything will run as planned on Saturday night.

Windows Azure Pack with SMA

The last automation technology I want to show is using SMA, Service Management Automation, with Windows Azure Pack. Service Management Automation (SMA) is a new feature of System Center 2012 R2 Orchestrator, released in October 2013. At the same time Windows Azure Pack (WAP) was released as a component in System Center 2012 R2 and Windows Server 2012 R2. I will not get into great detail of WAP and SMA here, but these solutions are very exciting for Private Cloud scenarios. SMA does not require Windows Azure Pack, but it is recommended to install WAP to use as a portal and user interface for authoring and administering the SMA runbooks.

The example I will use here has the following requirements:

  1. I must download and install Windows Azure PowerShell.
  2. I must configure a connection to my Azure Subscription.
  3. I must create a connection object in SMA to Windows Azure.
  4. I must create the SMA runbooks.

The prerequisites above are well described here: http://blogs.technet.com/b/privatecloud/archive/2014/03/12/managing-windows-azure-with-sma.aspx.

So I will concentrate on creating the SMA runbooks.

The SMA runbooks will use PowerShell workflows. I will create two SMA runbooks, one for shutting down and one for starting the Azure Services.

My PowerShell workflow for the ShutDown will then look like this:

workflow EarthHour_ShutDownAzureServices
{
    # Get the Azure connection
    $con = Get-AutomationConnection -Name "My-SMA-AzureConnection"

    # Convert the password to a SecureString to be used in a PSCredential object
    $securepassword = ConvertTo-SecureString -AsPlainText -String $con.Password -Force

    # Create a PS Credential Object
    $cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $con.Username, $securepassword

    inlinescript
    {
        Import-Module "Azure"

        # Select the Azure subscription
        Select-AzureSubscription -SubscriptionName "my subscription name"

        # Import Cloud Services from CSV file and stop VMs for each specified service
        Import-Csv -Path C:\_Source\EarthHour_CloudServices.csv -Encoding UTF8 |
            ForEach-Object { Get-AzureVM -ServiceName $_.CloudService |
                Where-Object { $_.InstanceStatus -eq 'ReadyRole' } |
                ForEach-Object { Stop-AzureVM -ServiceName $_.ServiceName -Name $_.Name -StayProvisioned } }

    } -PSComputerName $con.ComputerName -PSCredential $cred
}

As in the first automation example with the PowerShell Job, I read the Cloud Services from a CSV file and loop through the Cloud Services and Azure VMs for the Stop commands.

Similarly, I have the workflow for starting the Azure services again:

workflow EarthHour_StartAzureServices
{
    # Get the Azure connection
    $con = Get-AutomationConnection -Name "My-SMA-AzureConnection"

    # Convert the password to a SecureString to be used in a PSCredential object
    $securepassword = ConvertTo-SecureString -AsPlainText -String $con.Password -Force

    # Create a PS Credential Object
    $cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $con.Username, $securepassword

    inlinescript
    {
        Import-Module "Azure"

        # Select the Azure subscription
        Select-AzureSubscription -SubscriptionName "my subscription name"

        # Import Cloud Services from CSV file and start VMs for each specified service
        Import-Csv -Path C:\_Source\EarthHour_CloudServices.csv -Encoding UTF8 |
            ForEach-Object { Get-AzureVM -ServiceName $_.CloudService |
                Where-Object { $_.InstanceStatus -eq 'StoppedDeallocated' -Or $_.InstanceStatus -eq 'StoppedVM' } |
                ForEach-Object { Start-AzureVM -ServiceName $_.ServiceName -Name $_.Name } }

    } -PSComputerName $con.ComputerName -PSCredential $cred
}

These workflows can now be tested if you want, by running the Start action. I, however, want to schedule them for Earth Hour on Saturday. This is really easy to do using the SMA schedule assets:

These schedules are then linked to each of the SMA runbooks:

And similarly, with details:

That concludes this blog post. Happy automating, and remember to turn off your lights!