Tag Archives: Service Manager

Experts and Community unite at last ever #SCU_Europe 2016! #ExpertsLive next

This year's SCU Europe 2016 was held, for the first time outside Switzerland in its fourth year running, in Berlin at the BCC (Berlin Congress Center), close to Alexanderplatz in the eastern part of "Berlin Mitte".


The intro video introducing the Experts:

Let's begin with the end: at the closing note, SCUE general Marcel Zehner announced, with a little bit of emotion, that this was the last ever SCU Europe to be held. You and your organization should be proud of what you have achieved, Marcel; it is one of the best community conferences around, and I have been fortunate to attend all four, starting with Bern in 2013, Basel in 2014 and 2015, and now Berlin in 2016. It's only cities starting with B, isn't it? In fact, you never know what twists and turns your career will take, but looking back I'm not sure I would be where I am now as a presenter, MVP and community influencer if I had not travelled alone to Bern four years ago; that's where I really started working with and for the Community (with a capital C)!

Luckily, SCU Europe will continue as Experts Live Europe next year! Same place at the BCC, same organization and format, and the same dates, only next year it will be the 23rd to 25th of August 2017. A new web page was launched, www.expertslive.eu, and the Twitter (@ExpertsLiveEU) and Facebook accounts have been changed to reflect that. The hashtag #SCU_Europe will eventually be retired, and you should now use #ExpertsLive.

image

I think this is a very good decision; there has already been discussion that the name "System Center Universe" does not really reflect the content and focus of the conference, which now embraces the Cloud, with content areas for Management, Productivity, Security, DevOps, Automation, Data Platform and more. Experts Live, originally a one-day community conference in the Netherlands, running each year since 2009 with up to 1200 participants, will now be a network of conferences, ranging from region-based events (Experts Live Europe, but SCU APAC and SCU Australia will also become Experts Live APAC and Australia next year) to local, country-based Experts Live events like the one in the Netherlands, and more will come.

image

The closing note video announcing Experts Live Europe:

This year at SCU Europe I was one of the Experts and presented two sessions, "Premium Identity Management and Protection with Azure AD" and "Deep Dive: Publishing Applications with Azure AD". I also took part in an "Ask-the-Experts" area together with Cameron Fuller and Kevin Greene, where we took questions on System Center 2016. I participated in a discussion panel on Friday morning with Markus Wilhelm from Microsoft Germany on the subject of defense strategies and security, and of course we had the meet and greet with the Experts at the networking party. It was a really great experience speaking at this conference, thanks for having me!


The content of the conference this year was great, and for the first time there were 5 tracks, with over 70 sessions presented! All presentations and session recordings will be on Channel 9 in a few weeks' time, so make sure you watch anything you missed or want to see again if you were there; and if you weren't at the conference this year, you can look up the sessions of interest to you.

I was travelling with a group this year, both from my company and some of our customers; in total we were 7 in the group, with 3 cancellations in the last week before the conference from customers who could not make it after all. I think moving the conference to Berlin is a big part of why it was easier to attract more Nordic attendees this year. We stayed at the Park Inn by Radisson right by Alexanderplatz and the BCC, so it was really central and nice.


In good tradition, there was a lot of partying and social networking going on. On the first night there was the Sponsors and Speakers Party, held at Mio right by the TV Tower at Alexanderplatz; on Thursday we had the attendee Networking Party at the conference center. Later that night, our group and some more partners/customers of Squared Up went on to another party at Cosmic Kaspar. It was really hot, so basically the party spilled onto the pavement! On the last day we had the Closing Drinks, sponsored by Cireson and itnetX, at Club Carambar, also close to Alexanderplatz. In addition, there were a lot of unofficial gatherings going on, lots of laughs, and new and old friends having a good time.


See you next year at Experts Live Europe in Berlin 23-25th August, 2017!

Publish the itnetX ITSM Portal with Azure AD App Proxy and with Conditional Access

Last week at SCU Europe 2016 in Berlin, I presented a session on Application Publishing with Azure AD. In one of my demos I showed how to use Azure AD Application Proxy to publish an internal web application like the itnetX ITSM Portal. The session was recorded and will be available later at itnetX’s Vimeo channel and on Channel 9.

In this blog post I will detail the steps for publishing the portal with Azure AD, and also how to configure Conditional Access for users and devices. Device compliance and/or domain-join conditional access recently went into preview for Azure AD applications, so this is a good opportunity to show how it can be configured and what the user experience looks like.

Overview

itnetX has recently released a new HTML based ITSM Portal for Service Manager, and later there will be an analyst portal as well.

This should be another good scenario for using the Azure AD Application Proxy, as the ITSM Portal Web Site needs to be installed either on the SCSM Management Server or on a Server that can connect to the Management Server internally.

In this blog article I will describe how to publish the new ITSM Portal Web Site. This will give me some interesting possibilities for either pass-through or pre-authentication and controlling user and device access.

There are two authentication scenarios for publishing the ITSM Portal Web Site with Azure AD App Proxy:

  1. Publish without pre-authentication (pass-through). This scenario is best used when the ITSM Portal is running Forms Authentication, so that the user can choose which identity they want to log in with.
  2. Publish with pre-authentication. This scenario will use Azure AD authentication, and is best used when the ITSM Portal Web Site is running Windows Authentication, so that we can have single sign-on with the Azure AD identity. Windows Authentication is also the default mode for ITSM Portal installations.

I will go through both authentication scenarios here.

I went through these steps:

Configure the itnetX ITSM Portal Web Site

First I make sure that the portal is available and working internally. I have installed it on my SCSM Management Server, in my case with the URL http://azscsmms2:82.
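
As a quick sanity check (my own habit, not a step from the itnetX documentation), you can verify from the server that will later run the connector that the internal URL responds; a minimal PowerShell sketch:

Invoke-WebRequest -Uri "http://azscsmms2:82" -UseBasicParsing | Select-Object StatusCode, StatusDescription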

In addition to that, I have configured the ITSM Portal to use Forms Authentication, so when I access the URL I see this:

image

Create the Application in Azure AD

In this next step, I will create the Proxy Application in Azure AD where the ITSM Portal will be published. To be able to create Proxy Applications I need either an Enterprise Mobility Suite license plan or an Azure AD Basic/Premium license plan. App Proxy requires at least Azure AD Basic for end users accessing applications, and if using Conditional Access you will need an Azure AD Premium license. From the Azure Management Portal and Active Directory, under Applications, I add a new application and select "Publish an application that will be accessible from outside your network":

I then give my application a name, and specify the internal URL and pre-authentication method. I name my application "itnetX ITSM Portal", use http://azscsmms2:82/ as the internal URL and choose Passthrough as the pre-authentication method.
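
For reference, the same application can also be created with the newer AzureAD PowerShell module, which exposes Application Proxy cmdlets; a sketch under the assumption that this module is available for your tenant (the post itself uses the classic portal):

# Sketch only - assumes the AzureAD module with Application Proxy cmdlets is installed
Connect-AzureAD
New-AzureADApplicationProxyApplication -DisplayName "itnetX ITSM Portal" -InternalUrl "http://azscsmms2:82/" -ExternalUrl "https://itsmportal.elven.no/" -ExternalAuthenticationType Passthru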

After the Proxy Application is added, there are some additional configurations to be done. If I have not done so already, Application Proxy has to be enabled for the directory. I have created other Proxy Applications before this, so I have already done that.

After I have uploaded my own custom logo for the application, I see this status on my quickstart blade for the application:

image

I also need to download the Application Proxy connector, and install and register it on a server that is a member of my own Active Directory. The server can be either on an on-premises network or in an Azure network. As long as the server running the Proxy connector can reach the internal URL, I can choose whichever server best fits my needs.

When choosing pass-through as the authentication method, all users can directly access the Forms-based logon page as long as they know the external URL. Assigning accounts, either users or groups, only decides which users will see the application in the Access Panel or My Apps.

image

I now need to make additional configurations to the application, and go to the Configure menu. From here I can change the name, external URL, pre-authentication method and internal URL if I need to.

I choose to change the external URL so that it uses my custom domain, and note the warning about creating a CNAME record in external DNS. After that I hit Save so that I can configure the certificate.
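
To confirm that the CNAME record is in place, a quick lookup can be run from PowerShell (the hostname matches my lab; yours will differ):

Resolve-DnsName -Name "itsmportal.elven.no" -Type CNAME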

image

After that I upload my certificate for that URL, and I can verify the configuration for the external and internal URL:

image

When using pass-through I don't need to configure any internal authentication method. I have to select a connector group, where my installed Azure AD App Proxy Connectors are registered, and keep the default setting for URL translation:

image

If I want, I can allow Self-Service Access to the published application. I have configured this here, so that users can request access to the application from the Access Panel (https://myapps.microsoft.com). This automatically creates an Azure AD group for me, which I can either let users join automatically or via selected approvers:

image

After I have configured this, I am finished with this step, and can test the application using pass-through.

Testing the application using pass-through

When using pass-through I can go directly to the external URL, which in my case is https://itsmportal.elven.no. And as expected, I reach the internal Forms-based login page:

image

The users and groups I have assigned access will also see the itnetX ITSM Portal application in the Access Panel (https://myapps.microsoft.com) or in My Apps; the application is linked to the external URL:

image

This is how the Access Panel looks in its coming new design:

image

Now I'm ready for the next step, which is to change the pre-authentication method to Azure AD authentication with single sign-on.

Change Application to use Azure AD Authentication as Preauthentication

First I reconfigure the Azure AD App Proxy Application by changing the pre-authentication method to Azure Active Directory.

Next I need to set the internal authentication method to "Windows Integrated Authentication". I also need to configure the Service Principal Name (SPN). Here I specify HTTP/portalserverfqdn, which in my example is HTTP/azscsmms2.elven.local.
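
To double-check that this SPN exists and is registered on the expected account, the setspn utility can be run from any domain-joined machine (a verification step I find handy; it is not part of the portal configuration itself):

setspn -Q HTTP/azscsmms2.elven.local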

image

PS! A new preview feature is available to choose which login identity to delegate. I will continue using the default value of User principal name.

Since I now use pre-authentication, it is important to remember to assign individual users or groups to the application. This enables me to control which users will see the application under My Apps and who will be able to access the application's external URL directly. Users who are not assigned access will not be authorized for the application.

Enable Windows Authentication for itnetX ITSM Portal

The itnetX ITSM Portal site is configured for Windows Authentication by default, but since I reconfigured the site to use Forms Authentication earlier, I just need to reverse that now. See the installation and configuration documentation for details.
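
If the authentication for the site is managed at the IIS level, the switch back can also be scripted; a sketch with the WebAdministration module, where the site name "ITSMPortal" is a placeholder and the itnetX documentation remains the authoritative reference:

Import-Module WebAdministration
# Enable Windows Authentication and disable Anonymous for the portal site (site name is a placeholder)
Set-WebConfigurationProperty -Filter "/system.webServer/security/authentication/windowsAuthentication" -Name Enabled -Value $true -PSPath "IIS:\" -Location "ITSMPortal"
Set-WebConfigurationProperty -Filter "/system.webServer/security/authentication/anonymousAuthentication" -Name Enabled -Value $false -PSPath "IIS:\" -Location "ITSMPortal"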

It is a good idea at this point to verify that Windows Integrated Authentication is working correctly by browsing internally to the ITSM Portal site. Your currently logged-on user (if permissions are correct) should be logged in automatically.

Configure Kerberos Constrained Delegation for the Proxy Connector Server

I now need to configure the server running the Proxy Connector so that it can impersonate users who pre-authenticate with Azure AD, and use Windows Integrated Authentication against the server running the ITSM Portal.

I find the computer account for the Connector server in Active Directory, and on the Delegation tab select "Trust this computer for delegation to specified services only" and "Use any authentication protocol". Then I add the computer name of the web server the ITSM Portal is installed on, and specify the http service, as shown below (I already have an existing delegation set up):
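
The same delegation settings can be scripted with the ActiveDirectory module; a sketch, where "AZCONNECTOR01" is a placeholder for the connector server's computer account:

Import-Module ActiveDirectory
# "Use any authentication protocol" corresponds to protocol transition:
Set-ADAccountControl -Identity "AZCONNECTOR01$" -TrustedToAuthForDelegation $true
# Allow delegation to the http service on the portal server:
Set-ADComputer -Identity "AZCONNECTOR01" -Add @{ 'msDS-AllowedToDelegateTo' = @('http/azscsmms2.elven.local', 'http/azscsmms2') }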

image

This was the last step in my configuration, and I am almost ready to test.

If you, like me, have an environment consisting of both on-premises and Azure servers in a hybrid datacenter, please allow room for AD replication of these SPNs and delegation settings.

Testing the published application with Azure AD Authentication!

Now I am ready to test the published proxy application with Azure AD Authentication.

When I go to my external URL, https://itsmportal.elven.no, Azure AD checks whether I already have an authenticated session; otherwise, I am presented with the customized logon page for my Azure AD:

image

Remember from earlier that I have assigned the application either to a group of all or some users or directly to some pilot users for example.

If I log in with an assigned user, I will be directly logged in to the ITSM Portal:

image

However, if I try to log in with an Azure AD account that hasn’t been assigned access to the application, I will see this message:

image

This means that the pre-authentication works and I can control who can access the application via Azure AD.

Conditional Access for Users and Devices

When using Azure AD as pre-authentication, I can also configure the application for conditional access for users and devices. Remember that this is an Azure AD Premium feature.

From the configuration settings for the application, I can configure access rules for MFA and location, and access rules for devices, which are now in preview:

image

If I enable access rules for MFA and location, I see the following settings, where I can, either for all users or for selected groups, require multi-factor authentication, require multi-factor authentication when not at work, or block access completely when not at work. I have to define my network location IP ranges for that to take effect.

image

If I enable access rules for devices, I see the following settings. I can select all users or selected groups that will have device-based access rules applied (and any exceptions to that).

I can choose between two device rules:

  • All devices must be compliant
  • Only selected devices must be compliant; other devices will be allowed access

If I select all devices, a sub-option for Windows devices appears, where I select whether they must be either domain joined or marked as compliant, or just marked as compliant, or just domain joined.

image

If I select the second option, I can even specify which devices will be checked for compliance:

image

So, with different access rules for MFA, location and selected devices, in addition to the Azure AD pre-authentication, I can apply the conditional access my application needs.

In this case I will select device rules for compliant/domain joined, and for all the different devices. This means that for users to access the ITSM Portal, their device must either be MDM enrolled (iOS, Android, Windows Phone) or, in the case of Windows devices, be either MDM enrolled, Azure AD joined, compliant or domain joined. Domain-joined computers must be connected to Azure AD via the steps described here: https://azure.microsoft.com/en-us/documentation/articles/active-directory-azureadjoin-devices-group-policy/.

After I’m finished reconfiguring the Azure AD App Proxy Application, I can save and continue and test with my devices.

Testing device based conditional access

Let's first see what happens when I try to access the ITSM Portal from an unknown device:

image

In the details I see that my device is Unregistered, so I will not be able to access the application.

Now, in the next step, I can enroll my Windows 10 device either through MDM or via Azure AD Join. In this scenario I have joined my Windows 10 device to Azure AD:
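
On the Windows 10 device itself, the join state can be verified from a command prompt with the built-in dsregcmd tool (a handy check; the exact output fields vary by build). Look for AzureAdJoined : YES in the output:

dsregcmd /status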

image

If I look at the Access Panel and my profile, I will also see my devices:

image

The administrator can see the Device that the user has registered in Azure Active Directory:

image

Let's test the published ITSM Portal again:

image

Now I can see that my device has been registered, but it is not compliant yet, so I still cannot access the ITSM Portal.

When I log on to the client management portal (https://portal.manage.microsoft.com), I can see that my Windows 10 device is not yet compliant:

image

So I investigate, fix whatever issues the device has, and re-check compliance; I can then verify that I am compliant and good to go:

image

After that, I’m successfully able to access the ITSM Portal again, this time after my device has been checked for compliance:

image

Summary

In this blog post we have seen how to publish and configure the itnetX ITSM Portal with Azure AD Application Proxy, using both pass-through authentication and Azure AD pre-authentication with Kerberos constrained delegation for single sign-on.

With the additional possibility of conditional access for users and devices, we have seen that we can require MFA or location conditions, as well as device compliance for mobile platforms and Windows devices.

Hope this has been an informative blog post, thanks for reading!

PS! In addition to accessing the application via the Access Panel (https://myapps.microsoft.com), I can use the App Launcher menu in Office 365 and add the ITSM Portal to the app chooser:

image

This will make it easy for my users to launch the application:

image

Notifying End Users that Incident is Closed When They Reply by Exchange Connector

A common challenge when using the Exchange Connector with Service Manager is that when an Incident is set to status Closed, users can still reply to the Incident record by e-mail. Even though the Incident is read-only when Closed, it is still possible to create related End User Comments via the Exchange Connector.

While the Exchange Connector cannot be configured to skip creating those End User Comments based on status, it would be nice to at least inform the user that the Incident is now Closed, and that they must create a new Incident record either via e-mail or the HTML portal.

There have been some solutions to this in the community. Some use different Incident templates that set the Incident to Active whenever users reply, and then re-close it with another template (https://itblog.no/3192). Some extend the Incident work item class by adding an UpdatedByEndUser property, and use that to control their notification subscriptions (http://www.scsm.se/?p=564).

I have been using another solution for a while in different implementations, and it seems to work fine, so I thought I would write a short blog post about it.

Overview

My solution uses a periodic notification subscription, targeting the Incident class, with criteria that check that:

  • The Incident Status is Closed
    AND
  • The End User Comment Entered Date is >= the Incident Closed Date
    AND
  • The Incident Closed Date is >= a fixed date (yyyy-MM-ddThh:mm:ss)

I will get more into why I use these criteria later in the post.

In addition to this notification subscription, I also have "normal" subscriptions, like sending e-mail to end users when the Incident is created and resolved, sending e-mail to the end user when the assigned user provides analyst comments, and sending e-mail to the assigned user when the end user comments. I will not get into those here.

So let’s start to set this up.

Creating a New Management Pack

I will create a new Management Pack to store this notification subscription and the e-mail template I will use.

After creating the Management Pack, I export it and give it a more meaningful ID and file name.

Usually I do this by using search and replace on the generated ID with my own ID name as shown below:
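
A PowerShell sketch of the same search-and-replace; the generated ID below is a placeholder for whatever ID your console created:

$oldId = 'ManagementPack.a1b2c3d4e5f6'   # placeholder for the generated MP ID
(Get-Content .\ExportedMP.xml) -replace [regex]::Escape($oldId), 'SkillSCSM.NotifyClosedIncident' | Set-Content .\SkillSCSM.NotifyClosedIncident.xml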

Afterwards, I save the MP as SkillSCSM.NotifyClosedIncident.xml, delete the original MP, and re-import the renamed one.

Closed Incident E-Mail Notification Template

Next I will create the e-mail notification template that the subscription will use to notify end users that the Incident is Closed and instruct them to create a new Incident.

This Notification Template will target the Incident Class, and I will use my new MP:

I specify a subject and HTML body:

After this I am ready to start creating the subscription.

Notification Subscription for End User Comments on Closed Incidents

I will create the notification subscription by selecting "Periodically notify when objects meet a criteria" and targeting the Incident class.

By using this periodic subscription there are some risks, which I will mitigate with my criteria. The risks are that it will backtrack and fire for all old Incidents from before the subscription was created, and that later changes to the criteria could mean that notifications are sent once more to users who have already received them. But as long as the criteria are defined correctly, this should not be a problem.

When specifying the criteria, there is something I cannot achieve with the wizard and will have to do in the XML later.

For now, you would have to add these criteria via the wizard:

  • [Incident] Status equals Closed
    AND
  • [Trouble Ticket] Closed Date greater than or equal to (your date today)
    AND
  • Has User Comment [Work Item Comments Log] Entered date greater than or equal to (your date today)

For testing purposes, you could also add criteria for ID so that you set it to a fixed Incident ID while you are testing.

Later, in the XML I will change one of the criteria so that: Has User Comment Entered date >= Trouble Ticket Closed Date.

I will select to Notify once:

Specify my E-Mail Notification Template:

Finish and Create.

Editing the Management Pack XML

Exporting my MP XML, I can now see the following criteria in the image below:

  • First, Status is set to equal the enumeration GUID for Closed
  • The second and third expressions are the ClosedDate and the Comment EnteredDate, which are set to static dates. I will change the comment expression to evaluate against the ClosedDate in the next step
  • The fourth expression is just for testing, as I have specified a single Incident ID; this I will remove later.

In the XML, I edit the third expression into an expression where the EnteredDate of the End User Comment must be later than (GreaterEqual) the Incident ClosedDate. Note also that I keep the static ClosedDate at today's date. This is because I don't want this rule to affect old Incidents, as all Incidents will be evaluated when I import this MP!

After this change, I can reimport the MP XML, and wait for it to start processing.

Verifying and Testing

In the Administration pane in the Service Manager Console, under Workflows and Status, I can find the workflow in question. I can see that it has triggered for an Incident where there has been an End User Comment on the Closed Incident:

The Affected User gets this email:

Let's try to reply once more with an End User Comment by replying to this email again. From the History I can see that after the Incident was closed, there are two End User Comments. But the notification will only occur once per Incident, so after the first time I send emails with End User Comments to the Closed Incident, I get only one notification.

Setting the solution from Test to Production

The next step is to set this solution into production. In my XML I had a criterion for just one Incident ID:

I will remove that now and re-import the XML MP. On the other hand, I will keep the fixed ClosedDate expression. The reason is of course to not send notifications for old Incidents that have had comments after their closed dates.

To summarize, my criteria expressions will be:

  • Status Equal Closed (Enum GUID)
  • ClosedDate GreaterEqual (Date) (This Fixed Date should be the date you import the Management Pack, and updated every time you do maintenance on the MP XML)
  • (Comment) EnteredDate GreaterEqual (TroubleTicket) ClosedDate

Maintenance and important things to note

There are some situations that are important to note with this solution. As this is a periodic notification subscription with an "only once" recurrence, Service Manager keeps track of which Incidents the workflow engine has sent Closed Incident notifications for, based on the criteria defined.

But there is an important exception to be aware of:

  • Changing and re-importing the MP XML. When you do that, you risk that all subscriptions will run again. Therefore, remember to update the fixed date criterion, so that older Closed Incidents do not send out notifications to the users who have commented.

For example, changing my MP XML above resulted in a second notification to the end user:

Editing the notification subscription must be done in XML from now on; trying to edit it in the Console will result in a greyed-out dialog:

To summarize, I now have a solution for sending e-mail to end users who send comments to Incidents after they are Closed. They will only get that notification once, not every time they comment on a Closed Incident.

Thanks for reading, hope it has been helpful!

Displaying Service Manager Service Requests Stats in OMS via Performance Collection and Operations Manager

Following up on my previous blog article on how to collect Service Manager Incident statistics via Operations Manager into OMS, https://systemcenterpoint.wordpress.com/2016/02/19/collecting-service-manager-incident-stats-in-oms-via-powershell-script-performance-collection-in-operations-manager/, this blog article will show how to get statistics for Service Requests and display them as performance data in OMS.

Getting Ready

Please see the link above for the first article on getting SCSM data into OMS, including instructions for configuring your Operations Manager environment and the SMLets PowerShell module. I will use that same setup as the basis for this article.

Creating the PowerShell Command Script for getting SCSM Service Request data

First I needed to think about what kind of Service Request data I wanted to get from SCSM. I decided to get the following values:

  • Submitted Service Requests
  • In Progress Service Requests
  • Completed Service Requests
  • Failed Service Requests
  • Cancelled Service Requests
  • Closed Service Requests
  • Service Requests Opened Last Day
  • Service Requests Opened Last Hour
  • Service Requests Completed Last Day
  • New Service Requests

Now, first some thoughts on why I chose to collect these data for Service Requests (SRs). When new Service Requests are created, they quickly change status to either Submitted (no activities in the template the Service Request was created from) or In Progress (one or more activities). SRs with no activities are manually set to Completed by the analyst when finished, or Cancelled if the SR will not be delivered. SRs with activities get the status Completed when all activities are completed, but might also get the status Failed if any activity fails, for example a runbook automation activity. Finally, when SRs are completed, they will eventually be Closed.

So it makes sense to track all these status values as performance data. You might ask why look at the New status for SRs, when this is only a transient status that quickly changes to either Submitted or In Progress? Well, if there is a problem with the workflow service on the Service Manager Management Server, SRs can get stuck in New status. This is something I would want to be able to see in OMS, and even create an alert for.

In addition to tracking the individual status values, I also want to see how many SRs were created in the last day and the last hour, and how many SRs were completed in the last day. These will be nice performance indicators.

These Service Request values will be retrieved with the Get-SCSMObject cmdlet in SMLets, using different criteria. Unlike Get-SCSMIncident, which queries directly for Incident records, I have to build the PowerShell command a little differently, by specifying the class object (via Get-SCSMClass) and filtering on the status enumeration ID for Service Requests via Get-SCSMEnumeration. For example, to get all SRs with a status of In Progress, I use the following command:
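
$inprogressrequests = @(Get-SCSMObject -ComputerName $scsmserver -Class (Get-SCSMClass -ComputerName $scsmserver -Name "System.WorkItem.ServiceRequest$") -Filter ("Status -eq '" + (Get-SCSMEnumeration -ComputerName $scsmserver "ServiceRequestStatusEnum.InProgress$").Id + "'")).Count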

The complete script for getting these SR data via SCOM into OMS is shown below:

# Debug file
$debuglog = $env:TEMP + "\powershell_perf_collect_SR_stats_debug.log"

Get-Date | Out-File $debuglog

"Who Am I:" | Out-File $debuglog -Append
whoami | Out-File $debuglog -Append

$ErrorActionPreference = "Stop"

Try {

Import-Module "C:\Program Files\WindowsPowerShell\Modules\SMLets"

$API = New-Object -ComObject MOM.ScriptAPI

$scsmserver = "az-scsm-ms01"

# Time boundaries for the "last hour" and "last day" counters
$beforetime = $(Get-Date).AddHours(-1)
$beforedate = $(Get-Date).AddDays(-1)

$PropertyBags = @()

$inprogressrequests = 0
$inprogressrequests = @(Get-SCSMObject -ComputerName $scsmserver -Class (Get-SCSMClass -ComputerName $scsmserver -Name "System.WorkItem.ServiceRequest$") -Filter ("Status -eq '" + (Get-SCSMEnumeration -ComputerName $scsmserver "ServiceRequestStatusEnum.InProgress$").Id + "'")).Count
$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "In Progress Service Requests")
$PropertyBag.AddValue("Value", [UInt32]$inprogressrequests)
$PropertyBags += $PropertyBag

"In Progress Service Requests:" | Out-File $debuglog -Append
$inprogressrequests | Out-File $debuglog -Append

$completedrequests = 0
$completedrequests = @(Get-SCSMObject -ComputerName $scsmserver -Class (Get-SCSMClass -ComputerName $scsmserver -Name "System.WorkItem.ServiceRequest$") -Filter ("Status -eq '" + (Get-SCSMEnumeration -ComputerName $scsmserver "ServiceRequestStatusEnum.Completed$").Id + "'")).Count
$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Completed Service Requests")
$PropertyBag.AddValue("Value", [UInt32]$completedrequests)
$PropertyBags += $PropertyBag

"Completed Service Requests:" | Out-File $debuglog -Append
$completedrequests | Out-File $debuglog -Append

$submittedrequests = 0
$submittedrequests = @(Get-SCSMObject -ComputerName $scsmserver -Class (Get-SCSMClass -ComputerName $scsmserver -Name "System.WorkItem.ServiceRequest$") -Filter ("Status -eq '" + (Get-SCSMEnumeration -ComputerName $scsmserver "ServiceRequestStatusEnum.Submitted$").Id + "'")).Count
$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Submitted Service Requests")
$PropertyBag.AddValue("Value", [UInt32]$submittedrequests)
$PropertyBags += $PropertyBag

"Submitted Service Requests:" | Out-File $debuglog -Append
$submittedrequests | Out-File $debuglog -Append

$cancelledrequests = 0
$cancelledrequests = @(Get-SCSMObject -ComputerName $scsmserver -Class (Get-SCSMClass -ComputerName $scsmserver -Name "System.WorkItem.ServiceRequest$") -Filter ("Status -eq '" + (Get-SCSMEnumeration -ComputerName $scsmserver "ServiceRequestStatusEnum.Canceled$").Id + "'")).Count
$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Cancelled Service Requests")
$PropertyBag.AddValue("Value", [UInt32]$cancelledrequests)
$PropertyBags += $PropertyBag

"Cancelled Service Requests:" | Out-File $debuglog -Append
$cancelledrequests | Out-File $debuglog -Append

$failedrequests = 0
$failedrequests = @(Get-SCSMObject -ComputerName $scsmserver -Class (Get-SCSMClass -ComputerName $scsmserver -Name "System.WorkItem.ServiceRequest$") -Filter ("Status -eq '" + (Get-SCSMEnumeration -ComputerName $scsmserver "ServiceRequestStatusEnum.Failed$").Id + "'")).Count
$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Failed Service Requests")
$PropertyBag.AddValue("Value", [UInt32]$failedrequests)
$PropertyBags += $PropertyBag

"Failed Service Requests:" | Out-File $debuglog -Append
$failedrequests | Out-File $debuglog -Append

$closedrequests = 0
$closedrequests = @(Get-SCSMObject -ComputerName $scsmserver -Class (Get-SCSMClass -ComputerName $scsmserver -Name "System.WorkItem.ServiceRequest$") -Filter ("Status -eq '" + (Get-SCSMEnumeration -ComputerName $scsmserver "ServiceRequestStatusEnum.Closed$").Id + "'")).Count
$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Closed Service Requests")
$PropertyBag.AddValue("Value", [UInt32]$closedrequests)
$PropertyBags += $PropertyBag

"Closed Service Requests:" | Out-File $debuglog -Append
$closedrequests | Out-File $debuglog -Append

$newrequests = 0
$newrequests = @(Get-SCSMObject -ComputerName $scsmserver -Class (Get-SCSMClass -ComputerName $scsmserver -Name "System.WorkItem.ServiceRequest$") -Filter ("Status -eq '" + (Get-SCSMEnumeration -ComputerName $scsmserver "ServiceRequestStatusEnum.New$").Id + "'")).Count
$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "New Service Requests")
$PropertyBag.AddValue("Value", [UInt32]$newrequests)
$PropertyBags += $PropertyBag

"New Service Requests:" | Out-File $debuglog -Append
$newrequests | Out-File $debuglog -Append

$requestsopenedlasthour = 0
$requestsopenedlasthour = @(Get-SCSMObject -ComputerName $scsmserver -Class (Get-SCSMClass -ComputerName $scsmserver -Name "System.WorkItem.ServiceRequest$") -Filter ("CreatedDate -gt '" + $beforetime + "'")).Count
$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Service Requests Opened Last Hour")
$PropertyBag.AddValue("Value", [UInt32]$requestsopenedlasthour)
$PropertyBags += $PropertyBag

"Service Requests Opened Last Hour:" | Out-File $debuglog -Append
$requestsopenedlasthour | Out-File $debuglog -Append

$requestsopenedlastday = 0
$requestsopenedlastday = @(Get-SCSMObject -ComputerName $scsmserver -Class (Get-SCSMClass -ComputerName $scsmserver -Name "System.WorkItem.ServiceRequest$") -Filter ("CreatedDate -gt '" + $beforedate + "'")).Count
$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Service Requests Opened Last Day")
$PropertyBag.AddValue("Value", [UInt32]$requestsopenedlastday)
$PropertyBags += $PropertyBag

"Service Requests Opened Last Day:" | Out-File $debuglog -Append
$requestsopenedlastday | Out-File $debuglog -Append

$requestscompletedlastday = 0
$requestscompletedlastday = @(Get-SCSMObject -ComputerName $scsmserver -Class (Get-SCSMClass -ComputerName $scsmserver -Name "System.WorkItem.ServiceRequest$") -Filter ("CompletedDate -gt '" + $beforedate + "'")).Count
$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Service Requests Completed Last Day")
$PropertyBag.AddValue("Value", [UInt32]$requestscompletedlastday)
$PropertyBags += $PropertyBag

"Service Requests Completed Last Day:" | Out-File $debuglog -Append
$requestscompletedlastday | Out-File $debuglog -Append

$PropertyBags

} Catch {

"Error Caught:" | Out-File $debuglog -Append
$($_.Exception.GetType().FullName) | Out-File $debuglog -Append
$($_.Exception.Message) | Out-File $debuglog -Append

}


PS! I have included debugging and logging in the script. Be aware though that setting $ErrorActionPreference = "Stop" will end the script on any error, for example with the logging, so it might be an idea to remove the debug logging once you have confirmed that everything works.

In the next part I’m ready to create the PowerShell Script Rule.

Creating the PowerShell Script Rule

In the Operations Console, under Authoring, create a new PowerShell Script Rule as shown below:

    1. Select the PowerShell Script (Performance – OMS Bound) rule type. I have created a custom destination management pack for this script.
    2. Specify a rule name and the rule category Performance Collection. As mentioned earlier in this article, the rule target will be the Root Management Server Emulator:
    3. Select to run the script every 30 minutes, and the time the interval will start from:
    4. Select a name and timeout for the script file, and enter the complete script as shown earlier:
    5. For the Performance Mapping information, the object name must be in the \\FQDN\YourObjectName format. For FQDN I used the Target variable for PrincipalName, and for the object name ServiceMgrServiceRequestStats, adding "\\" at the start and "\" in between: \\$Target/Host/Property[Type="MicrosoftWindowsLibrary7585010!Microsoft.Windows.Computer"]/PrincipalName$\ServiceMgrServiceRequestStats. I specified the counter name as "Service Manager Service Request Stats", and the Instance and Value are specified as $Data/Property[@Name='Instance']$ and $Data/Property[@Name='Value']$. These reflect the PropertyBag instance and value created in the PowerShell script:
    6. After finishing the Create Rule Wizard, two new rules are created, which you can find by scoping to the Root Management Server Emulator I chose as target. Both rules must be enabled, as they are not enabled by default:


At this point we are finished configuring the SCOM side, and can wait a few hours to see that data is actually coming into my OMS workspace.

Looking at Service Manager Service Request Performance Data in OMS

After a while I will start seeing Performance Data coming into OMS with the specified Object and Counter Name, and for the different instances and values.

In Log Search, I can specify Type=Perf ObjectName=ServiceMgrServiceRequestStats, and I will find the Results and Metrics for the specified time frame.

I can now look at Service Manager stats as performance data, showing different scenarios like how many submitted, in progress, completed, failed, cancelled and closed Service Requests there are over a time period. I can also look at how many Service Requests are created each hour or each day.

I can also create saved searches and alerts for my search criteria.

Creating OMS Alerts for Service Request Counters

A couple of scenarios are interesting for Alerts when some of the Service Request counters pass a threshold.

Failed Service Requests is a status that will be set when an activity in the SR fails, for example a Runbook Automation Activity. Normally you would expect analysts to follow up on requests that fail directly in Service Manager, but it could make sense to generate an alert if the number of failed requests increases over a predefined threshold.

The search query for Failed Service Requests in OMS is:

Type=Perf ObjectName=ServiceMgrServiceRequestStats InstanceName="Failed Service Requests"

This would give me all results for the defined time period, but to generate an alert I want to look at the most recent results, for example for the last hour. In addition, I want to filter the results for my alert to when the number of failed requests is over a threshold of 10. My query for this is:

Type=Perf ObjectName=ServiceMgrServiceRequestStats InstanceName="Failed Service Requests" AND TimeGenerated>NOW-1HOUR AND CounterValue > 10

In my test environment, this gives me the following result, showing that I have a total of 29 failed requests:

OK, 29 failed requests is a lot, but as this is a test environment and a lot of these are old requests, I need to do some cleaning up. I want to create an alert for this, so I press the Alert button:

For the alert, I give it a name and base it on the search query. I want to check every 60 minutes, and generate an alert when the number of results is greater than 1, so that I make sure the passing of the threshold is consistent and not just temporary.

For actions I want an e-mail notification, so I type in a subject and my recipients.

I save the alert rule, and verify that it was successfully created.

Soon I get my first alert on email:

There is also a second scenario for an alert, and that is when Service Requests get stuck in the New status. Normally this happens when the Service Manager workflows are not running, so it is important to get notified about that.

The following search query, using CounterValue > 1, will provide the results for my alert, as I want to get notified as soon as more than one SR is stuck in the New status:

Type=Perf ObjectName=ServiceMgrServiceRequestStats InstanceName="New Service Requests" AND TimeGenerated>NOW-1HOUR AND CounterValue > 1

And I can create my alert as the following image:

With that, this blog post is concluded. Thanks for reading; I hope the posts on OMS and getting SCSM data via NRT Performance Collection have been useful 😉

Collecting Service Manager Incident Stats in OMS via PowerShell Script Performance Collection in Operations Manager

I have been thinking about bringing some key Service Manager statistics into Microsoft Operations Management Suite. The best way to do that now is to use NRT Performance Data Collection in OMS and a PowerShell Script rule in my Operations Manager Management Group that I have connected to OMS. The key solution to make this happen is the "Sample Management Pack – Wizard to Create PowerShell script Collection Rules" described in this blog article: http://blogs.msdn.com/b/wei_out_there_with_system_center/archive/2015/09/29/oms-collecting-nrt-performance-data-from-an-opsmgr-powershell-script-collection-rule-created-from-a-wizard.aspx.

With this solution I can get practically any data I want into OMS via SCOM and a PowerShell script, so I will start my solution for bringing in Service Manager stats by defining some PowerShell commands to get the values I want. For that I will use the SMLets PowerShell module for Service Manager.

For this blog article, I will focus on Incident stats from SCSM. In a later article I will bring some more SCSM data into OMS.

Getting Ready

I have to do some preparations first. This includes:

  1. Importing the "Sample Management Pack – Wizard to Create PowerShell script Collection Rules"
    1. This can be downloaded from the TechNet Gallery at https://gallery.technet.microsoft.com/Sample-Management-Pack-e48040f7
  2. Install the SMLets for SCSM PowerShell module (on the chosen target server for the PowerShell Script rule); see the one-line install sketch after this list.
    1. I chose to install it from the PowerShell Gallery at https://www.powershellgallery.com/packages/SMLets
    2. If you are running Windows Server 2012 R2, which I am, follow the instructions here to get the PowerShellGet module: https://www.powershellgallery.com/GettingStarted?section=Get%20Started.
  3. Choose a target for where to run the SCSM commands from
    1. Regarding SMLets and where to install it, I decided to use the SCOM Root Management Server Emulator. This server will then run the SCSM commands against the Service Manager Management Server.
  4. Choose an account for the Run As Profile
    1. I also needed to think about the Run As account the SCSM commands will run under. As we will see later, the PowerShell Script rules will be set up with a Default Run As Profile.
    2. The Default Run As Profile for the RMS Emulator will be the Management Server Action Account; if I had chosen another rule target, the Default Run As Profile would be the Local System Account.
    3. Alternatively, I could have created a custom Run As Profile with an SCSM user account that has permissions to read the required data from SCSM, and configured the PowerShell Script rules to use that.
    4. I decided to go with the Management Server Action Account, and to make sure that this account is mapped to a role in SCSM with access to the work items I want to query against; any Operator role will do, but you could choose to scope and restrict more if needed:
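
Installing SMLets from the PowerShell Gallery (step 2 above) is then a one-liner; a minimal sketch, run from an elevated PowerShell prompt on the target server:

Install-Module -Name SMLets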

At this point I’m ready for the next step, which is to create some PowerShell commands for the Script Rule in SCOM.

Creating the PowerShell Command Script for getting SCSM data

First I needed to think about what kind of Incident data I wanted to get from SCSM. I decided to get the following values:

  • Active Incidents
  • Pending Incidents
  • Resolved Incidents
  • Closed Incidents
  • Incidents Opened Last Day
  • Incidents Opened Last Hour

These values will be retrieved by the Get-SCSMIncident cmdlet in SMLets, using different criteria. The complete script is shown below:

# Debug file
$debuglog = $env:TEMP + "\powershell_perf_collect_debug.log"

Get-Date | Out-File $debuglog

"Who Am I:" | Out-File $debuglog -Append
whoami | Out-File $debuglog -Append

$ErrorActionPreference = "Stop"

Try {

Import-Module "C:\Program Files\WindowsPowerShell\Modules\SMLets"

$API = New-Object -ComObject MOM.ScriptAPI

$scsmserver = "MY-SCSM-MANAGEMENTSERVER-HERE"

# Time boundaries for the "last hour" and "last day" counters
$beforetime = $(Get-Date).AddHours(-1)
$beforedate = $(Get-Date).AddDays(-1)

$PropertyBags = @()

$activeincidents = 0
$activeincidents = @(Get-SCSMIncident -ComputerName $scsmserver -Status Active).Count
$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Active Incidents")
$PropertyBag.AddValue("Value", [UInt32]$activeincidents)
$PropertyBags += $PropertyBag

"Active Incidents:" | Out-File $debuglog -Append
$activeincidents | Out-File $debuglog -Append

$pendingincidents = 0
$pendingincidents = @(Get-SCSMIncident -ComputerName $scsmserver -Status Pending).Count
$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Pending Incidents")
$PropertyBag.AddValue("Value", [UInt32]$pendingincidents)
$PropertyBags += $PropertyBag

"Pending Incidents:" | Out-File $debuglog -Append
$pendingincidents | Out-File $debuglog -Append

$resolvedincidents = 0
$resolvedincidents = @(Get-SCSMIncident -ComputerName $scsmserver -Status Resolved).Count
$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Resolved Incidents")
$PropertyBag.AddValue("Value", [UInt32]$resolvedincidents)
$PropertyBags += $PropertyBag

"Resolved Incidents:" | Out-File $debuglog -Append
$resolvedincidents | Out-File $debuglog -Append

$closedincidents = 0
$closedincidents = @(Get-SCSMIncident -ComputerName $scsmserver -Status Closed).Count
$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Closed Incidents")
$PropertyBag.AddValue("Value", [UInt32]$closedincidents)
$PropertyBags += $PropertyBag

"Closed Incidents:" | Out-File $debuglog -Append
$closedincidents | Out-File $debuglog -Append

$incidentsopenedlasthour = 0
$incidentsopenedlasthour = @(Get-SCSMIncident -CreatedAfter $beforetime -ComputerName $scsmserver).Count
$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Incidents Opened Last Hour")
$PropertyBag.AddValue("Value", [UInt32]$incidentsopenedlasthour)
$PropertyBags += $PropertyBag

"Incidents Opened Last Hour:" | Out-File $debuglog -Append
$incidentsopenedlasthour | Out-File $debuglog -Append

$incidentsopenedlastday = 0
$incidentsopenedlastday = @(Get-SCSMIncident -CreatedAfter $beforedate -ComputerName $scsmserver).Count
$PropertyBag = $API.CreatePropertyBag()
$PropertyBag.AddValue("Instance", "Incidents Opened Last Day")
$PropertyBag.AddValue("Value", [UInt32]$incidentsopenedlastday)
$PropertyBags += $PropertyBag

"Incidents Opened Last Day:" | Out-File $debuglog -Append
$incidentsopenedlastday | Out-File $debuglog -Append

$PropertyBags

} Catch {

"Error Caught:" | Out-File $debuglog -Append
$($_.Exception.GetType().FullName) | Out-File $debuglog -Append
$($_.Exception.Message) | Out-File $debuglog -Append

}

Some comments about the script: I usually like to include some debug logging when I develop a solution like this. This way I can keep track of what happens along the way in the script, or get the exceptions if a script command fails. Be aware though that setting $ErrorActionPreference = "Stop" will end the script on any error, so it might be an idea to remove the debug logging once you have confirmed that everything works.

In the next part I’m ready to create the PowerShell Script Rule.

Creating the PowerShell Script Rule

In the Operations Console, under Authoring, create a new PowerShell Script Rule as shown below:

  1. Select the PowerShell Script (Performance – OMS Bound) rule type. I have created a custom destination management pack for this script.
  2. Specify a rule name and the rule category Performance Collection. As mentioned earlier in this article, the rule target will be the Root Management Server Emulator:
  3. Select to run the script every 30 minutes, and the time the interval will start from:
  4. Select a name and timeout for the script file, and enter the complete script as shown earlier:
  5. For the Performance Mapping information, the object name must be in the \\FQDN\YourObjectName format. For FQDN I used the Target variable for PrincipalName, and for the object name ServiceMgrIncidentStats, adding "\\" at the start and "\" in between: \\$Target/Host/Property[Type="MicrosoftWindowsLibrary7585010!Microsoft.Windows.Computer"]/PrincipalName$\ServiceMgrIncidentStats. I specified the counter name as "Service Manager Incident Stats", and the Instance and Value are specified as $Data/Property[@Name='Instance']$ and $Data/Property[@Name='Value']$. These reflect the PropertyBag instance and value created in the PowerShell script:
  6. After finishing the Create Rule Wizard, two new rules are created, which you can find by scoping to the Root Management Server Emulator I chose as target. Both rules must be enabled, as they are not enabled by default:
  7. Looking into the properties of the rules, we can edit the PowerShell script, and verify that the Run As profile is the default. This is where I would change the profile if I wanted to create my own custom profile and Run As account for it.

At this point we are finished configuring the SCOM side, and can wait a few hours to see that data is actually coming into my OMS workspace.

Looking at Service Manager Performance Data in OMS

After a while I will start seeing Performance Data coming into OMS with the specified Object and Counter Name, and for the different instances and values.

In Log Search, I can specify Type=Perf ObjectName=ServiceMgrIncidentStats, and I will find the Results and Metrics for the specified time frame.

I can now look at Service Manager stats as performance data, showing different scenarios like how many active, pending, resolved and closed incidents there are over a time period. I can also look at how many incidents are created by each hour or by each day.

Finally, I can also create saved searches and alerts, for example alerts that fire when the number of incidents for any counter goes over a set value.

Thanks for reading, and look out for more blog posts on OMS and getting SCSM data via NRT Performance Collection in the future 😉

Publish Operations and Service Manager Consoles as Azure RemoteApp Programs

This blog post will show how you can publish System Center 2012 R2 management consoles, like the Operations Console and the Service Manager Console, as Azure RemoteApp programs.

I already have an environment in Azure that runs Service Manager and Operations Manager 2012 R2. Now I want remote clients and platforms to be able to run these consoles as RemoteApp programs.

When planning, I decided to use a Hybrid Collection, and the overall steps were (as documented in https://azure.microsoft.com/en-us/documentation/articles/remoteapp-create-hybrid-deployment/):

  1. Decide what image to use for your collection. You can create a custom image or use one of the Microsoft images included with your subscription.
  2. Set up your virtual network.
  3. Create a collection.
  4. Join your collection to your local domain.
  5. Add a template image to your collection.
  6. Configure directory synchronization. Azure RemoteApp requires that you integrate with Azure Active Directory by either 1) configuring Azure Active Directory Sync with the Password Sync option, or 2) configuring Azure Active Directory Sync without the Password Sync option but using a domain that is federated to AD FS. Check out the configuration info for Active Directory with RemoteApp.
  7. Publish RemoteApp apps.
  8. Configure user access.

Azure RemoteApp can only be configured in the classic Azure Management Portal (manage.windowsazure.com); support in the new portal.azure.com and Azure Resource Manager is on the roadmap.

Step 1 – Create Custom Image for Installing Service Manager and Operations Consoles

There are several options available for creating a custom image. As my System Center environment was already set up in Azure, it made sense to create and use an image on an Azure VM, https://azure.microsoft.com/en-us/documentation/articles/remoteapp-image-on-azurevm/.

Create Azure VM for Image

First I created a VM based on template for Remote Desktop Session Host:

The VM must be created on a VNet that can reach the Domain Controller for the domain that the System Center servers are joined to.

After the VM was provisioned, I joined it to my AD Domain and rebooted.

After that I was ready to install and configure the Service Manager and Operations Consoles, with all requirements, on the image. I also added the most recent Update Rollups for the System Center 2012 R2 consoles (which of course are already applied on the System Center servers).

At this point I tested the Service Manager and Operations Consoles, and successfully connected to the Management Servers.

Finally, the Azure VM needed to be sysprepped. The desktop contains a validation script that validates the image against the Azure RemoteApp requirements and offers to launch Sysprep:
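
For reference, launching Sysprep manually with the parameters required for image capture looks like this (the validation script normally handles this for you):

C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown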

Then:

After running Sysprep the VM shuts down and we are ready for capturing the virtual machine.

Capture the virtual machine

To capture the image follow the instructions as specified here https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-capture-image-windows-server/.
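
The capture can also be scripted with the classic (ASM) Azure PowerShell module; a sketch, where the cloud service and VM names are placeholders for my lab values:

# Stop the sysprepped VM, then capture it as a generalized image (names are placeholders)
Stop-AzureVM -ServiceName "myCloudService" -Name "rdshImageVM" -Force
Save-AzureVMImage -ServiceName "myCloudService" -Name "rdshImageVM" -ImageName "SC2012R2ConsolesImage" -ImageLabel "SC 2012 R2 Consoles RDSH" -OSState Generalized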

I give the virtual machine an image name and description, and confirm that I have run Sysprep. Note that the VM will be deleted after the image is captured:

When finished, the captured image is removed from the virtual machine instances and becomes available under Images:

At this point I have an image ready to use. I can create new VMs based on this image if I want, or I can use it as a RemoteApp image, which I will return to later. Now I'm ready for the next step.

Step 2 – Set up virtual network

Most will already have this in place if you have an existing Azure subscription with Azure VMs. The requirement is that the Azure RemoteApp collection and image must be able to connect to the existing infrastructure. In this scenario, where I want to use the Service Manager and Operations Manager Consoles as Azure RemoteApp programs, I need to be able to reach the Management Servers. Basically there are 3 scenarios:

  1. The Management Servers and the RemoteApp programs are on the same Azure Virtual Network.
  2. The Management Servers are on on-premises infrastructure, and the Virtual Network must be configured with a VPN to the on-premises infrastructure.
  3. The Management Servers are on another Virtual Network, and the two Virtual Networks must be connected to each other via an Azure site-to-site VPN.
    1. A variant of the third scenario, where the Management Servers are created in a Resource Group in ARM (Azure Resource Manager).

And of course, my scenario was the last one, as I have created my System Center 2012 R2 environment in a Resource Group using Azure Resource Manager 🙂

Since I cannot configure and deploy Azure RemoteApp in ARM, I needed to create a site-to-site VPN between my Azure Service Management (ASM) environment and the ARM environment.

There is some good guidance on how to do that in this article: https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-arm-asm-s2s/.

Since I already had VNets and VMs in place, I had to tweak the guide a little, but I was able to make it work successfully. I will not get into details on that here, as I will focus on Azure RemoteApp, but that might be stuff for another blog post later :).

PS! Make sure that the Virtual Network has at least one DNS server to provide name resolution!

Step 3 – Create a collection

At this stage I was ready to create a RemoteApp collection. I have signed up for a trial, as noted in the image below, and selected to create a RemoteApp collection with a Basic plan. I specified the Virtual Network and subnet that have connectivity to the ARM environment, and selected Join Local Domain.

After creating the RemoteApp Collection, I get a status of Input Required:

At the Quick Start menu, the steps to follow are highlighted:

Step 4 – Join collection to local domain

The next step is to join the local domain.

I specify my Active Directory Domain information, optionally specifying an OU for the RDS hosts that will provide the service. I also need a service account that has permissions to add computers to the domain.

After successfully joining the domain, the first step is acknowledged and I’m ready for the next step:

Step 5 – Add template image to collection

I now need to add the Azure VM image I captured earlier to the RemoteApp Collection.

Import the image into the Azure RemoteApp image library

First I add the image to the RemoteApp library:

I select to import from the Virtual Machines library:

Find my image:

I give the RemoteApp template a name and specify a location consistent with my Azure infrastructure:

After that the Upload Pending status will be there for a while, about 15-20 minutes in my case:

And then the RemoteApp image is ready:

Link collection to existing template image

Now I can link the uploaded image to the collection:

Selecting my image template:

And after that provisioning of the collection with the image starts. This will take a while longer:

While it's being provisioned, I can see that the status is pending, and that the operation can take up to 1 hour:

Provisioning status:

Finally, after provisioning:

Then we are ready for the final steps before we can start testing!

Step 6 – Configure directory synchronization

Azure RemoteApp requires that you integrate with Azure Active Directory by either 1) configuring Azure Active Directory Sync with the Password Sync option, or 2) configuring Azure Active Directory Sync without the Password Sync option but using a domain that is federated to AD FS.

This is already in place in this environment, so the users I want to configure access for are already in Azure AD.

Step 7 – Publish RemoteApp apps

I can now publish the Remote App Programs from the provisioned image:

I select the Service Manager Console and Operations Console:

Verify that the RemoteApp programs are published:
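
For reference, publishing can also be scripted; a minimal sketch assuming the RemoteApp cmdlets in the classic Azure PowerShell module, with a placeholder collection name:

# List the Start Menu programs available in the template image, then publish by app ID
# ("MyCollection" is a placeholder; the app ID comes from the listing)
Get-AzureRemoteAppStartMenuProgram -CollectionName "MyCollection"
Publish-AzureRemoteAppProgram -CollectionName "MyCollection" -StartMenuAppId "<StartMenuAppId>"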

Step 8 – Configure user access

Next I need to configure the users that should have access to the RemoteApp Programs:
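
This too can be scripted; a sketch assuming the classic module's RemoteApp cmdlets, with placeholder names:

# Grant an Azure AD (OrgId) user access to the RemoteApp collection (placeholder names)
Add-AzureRemoteAppUser -CollectionName "MyCollection" -Type OrgId -UserUpn "user@mydomain.com"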

After that I’m ready for testing!

Testing the RemoteApp Programs

Now I can test the RemoteApp Programs.

HTML5 Remote App Web

First I test with the new HTML5 RemoteApp Web Client, which is now in Public Preview and available at the following URL: https://www.remoteapp.windowsazure.com/web.

When logging in I’m presented with the following Work Resources:

I can successfully launch the Service Manager and Operations Consoles. I can also easily switch between them in the top menu bar:

Azure Remote App Desktop App

Next I test with the downloadable Azure RemoteApp Desktop Application. After signing in I can see my work resources and launch them:

The Azure RemoteApp client also seamlessly integrates the RemoteApp Programs into the Start and All Apps menus in Windows 10:

Windows 10 Mobile Remote Desktop App

There are Remote Desktop Apps for all mobile platforms (Windows Mobile, iOS, Android). Here I have tested the Windows 10 Mobile App:

And I can launch the RemoteApp Program on my phone:

Windows 10 Continuum Support

There is a new Remote Desktop Preview App out for Windows 10 Mobile that supports Continuum, but this Preview App does not currently support Azure RemoteApp. That will come later down the line though, and when it does I will update this blog post or create a related post with my experiences!

Conclusion

This was quite fun to work with, and the whole process with the 8 steps above worked like a charm. In fact, the most challenging part was creating the Site-to-Site VPN between my ASM and ARM environments. Beyond that I never had any errors or problems.

I’m eagerly awaiting Azure RemoteApp support for Azure Resource Manager though!

Getting SCSM Object History using PowerShell and SDK

Some objects in the Service Manager CMDB do not have a History tab, for example Active Directory Users:


This makes it more difficult to keep track of changes made by the AD Connector, or of Relationship Changes.

Some years ago, this blog post on TechNet showed how you could access object history programmatically using the SDK and a Console Application: http://blogs.technet.com/b/servicemanager/archive/2011/09/16/accessing-object-history-programmatically-using-the-sdk.aspx

I thought I could accomplish the same using PowerShell and the SDK instead, so I wrote the following script based on the mentioned blog article:

# Import module for Service Manager PowerShell CmdLets
$SMDIR    = (Get-ItemProperty 'hklm:/software/microsoft/System Center/2010/Service Manager/Setup').InstallDirectory
Set-Location -Path $SMDIR
If (!(Get-Module -Name System.Center.Service.Manager)) { Import-Module ".\Powershell\System.Center.Service.Manager.psd1" }

# Connect to Management Server
$EMG = New-Object Microsoft.EnterpriseManagement.EnterpriseManagementGroup "localhost"

# Specify Object Class
# In this example AD User
$aduserclass = Get-SCSMClass -DisplayName "Active Directory User"

# Get Instance of Class
$aduser = Get-SCSMClassInstance -Class $aduserclass | Where {$_.UserName -eq 'myusername'}

# Get History of Object Changes
$listhistory = $emg.EntityObjects.GetObjectHistoryTransactions($aduser)

# Loop History and Output to Console
ForEach ($emoht in $listhistory) {

    Write-Host "*************************************************************" -ForegroundColor Cyan
    Write-Host $emoht.DateOccurred `t $emoht.ConnectorDisplayName `t $emoht.UserName -ForegroundColor Cyan
    Write-Host "*************************************************************" -ForegroundColor Cyan

    ForEach ($emoh in $emoht.ObjectHistory) {

        If ($emoh.Values.ClassHistory.Count -gt 0) {
        
            Write-Host "*************************" -ForegroundColor Yellow
            Write-Host "Property Value Changes"  -ForegroundColor Yellow
            Write-Host "*************************"  -ForegroundColor Yellow

            ForEach ($emoch in $emoh.Values.ClassHistory) {
                ForEach ($propertyChange in $emoch.PropertyChanges) {
                    $propertyChange.GetEnumerator() | % {
                        Write-Host $emoch.ChangeType `t $_.Key `t $_.Value.First `t $_.Value.Second -ForegroundColor Yellow
                    }
                }
            }

        }
        
        If ($emoh.Values.RelationshipHistory.Count -gt 0) {

            Write-Host "*************************" -ForegroundColor Green
            Write-Host "Relationship Changes" -ForegroundColor Green
            Write-Host "*************************" -ForegroundColor Green

            ForEach ($emorh in $emoh.Values.RelationshipHistory) {

                $mpr = $emg.EntityTypes.GetRelationshipClass($emorh.ManagementPackRelationshipTypeId)
                Write-Host $mpr.DisplayName `t $emorh.ChangeType `t (Get-SCSMClassInstance -Id $emorh.SourceObjectId).DisplayName `t (Get-SCSMClassInstance -Id $emorh.TargetObjectId).DisplayName -ForegroundColor Green
            
            }

        }

    }

}

 

Running the script outputs the source of the changes, and any Property or Relationship changes, as shown in the below sample image:


Creating SCSM Incidents from OMS Alerts using Azure Automation – Part 2

This is the second part of a 2-part blog article showing how you can create a new Service Manager Incident from an Azure Automation Runbook using a Hybrid Worker Group, and how OMS Alerts can search for a condition and generate an alert that triggers this Azure Automation Runbook via a Webhook, creating an Incident in Service Manager with some contextual data from the Alert.

In Part 1 of the blog I prepared my Service Manager environment, and created Azure Automation Runbook and Assets to run via Hybrid Worker for generating incidents in Service Manager. In this second part of the blog I will configure my Operations Management Suite environment for OMS Alerting and Alert Remediation, and create an OMS Alert that will trigger this PowerShell Runbook.

Configuring OMS Alerting and Remediation

If you haven't already done so for your OMS Workspace, you will need to enable OMS Alerting and Alert Remediation under Settings and Preview Features. This is shown in the picture below:

Creating the OMS Alert

The next step is to create the OMS Alert. To do this I will need to do a Log Search with the criteria I want. For my example in this article, I will use an Event Log search: I have previously added the Azure AD Application Proxy Connector Event Log to OMS, and I have also created a custom field for events where “The Connector was unable to connect to the service due to networking issues”.

The result of this Log Search is shown below, where I have 7 results in the last 7 days:

Having enabled OMS Alerting and Remediation under Settings, I can now see a new Alert button at the bottom of the screen. I click on it to create my new OMS Alert.

I give the OMS Alert a descriptive name, use my current search query, and check every 15 minutes for this alert. I can also specify a threshold over a specified time window; in this case I want the Alert to trigger if there are more than 0 occurrences. If I want to, I can also send an email notification to specified recipient(s).

Since I want to generate a SCSM Incident when this OMS Alert triggers, I select to Enable Remediation and select my Create-SCSMIncident Runbook.

After saving the OMS Alert I get a successful confirmation, and a link to where I can see my configured Alerts:

While in Preview I can only create up to 10 Alerts, and for now I can remove them but not edit existing ones:

That is all I need to configure in Operations Management Suite to get the OMS Alert to trigger. Now I need to go back to the Azure Portal and configure some changes for my PowerShell Runbook!

Configuring Azure Automation PowerShell Runbook for Webhook and Hybrid Worker Group

In the Azure Portal, under my Automation Account and the PowerShell Runbook I created for Create-SCSMIncident (see Part 1), a Webhook for OMS Alert Remediation will now have been created automatically. This Webhook has an expiry date one year after creation.

I now need to specify the Parameters for the Webhook, so that it runs on my Hybrid Worker group:

After I have specified the Hybrid Worker group, any OMS Alert will now trigger this Runbook, which runs in my local environment and, in this case, creates the SCSM Incident as specified in the PowerShell Runbook. But I also want to have some contextual data in the Incident, so I need to look at the Webhook data in the next step.

Configuring and using Webhook for contextual data in Runbook

Whenever the OMS Alert triggers the remediation Azure Automation Runbook via the Webhook, event information is submitted from OMS to the Runbook via the WebhookData input parameter.

An example of this is shown in the image below, where the WebhookData Input Parameter contains event information formatted as JSON (JavaScript Object Notation):

So, I need to configure my PowerShell Runbook to process this WebhookData, and to use that information when creating the Incident.

Let's first take a look at the WebhookData. If I copy the input from above into, for example, Visual Studio Code, I can see more clearly that the WebhookData consists of a WebhookName, RequestBody and RequestHeader. The values I'm looking for are in the RequestBody and its SearchResults:

I update my PowerShell Runbook so that I can process the WebhookData and get the WebhookName, WebhookHeaders and WebhookBody. When I have the WebhookBody, I can get the SearchResults and, by using ConvertFrom-Json, loop through the value array to get the fields I'm looking for, like this:

In this case I want the Source, EventID and RenderedDescription, which also corresponds to the values from the Alert in OMS, as shown below. I then use these values for the Incident Title and Description in the PowerShell Runbook.

The complete Azure Automation PowerShell Runbook is shown below:

param (
    [object]$WebhookData
)

if ($WebhookData -ne $null) {

# Get Webhook Data
$WebhookName = $WebhookData.WebhookName
$WebhookHeaders = $WebhookData.RequestHeader
$WebhookBody = $WebhookData.RequestBody

# Writing Webhook Data to verbose output
Write-Verbose "Webhook name: $WebhookName"
Write-Verbose "Webhook header:"
Write-Verbose $WebhookHeaders
Write-Verbose "Webhook body:"
Write-Verbose $WebhookBody

# Searching Webhook Data for Value Results
$SearchResults = (ConvertFrom-JSON $WebhookBody).SearchResults
$SearchResultsValue = $SearchResults.value
Foreach ($item in $SearchResultsValue)
{
# Getting Alert Source, EventID and RenderedDescription
$AlertSource = $item.Source
Write-Verbose "Alert Source: $AlertSource"
$AlertEventId = $item.EventID
Write-Verbose "Alert EventID: $AlertEventId"
$AlertDescription = $item.RenderedDescription
Write-Verbose "Alert Description: $AlertDescription"
}

# Setting Incident Title and Description based on OMS Alert
$incident_title = "OMS Alert: " + $AlertSource
$incident_desc = $AlertDescription
}
else
{
# Setting Generic Incident Title and Description
$incident_title = "Azure Automation Generated Alert"
$incident_desc = "This Incident is generated from an Azure Automation Runbook via Hybrid Worker"
}

# Getting Assets for SCSM Management Server Name and Credentials
$scsm_mgmtserver = Get-AutomationVariable -Name SCSMServerName
$credential = Get-AutomationPSCredential -Name SCSMAASvcAccount

# Create Remote Session to SCSM Management Server
# (Specified credential must be in the Remote Management Users local group and be a SCSM operator)
$session = New-PSSession -ComputerName $scsm_mgmtserver -Credential $credential

# Import module for Service Manager PowerShell CmdLets
$SMDIR = Invoke-Command -ScriptBlock {(Get-ItemProperty 'hklm:/software/microsoft/System Center/2010/Service Manager/Setup').InstallDirectory} -Session $session
Invoke-Command -ScriptBlock { param($SMDIR) Set-Location -Path $SMDIR } -Args $SMDIR -Session $session
Import-Module .\Powershell\System.Center.Service.Manager.psd1 -PSSession $session

# Create Incident
Invoke-Command -ScriptBlock { param ($incident_title, $incident_desc)

# Get Incident Class
$IncidentClass = Get-SCSMClass -Name System.WorkItem.Incident

# Get Prefix for Incident IDs
$IncidentPrefix = (Get-SCSMClassInstance -Class (Get-SCSMClass -Name System.WorkItem.Incident.GeneralSetting)).PrefixForId

# Set Incident Properties
$Property = @{Id = "$IncidentPrefix{0}"
    Title = $incident_title
    Description = $incident_desc
    Urgency = "System.WorkItem.TroubleTicket.UrgencyEnum.Medium"
    Source = "SkillSCSM.IncidentSourceEnum.OMS"
    Impact = "System.WorkItem.TroubleTicket.ImpactEnum.Medium"
    Status = "IncidentStatusEnum.Active"
}

# Create the Incident
New-SCSMClassInstance -Class $IncidentClass -Property $Property -PassThru

} -Args $incident_title, $incident_desc -Session $session

Remove-PSSession $session

After publishing the updated Runbook I’m ready for the OMS Alert to trigger.

When the OMS Alert triggers

The next time this OMS Alert triggers, I can verify that the Runbook has started and an Incident has been created. Since I also configured an email notification, I received that as well:

In Operations Management Suite, I search for any OMS Alerts generated by using the query “Type=Alert SourceSystem=OMS”:

In Azure Automation, I can see that the Runbook has launched a job:

And most importantly, I can see that the Incident is created in Service Manager with the info I specified:

That concludes this two-part blog article on how to create SCSM Incidents from OMS Alerts. OMS Automation rocks!

Creating SCSM Incidents from OMS Alerts using Azure Automation – Part 1

There have been some great announcements recently for OMS Alerts in Public Preview (http://blogs.technet.com/b/momteam/archive/2015/12/02/announcing-the-oms-alerting-public-preview.aspx) and Webhooks support for Hybrid Worker Runbooks (https://azure.microsoft.com/en-us/updates/hybrid-worker-runbooks-support-webhooks/). This opens up some scenarios I have been thinking about.

This 2-part blog will show how you can create a new Service Manager Incident from an Azure Automation Runbook using a Hybrid Worker Group, and how OMS Alerts can search for a condition and generate an alert that triggers this Azure Automation Runbook via a Webhook, creating an Incident in Service Manager with some contextual data from the Alert.

This is the first part of this blog post, so I will start by preparing the Service Manager environment, creating the Azure Automation Runbook, and testing the Incident creation via the Hybrid Worker.

Prepare the Service Manager Environment

First I want to prepare my Service Manager Environment for the Incident creation via Azure Automation PowerShell Runbooks. I decided to create a new Incident Source Enumeration for ‘Operations Management Suite’, and also to create a new account with permissions to create incidents in Service Manager to be used in the Runbooks.

To create the Source I edited the Library List for Incident Source like this:

To make it easier to refer to this Enumeration Value in PowerShell scripts, I define my own ID in the corresponding Management Pack XML:

And specify the DisplayString for the ElementID for the languages I want:
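
To sanity-check that the custom ID resolves after the updated Management Pack is imported, here is a quick sketch using the community SMLets module (an assumption on my part; the native Service Manager module may expose this differently):

# Verify the custom Incident Source Enumeration Value (assumes the SMLets module is installed)
Import-Module SMLets
Get-SCSMEnumeration | Where-Object { $_.Name -eq "SkillSCSM.IncidentSourceEnum.OMS" }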

The next step is to prepare the account for the Runbook. As Azure Automation Runbooks on Hybrid Workers will run as Local System, I need to be able to run my commands as an account with permissions to Service Manager and to create Incidents.

I elected to create a new account in my local Active Directory, and to give that account permissions on my Service Manager Management Server.

With the new account created, I added it to the Remote Management Users local group on the Service Manager Management Server:
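
For reference, this can be done from an elevated prompt on the Management Server with a one-liner (the account name below is a placeholder for the service account I created):

# Add the Runbook service account to the Remote Management Users local group (placeholder account name)
net localgroup "Remote Management Users" SKILL\svc-scsm-runbook /add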

Next I added this account to the Advanced Operators Role Group in Service Manager:

Adding the account to the Advanced Operators group grants more permissions than I need for this scenario, but it lets me use the same account for other work item scenarios in the future.

With the Service Manager Environment prepared, I can go to the next step, which is the PowerShell Runbook in Azure Automation.

Create an Azure Automation Runbook for creating SCSM Incidents

I created a new PowerShell Script based Runbook in Azure Automation for creating Incidents. This Runbook uses a Credential Asset to run Remote PowerShell session commands against my Service Manager Management Server. The Credential Asset is the local Active Directory account I created in the previous step:

I have also created a Variable Asset for the SCSM Management Server name, to be used in the Runbook.

The PowerShell Runbook can then be created in Azure Automation, using my Automation Assets, and connecting to Service Manager for creating a new Incident as specified:

The complete PowerShell Runbook is shown below:

# Setting Generic Incident Title and Description
$incident_title = "Azure Automation Generated Alert"
$incident_desc = "This Incident is generated from an Azure Automation Runbook via Hybrid Worker"

# Getting Assets for SCSM Management Server Name and Credentials
$scsm_mgmtserver = Get-AutomationVariable -Name SCSMServerName
$credential = Get-AutomationPSCredential -Name SCSMAASvcAccount

# Create Remote Session to SCSM Management Server
# (Specified credential must be in the Remote Management Users local group and be a SCSM operator)
$session = New-PSSession -ComputerName $scsm_mgmtserver -Credential $credential

# Import module for Service Manager PowerShell CmdLets
$SMDIR = Invoke-Command -ScriptBlock {(Get-ItemProperty 'hklm:/software/microsoft/System Center/2010/Service Manager/Setup').InstallDirectory} -Session $session
Invoke-Command -ScriptBlock { param($SMDIR) Set-Location -Path $SMDIR } -Args $SMDIR -Session $session
Import-Module .\Powershell\System.Center.Service.Manager.psd1 -PSSession $session

# Create Incident
Invoke-Command -ScriptBlock { param ($incident_title, $incident_desc)

# Get Incident Class
$IncidentClass = Get-SCSMClass -Name System.WorkItem.Incident

# Get Prefix for Incident IDs
$IncidentPrefix = (Get-SCSMClassInstance -Class (Get-SCSMClass -Name System.WorkItem.Incident.GeneralSetting)).PrefixForId

# Set Incident Properties
$Property = @{Id = "$IncidentPrefix{0}"
    Title = $incident_title
    Description = $incident_desc
    Urgency = "System.WorkItem.TroubleTicket.UrgencyEnum.Medium"
    Source = "SkillSCSM.IncidentSourceEnum.OMS"
    Impact = "System.WorkItem.TroubleTicket.ImpactEnum.Medium"
    Status = "IncidentStatusEnum.Active"
}

# Create the Incident
New-SCSMClassInstance -Class $IncidentClass -Property $Property -PassThru

} -Args $incident_title, $incident_desc -Session $session

Remove-PSSession $session

The script should be pretty straightforward to interpret. The most important part is that it must run on a Hybrid Worker Group with servers that can connect via PowerShell Remoting to the specified Service Manager Management Server. The Incident that will be created uses a few variables for incident title and description (these will be updated with contextual data from OMS Alerts in Part 2), and some fixed data for Urgency, Impact and Status, along with my custom Source for Operations Management Suite (ref. the Enumeration Value created in the first step).

After publishing this Runbook I’m ready to run it with a Hybrid Worker.

Testing the PowerShell Runbook with a Hybrid Worker

Now I can run my Azure Automation PowerShell Runbook. I select to run it on my previously defined Hybrid Worker Group.
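
Starting the Runbook can also be done from PowerShell with the ARM Automation cmdlets, where the -RunOn parameter targets a Hybrid Worker Group; a sketch with placeholder resource names:

# Start the Runbook on a Hybrid Worker Group instead of an Azure sandbox (placeholder names)
Start-AzureRmAutomationRunbook -ResourceGroupName "MyResourceGroup" `
    -AutomationAccountName "MyAutomationAccount" -Name "Create-SCSMIncident" `
    -RunOn "MyHybridWorkerGroup"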

The Runbook is successfully completed, and the output is showing the new incident details:

I can also see the Incident created in Service Manager:

That concludes this first part of this blog post. Stay tuned for how to create an OMS Alert and trigger this Runbook in part 2!

Publish the Service Manager Self Service Portal with Azure AD Application Proxy

The Scenario

Updated blog post: 10th November 2015. With today's release of Update Rollup 8 for Service Manager 2012 R2 (https://www.microsoft.com/en-us/download/details.aspx?id=49556) and the new HTML5 based Self Service Portal, I have made some changes to this blog post, and the scenario is updated. Please read on for how to publish this portal externally via Azure AD App Proxy:

Recently, in a SCSM LyncUp call, news came of an upcoming Self Service Portal that the Service Manager team is working on. This portal will no longer require SharePoint and Silverlight, and will be built on HTML5. Stefan Johner has a good write-up on the features here: http://jhnr.ch/2015/08/22/service-manager-lync-up-summary-august-2015-new-portal-sneak-preview/.

A while ago I wrote a blog article on how to publish the Cireson Self Service Portal via the Azure AD Application Proxy (https://systemcenterpoint.wordpress.com/2015/03/26/publish-the-cireson-self-service-portal-with-azure-ad-application-proxy/), and in this blog article I will describe how to publish the new SCSM Self Service Portal. This gives me some interesting possibilities for pre-authentication and controlling user access.

There are two authentication scenarios for publishing this Self Service Portal with Azure AD App Proxy:

  1. Publish without pre-authentication (pass through). This scenario is best used when the Self Service Portal is running Forms Authentication, so that the user can choose which identity they want to log in with. As the new SCSM Self Service Portal doesn’t support Forms Authentication, this is not really an option here.
  2. Publish with pre-authentication. This scenario will use Azure AD authentication, and is best used when the Self Service Portal is running Windows Authentication so that we can have single sign-on with the Azure AD identity.

It is the second scenario with pre-authentication I will configure here.

I went through these steps:

Verify Windows Authentication for Service Manager Self Service Portal

The Service Manager Self Service Portal installs by default with Windows Integrated Authentication. In my environment, I can verify the following configuration settings:

  • Windows Authentication is enabled for the Web Site Application
  • Under Advanced Settings for the Web Site Application, Kernel Mode Authentication is Enabled and Extended Protection is Off. Under Providers, Negotiate is listed on top.

It is a good idea at this point to verify that Windows Integrated Authentication is working correctly by browsing internally to http[s]://scsmportalservername:[port]/selfserviceportal. Your currently logged-on user (if permissions are correct) should be signed in automatically.
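
One quick way to check this from a domain-joined machine is to request the page with the logged-on user's credentials; a sketch, with the server name and port as placeholders:

# Request the portal with the current Windows credentials; status 200 indicates integrated auth succeeded
$response = Invoke-WebRequest -Uri "http://scsmportalservername:8080/SelfServicePortal" -UseDefaultCredentials
$response.StatusCode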

Create the Application in Azure AD

In this next step, I will create the Proxy Application in Azure AD where the Self Service Portal will be published. To be able to create Proxy Applications I need either an Enterprise Mobility Suite license plan or an Azure AD Basic/Premium license plan. From the Azure Management Portal and Active Directory, under Applications, I add a new Application and select to “Publish an application that will be accessible from outside your network”:

I then give my application a name, and specify the internal URL and pre-authentication method. I name my application “SCSM Self Service Portal”, use “http://portalserverfqdn:[port]” as the internal URL, and choose Azure Active Directory as the pre-authentication method.

After the Proxy Application is added, there are some additional configurations to be done. If I have not already done so, Application Proxy has to be enabled for the directory. I have created other Proxy Applications before this, so I have already done that.

I also need to download the Application Proxy Connector, and install and register it on a Server that is a member of my own Active Directory. The Server I choose can be either on an on-premises network or in an Azure network. As long as the Server running the Proxy Connector can reach the internal URL, I can choose whichever Server best fits my needs.

Update: Regarding the AADP Connector, you can now create connector groups and configure the application to use the group of connector(s) you choose:

AADPConnectorGroup

Since I chose to use pre-authentication, I can also assign individual users or groups to the Application. This enables me to control which users will see the application under their My Apps, and which users will be able to access the application's external URL directly.

I now need to make additional configurations to the application, and go to the Configure menu. From here I can configure the name, external URL, pre-authentication method and internal URL, if I need to change something.

I choose to change the External URL so that I use my custom domain, and note the warning about creating a CNAME record in external DNS. After that I hit Save so that I can configure the Certificate.

AADPCustomDomain

Since I have already uploaded a certificate (see previous blog post https://systemcenterpoint.wordpress.com/2015/06/10/using-a-custom-domain-name-for-an-application-published-with-with-azure-ad-application-proxy/), I can just verify that it is correct.

AADPCert

Next I need to configure the Internal Authentication Method to “Windows Integrated Authentication”. I also need to configure the Service Principal Name (SPN). Here I specify HTTP/portalserverfqdn, which in my example is HTTP/az-scsm-ms01.skill.local.

Update: You can now choose which Identity to delegate, in this case UPN is fine.

AADPIntegratedWinAuth
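
To check whether the SPN is registered in the forest, setspn can query for it (note that the HOST SPN on the portal server's machine account normally covers HTTP, so an empty result is not necessarily an error; this is just a hedged sanity check):

# Query the forest for the HTTP SPN of the portal server
setspn -Q HTTP/az-scsm-ms01.skill.local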

From the bottom part of the configuration settings I can configure Access Rules, which at this time are in Preview. This is cool, because for this Application I can, for example, require users to use multi-factor authentication. I have not enabled that here, though.

Another feature that is in Preview is allowing Self-Service Access to the published application. I have configured this here, so that users can request access to the application from the Access Panel (https://myapps.microsoft.com).

After I have configured this and uploaded a logo, I am finished with this step, and now need to configure some more settings in my local Active Directory.

Configure Kerberos Constrained Delegation for the Proxy Connector Server

I now need to configure things so that the Server running the Proxy Connector can impersonate users who pre-authenticate with Azure AD, and use Windows Integrated Authentication against the Self Service Portal Server.

I find the Computer Account for the Connector Server in Active Directory, and on the Delegation tab click “Trust this computer for delegation to specified services only” and “Use any authentication protocol”. Then I add the computer name for the portal server and specify the http service as shown below (I already have an existing delegation set up):
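
The same delegation settings can be applied with the ActiveDirectory PowerShell module; a sketch assuming a hypothetical connector server computer account named "AADPROXY01":

# Configure constrained delegation with protocol transition ("Use any authentication protocol")
# "AADPROXY01" is a hypothetical connector server name; the SPN is my portal server
Import-Module ActiveDirectory
Set-ADComputer -Identity "AADPROXY01" -Add @{'msDS-AllowedToDelegateTo' = 'http/az-scsm-ms01.skill.local'}
Set-ADAccountControl -Identity (Get-ADComputer "AADPROXY01") -TrustedToAuthForDelegation $true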

This was the last step in my configuration, and I am almost ready to test.

If you, like me, have an environment consisting of both on-premises and Azure Servers in a hybrid datacenter, please allow room for AD replication of these SPNs and delegation settings.

Testing the published application!

Now I am ready to test the published proxy application.

Remember from earlier that I have assigned the application either to a group of users or directly to individual pilot users.

I will now log on with my Azure AD user (which of course is synchronized from local Active Directory), and I will use the URL https://myapps.microsoft.com.

After logging on, I can see the applications I have access to. Some of these are SaaS applications I have configured, some are applications we have developed ourselves, and I can see the published Self Service Portal:

(Don’t mind the Norwegian captions and texts, you get the idea;)

I then click on the SCSM Self Service Portal, and can confirm that I am able to access it. Note the external URL I specified, and that I am indeed logged in with my Active Directory user via SSO.

AADPSCSMPortal

Another cool thing is that I can use the App menu in Office 365 and add the Self Service Portal to the App chooser for easy access:

I can now also access the Self Service Portal from the “My Apps” app on my mobile devices.