Tag Archives: Azure

Shut Down Azure Servers for Earth Hour – 2015 Edition

One year ago, I published a blog article on shutting down, and later restarting, Azure servers for the duration of Earth Hour 2014: https://systemcenterpoint.wordpress.com/2014/03/28/earth-hour-how-to-shut-down-and-restart-your-windows-azure-services-with-automation/.

In that article, I used three different Automation technologies to accomplish that:

  • Scheduled PowerShell script
  • System Center 2012 R2 Orchestrator
  • Service Management Automation in Windows Azure Pack

Today is Earth Hour 2015 (www.earthhour.org). While the automation technologies referred to in that article can still be used for shutting down and restarting Azure servers, I thought I should create an updated blog article using Azure Automation, which has launched during the last year.

This new example is built on the following:

  1. An Azure SQL Database with a table for specifying which Cloud Services and VM names should be shut down during Earth Hour
  2. An Azure Automation Runbook which connects to the Azure SQL Database, reads the servers specified and shuts them down one by one (or later starts them up one by one).
  3. Two Schedules, one that triggers when Earth Hour starts and one that triggers when Earth Hour ends, each calling the Runbook.

Creating an Azure SQL Database or a SQL Server is outside the scope of this article, but the table I have created is defined like this:

CREATE TABLE dbo.EarthHourServices
(
    ID int NOT NULL,
    CloudService varchar(50) NULL,
    VMName varchar(50) NULL,
    StayProvisioned bit NULL,
CONSTRAINT PK_ID PRIMARY KEY (ID)
)
GO

StayProvisioned is a boolean value where I can specify whether VMs should only be stopped, or stopped and deallocated.

This table is then filled with values for the servers I want to stop.
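For illustration, the rows could look something like this (the cloud service and VM names below are made-up examples):

INSERT INTO dbo.EarthHourServices (ID, CloudService, VMName, StayProvisioned)
VALUES (1, 'mycloudservice01', 'az-demo-vm01', 1),
       (2, 'mycloudservice01', 'az-demo-vm02', 0),
       (3, 'mycloudservice02', 'az-demo-vm03', 0)
GO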

The Azure Automation Runbook I want to create has some requirements:

  1. I need to create a PowerShell Credential Asset for the SQL Server username and password
  2. I need to be able to connect to my Azure Subscription. Previously I have been using the Connect-Azure solution (https://gallery.technet.microsoft.com/scriptcenter/Connect-to-an-Azure-f27a81bb) for connecting to my specified Azure Subscription. This still works and I am using this method in this blog post, but it is now deprecated, and you should use this guide instead: http://azure.microsoft.com/blog/2014/08/27/azure-automation-authenticating-to-azure-using-azure-active-directory/. A minimal sketch of that approach follows below.
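For reference, the Azure AD approach boils down to a few lines at the start of the Runbook, along these lines (assuming a credential asset, here called "AzureADCredential", that holds an organizational account with access to the subscription):

# Authenticate to Azure with an Azure AD organizational account stored as a credential asset
# (the asset name "AzureADCredential" is an example)
$azureCred = Get-AutomationPSCredential -Name "AzureADCredential"
Add-AzureAccount -Credential $azureCred
Select-AzureSubscription -SubscriptionName "My Azure Subscription"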

This is the Runbook I have created:

workflow EarthHour_StartStopAzureServices
{
    param
    (
        # Fully-qualified name of the Azure DB server 
        [parameter(Mandatory=$true)] 
        [string] $SqlServerName,
        # Credentials for $SqlServerName stored as an Azure Automation credential asset
        [parameter(Mandatory=$true)] 
        [PSCredential] $SqlCredential,
        # Action, either Start or Stop for the specified Azure Services 
        [parameter(Mandatory=$true)] 
        [string] $Action
    )

    # Specify Azure Subscription Name
    $subName = 'My Azure Subscription'
    # Connect to Azure Subscription
    Connect-Azure `
        -AzureConnectionName $subName
    Select-AzureSubscription `
        -SubscriptionName $subName 

    inlinescript
    {

        # Setup credentials   
        $ServerName = $Using:SqlServerName
        $UserId = $Using:SqlCredential.UserName
        $Password = ($Using:SqlCredential).GetNetworkCredential().Password
        
        # Create connection to DB
        $Database = "SkillAutomationRepository"
        $DatabaseConnection = New-Object System.Data.SqlClient.SqlConnection
        $DatabaseConnection.ConnectionString = "Server = $ServerName; Database = $Database; User ID = $UserId; Password = $Password;"
        $DatabaseConnection.Open();

        # Get Table
        $DatabaseCommand = New-Object System.Data.SqlClient.SqlCommand
        $DatabaseCommand.Connection = $DatabaseConnection
        $DatabaseCommand.CommandText = "SELECT ID, CloudService, VMName, StayProvisioned FROM EarthHourServices"
        $DbResult = $DatabaseCommand.ExecuteReader()

        # Check if records are returned from SQL database table and loop through result set
        If ($DbResult.HasRows)
        {
            While($DbResult.Read())
            {
                # Get values from table
                $CloudService = $DbResult[1]
                $VMname = $DbResult[2]
                [bool]$StayProvisioned = $DbResult[3] 
 
                 # Check if we are starting or stopping the specified services
                If ($Using:Action -eq "Stop") {

                    Write-Output "Stopping: CloudService: $CloudService, VM Name: $VMname, Stay Provisioned: $StayProvisioned"
                
                    $vm = Get-AzureVM -ServiceName $CloudService -Name $VMname
                    
                    If ($vm.InstanceStatus -eq 'ReadyRole') {
                        If ($StayProvisioned -eq $true) {
                            Stop-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name -StayProvisioned
                        }
                        Else {
                            Stop-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name -Force
                        }
                    }
                                       
                }
                ElseIf ($Using:Action -eq "Start") {

                    Write-Output "Starting: CloudService: $CloudService, VM Name: $VMname, Stay Provisioned: $StayProvisioned"

                    $vm = Get-AzureVM -ServiceName $CloudService -Name $VMname
                    
                    If ($vm.InstanceStatus -eq 'StoppedDeallocated' -Or $vm.InstanceStatus -eq 'StoppedVM') {
                        Start-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name    
                    }
                     
                }
 
            }
        }

        # Close connection to DB
        $DatabaseConnection.Close() 
    }    

}

And these are my Schedules, which will run the Runbook when Earth Hour begins and ends. The Schedules specify the parameters I need to connect to Azure SQL, and the Action for either stopping or starting the VMs.
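If you prefer to create the schedules with PowerShell instead of in the portal, a sketch could look like the following. The Automation account name, SQL Server name and asset names are examples, and parameter names may differ slightly between Azure module versions; note that a PSCredential runbook parameter is specified by the name of the credential asset:

# One-time schedules for the start and end of Earth Hour 2015 (local time)
New-AzureAutomationSchedule -AutomationAccountName "MyAutomationAccount" -Name "EarthHour2015Start" -StartTime "2015-03-28 20:30" -OneTime
New-AzureAutomationSchedule -AutomationAccountName "MyAutomationAccount" -Name "EarthHour2015End" -StartTime "2015-03-28 21:30" -OneTime

# Link the Runbook to each schedule, with Stop and Start as the Action parameter
Register-AzureAutomationScheduledRunbook -AutomationAccountName "MyAutomationAccount" -Name "EarthHour_StartStopAzureServices" -ScheduleName "EarthHour2015Start" -Parameters @{ SqlServerName = "myserver.database.windows.net"; SqlCredential = "MySqlCredentialAsset"; Action = "Stop" }
Register-AzureAutomationScheduledRunbook -AutomationAccountName "MyAutomationAccount" -Name "EarthHour_StartStopAzureServices" -ScheduleName "EarthHour2015End" -Parameters @{ SqlServerName = "myserver.database.windows.net"; SqlCredential = "MySqlCredentialAsset"; Action = "Start" }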

Good luck with automating your Azure servers, and remember to turn off the lights as well!

Publish the Cireson Self Service Portal with Azure AD Application Proxy

The Scenario

Update: This blog post is the first part in a series. See:
Part 2 – Using a Custom Domain Name for an Application Published with Azure AD Application Proxy

I have been looking at different usable scenarios for publishing internal sites via the Azure AD Application Proxy, and decided to have a go at publishing the Cireson Self Service Portal. This will give me some interesting possibilities for pre-authentication and controlling user access.

I have been considering two scenarios for publishing the Self Service Portal:

  1. Publish without pre-authentication (pass-through). This scenario is best used when the Self Service Portal is running Forms Authentication, so that users can choose which identity they want to log in with.
  2. Publish with pre-authentication. This scenario uses Azure AD authentication, and is best used when the Self Service Portal is running Windows Authentication, so that we can have single sign-on with the Azure AD identity.

It is the second scenario with pre-authentication I will configure here.

I went through these steps:

Configure Windows Authentication for Cireson Self Service Portal

The Cireson Self Service Portal installs by default with Forms Based Authentication. I need to configure Windows Integrated Authentication for the portal, which is well documented in the Knowledge Article that Cireson customers and partners can access at https://support.cireson.com/KnowledgeBase/View/45. From my environment, I can summarize the following configuration settings:

  • The Self Service Portal (v3.6 with hotfix) is running on the same server as the Service Manager Management Server (recommended and officially supported by Cireson)
  • The Portal/Management Server is configured with Kerberos Delegation in Active Directory with “Trust this computer for delegation to any service (Kerberos only)”
  • The Service Manager Service Account is configured with Service Principal Names (SPN) with:
    • SETSPN -s MSOMSDKSVC/NameOfYourServerHere SCSMServiceAccountHere
    • SETSPN -s MSOMSDKSVC/FQDNOfYourServerHere SCSMServiceAccountHere
  • The Service Manager Service Account is added to the IIS_IUSRS local group on the Portal Server
  • The Cireson Portal Web Site is configured with Windows Authentication (Kernel Mode Authentication enabled, Extended Protection off), with Negotiate listed on top of the Providers list.

It is a good idea at this point to verify that Windows Integrated Authentication is working correctly by browsing internally to http://portalservername. Your currently logged-on user (if permissions are correct) should be logged in automatically.

Create the Application in Azure AD

In this next step, I will create the Proxy Application in Azure AD where the Self Service Portal will be published. To be able to create Proxy Applications I will need to have either an Enterprise Mobility Suite license plan, or Azure AD Premium license plan. From the Azure Management Portal and Active Directory, under Applications, I add a new Application and select to “Publish an application that will be accessible from outside your network”:

I will then give a name for my application, specify the internal URL and pre-authentication method. I name my application “Self Service Portal”, use “http://portalserverfqdn” as internal URL and choose Azure Active Directory as Pre-Authentication method.

After the Proxy Application is added, there are some additional configurations to be done. If I have not done so already, Application Proxy has to be enabled for the directory. I have created other Proxy Applications before this, so I have already done that.

I also need to download the Application Proxy connector, then install and register it on a server that is a member of my own Active Directory. The server can be either on an on-premises network or in an Azure network. As long as the server running the Proxy connector can reach the internal URL, I can choose whichever server best fits my needs.

Since I chose to use pre-authentication, I can also assign individual users or groups to the Application. This lets me control which users will see the application under their My Apps and will be able to access the application's external URL directly.

I now need to make additional configurations to the application, and go to the Configure menu. From here I can configure the name, external URL, pre-authentication method and internal URL, if I need to change something.

What I need to configure here is to set the Internal Authentication Method to “Windows Integrated Authentication”. I also need to configure the Service Principal Name (SPN). Here I specify HTTP/portalserverfqdn, which in my example is HTTP/az-scsm-ms01.skill.local.

From the bottom part of the configuration settings I can configure Access Rules, which at this time is in preview. This is cool, because I can, for example, require that users of this Application use multi-factor authentication. I have not enabled that here, though.

After I have configured this, I am finished at this step, and now need to configure some more settings in my local Active Directory.

Configure Kerberos Constrained Delegation for the Proxy Connector Server

I now need to configure the server running the Proxy Connector so that it can impersonate users pre-authenticating with Azure AD and use Windows Integrated Authentication against the Self Service Portal server.

I find the Computer Account in Active Directory for the Connector Server, and on the Delegation tab click on “Trust this computer for delegation to specified services only”, and to “Use any authentication protocol”. Then I add the computer name for the portal server and specify the http service as shown below:
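If you would rather script this, the same delegation settings can be made with the ActiveDirectory module, along the lines of the sketch below. The connector server name AZ-CONNECTOR01 is a made-up example; the SPN matches my portal server:

Import-Module ActiveDirectory

# Allow the connector server to delegate to the http service on the portal server
Set-ADComputer -Identity "AZ-CONNECTOR01" -Add @{'msDS-AllowedToDelegateTo' = 'http/az-scsm-ms01.skill.local', 'http/az-scsm-ms01'}

# "Use any authentication protocol" corresponds to allowing protocol transition
Set-ADAccountControl -Identity "AZ-CONNECTOR01$" -TrustedToAuthForDelegation $true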

This was the last step in my configuration, and I am almost ready to test.

If you, like me, have an environment consisting of both on-premises and Azure servers in a hybrid datacenter, please allow room for AD replication of these SPNs and related changes.

Testing the published application!

Now I am ready to test the published proxy application.

Remember from earlier that I have assigned the application to a group of users, or directly to some pilot users, for example.

I will now log on with my Azure AD user (which of course is synchronized from local Active Directory), and I will use the URL https://myapps.microsoft.com.

After logging on, I can see the applications I have access to. Some of these are SaaS applications I have configured, some are applications we have developed ourselves, and I can see the published Self Service Portal:

(Don’t mind the Norwegian captions and texts, you get the idea;)

I then click on the Self Service Portal and can confirm that I am able to access it. Note the special proxy URL, and that I am indeed logged in with my Active Directory user via SSO.

Another cool thing is that I can use the App menu in Office 365 and add the Self Service Portal to the App chooser for easy access:

I can now also access the Self Service Application from the “My Apps” App on my Mobile Devices.

Copy SMA Runbooks from one Server to another

Recently I decided to scrap my old Windows Azure Pack environment and create a new environment for Windows Azure Pack partly based in Microsoft Azure. As a part of this reconfiguration I have set up a new SMA server, and wanted to copy my existing SMA runbooks from the old Server to the new Server.

This little script did the trick for me, hope it can be useful for others as well.


# Specify old and new SMA servers
$OldSMAServer = "myOldSMAServer"
$NewSMAServer = "myNewSMAServer"

# Define export directory
$exportdir = 'C:\_Source\SMARunbookExport\'

# Get which SMA runbooks I want to export, filtered by my choice of tags
$sourcerunbooks = Get-SmaRunbook -WebServiceEndpoint https://$OldSMAServer | Where { $_.Tags -iin ('Azure','Email','Azure,EarthHour','EarthHour,Azure')}

# Loop through and export definition to file, one file for each runbook
foreach ($rb in $sourcerunbooks) {
    $exportrunbook = Get-SmaRunbookDefinition -Type Draft -WebServiceEndpoint https://$OldSMAServer -name $rb.RunbookName
    $exporttofile = $exportdir + $rb.RunbookName + '.txt'
    $exportrunbook.Content | Out-File $exporttofile
}

# Then loop through and import to new SMA server, keeping my tags
foreach ($rb in $sourcerunbooks) {
    $importfromfile = $exportdir + $rb.RunbookName + '.txt'
    Import-SmaRunbook -Path $importfromfile -WebServiceEndpoint https://$NewSMAServer -Tags $rb.Tags
}

# Check my new SMA server for existence of the imported SMA runbooks
Get-SmaRunbook -WebServiceEndpoint https://$NewSMAServer |  FT RunbookName, Tags
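Be aware that the runbooks are imported as drafts on the new server. If they should be published as well, that can be done in a final loop, something like this:

# Publish the imported runbooks on the new SMA server
foreach ($rb in $sourcerunbooks) {
    Publish-SmaRunbook -WebServiceEndpoint https://$NewSMAServer -Name $rb.RunbookName
}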



How to access Operational Insights from Windows Phone

Microsoft just recently announced the availability of Operational Insights App for Windows Phone: http://www.windowsphone.com/en-us/store/app/operational-insights/4823b935-83ce-466c-82bb-bd0a3f58d865?signin=true

The App requires that you sign in with a Microsoft Account:

Many organizations, like ourselves, use Organizational Accounts for their Azure and Office 365 services. Therefore, to be able to use this App, I need to create or use a Microsoft Account with access to my Operational Insights workspace.

First, I will need to log on to my workspace at https://preview.opinsights.azure.com. From there I select the Settings icon, right beside the welcome message and the name of my workspace.

At settings, I click on Manage Users, and from there I can select to Add User:

As my existing administrator account is an Organizational Account, I will now add a Microsoft Account and select whether that user should have the Administrator or User role.

After adding the Microsoft Account, I receive an activation email which I have to complete for the user to be added.

 

I activate the Microsoft account and join the workspace:

After that, I am able to successfully log in with the Windows Phone App:

Cireson Portal and SQL AlwaysOn Availability Databases

For a while now, I have been working on running my System Center Service Manager environment on Azure Virtual Machines, and to increase availability for this environment I have also created a SQL Server AlwaysOn cluster in Azure. While my environment is mostly for demo and development, and availability is not that critical, I find it important to look into high availability scenarios for knowledge and guidance for our customers.

There are some great tutorials on how to create SQL Server AlwaysOn in Azure, and on how Service Manager supports AlwaysOn.

I have also placed my Cireson Portal server on an Azure Virtual Machine, configured to use the Service Manager environment above.

The SQL Server AlwaysOn cluster uses an Availability Listener, which until recently had to use a Public IP endpoint for the Cloud Service. Therefore, my Availability Listener is using a custom endpoint port, which I have set to 51433, but it could be anything you want. Since this is a public IP address, it is also important to set ACLs on that endpoint. Some weeks ago there finally came support for running the AlwaysOn listener on an Internal Load Balancer, http://azure.microsoft.com/blog/2014/10/01/sql-server-alwayson-and-ilb/, so I will change my configuration to that next.
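For reference, building and applying such an ACL can be done with Azure PowerShell, roughly like the sketch below. The subnet, endpoint name and server names are examples from my lab and must be adjusted; if the listener endpoint is part of a load-balanced set, Set-AzureLoadBalancedEndpoint with the -ACL parameter is the corresponding cmdlet:

# Build an ACL that only permits a trusted subnet (example subnet)
$acl = New-AzureAclConfig
Set-AzureAclConfig -AddRule -ACL $acl -Action permit -RemoteSubnet "203.0.113.0/24" -Order 100 -Description "Trusted subnet"

# Apply the ACL to the listener endpoint on the SQL Server VM
Get-AzureVM -ServiceName "mycloudsvc-az-scsql" -Name "az-scsql01" |
    Set-AzureEndpoint -Name "SQLAlwaysOnEndpoint" -Protocol tcp -PublicPort 51433 -LocalPort 1433 -ACL $acl |
    Update-AzureVM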

Now for Cireson Portal v1 and v2, where the connection string had to be configured manually in the .config files, it was quite straightforward to configure the SQL Server connections. In v2 Cireson also introduced the HTML KB and ServiceManagement database, which also required a connection. To get this to work I configured the following files:

  • In the Web.Config file at C:\Inetpub\CiresonPortal, specify the database connection string with Server=ListenerName,Port:
    <connectionStrings>
    <add name="ManagementServer" connectionString="az-scsm-ms01" />
    <add name="ServiceManagementDatabase" connectionString="Server=AZ-SCSQLListen,51433;Database=ServiceManagement;Trusted_Connection=True;" />
    </connectionStrings>
  • In the Cireson.CacheBuilder.WindowsService.exe.Config file at C:\Inetpub\CiresonPortal\CacheBuilder\WindowsService, and the Cireson.CacheBuilder.Service.exe.Config file at C:\Inetpub\CiresonPortal\CacheBuilder\ConsoleApplication, specify the database connection strings to both the ServiceManager and ServiceManagement databases with Server=ListenerName,Port:
    <connectionStrings>
    <add name="ServiceManagementDatabase" connectionString="Server=AZ-SCSQLListen,51433;Database=ServiceManagement;Trusted_Connection=True;" />
    <add name="ServiceManagerDatabase" connectionString="Server=AZ-SCSQLListen,51433;Database=ServiceManager;Trusted_Connection=True;" />
    <add name="ManagementServer" connectionString="az-scsm-ms01" />
    </connectionStrings>

In Cireson Portal v1 and v2, this worked perfectly. I was able to use both ServiceManager database, and the new ServiceManagement database via the AlwaysOn Availability Listener.

Now, for Cireson Portal v3 and the Release Candidate I have been testing, there is a new Setup program. This Setup program does not accept an Availability Listener name and custom SQL Server port, as shown below:

The same warning message applied to the Cache Builder settings of the Setup program.

Now, I have two choices:

  1. Go ahead with the listener and port configuration anyway and finish Setup.
  2. Or just specify the name of the primary SQL Server, and later reconfigure the same config files as I did in the previous version.

For the first alternative, at the end of running Setup, I got this error message:

The Setup log gave the same information, and when looking into the Web.Config file at C:\Inetpub\CiresonPortal, the availability listener and port were configured in the server setting. The same applied to the console and Windows Service cache builder config files, which in v3 are located in the folder C:\inetpub\CiresonPortal\bin.

So I tried the second alternative: just use the name of the primary SQL Server node and finish Setup that way. This time I got a little further, but received an ALTER DATABASE error from Setup:

Looking at the log for details, I see the reason for the failure:

10/26/2014 7:53:57 PM Attempting to create ServiceManagement Database

10/26/2014 7:53:57 PM Failed to create management database ServiceManagement on az-scsql01 : The operation cannot be performed on database “ServiceManagement” because it is involved in a database mirroring session or an availability group. Some operations are not allowed on a database that is participating in a database mirroring session or in an availability group.

ALTER DATABASE statement failed.

So, in other words, to successfully complete Setup I would need to temporarily remove the ServiceManagement database from the Availability Group, so that only the primary SQL Server node has the database.
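With the SQL Server PowerShell module (SQLPS), temporarily taking the database out of the Availability Group could look like this sketch (the Availability Group name AZ-SCSQLAG is a made-up example; adjust server, instance and group names to your environment):

Import-Module SQLPS -DisableNameChecking

# Remove the ServiceManagement database from the Availability Group before running Setup
Remove-SqlAvailabilityDatabase -Path "SQLSERVER:\Sql\az-scsql01\DEFAULT\AvailabilityGroups\AZ-SCSQLAG\AvailabilityDatabases\ServiceManagement"

# After Setup has completed, add the database back to the Availability Group
Add-SqlAvailabilityDatabase -Path "SQLSERVER:\Sql\az-scsql01\DEFAULT\AvailabilityGroups\AZ-SCSQLAG" -Database "ServiceManagement"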

When I did that, everything went as planned and the Setup completed without any errors!

I now wanted to see how I could still, after Setup, configure the Cireson Portal v3 to use the Availability Group and Listener for the ServiceManagement database. So I did the following:

  1. Added the ServiceManagement database back to the Availability Group
  2. Configured the Web.Config file to use <availability listener name>,<port> for Server connection for ServiceManagement database.
  3. Configured the Cireson.CacheBuilder.WindowsService.exe.Config and Cireson.CacheBuilder.Service.exe.Config files in C:\inetpub\CiresonPortal\bin, with the same <availability listener name>,<port> for Server connection for ServiceManagement database.

This worked fine, and when accessing the Cireson Portal I was able to log in and use HTML KB and more.

But after a while I suspected a problem with the cache builder, which in v3 uses the ServiceManagement database. When running the console application for the Cache Builder, I see the following error:

From the CacheBuilder.log file in the C:\Inetpub\CiresonPortal\bin\logs folder, I see several of these errors:
ERROR [MAIN] 26 Oct 2014 20:12:00,068: Error executing delegate: System.Data.SqlClient.SqlException (0x80131904): The operation cannot be performed on database “ServiceManagement” because it is involved in a database mirroring session or an availability group. Some operations are not allowed on a database that is participating in a database mirroring session or in an availability group.
ALTER DATABASE statement failed.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite, SqlDataReader ds)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean asyncWrite)
at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String methodName, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite)
at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
at Cireson.ServiceManager.DAL.Database.SwitchToBulkLoggedRecoveryMode(ISqlConnectionWrapper connection)
at Cireson.ServiceManager.DAL.Database.<>c__DisplayClass7`1.b__6(ISqlConnectionWrapper connection)
at Cireson.ServiceManager.DAL.Database.OpenConnection(Action`1 action, String context)
ClientConnectionId:9004c2b3-2cc2-4d04-8748-106e431b7cab
ERROR [MAIN] 26 Oct 2014 20:12:00,068: SqlBulkCopy caused an exception.
ERROR [MAIN] 26 Oct 2014 20:12:00,068: Native Error : The operation cannot be performed on database “ServiceManagement” because it is involved in a database mirroring session or an availability group. Some operations are not allowed on a database that is participating in a database mirroring session or in an availability group.
ERROR [MAIN] 26 Oct 2014 20:12:00,068: Native Error : ALTER DATABASE statement failed.
INFO [MAIN] 26 Oct 2014 20:12:00,068: Added 0 work items.

So, a quick recap. In testing the Cireson Portal v3 with SQL AlwaysOn availability databases, I have so far discovered the following:

  • The new Setup program cannot use availability listener and custom SQL Server port. Setup finishes but any DAC packages and changes to the ServiceManagement database are not deployed.
  • The Setup program cannot deploy ServiceManagement database if the database is in an Availability Group even if I only specify the primary SQL Server node directly.
  • After Setup I can change the Config files to use AlwaysOn availability group, but Cache Builder fails if the ServiceManagement database is in an Availability Group.
  • Cireson Portal v3 can successfully connect to and use the ServiceManager database in an Availability Group.

These experiences are submitted to Cireson as bug/feature requests, so I will update this post if changes are made to any of this.

 

 

Assigning a Public Reserved IP to existing Azure Cloud Service

I have been running a SQL Server AlwaysOn cluster in Azure for my System Center environment. I use this mostly for demo and development of partner solutions, so I like to shut down and deallocate the cloud services when I am not using them. This also means that the public IP address for my Cloud Services is deallocated, and a new IP address is allocated when I start the Cloud Service deployment again.

So, I have been looking into the new possibility of creating a reservation for the Public IP address, as described here: http://msdn.microsoft.com/en-us/library/azure/dn690120.aspx.

As described in the link:

  • You must reserve the IP address first, before deploying.
  • At this time, you can’t go back and apply a reservation to something that’s already been deployed.

Unfortunately, I had already deployed my cloud service and VMs.

The solution? Well, if I am willing to accept a small downtime, I can easily remove and re-deploy my Cloud Service and VMs while keeping my data and configurations!

My solution was to use Azure PowerShell to save the VM configurations to XML files and then delete the VMs (NB! It is important not to delete the disks). After that, I recreate the VMs from the configuration XML files and specify the IP address reservation I had already created. Now my Cloud Service and VM deployment has a reservation, and in that way my SQL AlwaysOn listener keeps a fixed IP address.

The complete solution is listed in the script window below; the script is meant to be run interactively, snippet by snippet. Please make sure that you take a backup first if required.

# PowerShell command to set a reserved IP address for Cloud Service in Azure
# Reference:
# Reserved IP addresses: http://msdn.microsoft.com/en-us/library/azure/dn690120.aspx

# Log on to my Azure account
Add-AzureAccount

# Set active subscription
Get-AzureSubscription -SubscriptionName "mysubscriptionname" | Select-AzureSubscription

# Create a Public Reserved IP for SQL AlwaysOn Listener IP
$ReservedIP = New-AzureReservedIP -ReservedIPName "SCSQLAlwaysOnListenerIP" -Label "SCSQLAlwaysOnListenerIP" -Location "West Europe"

$workingDir = (Get-Location).Path

# Define VMs and Cloud Service
$vmNames = 'az-scsql02', 'az-scsql01', 'az-scsqlquorum'
$serviceName = "mycloudsvc-az-scsql"

# Export VM Config and Stop VM
ForEach ($vmName in $vmNames) {

    $Vm = Get-AzureVM -ServiceName $serviceName -Name $vmName
    $vmConfigurationPath = $workingDir + "\exportedVM_" + $vmName +".xml"
    $Vm | Export-AzureVM -Path $vmConfigurationPath

    Stop-AzureVM -ServiceName $serviceName -Name $vmName -Force

}

# Remove VMs while keeping disks
ForEach ($vmName in $vmNames) {

    $Vm = Get-AzureVM -ServiceName $serviceName -Name $vmName
    $vm | Remove-AzureVM -Verbose

}

# Specify VNet for the VMs
$vnetname = "myvnet-prod"

# Re-create VMs in specified order
$vmNames = 'az-scsqlquorum', 'az-scsql01', 'az-scsql02'

ForEach ($vmName in $vmNames) {

    $vmConfigurationPath = $workingDir + "\exportedVM_" + $vmName +".xml"
    $vmConfig = Import-AzureVM -Path $vmConfigurationPath

    New-AzureVM -ServiceName $serviceName -VMs $vmConfig -VNetName $vnetname -ReservedIPName $ReservedIP.ReservedIPName -WaitForBoot:$false

}




Hidden Network Adapters in Azure VM and unable to access network resources

I have some Azure VMs that I regularly stop (deallocate) and start using Azure Automation. The idea is to cut costs at night and on weekends, as these VMs are not used then anyway. I recently had a problem with one of these virtual machines: I was unable to browse or connect to network resources, could not connect to the domain to get Group Policy updates, and more.

When looking into it, I found that I had a lot of hidden network adapters in Device Manager. The cause is that every time a VM is shut down and deallocated, it is provisioned with a new network adapter on the next start. The old network adapter is kept, hidden. Since I automate shutdown and start every day, over time I accumulate a lot of these, as shown below:

In some forums I found that the network browsing problem I had with the server could be related to this for Azure VMs. I don't know the actual limit, or whether it is a fixed value, but the solution would be to uninstall these hidden network adapters. Although it is easy to right-click and uninstall each network adapter, I wanted to create a PowerShell script to be more efficient. There are no native PowerShell cmdlets that could help me with this, so after some research I ended up with a combination of these two solutions:

I then ended up with the following PowerShell script. The script first gets all hidden devices of type Microsoft Hyper-V Network Adapter and their InstanceId, and then uninstalls/removes each device with DevCon.exe. The script:

Set-Location C:\_Source\DeviceManagement

Import-Module .\Release\DeviceManagement.psd1 -Verbose

# List hidden devices
Get-Device -ControlOptions DIGCF_ALLCLASSES | Sort-Object -Property Name | Where-Object {($_.IsPresent -eq $false) -and ($_.Name -like "Microsoft Hyper-V Network Adapter*")} | ft Name, DriverVersion, DriverProvider, IsPresent, HasProblem, InstanceId -AutoSize

# Get hidden Hyper-V network adapter devices
$hiddenHypVNics = Get-Device -ControlOptions DIGCF_ALLCLASSES | Sort-Object -Property Name | Where-Object {($_.IsPresent -eq $false) -and ($_.Name -like "Microsoft Hyper-V Network Adapter*")}

# Loop through and remove each device with DevCon.exe
ForEach ($hiddenNic In $hiddenHypVNics) {
    $deviceid = "@" + $hiddenNic.InstanceId
    .\devcon.exe -r remove $deviceid
}

And after a while, all hidden network adapter devices were uninstalled. In the end I rebooted the VM, and after that everything was working on the network again!

Earth Hour – How to Shut Down and Restart your Windows Azure Services with Automation

The upcoming Saturday, 29th of March 2014, Earth Hour will be observed between 8:30 PM and 9:30 PM in your local time zone. Organized by the WWF (www.earthhour.org), Earth Hour is an initiative where, among other activities, people are asked to turn off their lights for an hour.

I thought this also was an excellent opportunity to give some examples of how you can use automation technologies to shut down and restart Windows Azure Services. Let’s turn off the lights of our selected Windows Azure servers for an hour!

I will use three of my favorite automation technologies to accomplish this:

  1. Windows PowerShell
  2. System Center 2012 R2 Orchestrator
  3. Windows Azure Pack with SMA

I also wanted some kind of list of which services I would want to turn off, as not all of my services in Azure can be turned off. There are many ways to accomplish this; I chose to use a comma-separated text file. My CSV file includes the names of the Cloud Services in Azure I want to shut down (and later restart), and all VM roles inside the specified Cloud Services will be shut down.
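To illustrate, the CSV file is just a header line followed by one Cloud Service name per line, something like this (the service names are examples):

# Example contents of EarthHour_CloudServices.csv; the CloudService column
# is what the scripts below refer to as $_.CloudService
@'
CloudService
mycloudservice01
mycloudservice02
'@ | Out-File -FilePath C:\_Source\Script\EarthHour_CloudServices.csv -Encoding UTF8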

In the following, I will give examples for using each of the automation technologies to accomplish this.

Windows PowerShell

In my first example, I will use Windows PowerShell and create a scheduled PowerShell Job for the automation. This has a couple of requirements:

  1. I must download and install Windows Azure PowerShell.
  2. I must register my Azure Subscription so I can automate it from PowerShell.

For instructions for these prerequisites, see http://www.windowsazure.com/en-us/documentation/articles/install-configure-powershell.

First, I start PowerShell at the machine that will run my PowerShell Scheduled Jobs. I prefer PowerShell ISE for this, and as the job will automatically run on Saturday evening, I will use a server machine that will be online at the time the schedule kicks off.

The following commands create job triggers and options for the job:

# Job Trigger

$trigger_stop = New-JobTrigger -Once -At "03/29/2014 20:30"

$trigger_start = New-JobTrigger -Once -At "03/29/2014 21:30"

# Job Option

$option = New-ScheduledJobOption -RunElevated

After that, I specify some parameters, such as my Azure subscription name, and import the CSV file that contains the names of the Cloud Services I want to turn off and later start again:

# Azure Parameters

$az_subscr = "my subscription name"

$az_cloudservices = Import-Csv -Path C:\_Source\Script\EarthHour_CloudServices.csv -Encoding UTF8

Then I create some script strings, one for the stop job and one for the start job. It is outside the scope of this blog article to explain the PowerShell commands used here in detail, but you will see that the CSV file is imported, and for each Cloud Service a command is run to stop or start the Azure Virtual Machines.

Another comment is that for the Stop-AzureVM command I chose to use the "-StayProvisioned" switch, which makes the server keep its internal IP address and the Cloud Service keep its public IP address. I could also use the "-Force" switch, which deallocates the VMs and Cloud Services. Deallocating saves costs, but also risks that the VMs and Cloud Services will be provisioned with different IP addresses when restarted.

# Job Script Strings

$scriptstring_stop = "Import-Module Azure `

Set-AzureSubscription -SubscriptionName '" + $az_subscr + "' `

Import-Csv -Path C:\_Source\Script\EarthHour_CloudServices.csv -Encoding UTF8 | ForEach-Object { Get-AzureVM -ServiceName `$_.CloudService | Where-Object {`$_.InstanceStatus -eq 'ReadyRole'} | ForEach-Object {Stop-AzureVM -ServiceName `$_.ServiceName -Name `$_.Name -StayProvisioned } } "

$scriptstring_start = "Import-Module Azure `

Set-AzureSubscription -SubscriptionName '" + $az_subscr + "' `

Import-Csv -Path C:\_Source\Script\EarthHour_CloudServices.csv -Encoding UTF8 | ForEach-Object { Get-AzureVM -ServiceName `$_.CloudService | Where-Object {`$_.InstanceStatus -eq 'StoppedDeallocated'} | ForEach-Object {Start-AzureVM -ServiceName `$_.ServiceName -Name `$_.Name } } "

# Define Job Script Blocks

$sb_stop = [scriptblock]::Create($scriptstring_stop)

$sb_start = [scriptblock]::Create($scriptstring_start)

To create the PowerShell Jobs, my script strings must be formatted as script blocks, as shown above. Next, I specify the credentials that will run the jobs, and then register them. I create one job each for the stop and start automation, with the configured script blocks, triggers, options and credentials.

# Get Credentials

$jobcreds = Get-Credential

# Register Jobs

Register-ScheduledJob -Name Stop_EarthHourSkillAzure -ScriptBlock $sb_stop -Trigger $trigger_stop -ScheduledJobOption $option -Credential $jobcreds

Register-ScheduledJob -Name Start_EarthHourSkillAzure -ScriptBlock $sb_start -Trigger $trigger_start -ScheduledJobOption $option -Credential $jobcreds

And that’s it! The jobs are now registered and will run at the specified schedules.

If you want to test a job, you can do this with the command:

# Test Stop Job

Start-Job -DefinitionName Stop_EarthHourSkillAzure

You can also verify the jobs in Task Scheduler, under Task Scheduler Library\Microsoft\Windows\PowerShell\ScheduledJobs:
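Or verify the registered jobs and their triggers directly from PowerShell:

# Verify the registered scheduled jobs and their triggers
Get-ScheduledJob -Name Stop_EarthHourSkillAzure, Start_EarthHourSkillAzure | Get-JobTrigger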

System Center 2012 R2 Orchestrator

Since its release with System Center 2012, Orchestrator has become a popular automation technology for data centers. With Integration Packs for Orchestrator, many tasks can be performed by creating orchestrated Runbooks that automate your IT processes. Such Integration Packs exist for the System Center products, Active Directory, Exchange Server, SharePoint Server and many more. For this task, I will use the Integration Pack for Windows Azure.

To be able to do this, the following requirements must be met:

  1. A System Center 2012 SP1 or R2 Orchestrator Server, with Azure Integration Pack installed.
  2. Create and configure an Azure Management Certificate to be used from Orchestrator.
  3. Create a Windows Azure Configuration in Orchestrator.
  4. Create the Runbooks and Schedules.

I will not get into detail of these first requirements. Azure Integration Pack can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=39622.

To create and upload an Azure Management Certificate, please follow the guidelines here: http://msdn.microsoft.com/en-us/library/windowsazure/gg551722.aspx.

Note that on the machine where you create the certificate, you will have the private key for the certificate in your store. You must then do two things (a scripted example follows after this list):

  1. Upload the .cer file (without the private key) to Azure. Note your subscription ID.
  2. Export the certificate to a .pfx file with password, and transfer this file to your Orchestrator Server.
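For reference, creating and exporting the certificate can be done along these lines (the certificate name AzureMgmtCert is an example; makecert.exe ships with the Windows SDK / Visual Studio, and Export-PfxCertificate requires Windows 8 / Server 2012 or later):

# Create a self-signed management certificate in the current user store,
# and write the public .cer file to the current directory
makecert -sky exchange -r -n "CN=AzureMgmtCert" -pe -a sha1 -len 2048 -ss My "AzureMgmtCert.cer"

# Export the certificate with its private key to a password-protected .pfx
$cert = Get-ChildItem Cert:\CurrentUser\My | Where-Object { $_.Subject -eq "CN=AzureMgmtCert" }
$pfxPassword = Read-Host -AsSecureString -Prompt "PFX password"
Export-PfxCertificate -Cert $cert -FilePath .\AzureMgmtCert.pfx -Password $pfxPassword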

In Runbook Designer, create a new Windows Azure configuration, with .pfx file location and password, and the subscription ID. For example like this:

You are now ready to use the Azure activities in your Orchestrator Runbooks.

In Orchestrator it is recommended to create different Runbooks that call upon each other, rather than creating one very big Runbook that does all the work. I have created the following Runbooks and Schedules:

  • A generic Recycle Azure VM Runbook, that either Starts, Shuts down or Restarts Azure VMs based on input parameters for the Action to perform, Cloud Service, Deployment and VM instance names.
  • An Earth Hour Azure Runbook, that reads Cloud Services from the CSV file and gets the Deployment Name and VM Role Name, with an input parameter for Action. This Runbook calls the first, generic Runbook.
  • Two scheduling Runbooks that are left running and kick off at two specified Schedules, for Earth Hour start and stop.

Let’s have a look:

My first generic Recycle Azure VM Runbook looks like this:

This Runbook is really simple: based on the input parameter for Action, either Start, Shutdown or Restart is sent to the specified VM instance, Deployment and Cloud Service. The input parameters for the Runbook are:

I also have an ActivityID parameter, as I also use this Runbook for Service Requests from Service Manager.

My parameters are then used in the activities, for example for Shutdown:

After this I have the Earth Hour Azure Runbook. This Runbook looks like this:

This Runbook takes one input parameter, that would be Start or Shutdown for example:

First, I read the Cloud Services from my CSV file:

I don’t read from line 1 as it contains my header. Next I get my Azure deployments, where the input is from the text file:

The next activity I had to think about, as the Azure Integration Pack doesn't really have an activity to get all VM role instances in a Deployment. But the Get Deployment activity returns a Configuration File XML, and I was able to use a Query XML activity with an XPath query to get the VM instance role names:

After that I do a Send Platform Event just for debug really, and last I call my first Recycle Azure Runbook, with the parameters from my Activities:

I don’t use ActivityID here as this Runbook is initiated from Orchestrator and not a Service Manager Service Request.

At this point I can start this Runbook manually, specify the Start or Shutdown action, and the Cloud Services from the CSV file will be processed.

To schedule these to run automatically, I will need to create some Schedules first:

These schedules are created like this. I specify last Saturday in the month.

And with the hours 8:00-9:00 PM. (I will check against this schedule at 8:30 PM)

The other schedule for Start is configured the same way, only with Hours from 9:00-10:00 PM.

The last thing I have to do is to create two different Runbooks, which monitors time:

And:

These two Runbooks will check for the monitor time:

If the time is correct, it will check against the Schedule, and if that matches, it will call the Earth Hour Azure Runbook with Start or Shutdown.

The scheduling functionality in Orchestrator is a bit limited for running Runbooks only once, so in reality these Runbooks, if left running, will run every last Saturday of every month.

Anyway, the last thing to do before the weekend is to kick off these two last Runbooks, and everything will run as planned on Saturday night.

Windows Azure Pack with SMA

The last automation technology I want to show is using SMA, Service Management Automation, with Windows Azure Pack. Service Management Automation (SMA) is a new feature of System Center 2012 R2 Orchestrator, released in October 2013. At the same time Windows Azure Pack (WAP) was released as a component in System Center 2012 R2 and Windows Server 2012 R2. I will not get into great detail of WAP and SMA here, but these solutions are very exciting for Private Cloud scenarios. SMA does not require Windows Azure Pack, but it is recommended to install WAP to use as a portal and user interface for authoring and administering the SMA runbooks.

The example I will use here has the following requirements:

  1. I must download and install Windows Azure PowerShell.
  2. I must configure a connection to my Azure Subscription.
  3. I must create a connection object in SMA to Windows Azure.
  4. I must create the SMA runbooks.

The prerequisites above are well described here: http://blogs.technet.com/b/privatecloud/archive/2014/03/12/managing-windows-azure-with-sma.aspx.

So I will concentrate on creating the SMA runbooks.

The SMA runbooks use PowerShell workflows. I will create two SMA runbooks, one for shutting down and one for starting the Azure services.

My PowerShell workflow for the ShutDown will then look like this:

workflow EarthHour_ShutDownAzureServices
{
    # Get the Azure connection
    $con = Get-AutomationConnection -Name My-SMA-AzureConnection

    # Convert the password to a SecureString to be used in a PSCredential object
    $securepassword = ConvertTo-SecureString -AsPlainText -String $con.Password -Force

    # Create a PS Credential Object
    $cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $con.Username, $securepassword

    inlinescript
    {
        Import-Module "Azure"

        # Select the Azure subscription
        Select-AzureSubscription -SubscriptionName "my subscription name"

        # Import Cloud Services from CSV file and stop the VMs for each specified service
        Import-Csv -Path C:\_Source\EarthHour_CloudServices.csv -Encoding UTF8 |
            ForEach-Object { Get-AzureVM -ServiceName $_.CloudService |
                Where-Object {$_.InstanceStatus -eq 'ReadyRole'} |
                ForEach-Object {Stop-AzureVM -ServiceName $_.ServiceName -Name $_.Name -StayProvisioned } }

    } -PSComputerName $con.ComputerName -PSCredential $cred
}

As in the first automation example with the PowerShell Job, I read the Cloud Services from a CSV file and loop through the Cloud Services and Azure VMs for the Stop commands.

Similarly, I have the workflow for starting the Azure services again:

workflow EarthHour_StartAzureServices
{
    # Get the Azure connection
    $con = Get-AutomationConnection -Name My-SMA-AzureConnection

    # Convert the password to a SecureString to be used in a PSCredential object
    $securepassword = ConvertTo-SecureString -AsPlainText -String $con.Password -Force

    # Create a PS Credential Object
    $cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $con.Username, $securepassword

    inlinescript
    {
        Import-Module "Azure"

        # Select the Azure subscription
        Select-AzureSubscription -SubscriptionName "my subscription name"

        # Import Cloud Services from CSV file and start the VMs for each specified service
        Import-Csv -Path C:\_Source\EarthHour_CloudServices.csv -Encoding UTF8 |
            ForEach-Object { Get-AzureVM -ServiceName $_.CloudService |
                Where-Object {$_.InstanceStatus -eq 'StoppedDeallocated' -Or $_.InstanceStatus -eq 'StoppedVM' } |
                ForEach-Object {Start-AzureVM -ServiceName $_.ServiceName -Name $_.Name } }

    } -PSComputerName $con.ComputerName -PSCredential $cred
}

These workflows can now be tested, if you want, by running the Start action. I, however, want to schedule them for Earth Hour on Saturday. This is really easy to do using the SMA assets for schedules:

These schedules are then linked to each of the SMA runbooks:

And similarly, with details:

That concludes this blog post. Happy automating, and remember to turn off your lights!