Category Archives: OAuth2

Speaking at Cloud Identity Summit 2022!

I’m excited to be travelling to Bonn, Germany, to speak at the upcoming Cloud Identity Summit 2022, which will be held September 22nd at adesso SE, close to the city of Bonn.

This is my second time speaking at the Cloud Identity Summit. The first time was in 2020, and that was a virtual online conference only, as the Covid pandemic and its effects were felt all over the world. So I’m really looking forward to travelling there and being there in person this time around.

This is the 3rd time the Cloud Identity Summit is held. Starting in 2020, it was originally planned as an on-site conference, but had to move to virtual. In 2021 it was again a virtual conference, featuring 10 sessions over 2 tracks covering Cloud Identity and Security, with 250 participants from all over the world.

Some of the highlights of Cloud Identity Summit 2022:

  • Hybrid event and free of charge.
  • Morning workshops for in-person attendance in Bonn only, covering “Hands on with decentralised identifiers and verifiable credentials” with Stefan van der Wiele, and “Azure AD Security Testing with AADInternals” with Nestori Syynimaa. A tough choice to make there for sure!
  • The afternoon sessions, 8 sessions over 2 tracks (Identity Management and Identity Security), and the subsequent roundtable with all experts will be available both on-site in Bonn and online.
  • .. and as always at Community Conferences, the ability to connect, ask, share and be with fellow members in the many communities around Microsoft solutions.

I will speak in the Identity Management track, about Azure AD Authentication Fundamentals. Modern authentication in Azure AD can be used in a variety of forms, from human identities to non-human identities like devices, and workload identities like applications and managed identities. While supporting industry standards for AuthN and AuthZ like OIDC and OAuth2, as an Azure AD admin, IT ops or developer you have to know what to use when. This session aims to give you that fundamental knowledge!

Session details:

Sales for in-person attendance have now ended, as the event is fully booked! You can still register a free ticket for virtual attendance though, see this link:

For the full conference program and list of speakers, see the conference website: https://www.identitysummit.cloud/

I really look forward to visiting Bonn and Germany, and joining up with the community at the Cloud Identity Summit 2022! Hope to see you there, please say hi!

Speaking at NIC X Edition 2022!

I’m very happy and excited to once again speak at NIC (Nordic Infrastructure Conference), which will be held May 31 – June 2 at Oslo Spektrum, Norway. Previously held in a wintery Oslo in February, and last held just before the Corona outbreak in 2020, attendees and speakers should this time experience a beautiful Oslo spring surrounding the event.

NIC is celebrating its 10-year anniversary this time. This in-person event gathers 1000+ attendees and well-known international speakers, in addition to partners, vendors and a great exhibition area. It is truly the place to be for IT professionals and decision makers who want to see and experience the latest and greatest content!

Some of the highlights of NIC X:

  • Pre-Conference where you can choose to learn from one of the best in the industry: Sami Laiho, Paula Januszkiewicz, or John Craddock!
  • 2 full days of conference content including Opening Keynote from Chen Goldberg (VP Google Cloud) and Closing Keynote from Ulrich Hoffman (Corporate VP Microsoft), and 65+ Breakout sessions, all honoring the conference motto: Less slides – more demos!
  • Session tracks for Security, Data, AI & ML, Architecture & Code, Server & Client, Operations & Automation, and Cloud!
  • Anniversary party with the Valentourettes!
  • Awesome exhibition area with 20+ partners (https://www.nicconf.com/xedition/partners), including Microsoft, AWS, Google and many more.
  • .. and as always at NIC, the best food and mingling with fellow members of the industry.

I will present two breakout sessions myself during the main conference, focusing on security with Azure AD and Microsoft cloud solutions:

In my first session, on the first day, I will speak about How to Create an Azure AD Protected API in Azure in one hour!, where I will show you how you can create your own API in Azure and protect it with Azure AD using OAuth2. APIs can be anything you want, and in true NIC spirit this session will really be mostly demos and very few slides!

In my second session, on the last day, I will speak about why and how you can use Azure authentication with Managed Identities vs. Service Principals in Azure AD. Do you use Azure services that need to authenticate to other resources and APIs? Have you been using App Registrations and Service Principals to achieve this? Have you felt the pain of managing secret credentials, who has access to them and their lifecycle, and want a better way to achieve Azure authentication? This is where Managed Identities are the way to go.

Session details:

There is still time to book your conference pass: https://www.nicconf.com/xedition/tickets

For the full session program and list of speakers, see the conference website: https://www.nicconf.com/

Hope to see you there!

Speaking at Oslo Power Platform & Beyond!

I’m excited and very much looking forward to speaking at the upcoming Oslo Power Platform & Beyond Community Event, which will happen in person on May 21st 2022 at the Microsoft Norway offices in Oslo.

Oslo Power Platform & Beyond is a Community Event hosted by the Dynamics User Group Norway, and will on this upcoming Saturday feature 21 sessions delivered by 23 international speakers and rockstars, MVPs and community leaders!

My session will be about how you can Connect Power Platform to any Azure AD protected API using OAuth2 and Custom Connectors. While there are hundreds of built-in connectors you can use in your Power Automate flows or Power Apps, there are many scenarios where you would want to access APIs like Microsoft Graph, or any other API that is protected by Azure AD. In this session I will show how you can do this using Custom Connectors and OAuth2, and my demo will feature a self-built API using Azure serverless solutions like Azure Functions and Logic Apps!

Session details:

The event starts in a few days, but there is still time to register for FREE:

For the full session program and list of speakers, see here: https://oslo-power-platform-and-beyond.sessionize.com/

Hope to see you there!

Creating an Azure AD Protected API in Azure in an hour!

This blog post accompanies my contribution to Festive Tech Calendar 2021, where on the 22nd of December I will present a live-streamed, interactive session in which, in just a school hour, I will show you how you can create your own API in Azure and protect it with Azure AD using OAuth2. APIs can be anything you want, but let’s keep it festive!

This is some of the content I will cover in this blog post:

  • What is an API anyway?
  • What can you use in Azure to create APIs?
  • Get your tools out!
  • Why do we want to secure it?
  • How can we use Azure AD to secure it?

What is an API?

An API, or Application Programming Interface, is a middle layer of logic between the consumer (represented by a client) and the data and/or services that the client needs to access. A relevant example is a web application that reads and writes data to a database. To be able to read and write data in that database, you must provide a secure and consistent way to do so, and that is where APIs come into play. By calling the API, the web application doesn’t have to manage the logic and security of operating against the database; the API handles all of that by exposing methods the client can send requests to and receive responses from.

There are different ways you can communicate with an API, and it may be available on a public or a private network, but it is common today that APIs are web based and openly accessible. In general, these APIs should adhere to:

  • Platform independence. Any clients should be able to call it, and that means using standard protocols.
  • Service evolution. The Web API should be able to evolve and add functionality without breaking the clients.

The RESTful API

REST, Representational State Transfer, is an architectural approach to designing web services. Most common REST API implementations use HTTP as the application protocol, making it easier to achieve the goal of platform independence.

Some of the most important guidelines for designing REST APIs for HTTP are (using Microsoft Graph API as examples):

If you want to read more on this topic, I highly recommend this article: https://docs.microsoft.com/en-us/azure/architecture/best-practices/api-design.

In this blog post I will build on these design principles.

Using Azure to create your own APIs

Using Azure resources, you have a range of different solutions from which you can create your own APIs. You can develop and publish APIs using App Services, you can use Azure API Management, or you can start a little simpler with Azure serverless technologies like Azure Functions or Logic Apps.

In this blog post I will use Azure Functions for my demo scenario, creating a Serverless API that will receive and respond to HTTP requests. Azure Functions supports all the architectural guidelines from above, including connections to backend services like a database.

Demo Scenario

I will build the following scenario for the solution I want to demo. The theme will be festive, and I will build a solution for registering and managing Christmas Whishes!

  1. A CosmosDB Account and Database, which will store whishes as document items.
  2. An Azure Function App, with Functions that will serve as the API, and will:
    • Implement methods to GET whishes, create new whishes (POST), change existing (PUT) or DELETE whishes.
    • Provide a secure connection to the Cosmos DB account to update items accordingly.
  3. An Azure App Service, running a web site as frontend, from where users will get, create, update and delete whishes, and this will use the Azure Functions API.

The following simple diagram shows an architectural overview over this solution as described above:

Diagram displaying the parts of the application: web site, the API using Azure Functions, and the database with the products data

Later in this blog post I will show how we can add Azure AD authentication and authorization to this solution, securing the API.

Get your tools ready!

I will use Visual Studio Code and Azure Functions Core Tools to create, work with and publish the serverless API, in addition to creating a web frontend based on Node.js.

If you want to follow along and recreate this scenario in your environment, make sure you have the following installed:

  1. Visual Studio Code. https://code.visualstudio.com/
  2. Node.js. https://nodejs.org/en/
  3. Azure Function Core Tools. https://github.com/Azure/azure-functions-core-tools
  4. Azure Function Extension. https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions
  5. In addition, I will build the API logic using PowerShell in Azure Functions, so you need to have PowerShell Core installed as well.

In addition to the above tools, you will also need access to an Azure AD tenant where you can create App Registrations for Azure AD Authentication, as well as an Azure Subscription where you can create the required resources.

If you don’t have access to an Azure Subscription with at least Contributor access for a Resource Group, you can develop and run parts of the solution locally, but then you would not be able to fully complete all parts of the authentication and authorization requirements.

After making sure you have those components installed, configured or updated, you can proceed to the next steps.

Download Repository with starting Resources

I have the following GitHub repository set up with starting resources: https://github.com/JanVidarElven/build-azure-ad-protected-api-azure-functions-festivetechcalendar

You can download all the files using a ZIP file, or you can fork and/or clone the repository if you have your own GitHub account.
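If you prefer the command line, cloning the repository with Git (assuming Git is installed) looks like this:

git clone https://github.com/JanVidarElven/build-azure-ad-protected-api-azure-functions-festivetechcalendar.git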

After the repository has been downloaded, open the workspace file “christmas-whishes.code-workspace” in VS Code. You should now see two folders, one for api and one for frontend.

The api folder contains the Functions project and the Functions I have pre-created, which we will build on later. The frontend folder contains the Node.js website with the HTML file and a JavaScript file for connecting to the API logic.

Before we proceed with configuring the local project, we need to create the dependent Azure resources like the Cosmos DB and the Function App.

Set up Azure Resources

In the repository you downloaded/cloned above, you will find instructions and some Az PowerShell samples for creating the required resources (see also the sketch below the notes). This will include:

  • Create a Resource Group “rg-festivetechcalendar” in your chosen region (you can change the rg name to something else of your choice).
  • Create an Azure Function App named <yourid>-fa-festivetechcalendar-api in the above resource group. Choose your region and a consumption plan.
  • Create a Cosmos DB account in your region and in the above resource group with the name <yourid>-festivetechcalendar-christmaswhishes, opting in for the free tier.
  • In the Cosmos DB account, create a new database called “festivetechcalendar” and a container named “whishes” using /id as partition key.
  • In the above resource group, create an App Service for the frontend web site, with the name <yourid>-festivetechcalendar-christmaswhishes, using Node 14 LTS as runtime and Linux as operating system. If a Free plan is available in your region you can use that, otherwise use a low-cost dev/test plan.

PS! You can use other names for the above resources, but then you need to make sure that you change this in the repository code you will be working from.

In addition to the above resources, some supporting services like App Service plans, Application Insights and Storage Accounts are created as part of the process.
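If you prefer a scripted starting point, here is a minimal Az PowerShell sketch for the core resources in the list above (resource group, storage account, Function App, and Cosmos DB account with database and container). The region, the storage account name and the <yourid> placeholders are assumptions you should adjust to your environment, and the frontend App Service can be created in the portal as described:

# Minimal sketch, assuming the Az.Resources, Az.Storage, Az.Functions and Az.CosmosDB modules
$rg = "rg-festivetechcalendar"
$location = "westeurope"   # change to your preferred region
New-AzResourceGroup -Name $rg -Location $location

# Storage account required by the Function App (placeholder name, must be globally unique)
New-AzStorageAccount -ResourceGroupName $rg -Name "<yourid>festivesa" -Location $location -SkuName Standard_LRS

# Function App on a consumption plan with the PowerShell runtime
New-AzFunctionApp -Name "<yourid>-fa-festivetechcalendar-api" -ResourceGroupName $rg -Location $location `
    -Runtime PowerShell -OSType Windows -StorageAccountName "<yourid>festivesa"

# Cosmos DB account (free tier), database and container with /id as partition key
$cosmosAccount = "<yourid>-festivetechcalendar-christmaswhishes"
New-AzCosmosDBAccount -ResourceGroupName $rg -Name $cosmosAccount -Location $location -EnableFreeTier $true
New-AzCosmosDBSqlDatabase -ResourceGroupName $rg -AccountName $cosmosAccount -Name "festivetechcalendar"
New-AzCosmosDBSqlContainer -ResourceGroupName $rg -AccountName $cosmosAccount -DatabaseName "festivetechcalendar" `
    -Name "whishes" -PartitionKeyPath "/id" -PartitionKeyKind Hash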

Connect to the Azure Account in VS Code

Using the Azure Account extension in VS Code, make sure that you are signed in to the correct subscription. You should then be able to see the Function App created above, like in my environment:

PS! If you, like me, have access to many subscriptions in different tenants, it might be worthwhile adding this azure.tenant setting to the VS Code workspace file:

    "settings": {       
"azure.tenant": "yourtenant.onmicrosoft.com",

Configure the Bindings and make the API RESTful

Next we will make some changes to the Azure Functions API, so that we can successfully connect to the Cosmos DB and make the API RESTful following the architectural guidelines.

First we need to create/update a local.settings.json file in the api folder with the following settings. Replace the Festive_CosmosDB connection string with your own connection string from your Azure resource:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME_VERSION": "~7",
    "FUNCTIONS_WORKER_RUNTIME": "powershell",
    "Festive_CosmosDB": "AccountEndpoint=https://festivetechcalendar-christmaswhishes.documents.azure.com:443/;AccountKey=jnrSbHmSDDDVzo1St4mWSHn……;"
  },
  "Host": {
    "CORS": "*"
  }
}
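If you would rather fetch the Cosmos DB connection string with PowerShell than copy it from the Keys blade in the portal, something like this should work (assuming the Az.CosmosDB module and the resource names from the setup above):

# List the connection strings for the Cosmos DB account and pick the primary SQL connection string
Get-AzCosmosDBAccountKey -ResourceGroupName "rg-festivetechcalendar" `
    -Name "<yourid>-festivetechcalendar-christmaswhishes" -Type "ConnectionStrings"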

Open a new terminal window in VS Code (if not already open) and change to the api folder. Then run the command func start. This will start up the Functions Core Tools runtime and let you send requests to the API locally. Typically, when you create and run functions locally, they will show something like the following:

You will see from above that I have already created 4 functions:

  • CreateWhish: Function for creating new Christmas Whishes
  • DeleteWhish: Function for Deleting Whishes
  • GetWhishes: Function for Getting Whishes
  • UpdateWhish: Function for Updating Whishes

All of these functions use an HTTP trigger for request and response. In addition I have used a CosmosInput binding for getting existing items from the DB (via the connection string defined in local.settings.json) and a CosmosOutput binding for sending new or updated items back to the DB.

Deleting items from Cosmos DB is a little bit trickier though, as the CosmosOutput binding does not support deletes. So in that case I chose to do the delete via the Cosmos REST API, using the Managed Identity of the Function App, or the logged on user when running locally. More on that later.

First, let’s make the API RESTful. As I mentioned earlier, the API should make use of resources. And while I have methods for CreateWhish, DeleteWhish and so on, I want to change this so that the focus is on the resource whish, using the correct verbs for the operations I want. I will change the API to the following:

  • GET /api/whish (getting all whishes)
  • POST /api/whish (creating a new whish)
  • DELETE /api/whish (delete a whish)
  • PUT /api/whish (change a whish)

That should be much better! I should also specify an existing id for DELETE or PUT, and make it possible to GET a specific whish by id as well. So let’s define that too:

  • GET /api/whish/{id?}
  • POST /api/whish
  • DELETE /api/whish/{id}
  • PUT /api/whish/{id}

The question mark in GET /api/whish/{id?} means that the id is optional: omit it to get all whishes, or supply it to get a specific whish by id. DELETE and PUT should always have an id in the request.

Let’s make these changes in the Functions. Inside every function there is a function.json file, which defines all the input and output bindings. For the above changes we will focus specifically on the HttpTrigger in binding. Two changes must be made: one is to change the method (the HTTP verb) and the other is to add a “route” setting. So for example for GetWhishes, change the first binding to:

{
  "bindings": [
    {
      "authLevel": "function",
      "type": "HttpTrigger",
      "direction": "in",
      "name": "Request",
      "methods": [
        "get"
      ],
      "route": "whish/{id?}"      
    },

Then the CreateWhish should be changed to:

{
  "bindings": [
    {
      "authLevel": "function",
      "type": "HttpTrigger",
      "direction": "in",
      "name": "Request",
      "methods": [
        "post"
      ],
      "route": "whish"      
    },

The DeleteWhish HttpTrigger in should be changed to:

{
  "bindings": [
    {
      "authLevel": "function",
      "type": "HttpTrigger",
      "direction": "in",
      "name": "Request",
      "methods": [
        "delete"
      ],
      "route": "whish/{id}"      
    },

And last the UpdateWhish HttpTrigger in to be changed to:

{
  "bindings": [
    {
      "authLevel": "function",
      "type": "HttpTrigger",
      "direction": "in",
      "name": "Request",
      "methods": [
        "put"
      ],
      "route": "whish/{id}"
    },

Now, run func start again in the terminal window, and the Functions should now show the following:

The API is now much more RESTful, each method is focused on the resource whish, and using the correct http verbs for the operations.
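While func start is running you can also sanity-check the new routes with plain PowerShell, before we add any security. This is just a hypothetical local test; port 7071 is the Core Tools default, and the body properties mirror what the CreateWhish function expects:

# Local test calls against the Functions Core Tools host
$base = "http://localhost:7071/api"

# GET all whishes
Invoke-RestMethod -Method Get -Uri "$base/whish"

# POST a new whish
$newWhish = @{ name = "Jan"; whish = "A white Christmas"; pronoun = @{ name = "he" } } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri "$base/whish" -Body $newWhish -ContentType "application/json"

# DELETE a whish by id (replace <id> with an id returned from the GET above)
Invoke-RestMethod -Method Delete -Uri "$base/whish/<id>"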

The case of deleting an item in Cosmos DB

As mentioned earlier the CosmosOutput binding handles updates and creation of new items in the Cosmos DB, but not deleting. Barbara Forbes has a nice and more detailed walkthrough of how to use the Cosmos DB input and output bindings in this blog post, but for deletes I did it another way. Let’s look into that.

The code in run.ps1 for the DeleteWhish function starts with the following, getting the input bindings and retrieving the whish by id from the CosmosInput:

using namespace System.Net

# Input bindings are passed in via param block.
param($Request, $TriggerMetadata, $CosmosInput)

# Write to the Azure Functions log stream.
Write-Host "PowerShell HTTP trigger function processed a request to delete a whish."

# Check id and get item to delete
If ($Request.Params.id) {
    $whish = $CosmosInput | Where-Object { $_.id -eq $Request.Params.id}
}

I now have the item I want to delete. Next I build the document URI for that item using the Cosmos DB REST API:

# Build the Document Uri for Cosmos DB REST API
$cosmosConnection = $env:Festive_CosmosDB -replace ';',"`r`n" | ConvertFrom-StringData
$documentUri = $cosmosConnection.AccountEndpoint + "dbs/" + "festivetechcalendar" + "/colls/" + "whishes" + "/docs/" + $whish.id
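The resulting $documentUri will then look something like https://<youraccount>.documents.azure.com:443/dbs/festivetechcalendar/colls/whishes/docs/<document id>.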

Note that I have hardcoded my database (festivetechcalendar) and container (whishes) above; you might want to change that if your environment is different. Next I check if the Azure Function is running in the Function App in Azure, or locally inside my VS Code. If running in the Function App, I will use the Managed Identity to connect to the resource https://cosmos.azure.com and get an access token. If I run locally in VS Code, I’ll just get an access token using Get-AzAccessToken, provided that I have connected to my tenant and subscription earlier using Login-AzAccount.

NB! This operation requires an RBAC data operations role assignment, more on that later!

# Check if running with MSI (in Azure) or Interactive User (local VS Code)
If ($env:MSI_SECRET) {
    
    # Get Managed Service Identity from Function App Environment Settings
    $msiEndpoint = $env:MSI_ENDPOINT
    $msiSecret = $env:MSI_SECRET

    # Specify URI and Token AuthN Request Parameters
    $apiVersion = "2017-09-01"
    $resourceUri = "https://cosmos.azure.com"
    $tokenAuthUri = $msiEndpoint + "?resource=$resourceUri&api-version=$apiVersion"

    # Authenticate with MSI and get Token
    $tokenResponse = Invoke-RestMethod -Method Get -Headers @{"Secret"="$msiSecret"} -Uri $tokenAuthUri
    $bearerToken = $tokenResponse.access_token
    Write-Host "Successfully retrieved Access Token Cosmos Document DB API using MSI."

} else {
    # Get Access Token for the interactively logged on user in local VS Code (replace the tenant with your own)
    $accessToken = Get-AzAccessToken -TenantId elven.onmicrosoft.com -ResourceUrl "https://cosmos.azure.com"
    $bearerToken = $accessToken.Token
}

Then, once I have the access token for the Cosmos DB REST API, I can proceed to delete the document item. There are some special requirements for the headers: they must include the Authorization header, version and partition key as shown below. Then I can run Invoke-RestMethod with the Delete method on the document URI and the right headers. Note also that PowerShell Core wasn’t too happy with this header format, so I had to use -SkipHeaderValidation:


# Prepare the API request to delete the document item
$partitionKey = $whish.id
$headers = @{
    'Authorization' = 'type=aad&ver=1.0&sig='+$bearerToken
    'x-ms-version' = '2018-12-31'
    'x-ms-documentdb-partitionkey' = '["'+$partitionKey+'"]'
}

Invoke-RestMethod -Method Delete -Uri $documentUri -Headers $headers -SkipHeaderValidation

$body = "Whish with Id " + $whish.id + " deleted successfully."

# Associate values to output bindings by calling 'Push-OutputBinding'.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body = $body
})

Now, for the identity getting the access token, either the interactive user or the Managed Identity, you will need to assign roles for data operations. This is all documented here: https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-setup-rbac, but the following commands should get you started:

$subscriptionId = "<subscriptionId_for_Azure_subscription_for_resources>"
Set-AzContext -Subscription $subscriptionId
$principalUpn = "<user_upn_for_member_or_guest_to_assign_access>"
$managedIdentityName = "<name_of_managed_identity_connected_to_function_app>"

$resourceGroupName = "rg-festivetechcalendar"
$accountName = "festivetechcalendar-christmaswhishes"
$readOnlyRoleDefinitionId = "00000000-0000-0000-0000-000000000001" 
$contributorRoleDefinitionId = "00000000-0000-0000-0000-000000000002"

$principalId = (Get-AzADUser -UserPrincipalName $principalUpn).Id
New-AzCosmosDBSqlRoleAssignment -AccountName $accountName `
    -ResourceGroupName $resourceGroupName `
    -RoleDefinitionId $contributorRoleDefinitionId `
    -Scope "/" `
    -PrincipalId $principalId

$servicePrincipalId = (Get-AzADServicePrincipal -DisplayName $managedIdentityName).Id
New-AzCosmosDBSqlRoleAssignment -AccountName $accountName `
    -ResourceGroupName $resourceGroupName `
    -RoleDefinitionId $contributorRoleDefinitionId `
    -Scope "/" `
    -PrincipalId $servicePrincipalId

In the commands above I have assigned both my own user (running locally in VS Code) and the Managed Identity for the Function App to the built-in Data Contributor role.

PS! Don’t forget to enable Managed Identity for the Function App:
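If you prefer to enable the system-assigned managed identity with PowerShell instead of in the portal, a sketch using the Az.Functions module (with the function app name assumed from earlier) could be:

# Enable a system-assigned managed identity on the Function App
Update-AzFunctionApp -Name "<yourid>-fa-festivetechcalendar-api" -ResourceGroupName "rg-festivetechcalendar" `
    -IdentityType SystemAssigned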

That means the API is now finished in this first phase, and we can deploy the functions to the Function App.

After you deploy the functions to the Function App, also make sure to update the app settings.
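For reference, deploying with the Functions Core Tools and updating the Festive_CosmosDB app setting can also be done from the command line. This is a sketch assuming the resource names used earlier:

# Publish the local Functions project to the Function App (run from the api folder)
func azure functionapp publish "<yourid>-fa-festivetechcalendar-api"

# Add the Cosmos DB connection string as an app setting on the Function App
Update-AzFunctionAppSetting -Name "<yourid>-fa-festivetechcalendar-api" -ResourceGroupName "rg-festivetechcalendar" `
    -AppSetting @{ "Festive_CosmosDB" = "<your Cosmos DB connection string>" }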

I won’t go into details on testing this now. I often use Postman for these testing scenarios, both locally and remotely against the Function App, but if you want to see this in a recording of my session at Festive Tech Calendar, you can find that here: https://www.youtube.com/watch?v=5zLbksF0Ejg.

The Web Frontend

A few words about the web frontend as well: as I mentioned earlier, this is built on Node.js as a Single Page Application (SPA). This means the entire application exists in the browser; there is no backend for the web. All the more reason for separating the logic and the security connections into the API.

The web frontend basically consists of an HTML page and a JavaScript file. The JavaScript file has some methods for using the API, as the screenshot below shows:

There is an API constant that points to the Azure Functions API URLs that we saw when running func start. The getWhishes, updateWhish and similar methods use this API and the resource whish, together with the HTTP verbs, to send requests and receive responses from the API.

I’m not really a web developer myself, so I have depended on other Microsoft identity samples and good help from colleagues and the community, but I have been able to change the code such that the web frontend you downloaded from the repository earlier should now work with the API locally.

So while func start is still running from before, open a new terminal window, choose frontend as the folder, and run:

npm start

This will build and start the web frontend locally:

You can now go to your browser and use http://localhost:3000, which will show you the start page that will look something like this:

You can now create a whish, see existing whishes as they are created, and change or delete them. It should work provided you have followed the steps as laid out above.

With the API now working between the frontend and the backend Cosmos DB, we can proceed to secure the API!

Why secure the API?

The API is important to protect, as it contains connection strings in app settings and a managed identity that can manipulate the Cosmos DB. While the default mode for functions requires knowledge of a function code query parameter, this will still be exposed in the web frontend code; any user can open the source code, get those URIs and run them from other places.

Another reason is the increasing focus on security and zero trust, where every user and access should be verified and never trusted. For example, requests could be run against the API from an unfamiliar location, and the assume-breach mindset means every connection should be authenticated and audited.

Authentication is one thing, but authorization is also an important part of API protection. Take this simple but relevant case of Christmas Whishes: should everyone really be able to see not only your own but also everybody else’s whishes? Should you be able to write and delete other users’ whishes?

For this next scenario we will implement authentication and authorization for this Azure Functions API, using Azure AD and OAuth2.

How to use Azure AD to protect the API

Azure Active Directory can be the identity provider that requires and successfully authenticates users. Azure AD can also expose API scopes (delegated permissions) and roles (application permissions) as needed, so that you can use OAuth2 for authorization decisions as well.

We can require authentication on the API, and provide a way for the frontend web to sign users in, consent to the permissions that the scopes have defined, and securely call the API. That way, there is no way an anonymous user can send requests to the API; they have to be authenticated.

We will use Azure AD App Registrations for setting this up. Let’s get started.

Create the API App Registration

We will first create an App Registration for defining the API. In the Azure AD portal, select New App Registration, give it a name like FestiveTechCalendar API, select the multitenant setting of accounts in any organizational directory and personal Microsoft accounts, and then click create.

(NB! You can use single-tenant if you want to; then only users in your tenant will be able to authenticate to the API.)

Next, go to Expose an API, and first set the Application ID URI to something like the following:

Then, we will add the following scopes to be defined by the API:

  • access_as_user, to let users sign in and access the api as themselves
  • Whish.ReadWrite, to let users be able to create, edit, delete or get their own whishes only.
  • Whish.ReadWrite.All, with admin consent only, to let privileged users be able to see all users’ whishes (for example Santa Claus should have this privilege 🎅🏻)

It should look something like this afterwards:

This completes this App Registration for now. We will proceed by creating another App Registration, to be used from the clients.

Create the Frontend App Registration

Create a new App Registration, with a name like FestiveTechCalendar Frontend, and with the same multitenant + personal Microsoft account setting as the API app. Click create.

Next, go to API permissions, and click Add a permission, from which you can select My APIs, find the FestiveTechCalendar API app and add our three custom scopes, as shown below:

PS! Do NOT click to grant admin consent for your organization (I prefer that users consent themselves, provided that they are allowed to do so).

Next, under Authentication, click Add a platform, select Single Page Application (SPA) and add http://localhost:3000 as the redirect URI:

Adding localhost:3000 will make sure that when I run the web frontend locally, I can sign in from there.

Also, check the boxes for Access tokens and ID tokens, as we will need these in our scenario with the API:

Go back to Overview, and note/copy the Application (Client) ID, you will need that later.

Azure AD Authentication to the API with Postman

If you want to be able to test secured requests from Postman client, you can also add the following Web platform in addition to the Single Page Application platform:

Add https://oauth.pstmn.io/v1/callback as Redirect URI for Postman requests.

For authenticating with Postman you also need to set up a Client Secret:

Also take a note/copy of that secret; it will be needed for Postman testing later.

For now, this App Registration for the frontend client is finished.

Require Authentication on Azure Function App

With the Azure AD App Registrations set up, we can now proceed to the Function App and require Azure AD authentication.

Under Authentication, click to Add an identity provider:

Select Microsoft, then pick an existing app registration and find the FestiveTechCalendar Frontend app. Change the Issuer URL to use the common endpoint, as we have configured this to support both multitenant and Microsoft accounts:

Set restrict access to Require authentication, and configure unauthenticated requests to the API to get an HTTP 401 Unauthorized:

Click Add to finish adding and configuring the identity provider; the Function App and the Functions API are now protected!

The last step we need to do is to configure the allowed audiences for the Function App authentication. We need to add the following two audiences, and these are for the API App Registration: for v1.0 token formats, use api://festivetechcalendarapi, and for v2.0 token formats (which should be the default today), use the App ID of the API App Registration:

Note also that for v2.0 token formats the issuer will be https://login.microsoftonline.com/common/v2.0.

Remove Function Key authentication

Now that the Function App itself is protected with Azure AD authentication, we can remove the function key authentication. For each of the functions (GetWhishes, UpdateWhish, etc.), go into the function.json file and change the authLevel from function to anonymous, like below:

{
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "HttpTrigger",
      "direction": "in",
      "name": "Request",

After the functions have been changed, deploy from VS Code to the Function App again.

Testing Authentication with Postman

We can now try to do some testing against the API from Postman. This isn’t something you have to do, but it’s nice for some evaluation and testing before we proceed to the web frontend.

I have created myself a collection of requests for Festive Tech Calendar in Postman:

The local ones use the URL http://localhost:7071/api/ for requests, while the remote ones use https://<myfunctionapp>.azurewebsites.net/api/ for requests.

For example, if I try to Get Whishes Remote without authentication, I will now receive a 401 Unauthorized and a message that I do not have permission:
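If you want a quick check outside of Postman, the same behaviour can be seen with a plain PowerShell request against the protected Function App (function app name assumed):

# Unauthenticated request against the Azure AD protected API - this should now fail with 401 Unauthorized
Invoke-RestMethod -Method Get -Uri "https://<myfunctionapp>.azurewebsites.net/api/whish"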

In Postman, on the Collection settings, you can add Authorization. Here I have added OAuth 2.0, specified a Token Name and chosen the Authorization Code grant flow. I will use the browser to authenticate, and note that the callback URL should be the same as the one you added to the Frontend App Registration earlier:

Next, specify the /common endpoints for Auth URL and Token URL (https://login.microsoftonline.com/common/oauth2/v2.0/authorize and https://login.microsoftonline.com/common/oauth2/v2.0/token), the Client ID should be the App ID of the Frontend App registration, and the Secret should be the secret you created earlier. I have used environment variables in my setup below.

Important! You need to specify the scopes for the API you want an access token for. In this case I use api://festivetechcalendarapi/access_as_user and api://festivetechcalendarapi/Whish.ReadWrite:

With all that set up, you can now click to Get New Access Token. It will launch a browser session (if you are running multiple profiles, make sure it opens in your correct one), and you can authenticate. Upon successful authentication, an access token will be returned to Postman and you can again test a remote request, which this time should be successful:

Adding Authorization Logic to Azure Functions

So now we know that the Function App and the API are protected using Azure AD.

The next step is to implement authorization logic inside the functions. When a request is triggered with an Authorization header, which contains the access token for the API, we can get the details from the token and make authorization decisions based on that.

For this scenario I’m going to use a community PowerShell module called JWTDetails, made by Darren Robinson.

First add a dependency on that module in requirements.psd1 inside the api folder:

@{
    # For latest supported version, go to 'https://www.powershellgallery.com/packages/Az'.
    # To use the Az module in your function app, please uncomment the line below.
    'Az' = '7.*'
    'JWTDetails' = '1.*'
}

Then, in run.ps1 for each of the functions (Create, Update, Get and Delete Whish), add the following at the beginning, just after the param statement and the Write-Host for “PowerShell HTTP trigger…”:

$AuthHeader = $Request.Headers.'Authorization'
If ($AuthHeader) {
    $AuthHeader
    $parts = $AuthHeader.Split(" ")
    $accessToken = $parts[1]
    $jwt = $accessToken | Get-JWTDetails

    Write-Host "This is an authorized request by $($jwt.name) [$($jwt.preferred_username)]"

    # Check Tenant Id to be another Azure AD Organization or Personal Microsoft
    If ($jwt.tid -eq "9188040d-6c67-4c5b-b112-36a304b66dad") {
        Write-Host "This is a Personal Microsoft Account"
    } else {
        Write-Host "This is a Work or School Account from Tenant ID : $($jwt.tid)"
    } 
}

The code above retrieves the Authorization header if it is present, and then splits the access token out of the string “Bearer <jwt token>”. That JWT token can then be inspected with the Get-JWTDetails command. I’m retrieving a couple of claims to get the user’s name and user principal name, and I also check whether the user is coming from an organization or a personal account (which has a fixed tenant id for Microsoft accounts).

In addition, for the GetWhishes function, I also add authorization logic where the function checks the scopes: if the user does not have Whish.ReadWrite.All, the user should only be allowed to see their own whishes and not everybody else’s.

This code should be placed right after getting the Cosmos DB items in the run.ps1 for GetWhishes function:

If ($AuthHeader) {
    Write-Host "The Requesting User has the Scopes: $($jwt.scp)"
    # Check for Scopes and Authorize - the scp claim is a space-delimited string, so split it before checking
    If (($jwt.scp -split " ") -notcontains "Whish.ReadWrite.All") {
        Write-Host "User is only authorized to see own whishes!"
        # $Whishes = $Whishes | Where-Object {$_.uid -eq $jwt.oid}
        # $Whishes = $Whishes | Where-Object {$_.upn -eq $jwt.preferred_username }
        $Whishes = $Whishes | Where-Object {$_.name -eq $jwt.name}
    }
} else {
    Write-Host "No Auth, return nothing!"
    $Whishes = $Whishes | Where-Object {$_.id -eq $null}
}

You will see that I have a few alternatives for filtering (I have commented out the ones not in use). I can use a soft filter based on name, or if I want to I can filter based on user/object id or user principal name. The last two options require that I add a couple of lines to the CreateWhish function as well:

$whish = [PSCustomObject]@{
    id = $guid.Guid
    name = $Request.Body.name
    whish = $Request.Body.whish
    pronoun = [PSCustomObject]@{ 
        name = $Request.Body.pronoun.name 
    }
    created = $datetime.ToString()
    uid = $jwt.oid
    upn = $jwt.preferred_username
}

As you can see from the last two lines above, the newly created item is also stored with the object id and the UPN of the user that authenticated to the API (taken from the JWT token).

With these changes, you should once again deploy the local functions to the Function App.

If I now do another remote test in the Postman client, and follow the Azure Function App monitor in the Azure portal, I can indeed see that my user has triggered the API securely, and is only authorized to see their own whishes:

The last remaining step now is to change the web frontend to be able to use the API via Azure AD Authentication.

Authenticate to API from a Single Page Application (SPA)

We are now going to configure the web frontend application, which is a JavaScript-based SPA, so that it can sign in and authorize users, get ID and access tokens, and send secured requests to our Christmas Whishes API. This will use the OAuth2 authorization code flow as shown below:

Configure Msal.js v2

We are going to use the Microsoft Authentication Library (MSAL) for JavaScript, Msal.js v2, for the authentication and authorization flows in the web app.

First, in the frontend folder, create a file named authConfig.js, and add the following code:

const msalConfig = {
    auth: {
      clientId: "<your-app-id>",
      authority: "https://login.microsoftonline.com/common",
      redirectUri: "http://localhost:3000",
    },
    cache: {
      cacheLocation: "sessionStorage", // This configures where your cache will be stored
      storeAuthStateInCookie: false, // Set this to "true" if you are having issues on IE11 or Edge
    }
  };

  // Add here scopes for id token to be used at MS Identity Platform endpoints.
  const loginRequest = {
    scopes: ["openid", "profile"]
  };

Change the clientId in the above code to the app id from your frontend App Registration, and if you have other names for the API scopes, change those as well in the tokenRequest constant.

PS! If you created a single-tenant application earlier, and not multi-tenant and personal Microsoft accounts as I did, replace the authority above with https://login.microsoftonline.com/<your-tenant-id>.

Create another file in the frontend folder named apiConfig.js. Add the following code:

const apiConfig = {
    whishesEndpoint: "https://<yourfunctionapp>.azurewebsites.net/api/whish/"
  };

Change the above endpoint to your function app name.

Next, create another file in the frontend folder named authUI.js, and add the following code:

// Select DOM elements to work with
const welcomeDiv = document.getElementById("welcomeMessage");
const signInButton = document.getElementById("signIn");

function showWelcomeMessage(account) {
  // Reconfiguring DOM elements
  welcomeDiv.innerHTML = `Welcome ${account.username}`;
  signInButton.setAttribute("onclick", "signOut();");
  signInButton.setAttribute('class', "btn btn-success")
  signInButton.innerHTML = "Sign Out";
}

function updateUI(data, endpoint) {
  console.log('Whishes API responded at: ' + new Date().toString());

}

The above code is used for hiding/showing document elements based on whether the user is signed in or not.

Next, we need a script with functions that handle the sign-in and sign-out, and getting the access token for the API. Add a new file to the frontend folder called authPopup.js, and add the following script contents to that file:

// Create the main myMSALObj instance
// configuration parameters are located at authConfig.js
const myMSALObj = new msal.PublicClientApplication(msalConfig);

let username = "";

function loadPage() {
    /**
     * See here for more info on account retrieval:
     * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md
     */
    const currentAccounts = myMSALObj.getAllAccounts();
    if (currentAccounts === null) {
        return;
    } else if (currentAccounts.length > 1) {
        // Add choose account code here
        console.warn("Multiple accounts detected.");
    } else if (currentAccounts.length === 1) {
        username = currentAccounts[0].username;
        showWelcomeMessage(currentAccounts[0]);
    }
}

function handleResponse(resp) {
    if (resp !== null) {
        username = resp.account.username;
        console.log('id_token acquired at: ' + new Date().toString());        
        showWelcomeMessage(resp.account);
    } else {
        loadPage();
    }
}

function signIn() {
    myMSALObj.loginPopup(loginRequest).then(handleResponse).catch(error => {
        console.error(error);
    });
}

function signOut() {
    const logoutRequest = {
        account: myMSALObj.getAccountByUsername(username)
    };

    myMSALObj.logout(logoutRequest);
}

loadPage();

We can now start to make some changes to the index.html file, so that it supports the Msal v2 library, shows a sign in button and a welcome message, and adds the app scripts from above.

First add the following HTML code to the HEAD section of the index.html file in the frontend folder:

    <!-- IE support: add promises polyfill before msal.js  -->
    <script src="//cdn.jsdelivr.net/npm/[email protected]/js/browser/bluebird.min.js"></script>
    <script src="https://alcdn.msauth.net/browser/2.0.0-beta.4/js/msal-browser.js"></script>

Then, add the following sign in button and welcome message text to index.html, so that the header section looks something like this:

      <header>
        <div class="container">
          <div class="hero is-info is-bold">
            <div class="hero-body">
                <img src="festivetechcalendar.jpg" width="500px">
              <h1 class="is-size-1">Christmas Whishes</h1>
            </div>
            <div>
              <button type="button" id="signIn" class="btn btn-success" onclick="signIn()">Sign In</button>
            </div>            
          </div>
          <div>
            <h5 class="is-size-4" id="welcomeMessage">Please Sign In to create or see whishes.</h5>
          </div>
        </div>
      </header>

Then, at the end of the index.html file, add the following app scripts:

    <!-- importing app scripts (load order is important) -->
    <script src="./authConfig.js"></script>
    <script src="./apiConfig.js"></script>
    <script src="./authUI.js"></script>
    <script src="./authPopup.js"></script>
  </body>
</html>

Then, back in Visual Studio Code and the terminal window, stop the Node web application if it’s still running, and type the following command in the frontend folder:

npm install @azure/msal-browser

This will install and download the latest package references for MSAL. After this has run successfully you will see the references updated in the package.json and package-lock.json files.

Save the index.html file with the above changes. We can now test the updated web page via http://localhost:3000. Type npm start in the terminal window to start the web application again. Now you can click the Sign In button and sign in with your account:

PS! Note that the above application supports multi-tenant and personal Microsoft accounts, so you can sign in with either.

First time signing in you should get a consent prompt:

These are the permission scopes “openid” and “profile” defined in the login request above.

After signing in the welcome message should get updated accordingly:

We have now been able to sign in with the identity platform, and this gets an ID token for the signed-in user. We are halfway there, because we will need an access token to be able to call the resource API.

Getting an Access Token for the API using Msal.js v2

Speaking at Scottish Summit 2021!

I’m very much looking forward to speaking this Saturday 27th of February at the Virtual Scottish Summit 2021! This amazing Community conference will host 365 (!) virtual sessions from experts all over the world, ranging from first-time speakers to experienced community leaders.

Sessions will be delivered in a range of tracks covering Microsoft technologies and community topics, something to choose from for everyone.

My session will be about why it is so important to have zero trust admins with least privilege access, and why you should start using Azure AD PIM (Privileged Identity Management) today!

I learned about the stereotype of cheap Scots from reading about Scrooge McDuck. I’ve no idea if this is true or not, probably not ;), but the thing that is true is that you should be really cheap when handing out admin privileges.

Today, Microsoft 365 Global Administrators and Azure Subscription Owners are the new Domain/Enterprise Admins, and in many organizations too many users have these roles. In the session I will show how, by implementing just-in-time and just-enough-access (JIT/JEA) policies, we can reduce vulnerability and attack surface, and the right tool for the job is Azure AD Privileged Identity Management (PIM).

I have been using Azure AD PIM for years, and in this session I will share my best practices and how to implement and use it the right way.

Session details:

There is still time to register, but time and available tickets are running out fast. Go and register your free ticket for an awesome day of free community content here:

Scottish Summit 2021 | Scottish Summit

Have a great conference, and I hope you visit my session if the topic is of interest to you!

Speaking at Nordic Virtual Summit 2021!

I’m excited to announce that I’m speaking at the inaugural Nordic Virtual Summit 2021, from 10th to 11th February. Nordic Virtual Summit is a Microsoft IT Pro Community Event, organized by the people behind #SGUCSE #SCUGDK #SCUGFI #MMUGNO and #MSEndpointMgr communities!

Sessions will be delivered in 3 tracks:

  • Endpoint Management
  • Security & Compliance
  • Azure & Automation

Each day will start with a pre-conference talk, and then the sessions kick off at each hour mark, 3 sessions before lunch and 3 after. There will be a 15 minute break with Q&A between sessions, so you can catch your breath, fill up your coffee or just wait in excitement for the next session 🙂

My session will be about how, while Azure serverless automation solutions like Azure Functions & Logic Apps can be great for your automation scenarios, you can secure access to sending requests and protect your serverless automation using Azure AD authentication and authorization.

Session details:

I’m hearing the number of registered attendees is now closing in on 2000, so make sure you register and secure your FREE ticket today:

Register – Nordic Virtual Summit

Hope to see you there!

Speaking at Global Automation Bootcamp 2021

I’m happy to announce that I’m part of the amazing global initiative of automation bootcamps running from February 5th to 20th 2021!

Update: The Azure Automation track has now been pushed back one week from February 20th to February 27th.

I will speak about how Azure Serverless Automation solutions like Azure Functions, Logic Apps and more can be protected by Azure AD and how Power Platform can securely send requests to trigger your automation scenarios. Session details:

You can register for FREE here at this link: Global Automation Bootcamp 2021 – Power Community

The agenda is very exciting with top speakers, and sessions will be delivered according to the following tracks and days:

  • Automation Summit Day 1, Fri 5th February
  • Power Automate Saturday Bootcamp, Sat 6th February
  • Power Automate Bootcamp Day 2, Sun 7th February
  • RPA & UI Test Automation Bootcamp, Sat 13th February
  • Azure Automation Bootcamp, Sat 27th February
  • Powershell Saturday Bootcamp, Sat 20th February

You can sign up anytime, hope to see you at my session and catch any of the other great sessions 🙂

Protect Logic Apps with Azure AD OAuth – Part 3 Connect to API from Power Platform

In this article I’m going to build on my previous blog posts in this series, where I have written about how to add Azure AD OAuth authentication and authorization to your Logic Apps and expose them as an API. For reference, the links to those blog posts are here:

If you want to connect to APIs using Power Platform (Power Automate Flows, Power Apps etc.), you can do this in two different ways:

  • Using HTTP action and send requests that use Azure AD OAuth authentication. This will use the “Client Credentials” OAuth flow, and is suitable for calling the API using application permissions and roles.
  • Setting up a Custom Connector for the API, and using the HTTP logic app trigger as operation. This will use the “Authorization Code” OAuth flow, and is suitable for using delegated permissions and scopes for the logged on user via connections.

So it depends on how you want your Power Platform users to be able to send requests to your Logic App API. Should they do this as themselves with their logged-on user, or should they use an application identity? There are use cases for both, so I will show both in this article.
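For context, the first option (the Client Credentials flow used with the HTTP action) boils down to a token request like the sketch below. The tenant id, client id/secret and Application ID URI are placeholders for values from your own app registrations:

# Client Credentials grant against the Azure AD v2.0 token endpoint (placeholder values)
$tenantId = "<your tenant id>"
$body = @{
    client_id     = "<client app id>"
    client_secret = "<client secret>"
    scope         = "<api application id uri>/.default"
    grant_type    = "client_credentials"
}
$tokenResponse = Invoke-RestMethod -Method Post -Body $body `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
# The returned access token is then sent as a Bearer token in the Authorization header of the request
$tokenResponse.access_token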

Connect to Logic App API using Custom Connector

Using Custom Connectors is a great way to use your own identity for sending requests to an API. This way you can also securely share Custom Connectors, and the Flows/Power Apps using them, in your organization, without needing to share sensitive credentials like client ids and client secrets.

If you want to create a Custom Connector in Power Platform that triggers an HTTP request to a Logic App, you can currently do this in one of the following ways:

  1. Creating Custom Connector using Azure services and Logic App.
  2. Exporting the Logic App to a Power Platform environment.
  3. Creating the Custom Connector using an OpenAPI swagger definition file/url.
  4. Creating the Custom Connector from blank.

Let’s take a quick look at each of these, but first we need to take care of some permissions in Azure for creating the Custom Connector automatically.

Azure Permissions for Logic Apps and listing swagger

There are some minimum permissions your user needs to be able to create a Custom Connector automatically by browsing the Azure Service.

A good starting point is using one of the built in Azure roles for Logic Apps:

But even these do not have the permissions necessary, if you try you will get an error similar to the following:

So we need to add this ../listSwagger/action permission for the scope, and while you could give the user full Contributor access, that seems rather excessive. Let’s create a custom role instead. I will do this using Azure PowerShell; for reference please see the docs: Create or update Azure custom roles using Azure PowerShell – Azure RBAC | Microsoft Docs.

After connecting to Azure using an Azure account that can create custom roles for the scope (Owner or User Access Administrator), start by exporting an existing role as a starting point:

# 1. Export a JSON template as a reference based on an existing role
Get-AzRoleDefinition -Name "Logic App Operator" | ConvertTo-Json | Out-File .\LogicAppAPIOperator.json

Then edit this JSON file by removing the Id parameter, defining a Name, setting IsCustom to true, and setting Description to something descriptive like below. I have also set my Azure subscription id under assignable scopes, and added the required ../listSwagger/action:

{
  "Name": "Logic App API Operator",
  "IsCustom": true,
  "Description": "Lets you read, enable and disable logic app, and list swagger actions for API.",
  "Actions": [
    "Microsoft.Authorization/*/read",
    "Microsoft.Insights/alertRules/*/read",
    "Microsoft.Insights/metricAlerts/*/read",
    "Microsoft.Insights/diagnosticSettings/*/read",
    "Microsoft.Insights/metricDefinitions/*/read",
    "Microsoft.Logic/*/read",
    "Microsoft.Logic/workflows/disable/action",
    "Microsoft.Logic/workflows/enable/action",
    "Microsoft.Logic/workflows/validate/action",
    "Microsoft.Resources/deployments/operations/read",
    "Microsoft.Resources/subscriptions/operationresults/read",
    "Microsoft.Resources/subscriptions/resourceGroups/read",
    "Microsoft.Support/*",
    "Microsoft.Web/connectionGateways/*/read",
    "Microsoft.Web/connections/*/read",
    "Microsoft.Web/customApis/*/read",
    "Microsoft.Web/serverFarms/read",
    "Microsoft.Logic/workflows/listSwagger/action"
  ],
  "NotActions": [],
  "DataActions": [],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/<my azure sub id>"
  ]
}

After that you can create the custom role:

# 3. Create the new custom role:
New-AzRoleDefinition -InputFile .\LogicAppAPIOperator.json

This role can now be assigned to the Power Platform user(s) that need it, using the scope of your Logic Apps, for example the resource group. You can either add the role assignment to the user directly, or preferably use Azure PIM:
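If you want to script the direct role assignment rather than clicking through the portal or PIM, a sketch with Az PowerShell could look like this (the user and resource group are placeholders):

# Assign the custom role to a Power Platform user at resource group scope
New-AzRoleAssignment -SignInName "user@yourtenant.onmicrosoft.com" `
    -RoleDefinitionName "Logic App API Operator" `
    -ResourceGroupName "<resource group with your logic apps>"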

With the correct permissions now in place, you can proceed to the next step.

Creating Custom Connector using Azure services and Logic App

Log in to Power Apps or Power Automate using your Power Platform user. Under Data and Custom Connectors, select to create a New custom connector. From there select “Create from Azure Service” (currently in preview):

Next, type a name for your Custom Connector, select your Azure subscription, and select which Azure service to use, in this case Logic Apps. Note that you can create from Azure Functions and Azure API Management as well. Then select the Logic App name from the list:

If you don’t see or get any errors here, verify your permissions.

Click Continue, and you will see something like the following, where the Host and Base URL have automatically been set correctly for your Logic App HTTP trigger. If you want, you can upload an icon and set a background color and description as well:

Click Next to go to Security. Here we need to change the Authentication to OAuth 2.0, as this is what we have implemented for authorizing requests to the Logic App. To authenticate and get the correct access token, we will reuse the LogicApp Client app registration that we created in the previous blog post. Copy the Application (Client) ID and Tenant ID:

And then create and copy a new secret for using in the Power Platform Custom Connector:

Fill in the rest of the OAuth 2.0 details for your environment like below:

Note from above that we need to specify the correct resource (the backend API) and scope. This is all very well described in the previous blog post.

Click Create Connector to save the Connector details. Make sure that you copy the Redirect URL:

And add that to the App Registration redirect URLs:

Next, under the Custom Connector proceed to step 3. Definition. This is where the POST request trigger will be, and it should already be populated with an action:

We need to specify a value for Summary, in my case I will type “Get Managed Devices”:

Next, under Request Query, remove the “sp”, “sv” and “sig” parameters, as these are not needed as long as we are using the OAuth2 authorization scheme:

The Body parameter should be correctly specified, expecting operatingSystem, osVersion and userUpn as request body parameters:

Last, let’s check the Responses from the Logic App. They have been successfully configured with status 200 (OK) and 403 (Not Authorized), as these two responses were defined in the Logic App.

If the Response body is empty like below, we need to import a sample output from the Logic App (the response body would have been configured automatically if the Response action in the Logic App had a response schema defined):

For response status 200, the sample import is:

[
  {
    "deviceName": "",
    "manufacturerer": "",
    "model": "",
    "operatingSystem": "",
    "osVersion": "",
    "userDisplayName": "",
    "userPrincipalName": ""
  }
]

Giving this response definition:

The sample response from the 403 not authorized should be:

{
  "Message": "",
  "Roles Required": "",
  "Roles in Token": "",
  "Scopes Required": "",
  "Scopes in Token": ""
}

Giving this response definition:

NB! It’s important to have the correct responses defined like above, as it will make it easier to consume those responses later in Power Automate Flows and Power Apps.

Click on Update Connector, and then go to the next section, 4. Test.

We can now test the Logic App trigger via the Custom Connector; first we need to create a new connection:

After logging in, and if needed consenting to the permission scopes (see the previous blog post for details), we should have a connection. We can now test the trigger by supplying the api-version (2016-10-01) and specifying the operatingSystem, osVersion and userUpn parameters:

Click Test operation and verify a successful response like below:

Let’s try another test, this time leaving the userUpn blank (from the previous blog post this means that the Logic App tries to return all managed devices, if the user has the correct scope and/or roles). This time I get a 403 not authorized, which is expected as I don’t have the correct scope/role:

Checking the Logic App run history, I can see that my Power Platform user triggered the Logic App, and I can see the expected scp and roles claims:

Perfect so far! At the end of this blog post article I will show how we can get this response data to a Power App via a Flow and the Custom Connector, but first let’s look into the other ways of creating a Custom Connector.

Exporting the Logic App to a Power Platform environment

In the previous example, I created the Custom Connector from my Power Platform environment; in this example I will do an Export from the Logic App. The user I do this with needs to be licensed for Power Platform and have access to the environments, or else I will get this:

To Export, click this button:

Then fill in the name of the Custom Connector to create, I will call this ..v2, and select environment:

You might get another permission error:

If so, we need to update the Custom Role created earlier with this permission. Do the following:

# 3b. Update the custom role
$roleLogicAppAPIOperator = Get-AzRoleDefinition -Name "Logic App API Operator"
$roleLogicAppAPIOperator.Actions.Add("Microsoft.Logic/workflows/triggers/listCallbackUrl/action")
Set-AzRoleDefinition -Role $roleLogicAppAPIOperator

The role and the assignment should now be updated, so we can try again. You might need to refresh or log out and in again for the permission to be updated. After this the Export should be successful:

We can now find the Custom Connector right below the first we created:

We still need to edit the Custom Connector with the authentication details, adding the app/client id, secret etc. The export has left out the query parameters (sv, sp, sig), but also the required api-version. The latter must be fixed; the easiest way is to switch to the Swagger Editor and add lines 15 and 16 as shown below:

After this you should be able to Update the Connector, and then Test, create a Connection and verify successful results.

Creating the Custom Connector using an OpenAPI swagger definition file/url

Both examples above, either importing a Custom Connector from Azure Service, or exporting the Logic App to a Custom Connector in a Power Platform environment, require that the user doing this has both:

  • Azure RBAC role assignment and permissions as detailed above.
  • Access to Power Platform environment and licensed for using Power Platform.

What if you as Azure administrator don’t want your Power Platform users to have access to Azure, but you still want to help them with creating Custom Connectors that send requests to selected Logic App workflow APIs?

Then you can provide them with an OpenAPI swagger definition file or url. You can get the swagger OpenAPI definition by running this Azure REST request: Workflows – List Swagger (Azure Logic Apps) | Microsoft Docs.

To get the swagger, you can look at the first blog post in this series, where I showed how you can use Az PowerShell to get management access tokens using Get-AzAccessToken and run Invoke-RestMethod. In this example I’m just going to use the Try it button from the Docs link above, then authenticate to my Azure subscription and fill in the required parameters:

Running this request should produce your requested swagger OpenAPI definition. You can now copy this to a file:
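If you want to script this instead, a minimal Az PowerShell sketch using Get-AzAccessToken and Invoke-RestMethod (as referenced from the first blog post) could look like the following; the subscription, resource group and workflow names are placeholders, and the api-version follows the Docs reference above:

# Request the swagger definition for the Logic App workflow using the management REST API (placeholder values)
$subscriptionId = "<my azure sub id>"
$resourceGroup = "<resource group name>"
$workflowName = "<logic app name>"
$accessToken = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token
$uri = "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Logic/workflows/$workflowName/listSwagger?api-version=2016-06-01"
Invoke-RestMethod -Method Post -Uri $uri -Headers @{ Authorization = "Bearer $accessToken" } | ConvertTo-Json -Depth 20 | Out-File .\LogicAppSwagger.json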

Before you share this OpenAPI file with your Power Platform developers, you should edit it and remove the following request query parameters, as these are not needed when using the OAuth2 authorization scheme:

After this you can create a new Custom Connector, by specifying an OpenAPI file / url, depending on where you made the file available to your Power Platform Developers.

Browse to the filename and type a Connector name:

After this you will have the basis of the Connector defined, where you can customize general settings etc:

You will now need to add the authentication for Azure AD OAuth2 with client id, secret etc. under Security, as well as creating the connector, and under Test create a connection and test the operations. This is the same as I showed earlier, so I won’t repeat the details here.

Creating the Custom Connector from blank

You can of course create Power Platform Custom Connectors from blank as well. This should be easy enough based on the details I have provided above, but basically you will need to make sure to set the correct Host and Base URL path for your Logic App here:

After adding the authentication details for Azure AD OAuth2 (same as before), you will need to manually add actions, providing a request from a sample, as well as defining the default responses for the 200 and 403 statuses, as shown in the earlier steps.

With the Custom Connector in place for sending requests to the Logic App using delegated authentication, we can now start using this Connector in our Flows and Power Apps.

Let’s build a quick sample of that.

Creating a Power Automate Flow that will trigger Logic App API

I’ll just assume readers of this blog post know a thing or two about Power Automate and Cloud Flows, so I’ll try to keep this high level.

I’ve created a new instant Cloud Flow, using PowerApps as a trigger. Then I add three Initialize variable actions, giving the variables and actions names like below, before I set “Ask in PowerApps” for the values:

Next, add a Custom Connector action, selecting the Custom Connector we created earlier (I have 3 versions here, as shown above with the alternative ways to set up a Custom Connector from Logic Apps):

Next, select the action from the chosen Custom Connector, and fill in the parameters like below:

Next, we need to check the response status code we get back from the Custom Connector. Add a Switch control action, where we will check against the outputs from the Get Managed Devices action and its statusCode:

Make sure that the Switch action is configured to run whether the Get Managed Devices action has succeeded or failed:

For each statusCode case we check against, we need a Response action to return data back to the PowerApp. For status code 200 OK, I’ll return the Status Code as shown below, adding a Content-Type application/json header, and using the Body output from the Get Managed Devices custom connector action. The Response Body JSON schema is based on the sample output from the Logic App API.

PS! To get the Body output, you can use the following custom expression: outputs('Get_Managed_Devices')?['body']:

For status code 403 I have added the following Case and Response action, using the Body output again, but this time the schema is based on the 403 response from the Logic App API:

Last, as every case should have a Response, I’ll add the following Default Case:

The whole Flow visualized:

PS! Instead of using the Response action, I could also have used the “Respond to PowerApps” action. However, that action only lets me return text strings, numbers, booleans etc., and I wanted to return a native JSON response.

Make sure that you test and verify the Flow before you proceed.

Creating the Power App to connect to the Flow

With the Flow ready, let’s quickly build a PowerApp. My PowerApp is a Canvas App, and I have been using the Phone layout.

You can build this any way you want, but I used a dark theme, added an icon at the top of the screen, and 3 labels and text inputs for the parameters needed for the Flow. I then added a button for triggering the Flow, and under the button I have added a (currently hidden) text label for showing any error messages from the Flow. And I have added a Gallery control below that again for showing the resulting devices:

For the Button, select it, click on the Action menu and then Power Automate to connect your Flow. Next, change the “OnSelect” event for the Button to the following command:

Set(wait,true);
Clear(MyManagedDevices);Set(MyErrorMessage,Blank());
Set(MyDeviceResponse,GetManagedDevices.Run(textOperatingSystem.Text,textOSVersion.Text,textUserUpn.Text));
If(IsBlank(MyDeviceResponse),Set(MyErrorMessage,"Authorization Error: Check Flow for details on missing scope or roles claims for querying organization devices."),ClearCollect(MyManagedDevices,MyDeviceResponse));
Set(wait,!true)

A quick explanation of the commands above:

  • Set(wait,true) and Set(wait,!true) are there to make the PowerApp “busy” while the Flow runs.
  • I then Clear the Collection and Variable used.
  • I then use Set to get a “MyDeviceResponse”; this will return a collection of items (devices) returned via a JSON array from the Flow, or if I’m not authorized, it will return a failed response (based on the 403) and a blank MyDeviceResponse.
  • Next I do an If test: if the MyDeviceResponse is Blank, I’ll set the MyErrorMessage variable, and if it’s not blank I run a ClearCollect and fill the Collection with the returned devices.

I fully appreciate that there might be other ways to do this fail checking and error handling, please let me know in the comments if you have other suggestions 🙂

For the Gallery I set the Data source to the MyManagedDevices collection, and I have selected to use the layout “Title, subtitle, and body”. You can change the device data that gets filled in for these items in the Gallery, for example Manufacturer, Version, Name etc.

And last I set the Text property of my error message label to the MyErrorMessage variable:

Let’s Save, Publish and test this PowerApp!

First, I’ll try to add parameters for getting all Windows 10 devices, leaving user principal name blank. This will, via the Custom Connector, send a request to the Logic App API to return all devices, and in this case I’m not authorized to do so, so I’ll get an authorization error:

This is something I can verify in the Flow run history also:

Next, I’ll try to return my test users devices only, and this is successful and will fill the gallery:

We now have a working Flow and PowerApp connected to the Logic App API using the signed-in user’s delegated permissions. If I want, I can now share the PowerApp and Flow, including the Custom Connector, with other users in my organization, and they can use their own user identity for connections to the Logic App API.

In the next part of this blog post I will show how you can access the Logic App API using an HTTP action and application permissions.

Connect to Logic App API using HTTP action

Sometimes you will have scenarios where you want to use an application identity to call an API like the Logic App I have used in this blog post article series. This is especially useful if you want to run a Power Automate Flow without a logged in user’s permissions.

In the previous blog post (part 2), where I exposed the Logic App as an API, I created this App Registration to represent the Application Client scenarios and Application permissions:

In that App Registration, create a new Client Secret for use in Power Automate, and copy this to your clipboard:

Make sure to copy the Application (Client) ID and Tenant ID also:

Now let’s create a new Power Automate Flow to test this scenario. This type of Flow could use a range of different triggers based on your needs, but I’ll just use an Instant Cloud Flow as the trigger, where I have configured the following inputs:

Note that I have configured userUpn as an optional input.

Next add a “Compose” action for the Client Secret, give the action a name and paste in the Client Secret you created earlier. Note the Lock symbol:

Click on settings and select to Secure Inputs:

Next add an HTTP action, specifying the Method as POST and the URI as the Logic App API url, remembering not to include the sv, sp and sig query parameters. Set the Content-Type header to application/json, and under queries add the api-version. For the body, build the JSON request body using the inputs. We need to build a dynamic expression for userUpn, as this can be optional. I have used the following expression:

if(not(empty(triggerBody()?['text_2'])),triggerBody()?['text_2'],'')

Click to show advanced settings, and choose Authentication to use Azure AD OAuth. Add the authority, tenant id and set audience to the custom Logic App API URI. Then paste in the Application (Client) Id, and use the Outputs from the Compose Client Secret action:

This authentication above will use the Client Credentials Flow to get an access token that will be accepted by the Logic App API.
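For comparison, the same Client Credentials Flow can be reproduced outside Power Automate. Below is a minimal PowerShell sketch, assuming the custom API App ID URI api://elven-logicapp-api from earlier in this series; the tenant id, client id, secret and Logic App URL are placeholders:

# Get an access token with the Client Credentials Flow (v2.0 endpoint) and call the Logic App API (placeholder values)
$tenantId = "<tenant id>"
$clientId = "<application (client) id>"
$clientSecret = "<client secret>"
$tokenResponse = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" -Body @{
    grant_type    = "client_credentials"
    client_id     = $clientId
    client_secret = $clientSecret
    scope         = "api://elven-logicapp-api/.default"
}
$headers = @{ Authorization = "Bearer $($tokenResponse.access_token)" }
$body = '{ "operatingSystem": "Windows", "osVersion": "10", "userUpn": "" }'
Invoke-RestMethod -Method Post -Uri "<logic app api url including api-version>" -Headers $headers -ContentType "application/json" -Body $body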

The remaining parts of this Flow can be exactly the same as the previous Flow we built, a Switch control that continues on success/failure of the HTTP action:

And then returning Response objects from the HTTP action body for each case:

When testing the Flow now, I can see that the client secret is hidden from all relevant actions:

Summary and Next Steps

We are at the end of another extensive blog post. The focus for this article has been to show how you can use Power Automate to connect to your custom API, which we built in the previous blog post by exposing the Logic App as an API.

The community is increasingly creating Power Platform custom connectors and HTTP actions that send requests to APIs like Microsoft Graph directly, and that is great, but it might result in overly broad permissions being granted to users and application clients. My focus has been to show how you can control authentication and authorization using on-behalf-of flows hidden behind a Logic App API, where users and clients are allowed to send requests based on allowed permission scopes and/or roles, using the powers of Azure Active Directory and OAuth2.

There will be a later blog post in this series also, where I look into how Azure API Management can be used in these scenarios as well.

In the meantime, thanks for reading, hope it has been helpful!

How to Send Requests to GitHub API from Power Platform using Custom Connector

Recently I came across a personal scenario where I use Hugo and GitHub Pages as a team site for a Soccer team I’m coaching and wanted to automate some updates to the web site. I’ve written a blog post previously on how I organized trainings at home using Power Platform: How I as a Soccer Coach…. | GoToGuy Blog, and I am now using Github Pages and Hugo for publishing some statistics and more for that scenario.

In this blog post I will show how I:

  1. Created an OAuth Application for Github API.
  2. Created a Custom Connector in Power Platform for connections to that OAuth Application.
  3. Created Operations for getting content, updating content and triggering workflows for Github Actions.
  4. Connected to Github API using my Azure AD account and user impersonation.
  5. Created a Power Automate Cloud Flow for using the Custom Connector and the defined operations.

Let’s get started!

Create OAuth Application for Github API

Start by logging in to your GitHub account and go to Settings. Under Settings you will find Developer Settings where you can access OAuth Apps. You can also go directly to the following URL https://github.com/settings/developers.

Click to Register a new application, and fill in something like the following:

As the above image shows, give the application a descriptive name for your scenario. You can type any homepage URL, as this is not important in this scenario. The authorization callback URL is important though, as this will be the callback to the Custom Connector we will create later. We can verify the URL later, but use https://global.consent.azure-apim.net/redirect.

Register the application. Next you can change the settings for the registered app. You will have to copy the Client ID, we will need that later. You also need to create a Client Secret, and make sure to copy that as well, as you will only be able to see this once. You can also change some settings like name, logo and branding if you like. This is how my GitHub App registration looks now:

We can now proceed to Power Platform to create the Custom Connector.

Create Custom Connector to Github API in Power Platform

Log in to your Power Platform environment, and go to Custom Connectors under Data. Click to create a New custom connector. You can select to create from blank if you want to follow along with the steps in my blog post here, or you can select to import an OpenAPI file or URL, as I will provide the swagger file at the end of this blog post.

Give the connector a name of your choice and continue:

Next you need to specify “api.github.com” as host. You can also optionally upload a connector icon, as I have done here:

(You can grab the mark logo used above from here, GitHub Logos and Usage, note the usage requirements).

Next, go to Security. Select OAuth 2.0 as the authentication type, and then select GitHub as the Identity Provider.

(PS! You can select Generic OAuth 2 as well, but it will fall back to GitHub as the Identity Provider anyway).

Add your Client ID and Secret from the Github OAuth application registration:

It is important to configure the correct scope (or scopes), as this will authorize the client for accessing the API. If you leave the scope blank, you will only get public read-only access. You can read more on available scopes here: Scopes for OAuth Apps – GitHub Docs

In my case I want to have full read and write access to public repositories, as well as read/write access to the user profile, so I set the scope to “public_repo user” (use a space delimiter for multiple scopes):

I can now click “Create connector”. After creating, the security details are hidden/disabled, and I can verify that the Redirect URL is the same as the Callback URL from the GitHub OAuth app registration:

We can now start defining the operations for the actions I want to do against the GitHub API.

Create Operations for sending requests to GitHub API

When querying and sending requests to the GitHub API you need to know the API details and required parameters for what you want. The following link is for the official GitHub Rest API reference: Reference – GitHub Docs.

In my example I want to define the following 3 operations in my Custom Connector:

Under 3. Definition, select to create a New action, and call it something like “Get Repository Content” with the Operation ID set to “GetRepositoryContent”:

Then, under Request, click Import from sample. Select the Verb GET, and under URL type https://api.github.com. The rest of the query we will get from the GitHub API docs. Copy the following from the REST API reference docs:

So that your sample request now looks like this; remember to add the recommended Accept header:

Click Import. The request will now ask for owner, repo and path as parameters:
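If you want to see what this operation returns before testing it through the connector, you can also call the endpoint directly. A minimal PowerShell sketch for a public repository (owner, repo and path are placeholder values) could be:

# Call the GitHub get repository content endpoint directly for a public repository (placeholder values)
$owner = "your-github-account"
$repo = "your-repo"
$path = "README.md"
Invoke-RestMethod -Method Get -Uri "https://api.github.com/repos/$owner/$repo/contents/$path" -Headers @{ Accept = "application/vnd.github.v3+json" }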

Next, click the default response. Here you can copy the sample response from the REST API docs; I’ve copied the sample response for getting file contents:

After that click “Update connector” and we have the first action operation defined.

Click New action again, this time for updating file contents:

For the sample request the Verb is PUT, the URL is the same as when getting file content, but now we need to specify a request body as well:

I’ve created the sample request body based on the docs reference, with just empty placeholder values for the parameters needed. Some of these can be omitted, but message, content, sha and branch are required for updating an existing file:

{
 "message": "",
 "content": "",
 "sha": "",
 "branch": "",
 "committer": {
  "name": "",
  "email": "",
  "author": {
   "name": "",
   "email": ""
  }
 }
}

After importing the sample request, you can click into the body parameter and change the body itself to required, as well as the payload parameters that you always want to include, as shown below:

Add a sample default response as well, I’ve copied the example response for updating a file from the docs.

Click “Update connector” again and we are ready to add the third action:

This will be a POST request, with the following URL and request body:

Note from above that “ref” needs to reference a branch or tag name, as it is a required parameter. “Inputs” is an object, depending on whether your GitHub Actions workflow has incoming parameters defined, so in many cases this can be empty.

You can leave the default response as it is, as the API will return 204 No Content if the request is successful.

Click on “Update connector” again, and you should now have 3 actions successfully configured.

We can now proceed to create a connection and authenticate to GitHub API using this custom connector.

Connect to Github API using my Azure AD account and user impersonation

Go to “4. Test”, and click to create a “New connection”. This will open an authentication popup, and if you’re not already logged in to GitHub you must log in first. Note the correct reference and branding to the “Elven Power Platform OAuth App”:

After logging in I’m prompted to authorize the OAuth app to access data in my account. Note that the scopes “public_repo” and “user” are shown in the authorization request:

If you own other organizations, you can grant access to those as well. Click Authorize “OwnerName” as shown below:

After authorizing you will be redirected back to the Connections, and you should be able to successfully get a new connection object.

Let’s take a look at GitHub settings again, under https://github.com/settings/applications. You should see the OAuth App and the correct permissions configured if you click into details. You can also revoke the access if you need to remove it or reconfigure the scopes for example:

Let’s do a test from the Custom Connector and see what we get. Click on GetRepositoryContent, and provide the parameters for “owner” (your GitHub account name), “repo” (any repository, I’m using my GitHub Pages repo here), and a “path” to an existing file in that repo (I’m just testing against my README.md at root, but this can be any subfolder\file also). Click Test operation and see:

This should be successful, note that the response contains a couple of important values for later, the “sha” for the existing file, and the “content” which is a base64 representation of the current contents of the README.md file.
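Since the content value is base64 encoded, a quick PowerShell sketch for decoding it while testing could look like this (assuming the test response has been saved in a $response variable); this is the same conversion the base64ToString expression will do in the Flow later:

# Decode the base64 "content" property from the GetRepositoryContent response
$decodedContent = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($response.content))
$decodedContent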

Click on the Request tab, and you will see a preview of how the request was constructed. You will also see the Authorization Header with the Bearer Token:

A couple of important things to note:

  • The request uses an API gateway in Azure APIM, not GitHub directly.
  • The Bearer Token in the Authorization Header is for the Azure API GW audience, so it cannot be used directly against GitHub API.

Copy the entire token value, from after “Bearer <token……>”, and paste it into a JWT debugger like jwt.io. From there we can look at the decoded payload:
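If you prefer to inspect the token without pasting it into a web site, a small PowerShell sketch can decode the payload locally (the token value is a placeholder):

# Decode the payload (middle part) of a JWT, converting from base64url and padding to valid base64
$jwt = "<paste the copied token value here>"
$payload = $jwt.Split('.')[1].Replace('-', '+').Replace('_', '/')
while ($payload.Length % 4) { $payload += '=' }
[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($payload)) | ConvertFrom-Json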

From that payload it’s clear that the Token has been issued by my Azure AD tenant for my logged-on user in Power Platform. The scope is user_impersonation, so this will be used in an on-behalf-of flow scenario via the audience defined as apihub.azure.com, which in turn will request GitHub API resources on my behalf via the APIM gateway used by Power Platform.

You can also look up the appid from the Token in the Azure AD tenant, and you will find the following Enterprise Application, from where you can enable or disable it on an organization level, or examine the sign-in logs:

We can test the other operations as well, but let’s create a Flow for that scenario.

Create a Power Automate Cloud Flow for using the Custom Connector to Get and Update File Content

Create a new Cloud Flow, using an instant trigger for manually triggering a flow. Add some inputs like shown below:

Next, add a new action and from under Custom find the GitHub Custom Connector:

Add the “Get Repository Content” action and then fill in the inputs like below:

Next, add a Compose action, with the following dynamic expression:

base64ToString(outputs('Get_Repository_Content')?['body/content'])

This is just for checking what the existing file content is:

We can do a quick Save and then Test the Flow so far. From the Run history I should see the correct inputs, and when the existing file is found the outputs will include the sha value of the existing file, as well as the base64 encoded value of the content:

And when looking at the decoded content I can see that the readme.md file content is shown correctly:


Go into Flow edit mode again, and add another Compose action; this time we need to base64 encode the new content I want to update the file with:

Note that the base64 function takes the trigger input as its parameter, base64(triggerBody()?['text']), as this is the first text parameter of the trigger.

Add a new action, this time for the Custom Connector again, and the Update File Contents operation. Specify the owner, repo and path from the previous input values, type a custom message for the message, select the outputs from the “Base64 Updated Content” action for content, and use the sha value from “Get Repository Content”. The rest of the values (the committer and author objects) are optional:

Save and then do another test, for example like the following to update the README.md file:

And the test should be successful:

I can also verify this at my repository and check the file has been updated. Note also the commit message:

Triggering a GitHub Actions Workflow

The last thing I wanted to go through in this blog post is using the Power Platform Custom Connector to trigger a GitHub Actions workflow. My use case for this is to start a Hugo build when I have dynamically updated files for my static website, but for now I will keep it simple.

I have via a basic template created a simple workflow like this:

This workflow can also be triggered manually using workflow_dispatch, so let’s use that to verify that I can call it from Power Platform.

Add a new action at the end of the Flow, adding the Custom Connector action for Dispatching Workflow event:

Specify Owner and Repo from inputs, and for workflow id either specify ID or the name of the workflow file, in this case blank.yml. The ref parameter is either a branch or tag name, so in my case I use main branch. I leave the other parameters blank as I don’t have any inputs to supply, and use the default Accept header.

Save and Test the Flow again, supplying an updated file content, owner, repo and path similar to what we did previously. When the Flow runs it should complete successfully:

If I go to my GitHub repository, and under Actions, I can see that this workflow has been triggered:

Actually it has been triggered twice, as the first trigger is automatic for the push commit on the file update, and the other (named “CI” in results) is the actual workflow dispatch from the Flow.

Basically this means that I can select some different logic to when my workflows will trigger, either as a push or pull trigger, or as a trigger event based on my Flows. But of course I won’t normally run both triggers 😉

I now have what I need for working further with my personal Hugo and GitHub Pages project. My plan is to update data and asset files from my Power Platform environment, and then trigger a Hugo build for my website. I might blog more on that process later.

Summary and some last thoughts

In this blog post I wanted to show how you can work with the GitHub REST API via a Power Platform Custom Connector. This way you can basically achieve anything the GitHub API has available, provided the correct scope or scopes have been authorized.

I do want to mention however that there is a GitHub Connector you can use directly in Power Automate, Logic Apps, or Power Apps also: GitHub – Connectors | Microsoft Docs, where you can create a direct connection to your GitHub account. You should take a look at that if it can serve your needs.

In my case I needed the API to get or update file contents directly. In addition, when using impersonation, people in my organization can use their own Azure AD accounts if I share the Custom Connector with them; they don’t need their own GitHub accounts as long as the OAuth App has been authorized on my behalf.

If you want a quickstart on creating the Custom Connector yourself, below is the Swagger definition. Thanks for reading, hope it has been useful!

swagger: '2.0'
info: {title: JanVidarElven Github Connector, description: GitHub API Connector for JanVidarElven, version: '1.0'}
host: api.github.com
basePath: /
schemes: [https]
consumes: []
produces: []
paths:
  /repos/{owner}/{repo}/contents/{path}:
    get:
      responses:
        default:
          description: default
          schema:
            type: object
            properties:
              type: {type: string, description: type}
              encoding: {type: string, description: encoding}
              size: {type: integer, format: int32, description: size}
              name: {type: string, description: name}
              path: {type: string, description: path}
              content: {type: string, description: content}
              sha: {type: string, description: sha}
              url: {type: string, description: url}
              git_url: {type: string, description: git_url}
              html_url: {type: string, description: html_url}
              download_url: {type: string, description: download_url}
              _links:
                type: object
                properties:
                  git: {type: string, description: git}
                  self: {type: string, description: self}
                  html: {type: string, description: html}
                description: _links
      summary: Get Repository Content
      operationId: GetRepositoryContent
      description: Get File or Folder Content from Repository
      parameters:
        - {name: owner, in: path, required: true, type: string}
        - {name: repo, in: path, required: true, type: string}
        - {name: path, in: path, required: true, type: string}
        - {name: Accept, in: header, required: false, type: string}
    put:
      responses:
        default:
          description: default
          schema:
            type: object
            properties:
              content:
                type: object
                properties:
                  name: {type: string, description: name}
                  path: {type: string, description: path}
                  sha: {type: string, description: sha}
                  size: {type: integer, format: int32, description: size}
                  url: {type: string, description: url}
                  html_url: {type: string, description: html_url}
                  git_url: {type: string, description: git_url}
                  download_url: {type: string, description: download_url}
                  type: {type: string, description: type}
                  _links:
                    type: object
                    properties:
                      self: {type: string, description: self}
                      git: {type: string, description: git}
                      html: {type: string, description: html}
                    description: _links
                description: content
              commit:
                type: object
                properties:
                  sha: {type: string, description: sha}
                  node_id: {type: string, description: node_id}
                  url: {type: string, description: url}
                  html_url: {type: string, description: html_url}
                  author:
                    type: object
                    properties:
                      date: {type: string, description: date}
                      name: {type: string, description: name}
                      email: {type: string, description: email}
                    description: author
                  committer:
                    type: object
                    properties:
                      date: {type: string, description: date}
                      name: {type: string, description: name}
                      email: {type: string, description: email}
                    description: committer
                  message: {type: string, description: message}
                  tree:
                    type: object
                    properties:
                      url: {type: string, description: url}
                      sha: {type: string, description: sha}
                    description: tree
                  parents:
                    type: array
                    items:
                      type: object
                      properties:
                        url: {type: string, description: url}
                        html_url: {type: string, description: html_url}
                        sha: {type: string, description: sha}
                    description: parents
                  verification:
                    type: object
                    properties:
                      verified: {type: boolean, description: verified}
                      reason: {type: string, description: reason}
                      signature: {type: string, description: signature}
                      payload: {type: string, description: payload}
                    description: verification
                description: commit
      summary: Update File Contents
      description: Update existing file in repository
      operationId: UpdateFileContents
      parameters:
        - {name: owner, in: path, required: true, type: string}
        - {name: repo, in: path, required: true, type: string}
        - {name: path, in: path, required: true, type: string}
        - {name: Accept, in: header, required: false, type: string}
        - name: body
          in: body
          required: true
          schema:
            type: object
            properties:
              message: {type: string, description: message, title: ''}
              content: {type: string, description: content, title: ''}
              sha: {type: string, description: sha, title: ''}
              branch: {type: string, description: branch, title: ''}
              committer:
                type: object
                properties:
                  name: {type: string, description: name}
                  email: {type: string, description: email}
                  author:
                    type: object
                    properties:
                      name: {type: string, description: name}
                      email: {type: string, description: email}
                    description: author
                description: committer
            required: [branch, content, message, sha]
  /repos/{owner}/{repo}/actions/workflows/{workflow_id}/dispatches:
    post:
      responses:
        default:
          description: default
          schema: {}
      summary: Dispatch Workflow Event
      operationId: DispatchWorkflowEvent
      description: Trigger a GitHub Actions Workflow by ID
      parameters:
        - {name: owner, in: path, required: true, type: string}
        - {name: repo, in: path, required: true, type: string}
        - {name: workflow_id, in: path, required: true, type: string}
        - {name: Accept, in: header, required: false, type: string}
        - name: body
          in: body
          required: true
          schema:
            type: object
            properties:
              ref: {type: string, description: ref, title: ''}
              inputs:
                type: object
                properties: {}
                description: inputs
            required: [ref]
definitions: {}
parameters: {}
responses: {}
securityDefinitions:
  oauth2_auth:
    type: oauth2
    flow: accessCode
    authorizationUrl: https://github.com/login/oauth/authorize
    tokenUrl: https://login.windows.net/common/oauth2/authorize
    scopes: {public_repo user: public_repo user}
security:
  - oauth2_auth: [public_repo user]
tags: []

Protect Logic Apps with Azure AD OAuth – Part 2 Expose Logic App as API

This blog article will build on the previous blog post published, Protect Logic Apps with Azure AD OAuth – Part 1 Management Access | GoToGuy Blog, which provided some basic understanding around authorizing Logic App request triggers using OAuth and Access Tokens.

In this blog I will build on that, creating a scenario where a Logic App will be exposed as an API to end users. In this API, I will call another popular API: Microsoft Graph.

My scenario will use a case where end users do not themselves have access to certain Microsoft Graph requests, but where the Logic App does. Exposing the Logic App as an API will let users authenticate and authorize, requesting and consenting to the custom Logic App API permissions I choose. Some of these permissions users can consent to themselves, while others must be admin consented. This way I can do some authorization inside the Logic App, and only let the end users request what they are permitted to.

I will also look into assigning users and groups, and using scopes and roles for additional fine-grained control of end user and principal access to the Logic App.

A lot of topics to cover, so let’s get started by first creating the scenario for the Logic App.

Logic App calling Microsoft Graph API

A Logic App can run requests against the Microsoft Graph API using the HTTP action and specifying the method (GET, POST, etc) and resource URI. For authentication against Graph from the Logic App you can use either:

  • Using Azure Active Directory OAuth and Client Credentials Flow with Client Id and Secret.
  • Using System or User Assigned Managed Identity.

Permissions for Microsoft Graph API are either “delegated” (in the context of the logged-in user) or “application” (in the context of an application/daemon service). These scenarios using a Logic App will use application permissions for Microsoft Graph.

PS! Using Logic Apps Custom Connectors (Custom connectors overview | Microsoft Docs) you can also use delegated permissions by creating a connection with a logged-in user, but this is outside the scope of this article.

Scenario for using Microsoft Graph in Logic App

There are a variety of usage scenarios for Microsoft Graph, so for the purpose of this Logic App I will focus on one of the most popular: Device Management (Intune API) resources. This is what I want the Logic App to do in this first phase:

  • Listing a particular user’s managed devices.
  • Listing all of the organization’s managed devices.
  • Filtering managed devices based on operating system and version.

In addition to the above, I want to implement the custom API such that any assigned user can list their own devices through end-user consent, but to be able to list all devices, or devices of any user other than yourself, you will need an admin consented permission for the custom API.

Creating the Logic App

In your Azure subscription, add a new Logic App to your chosen resource group and name it according to your naming standard. After the Logic App is created, you will need to add the trigger. As this will be a custom API, you will need it to use HTTP as the trigger, and you will also need a response back to the caller, so the easiest way is to use the template for HTTP Request-Response as shown below:

Your Logic App will now look like this:

Save the Logic App before proceeding.

Create a Managed Identity for the Logic App

Exit the designer and go to the Identity section of the Logic App. We need a managed identity, either system assigned or user assigned, to let the Logic App authenticate against Microsoft Graph.

A system assigned managed identity will follow the lifecycle of this Logic App, while a user assigned managed identity will have its own lifecycle, and can be used by other resources also. I want that flexibility, so I will create a user assigned managed identity for this scenario. In the Azure Portal, select to create a new resource and find User Assigned Managed Identity:

Create a new User Assigned Managed Identity in your selected resource group and give it a name based on your naming convention:

After creating the managed identity, go back to your Logic App, and then under Identity section, add the newly created managed identity under User Assigned Managed Identity:

Before we proceed with the Logic App, we need to give the Managed Identity the appropriate Microsoft Graph permissions.

Adding Microsoft Graph Permissions to the Managed Identity

Now, if we wanted the Logic App to have permissions to the Azure Rest API, we could have easily added Azure role assignments to the managed identity directly:

But, as we need permissions to Microsoft Graph, there is no GUI to do this for now. The permissions needed for listing managed devices are documented here: List managedDevices – Microsoft Graph v1.0 | Microsoft Docs.

So we need a minimum of: DeviceManagementManagedDevices.Read.All.

To add these permissions we need to run some PowerShell commands using the AzureAD module. If you have that installed locally, you can connect and proceed with the following commands; for ease of access you can also use the Cloud Shell in the Azure Portal, just run Connect-AzureAD first:

PS! You need to be a Global Admin to add Graph Permissions.

You can run each of these lines separately, or run it as a script:

# Microsoft Graph App Well Known App Id
$msGraphAppId = "00000003-0000-0000-c000-000000000000"

# Display Name of Managed Identity
$msiDisplayName="msi-ops-manageddevices" 

# Microsoft Graph Permission required
$msGraphPermission = "DeviceManagementManagedDevices.Read.All" 

# Get Managed Identity Service Principal Name
$msiSpn = (Get-AzureADServicePrincipal -Filter "displayName eq '$msiDisplayName'")

# Get Microsoft Graph Service Principal
$msGraphSpn = Get-AzureADServicePrincipal -Filter "appId eq '$msGraphAppId'"

# Get the Application Role for the Graph Permission
$appRole = $msGraphSpn.AppRoles | Where-Object {$_.Value -eq $msGraphPermission -and $_.AllowedMemberTypes -contains "Application"}

# Assign the Application Role to the Managed Identity
New-AzureAdServiceAppRoleAssignment -ObjectId $msiSpn.ObjectId -PrincipalId $msiSpn.ObjectId -ResourceId $msGraphSpn.ObjectId -Id $appRole.Id

Verify that it runs as expected:

As mentioned earlier, adding these permissions has to be done using script commands, but there is a way to verify the permissions by doing the following:

  1. Find the Managed Identity, and copy the Client ID:
  2. Under Azure Active Directory and Enterprise Applications, make sure you are in the Legacy Search Experience and paste in the Client ID:
  3. Which you can then click into, and under permissions you will see that the admin has consented to the Graph permissions:

The Logic App can now get Intune Managed Devices from Microsoft Graph API using the Managed Identity.

Calling Microsoft Graph from the Logic App

Let’s start by adding some inputs to the Logic App. I’m planning to trigger the Logic App using an http request body like the following:

{
 "userUpn": "[email protected]",
 "operatingSystem": "Windows",
 "osVersion": "10"
}

In the Logic App request trigger, paste as a sample JSON payload:

The request body schema will be updated accordingly, and the Logic App is prepared to receive inputs:

Next, add a Condition action, where we will check if we should get a specific user’s managed devices, or all. Use an expression with the empty function to check for userUpn, and another expression for the true value, like below:

We will add more logic and conditions later for the filtering of the operating system and version, but for now add an HTTP action under True like the following:

Note the use of the Managed Identity and Audience, which will have permission for querying for managed devices.

Under False, we will get the managed devices for a specific user. So add the following, using the userUpn input in the URI:

Both these actions should be able to run successfully now, but we will leave the testing for a bit later. First I want to return the managed devices found via the Response action.

Add an Initialize variable action before the Condition action. Set the Name, and set the Type to Array as shown below; the value can be empty for now:

Next, under True and Get All Managed Devices, add a Parse JSON action, adding the output body from the http action and using either the sample response from the Microsoft Graph documentation, or your own to create the schema.

PS! Note that if you have over 1000 managed devices, Graph will page the output, so you should test for odata.nextLink to be present as well. You can use the following anonymized sample response for schema which should work in most cases:

{
     "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#deviceManagement/managedDevices",
     "@odata.count": 1000,
     "@odata.nextLink": "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices?$skiptoken=",
     "value": [
         {
             "id": "id Value",
             "userId": "User Id value",
             "deviceName": "Device Name value",
             "managedDeviceOwnerType": "company",
             "operatingSystem": "Operating System value",
             "complianceState": "compliant",
             "managementAgent": "mdm",
             "osVersion": "Os Version value",
             "azureADRegistered": true,
             "deviceEnrollmentType": "userEnrollment",
             "azureADDeviceId": "Azure ADDevice Id value",
             "deviceRegistrationState": "registered",
             "isEncrypted": true,
             "userPrincipalName": "User Principal Name Value ",
             "model": "Model Value",
             "manufacturer": "Manufacturer Value",
             "userDisplayName": "User Display Name Value",
             "managedDeviceName": "Managed Device Name Value"
         }
     ]
 }

PS! Remove any sample response output from schema if values will be null or missing from your output. For example I needed to remove the configurationManagerClientEnabledFeatures from my schema, as this is null in many cases.

Add another Parse JSON action under the get user managed devices action as well:

Now we will take that output and do a For Each loop over each value. On both sides of the condition, add a For Each action, using the value from the previous HTTP action:

Inside that For Each loop, add an Append to Array variable action. In this action we will build a JSON object, returning our chosen attributes (you can change to whatever you want), and selecting the properties from the value that was parsed:

Do the exact same thing for the user devices:

Now, on each side of the condition, add a Response action that will return the ManagedDevices array variable. This will be returned as JSON, so set the Content-Type to application/json:

Finally, remove the default response action that is no longer needed:

The complete Logic App should look like the following now:

As I mentioned earlier, we’ll get to the filtering parts later, but now it’s time for some testing.

Testing the Logic App from Postman

In the first part of this blog post article series, Protect Logic Apps with Azure AD OAuth – Part 1 Management Access | GoToGuy Blog, I described how you could use Postman, PowerShell or Azure CLI to test against REST API’s.

Let’s test this Logic App now with Postman. Copy the HTTP POST URL:

And paste it to Postman, remember to change method to POST:

You can now click Send, and the Logic App will trigger, and should return all your managed devices.

If you want a specific user’s managed devices, then you need to go to the Body parameter, and add something like the following with an existing user principal name in your organization:

You should then be able to get this user’s managed devices; for example, for my test user this was just a virtual machine with Windows 10:

And I can verify a successful run from the Logic App history:

Summary so far

We’ve built a Logic App that uses its own identity (a User Assigned Managed Identity) to access the Microsoft Graph API using application permissions to get managed devices for all users or a selected user by UPN. Now it’s time to expose this Logic App as an API so end users can call it securely using Azure AD OAuth.

Building the Logic App API

When exposing the Logic App as an API, this will be the resource that end users will access and call as a REST API. Consider the following diagram showing the flow for OpenID Connect and OAuth, where Azure AD will be the Authorization Server from which end users can request access tokens with the Logic App resource as the audience:

Our next step will be to create Azure AD App Registrations, and we will start with the App Registration for the resource API.

Creating App Registration for Logic App API

In your Azure AD tenant, create a new App Registration, and call it something like (YourName) LogicApp API:

I will use single tenant for this scenario, leave the other settings as they are, and create.

Next, go to Expose an API:

Click on Set right next to Application ID URI, and set the App ID URI of your choice. You can keep the GUID if you want, but you can also type any URI value you like here (using api:// or https://). I chose to set the api URI to this:

Next we need to add scopes that will be the permissions that delegated end users can consent to. This will be the basis of the authorization checks we can do in the Logic App later.

Add a scope with the details shown below. This will be a scope end users can consent to themselves, but it will only allow them to read their own managed devices:

Next, add another Scope, with the following details. This will be a scope that only Admins can consent to, and will be authorized to read all devices:

You should now have the following scopes defined:

Next, go to the Manifest and change the accessTokenAcceptedVersion from null to 2; this will configure tokens to use the v2.0 (OAuth2) endpoints:

That should be sufficient for now. In the next section we will prepare for the OAuth client.

Create App Registration for the Logic App Client

I choose to create a separate App Registration in Azure AD for the Logic App Client. This will represent the OAuth client that end users will use for OAuth authentication flows and requesting permissions for the Logic App API. I could have configured this in the same App Registration as the API created in the previous section, but this will provide better flexibility and security if I want to share the API with other clients also later, or if I want to separate the permission grants between clients.

Go to App Registrations in Azure AD, and create a new registration calling it something like (yourname) LogicApp Client:

Choose single tenant and leave the other settings for now.

After registering, go to API permissions, and click on Add a permission. From there you can browse to “My APIs” and you should be able to locate the (yourname) LogicApp API. Select to add the delegated permissions as shown below:

These delegated permissions reflect the scopes we defined in the API earlier. Your App registration and API permission should now look like below. NB! Do NOT click to Grant admin consent for your Azure AD! This will grant consent on behalf of all your users, which will work against our intended scenario later.

Next, we need to provide a way for clients to authenticate using OAuth flows, so go to the Certificates & secrets section. Click to create a Client secret; I will name my secret after where I want to use it for testing later (Postman):

Make sure you copy the secret value for later:

(Don’t worry, I’ve already invalidated the secret above and created a new one).

Next, go to Authentication. We need to add a platform for authentication flows, so click Add a platform and choose Web. For using Postman later for testing, add the following as Redirect URI: https://oauth.pstmn.io/v1/callback

Next, we will also provide another test scenario using a PowerShell or Azure CLI client, so click on Add a platform one more time, this time adding Mobile and desktop applications as the platform, and use the following redirect URI: urn:ietf:wg:oauth:2.0:oob

Your platform configuration should now look like this:

Finally, go to Advanced settings and set Allow public client flows to Yes, as this will aid in testing from PowerShell or Azure CLI clients later:

Now that we have configured the necessary App registrations, we can set up the Azure AD OAuth Authorization Policy for the Logic App.

Configuring Azure AD OAuth Authorization Policy for Logic App

Back in the Logic App, create an Azure AD Authorization Policy with issuer and audience as shown below:

Note the Claims values:

We are using the v2.0 issuer since we configured accessTokenAcceptedVersion to 2 in the manifest of the App Registration (as opposed to the v1.0 issuer, which would be in the format https://sts.windows.net/{tenantId}/). And the Audience claim is our configured API App ID (for v1.0 the audience would be the App ID URI, like api://elven-logicapp-api).

Save the Logic App, and we can now start to do some testing where we will use the client app registration to get an access token for the Logic App API resource.

Testing with Postman Client

The first test scenario we will explore is using Postman Client and the Authorization Code flow for getting the correct v2.0 Token.

A recommended practice when using Postman and reusing variable values is to create an Environment. I’ve created this Environment for storing my Tenant ID, Client ID (App ID for the Client App Registration) and Client Secret (the secret I created for using Postman):

Previously in this blog article, we tested the Logic App using Postman. On that request, select the Authorization tab, and set type to OAuth 2.0:

Next, under Token configuration add the values like the following. Give the Token a recognizable name, this is just for Postman internal reference. Make sure that the Grant Type is Authorization Code. Note the Callback URL, this is the URL we configured for the App registration as the callback URL. In the Auth and Access Token URLs, configure the use of the v2.0 endpoints, using the TenantID from the environment variables. (Make sure to set the current environment top right). And for Client ID and Client Secret these will also refer to the environment variables:

One important step remains, and that is to correctly set the scope for the access token. Using something like user.read here will only produce an Access Token for Microsoft Graph as audience. So we need to change to the Logic App API, and the scope for ManagedDevices.Read in this case:

Let’s get the Access Token, click on the Get New Access Token button:

A browser window launches, and if you are not already logged in, you must log in first. Then you will be prompted to consent to the permission as shown below. The end user is prompted to consent for the LogicApp API, as well as basic OpenID Connect consents:

After accepting, a popup will try to redirect you to Postman, so make sure you don’t block that:

Back in Postman, you will see that we have got a new Access Token:

Copy that Access Token, and paste it into a JWT debugger like jwt.ms or jwt.io. You should see in the data payload that the claims for audience and issuer are the same values we configured in the Logic App Azure AD OAuth policy:

Note also the token version is 2.0.

Click to use the Token in the Postman request, it should populate this field:

Before testing the request, remember to remove the SAS query parameters from the request, so that sv, sp and sig are not used with the query for the Logic App:

Now, we can test. Click Send on the Request. It should complete successfully with a status of 200 OK, and return the managed device details:

Let’s add to the permission scopes, by adding the ManagedDevices.Read.All:

Remember just to have a blank space between the scopes, and then click Get New Access Token:

If I’m logged on with a normal end user, I will get the prompt above that I need admin privileges. If I log in with an admin account, this will be shown:

Note that I can now do one of two actions:

  1. I can consent only on behalf of myself (the logged in admin user), OR..
  2. I can consent on behalf of the organization, by selecting the check box. This way all users will get that permission as well.

Be very conscious when granting consents on behalf of your organization.

At this point the Logic App will authorize if the Token is from the correct issuer and for the correct audience, but the calling user can still request any managed device or all devices. Before we get to that, I will show another test scenario using a public client like PowerShell.

Testing with PowerShell and MSAL.PS

MSAL.PS is a perfect companion for using MSAL (Microsoft Authentication Library) to get Access Tokens in PowerShell. You can install MSAL.PS from PowerShellGallery using Install-Module MSAL.PS.

The following commands show how you can get an Access Token using MSAL.PS:

# Set Client and Tenant ID
$clientID = "cd5283d0-8613-446f-bfd7-8eb1c6c9ac19"
$tenantID = "104742fb-6225-439f-9540-60da5f0317dc"

# Get Access Token using Interactive Authentication for Specified Scope and Redirect URI (Windows PowerShell)
$tokenResponse = Get-MsalToken -ClientId $clientID -TenantId $tenantID -Interactive -Scope 'api://elven-logicapp-api/ManagedDevices.Read' -RedirectUri 'urn:ietf:wg:oauth:2.0:oob'

# Get Access Token using Interactive Authentication for Specified Scope and Redirect URI (PowerShell Core)
$tokenResponse = Get-MsalToken -ClientId $clientID -TenantId $tenantID -Interactive -Scope 'api://elven-logicapp-api/ManagedDevices.Read' -RedirectUri 'http://localhost'


MSAL.PS can be used both for Windows PowerShell and for PowerShell Core, so in the above commands I show both. Note that the redirect URI for MSAL.PS on PowerShell Core needs to be http://localhost. You also need to add that redirect URI to the App Registration:

Running the above command will prompt an interactive logon, and should return a successful response saved in the $tokenResponse variable.

We can verify the response, for example checking scopes or copying the Access Token to the clipboard so that we can check the token in a JWT debugger:

# Check Token Scopes
$tokenResponse.Scopes

# Copy Access Token to Clipboard
$tokenResponse.AccessToken | Clip
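If you prefer to stay in the console, here is a minimal sketch for decoding the JWT payload locally instead of pasting the token into a JWT debugger (it simply converts the Base64Url-encoded payload to readable JSON):

# Sketch: decode the JWT payload locally to inspect claims such as aud, iss and scp
$payload = $tokenResponse.AccessToken.Split('.')[1].Replace('-','+').Replace('_','/')
# Pad the Base64Url string before decoding
switch ($payload.Length % 4) { 2 { $payload += '==' } 3 { $payload += '=' } }
[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($payload)) | ConvertFrom-Json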

In the first blog post of this article series I covered how you can use Windows PowerShell and PowerShell Core with Invoke-RestMethod for calling the Logic App. Here is an example where I call my Logic App using the Access Token (in PowerShell Core):

# Set variable for Logic App URL
$logicAppUrl = "https://prod-05.westeurope.logic.azure.com:443/workflows/d429c07002b44d63a388a698c2cee4ec/triggers/request/paths/invoke?api-version=2016-10-01"

# Convert Access Token to a Secure String for Bearer Token
$bearerToken = ConvertTo-SecureString ($tokenResponse.AccessToken) -AsPlainText -Force

# Invoke Logic App using Bearer Token
Invoke-RestMethod -Method Post -Uri $logicAppUrl -Authentication OAuth -Token $bearerToken
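Optionally, you can also include a JSON body to request a specific user's devices. A minimal sketch, assuming the userUpn trigger input that is used later in this post (the UPN value is just a placeholder):

# Request managed devices for a specific user by adding a JSON body (placeholder UPN)
$body = @{ userUpn = "user@yourtenant.onmicrosoft.com" } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri $logicAppUrl -Authentication OAuth -Token $bearerToken -Body $body -ContentType 'application/json'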

And I can verify that it works:

Great. I now have a couple of alternatives for calling my Logic App securely using Azure AD OAuth. In the next section we will get into how we can do authorization checks inside the Logic App.

Authorization inside Logic App

While the Logic App can have an authorization policy that verifies claims like issuer and audience, or other custom claims, that policy cannot be used if we want to authorize inside the Logic App based on scopes, roles and so on.

In this section we will look into how we can do that.

Include Authorization Header in Logic Apps

First we need to include the Authorization header from the OAuth access token in the Logic App. To do this, open the Logic App in code view, and set operationOptions to IncludeAuthorizationHeadersInOutputs for the trigger like this:

        "triggers": {
            "manual": {
                "inputs": {
                    "schema": {}
                },
                "kind": "Http",
                "type": "Request",
                "operationOptions": "IncludeAuthorizationHeadersInOutputs"
            }
        }

This will make the Bearer Token accessible inside the Logic App, as explained in detail in my previous post: Protect Logic Apps with Azure AD OAuth – Part 1 Management Access | GoToGuy Blog. There I also showed how to decode the token to get the readable JSON payload, so I need to apply the same steps here:

After applying the above steps, I can test the Logic App again, and get the details of the decoded JWT token, for example of interest will be to check the scopes:

Implement Logic to check the Scopes

When I created the LogicApp API app registration, I added two scopes: ManagedDevices.Read and ManagedDevices.Read.All. The authorization logic I want to implement now is that only users calling the Logic App with the ManagedDevices.Read.All scope should be able to get ALL managed devices, or to get managed devices other than their own.

The first step will be to check if the scope claim (scp) in the JWT payload contains ManagedDevices.Read.All. Add a Compose action with the following expression:

contains(outputs('Base64_to_String_Json').scp,'ManagedDevices.Read.All')

This expression will return either true or false depending on the scp value.

Next, after this action, add a Condition action where we will do some authorization checks. I have created two groups of checks, where one OR the other needs to be true.

Here are the details for these two groups:

  • Group 1 (checks if scp does not contain ManagedDevices.Read.All and calling user tries to get All managed devices):
    • Outputs('Check_Scopes') = false
    • empty(triggerBody()?['userUpn']) = true
  • Group 2 (checks if scp does not contain ManagedDevices.Read.All, and the caller tries to get managed devices for a user other than their own UPN):
    • Outputs('Check_Scopes') = false
    • triggerBody()?['userUpn'] != Outputs('Base64_to_String_Json')['preferred_username']

If either of those two groups is True, then we know that the calling user tries to do something the user is not authorized to do. This is something we need to give a customized response for. So inside the True condition, add a new Response action with something like the following:

I’m using a status code of 403, meaning that the request was successfully authenticated but was missing the required authorization for the resource.

Next, add a Terminate action, so that the Logic App stops with a successful status. Note also that on the False side of the condition, I leave it blank because I want it to proceed with the next steps in the Logic App.
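To recap the decision logic, here is a rough PowerShell-style sketch, for illustration only; the actual checks run as Logic App expressions, and $claims and $request below are hypothetical stand-ins for the decoded token payload and the trigger body:

# Illustration only: the same authorization decision expressed in PowerShell
$hasReadAllScope = $claims.scp -and (($claims.scp -split ' ') -contains 'ManagedDevices.Read.All')

# Group 1: no ".All" scope and the caller asks for ALL devices (empty userUpn)
$group1 = (-not $hasReadAllScope) -and [string]::IsNullOrEmpty($request.userUpn)
# Group 2: no ".All" scope and the caller asks for another user's devices
$group2 = (-not $hasReadAllScope) -and ($request.userUpn -ne $claims.preferred_username)

if ($group1 -or $group2) {
    # Not authorized: return a 403 response and stop (mirrors the Response + Terminate actions)
}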

Test the Authorization Scope Logic

We can now test the authorization scopes logic implemented above. In Postman, either use an existing Access Token or get a new Token that only includes the ManagedDevices.Read scope.

Then, send a request with an empty request body. You should get the following response:

Then, try another test, this time specifying another user principal name than your own, which also should fail:

And then test with your own user principal name, which will match the ‘preferred_username’ claim in the Access Token, this should be successful and return your devices:

Perfect! It works as intended, normal end users can now only request their own managed devices.

Let’s test with an admin account and the ManagedDevices.Read.All scope. In Postman, add that scope, and get a new Access Token:

When logging in with a user that has admin privileges you will now get a Token that has the scope for getting all devices, for which your testing should return 200 OK for all or any user's devices:

Adding Custom Claims to Access Token

In addition to the default claims and scopes in the Access Token, you can customize a select set of additional claims to be included in the JWT data payload. Since the Access Token is for the resource, you will need to customize this on the App Registration for the LogicApp API.

In Azure AD, select the App Registration for the API, and go to API permissions. You first need to add the OpenID scopes. Add the following OpenID permissions:

Your API App Registration should look like this:

Next, go to Token configuration. Click Add optional claim, and select Access Token. For example you can add the ipaddr and upn claims as I have done below:

Note the optional claims listed for the resource API registration:

Next time I get a new access token, I can see that the claims are there:

Summary of User Authorization so far

What we have accomplished now is that users can get an Access Token for the Logic App API resource. This is the first requirement for users to be able to call the Logic App: that they indeed have a Bearer Token in the Authorization Header that includes the configured issuer and audience.

In my demos I have shown how to get an access token using Postman (Authorization Code Flow) and a Public Client using MSAL.PS. But you can use any kind of Web application, browser/SPA, or Client App, using any programming libraries that either support MSAL or OpenID Connect and OAuth2. Your solution, your choice 😉

After that I showed how you can use scopes for delegated permissions, and how you can do internal authorization logic in the Logic App depending on which scopes the user has consented to or been allowed.

We will now build on this, by looking into controlling access and using application roles for principals.

Assigning Users and Restricting Access

One of the most powerful aspects of exposing your API using Microsoft Identity Platform and Azure AD is that you now can control who can access your solution, in this case call the Logic App.

Better yet, you can use Azure AD Conditional Access to apply policies for requiring MFA, devices to be compliant, require locations or that sign-ins are under a certain risk level, to name a few.

Let’s see a couple of examples of that.

Require User Assignment

The first thing we need to do is to change the settings for the Enterprise Application. We created an App registration for the LogicApp Client, for users to be able to authenticate and access the API. From that LogicApp Client, you can get to the Enterprise Application by clicking on the link for Managed application:

In the Enterprise App, go to Properties, and select User assignment required:

We can now control which users or groups can authenticate to and get access to the Logic App API via the Client:

If I try to log in with a user that is not listed under Users and groups, I will get an error message that the “signed in user is not assigned to a role for the application”:

PS! The above error will show itself a little differently depending on how you authenticate. The above image is from a public client; if you use Postman, the error will be in the Postman console log; if you use a web application, you will get the error in the browser, and so on.

Configuring Conditional Access for the Logic App

In addition to controlling which users and groups can access the Logic App, I can configure a Conditional Access policy in Azure AD for more fine-grained access and security controls.

In your Azure AD blade, go to Security and Conditional Access. If you already have a CA policy that affects all Applications and Users, for example requiring MFA, your LogicApp API would already be affected by that.

Note that as we are protecting the resource here, your Conditional Access policy must be targeted to the LogicApp API Enterprise App.

Click to create a new policy specific for the Logic App API, as shown below:

For example, I can require that my Logic App API can only be called from a managed and compliant device, or a Hybrid Azure AD Joined device, as shown below:

If I create that policy, and then try to get an access token using a device that is not registered or compliant with my organization, I will get this error:

Summary of Restricting Access for Users and Groups

With the above steps we can see that by adding an Azure AD OAuth authorization policy to the Logic App, we can control which users and groups can authenticate to and get an Access Token required for calling the Logic App, and we can use Conditional Access for applying additional fine-grained access control and security policies.

So far we have tested with interactive users and delegated permission access scenarios; in the next section we will dive into using application access and roles for authorization scenarios.

Adding Application Access and Roles

Sometimes you will have scenarios that let an application run as itself, like a daemon or service, without an interactive user present.

Comparing that to the OIDC and OAuth flow from earlier, the Client will access the Resource directly, using an Access Token acquired from Azure AD with the Client Credentials Flow:

Using the Client Credentials Flow from Postman

Back in the Postman client, under the Authorization tab, just change the Grant Type to Client Credentials like the following. NB! When using application access, there are no specific delegated scopes, so you need to change the scope so that it refers to .default after the scope URI (for example api://elven-logicapp-api/.default):

Click Get New Access Token, and after successfully authenticating click to Use Token. Copy the Token to the Clipboard, and paste to a JWT debugger. Let’s examine the JWT payload:

Note that the audience and issuer are the same as when we got an access token for an end user, but also that the JWT payload does not contain any scopes (scp) or any other user identifiable claims.
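For reference, what Postman does here under the hood is simply a POST to the Azure AD v2.0 token endpoint with grant_type=client_credentials. A rough PowerShell equivalent, using the same tenant ID, client ID and (placeholder) secret:

# Client Credentials Flow: request a token directly from the Azure AD v2.0 token endpoint
$tokenRequestBody = @{
    client_id     = $clientID
    client_secret = "<your secret in plain text>"
    scope         = "api://elven-logicapp-api/.default"
    grant_type    = "client_credentials"
}
$rawTokenResponse = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$tenantID/oauth2/v2.0/token" -Body $tokenRequestBody -ContentType 'application/x-www-form-urlencoded'
# The access token is returned in the access_token property of the response
$rawTokenResponse.access_token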

Using the Client Credentials Flow from MSAL.PS

To get an Access Token for an application client in MSAL.PS, run the following commands:

# Set Client and Tenant ID
$clientID = "cd5283d0-8613-446f-bfd7-8eb1c6c9ac19"
$tenantID = "104742fb-6225-439f-9540-60da5f0317dc"
# Set Client Secret as Secure String (keep private)
$clientSecret = ConvertTo-SecureString ("<your secret in plain text>") -AsPlainText -Force

# Get Access Token using Client Credentials Flow and Default Scope
$tokenResponse = Get-MsalToken -ClientId $clientID -ClientSecret $clientSecret -TenantId $tenantID -Scopes 'api://elven-logicapp-api/.default'

You can then validate this Token and copy it to a JWT debugger:

# Copy Access Token to Clipboard
$tokenResponse.AccessToken | Clip

Calling the Logic App using Client Application

We can send requests to the Logic App from an application by including the Access Token as a Bearer Token in the Authorization Header, exactly the same way we did previously. However, the run might fail internally if the Logic App's processing of the access token fails, because the token now contains a different payload of claims:

Looking into the run history of the Logic App I can see that the reason it fails is that it is missing scp (scopes) in the token.

This is expected when authenticating as an application, so we will fix that a little later.

A few words on Scopes vs. Roles

In delegated user scenarios, permissions are defined as Scopes. When using application permissions, we will be using Roles. Role permissions will always be granted by an admin, every role permission granted for the application will be included in the token, and they will be provided via the .default scope for the API.
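Roughly speaking, once the app roles added later in this post are in place and consented to, a delegated token for this API will carry a claim like "scp": "ManagedDevices.Read", while an application token will instead carry a claim like "roles": ["ManagedDevices.Role.Read.All"].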

Adding Application Roles for Applications

Now, let’s look into adding Roles to our LogicApp API. Locate the App registration for the API, and go to the App roles | Preview blade (this preview lets us define roles in the GUI, where until recently you had to edit the manifest).

Next, click on Create app role. Give the app role a display name and value. PS! The value must be unique, so if you already have that value as a scope name, you need to distinguish it, e.g. by using Role in the value as I have here:

The allowed member types give you a choice over who or what can be assigned the role. You can select applications, users/groups, or both.

Add another App Role as shown below:

You should now have the following two roles:

Assigning Roles to Application

I recommend that you create a new App Registration for application access scenarios. This way you avoid mixing delegated and application permissions in the same app registration, it becomes easier to differentiate user and admin consents, secret credentials are easier to separate, and you can use different settings for restricting access using Azure AD Users/Groups and Conditional Access.

So create a new App registration, call it something like (Yourname) LogicApp Application Client:

Choose single tenant and leave the other settings as default. Click Register, then copy the Application (Client) ID and store it for later:

Next, go to Certificates & secrets, and create a new Client secret:

Copy the secret and store it for later.

Go to API permissions, click Add a permission, and from My APIs, find the LogicApp API. Add the Application permissions as shown below, these are the App Roles we added to the API earlier:

Under API permissions you can remove the Microsoft Graph user.read permission; it won't be needed here. The two remaining permissions should be:

These you NEED to grant admin consent for, as no interactive user will be involved in a consent prompt:

The admin consent is granted as shown below:

Now we can test getting an access token via this new app registration, using either Postman or MSAL.PS. Remember to use the new app (client) ID and app (client) secret. I chose to add the two values to my Postman environment like this:

Next, change the token settings for Client Credentials flow so that the Client ID and Secret use the new variable names. Click to Get New Access Token:

After successfully getting the access token, click Use Token and copy it to the clipboard so we can analyze it in the JWT debugger. From there we can indeed see that the roles claim has been added:

We will look for these roles claims in the Logic App later. But first we will take a look at how we can add these roles to users as well.

Assigning Roles to Users/Groups

Adding roles to users or groups can be used for authorizing access based on the roles claim. Go to the Enterprise App for the Logic App API registration; you can get to the Enterprise App by clicking on the Managed application link:

In the Enterprise App, under Users and Groups, you will already see the Service Principal for the LogicApp Application Client with the Roles assigned. This is because these role permissions were granted by admin consent:

Click on Add user/group, and add the selected role for a user in your organization:

You can add more users or groups to assigned roles:

Let's do a test for this user scenario. We need to do an interactive user login again, so change to using Authorization Code Flow in Postman, and change back to the original ClientID and ClientSecret:

Click to Get New Access Token, authenticate with your user in the browser (the user you assigned a role to), and then use the token and copy it to the clipboard. If we now examine that token and look at the JWT data payload, we can see that the user now has a role claim, as well as the scope claim:

We can now proceed to adjust the authorization checks in the Logic App.

Customizing Logic App to handle Roles Claims

Previously in the Logic App we did checks against the scopes (scp claim). We need to do some adjustments to these steps, as they will fail if there is no scp claim in the Token:

Change to the following expression, with an if test that returns false if there is no scp claim, in addition to the original check for the scope ManagedDevices.Read.All:

This is the expression used above:

if(empty(outputs('Base64_to_String_Json')?['scp']),false,contains(outputs('Base64_to_String_Json').scp,'ManagedDevices.Read.All'))

Similarly, add a new Compose action, where we will check for any roles claim.

This expression will also return false if either the roles claim is empty, or does not contain the ManagedDevices.Role.Read.All:

if(empty(outputs('Base64_to_String_Json')?['roles']),false,contains(outputs('Base64_to_String_Json').roles,'ManagedDevices.Role.Read.All'))

Next we need to add more checks to the authorization logic. Add a new line to the first group, where we also check the output of the Check Roles action to be false:

In the above image I’ve also updated the action name and comment to reflect new checks.

To the second group, add two more lines, where line number 3 checks the output from Check Roles to be false (same as above), and line 4 checks if the roles claim contains the role ManagedDevices.Role.Read:

The complete authorization checks logic should now be:

And this is the summary of conditions:

  • Group 1 (checks if scp does not contain ManagedDevices.Read.All and roles does not contain ManagedDevices.Role.Read.All and calling user tries to get All managed devices):
    • Outputs('Check_Scopes') = false
    • empty(triggerBody()?['userUpn']) = true
    • Outputs('Check_Roles') = false
  • Group 2 (checks if scp does not contain ManagedDevices.Read.All, roles does not contain ManagedDevices.Role.Read.All, the caller tries to get managed devices for a user other than their own UPN, and roles does not contain ManagedDevices.Role.Read):
    • Outputs('Check_Scopes') = false
    • triggerBody()?['userUpn'] != Outputs('Base64_to_String_Json')['preferred_username']
    • Outputs('Check_Roles') = false
    • contains(outputs('Base64_to_String_Json')?['roles'],'ManagedDevices.Role.Read') = false
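For illustration, the combined condition groups above can also be sketched in PowerShell ($claims and $request are again hypothetical stand-ins for the decoded token payload and trigger body; the real checks run as Logic App expressions):

# Illustration only: the combined scope and role checks expressed in PowerShell
$scopes = if ($claims.scp) { $claims.scp -split ' ' } else { @() }
$roles  = if ($claims.roles) { $claims.roles } else { @() }

$hasReadAllScope = $scopes -contains 'ManagedDevices.Read.All'
$hasReadAllRole  = $roles -contains 'ManagedDevices.Role.Read.All'
$hasReadRole     = $roles -contains 'ManagedDevices.Role.Read'

# Group 1: caller asks for ALL devices (empty userUpn) without a ".All" scope or role
$group1 = (-not $hasReadAllScope) -and [string]::IsNullOrEmpty($request.userUpn) -and (-not $hasReadAllRole)

# Group 2: caller asks for another user's devices without a ".All" scope/role or the ManagedDevices.Role.Read role
$group2 = (-not $hasReadAllScope) -and ($request.userUpn -ne $claims.preferred_username) -and (-not $hasReadAllRole) -and (-not $hasReadRole)

if ($group1 -or $group2) {
    # Not authorized: return the customized 403 JSON response and terminate
}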

If any of the two groups of checks above returns true, then it means that the request was not authorized. To give a more customized response, change the response action like the following:

In the above action I have changed the response to be returned as a JSON object, and changed the body so that it returns JSON data. I have also included the values from the token that the user/application used when calling the Logic App. The dynamic expression for getting the roles claim (which will be an array if any roles claims are present) is:
if(empty(outputs('Base64_to_String_Json')?['roles']),'',join(outputs('Base64_to_String_Json')?['roles'],' '))
And for getting any scopes claim, which will be a text string or null:
outputs('Base64_to_String_Json')?['scp']

Test Scenario Summary

I’ll leave the testing over to you, but if you have followed along and customized the Logic App as I described above, you should now be able to verify the following test scenarios:

User/App | Scope | Roles | Result
User | ManagedDevices.Read | (none) | Can get own managed devices. Not authorized to get all devices or other users' managed devices.
User (Admin) | ManagedDevices.Read.All | (none) | Can get any or all devices.
User | ManagedDevices.Read | ManagedDevices.Role.Read | Can get own managed devices. Can get other users' managed devices by userUpn. Not authorized to get all devices.
User | ManagedDevices.Read | ManagedDevices.Role.Read.All | Can get any or all devices.
Application | (none) | ManagedDevices.Role.Read | Can get any user's managed devices by userUpn. Not authorized to get all devices.
Application | (none) | ManagedDevices.Role.Read.All | Can get any or all devices.

When testing the above scenarios, you need a new access token for each, using either authorization code flow (user) or client credentials (application). For testing roles in user scenarios, you can change the role assignments for the user at the Enterprise Application for the LogicApp API. For testing roles in application scenarios, make sure that you only grant admin consent for the applicable roles you want to test.

Final Steps and Summary

This has been quite the long read. The goal of this blog post was to show how your Logic App workflows can be exposed as an API, and how Azure AD OAuth Authorization Policies can control who can send requests to the Logic App, as well as how you can use scopes and roles in the Access Token to make authorization decisions inside the Logic App. Even more importantly, integrating with Azure AD lets you control user/group access, as well as adding an additional security layer with Conditional Access policies!

My demo scenario was to let the Logic App call Microsoft Graph and return managed devices, which require privileged access to Graph API, and by exposing the Logic App as an API I can now let end users/principals call that Logic App as long as they are authorized to do so using my defined scopes and/or roles. I can easily see several other Microsoft Graph API (or Azure Management APIs, etc) scenarios using Logic App where I can control user access similarly.

Note also that any callers that now try to call the Logic App using the SAS access scheme will fail, as a Bearer Token is expected in the Authorization Header and the custom authorization actions that have been implemented will reject the request. You might want to implement better error handling if you like.

There’s an added bonus at the end of this article, where I add the filters for getting managed devices. But for now I want to thank you for reading, and more articles in this series will come later, including:

  • Calling Logic Apps protected by Azure AD from Power Platform
  • Protecting Logic App APIs using Azure API Management (APIM)

Bonus read

To complete the filtering of Managed Devices from Microsoft Graph, the Logic App prepared inputs of operatingSystem and osVersion in addition to userUpn. Let's see how we can implement that support as well.

After the initialize variable ManagedDevices action, add a Compose action. In this action, which I rename to operatingSystemFilter, I add a long dynamic expression:

This expression will check if the request trigger has an operatingSystem value; if not, the output will be an empty string, but if not empty, I start building the filter string using the concat function. There are some complexities here, among others escaping single apostrophes by adding another single apostrophe, etc. But this expression works:

if(empty(triggerBody()?['operatingSystem']),'',concat('/?$filter=operatingSystem eq ''',triggerBody()?['operatingSystem'],''''))

Next, add another Compose action and name it operatingSystemVersionFilter. This expression is even longer, checking the request trigger for osVersion; if empty, it just returns the operatingSystemFilter from the previous action, but if present, another concat appends an 'and' condition to the previous filter:

The expression from above image:

if(empty(triggerBody()?['osVersion']),outputs('operatingSystemFilter'),concat(outputs('operatingSystemFilter'),' and startswith(osVersion,''',triggerBody()?['osVersion'],''')'))
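For example, with operatingSystem set to Windows and osVersion set to 10.0 in the request body, the output of these two actions should be the filter string /?$filter=operatingSystem eq 'Windows' and startswith(osVersion,'10.0').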

We can now add that output to the Graph queries, both when getting all or a specific user’s devices:

I can now add operatingSystem and osVersion to the request body when calling the Logic App:
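Calling from PowerShell as earlier, a request body with the optional filter inputs could look roughly like this (all values are placeholders):

# Request body with the optional operatingSystem and osVersion filter inputs (placeholder values)
$body = @{
    userUpn         = "user@yourtenant.onmicrosoft.com"
    operatingSystem = "Windows"
    osVersion       = "10.0"
} | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri $logicAppUrl -Authentication OAuth -Token $bearerToken -Body $body -ContentType 'application/json'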

And if I check the run history when testing the Logic App, I can see that the filter has been appended to the Graph query:

You can, if you want, also build more error handling logic for when users specify the wrong user principal name, or for other filtering errors that may occur because of syntax, etc.

That concludes the bonus tip, thanks again for reading 🙂