Installing Matrix on Azure App Service Plan

My previous blog post covered what Matrix is and why you might want to move from Slack or Teams to Matrix. This post describes my installation journey.

The usual way to install Matrix is to deploy it on a Kubernetes cluster. This makes a lot of sense because you need four or five services in total to get it running. However, when I looked up Azure Kubernetes Service pricing it came to about $100 per month, and I was not willing to spend that much on this experiment. So I played with the idea of deploying to an Azure App Service Plan instead.

High Level Architecture

Here’s a high-level diagram of the components of a Matrix system.

High level architecture of Matrix using the element server suite.

Synapse

This is the home server and the core Matrix service. It is responsible for routing all messages to the correct recipients, and all the core functionality lives here. It’s also the server you connect your clients to.

Element Web

This is a web client for Matrix. Think of it as Slack in the web browser. It’s not strictly needed for the system to work, but I found it useful to set up as a way of testing the system. Later I used the Element desktop client and the Element X iOS client exclusively.

Matrix Authentication Service (MAS)

This is where you create users and authenticate. It is absolutely required, and Synapse and MAS need to be integrated with one another to work properly.

PostgreSQL

It is possible to run Matrix on a file database like SQLite, but I don’t think that’s viable for a production setup. I set up my own PostgreSQL server and connected it to Matrix. More on that below.

Element Call

I didn’t expect to use audio or video conferencing in my experiment, so I didn’t set up Element Call, but I think it is essential in a production setup.

Ingress

I thought I could manage without a dedicated ingress, but it became awkward because my home server name and its address ended up different: my home server name was matrix.klabbet.dev but the address was synapse.matrix.klabbet.dev. I think setting up a dedicated nginx for ingress would have made a big difference, and I don’t think it would have been particularly hard either.
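A lighter alternative to a full ingress is the well-known delegation mechanism from the Matrix spec: serve a small JSON document at https://matrix.klabbet.dev/.well-known/matrix/client that tells clients where the homeserver actually lives. A sketch using my hostnames (worth double-checking against the current spec):

```json
{
  "m.homeserver": {
    "base_url": "https://synapse.matrix.klabbet.dev"
  }
}
```

As I understand it, Synapse’s serve_server_wellknown setting handles the equivalent server-side document for federation, but this client document has to be reachable on the server-name domain itself.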

Azure Architecture

Deciding to deploy Matrix on Azure – to avoid Microsoft training AI on your data, or to become independent of Microsoft’s dominance with Teams – might seem to contradict the purpose, but the purpose here was to explore Matrix. If the goal were to reduce dependency on Microsoft, I would’ve chosen a different hosting option.

I used Bicep to deploy the resources for my Matrix setup to Azure. This is a visualization of my Bicep project. It contains all the resources I deployed.

I will summarize the most important parts of the installation here. If you want the details, you can find all the Bicep scripts in my public repository on GitHub.

Virtual Network

First of all, you need a virtual network to protect your storage account, key vault and database. All communication from your app to these services must be private.

resource vnet 'Microsoft.Network/virtualNetworks@2025-01-01' = {
  name: 'vnet-klabbet-matrix-prod-001'
  location: resourceGroup().location
  tags: resourceGroup().tags
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.100.0.0/16'
      ]
    }
    subnets: [
      {
        name: 'snet-klabbet-matrix-prod-001'
        properties: {
          // 10.100.0.0 - 10.100.0.255
          addressPrefix: '10.100.0.0/24'
          networkSecurityGroup: {
            id: resourceId('Microsoft.Network/networkSecurityGroups', 'nsg-klabbet-matrix-prod-001')
          }
          serviceEndpoints: [
            {
              service: 'Microsoft.Storage'
            }
            {
              service: 'Microsoft.KeyVault'
            }
          ]
          delegations: [
            {
              name: 'Microsoft.Web.serverFarms'
              properties: {
                serviceName: 'Microsoft.Web/serverFarms'
              }
              type: 'Microsoft.Network/availableDelegations'
            }
          ]
        }
      }
      {
        name: 'snet-klabbet-matrixdb-prod-001'
        properties: {
          // 10.100.1.0 - 10.100.1.255
          addressPrefix: '10.100.1.0/24'
          networkSecurityGroup: {
            id: resourceId('Microsoft.Network/networkSecurityGroups', 'nsg-klabbet-matrix-prod-001')
          }
          delegations: [
            {
              name: 'Microsoft.DBforPostgreSQL.flexibleServers'
              properties: {
                serviceName: 'Microsoft.DBforPostgreSQL/flexibleServers'
              }
            }
          ]
        }
      }
    ]
  }
}

There are two subnets: one for the app service, key vault and storage, and a second for the database, which needs its own delegation. It’s a bit overkill to use a /16 address space for the vnet and /24 for the subnets – you could easily squeeze everything into a much smaller address space (just me being lazy).

You need to create a private DNS zone to link to your database.

resource privateDnsZone 'Microsoft.Network/privateDnsZones@2020-06-01' = {
  name: 'private.postgres.database.azure.com'
  location: 'global'
  tags: resourceGroup().tags
}

// Link the Private DNS Zone to your VNet
resource vnetLink 'Microsoft.Network/privateDnsZones/virtualNetworkLinks@2020-06-01' = {
  parent: privateDnsZone
  name: 'vnet-klabbet-matrix-prod-001-link'
  location: 'global'
  properties: {
    registrationEnabled: false
    virtualNetwork: {
      id: vnet.id
    }
  }
}

Storage Account

Synapse needs a storage account where it can store temporary files and media. I like Azure Storage Accounts because they’re so cheap. Here I create a file service with shares for Synapse data and for MAS.

MAS will only use its share for the configuration file.

resource stor 'Microsoft.Storage/storageAccounts@2025-06-01' = {
  name: 'stklabbetmatrixprod001'
  location: resourceGroup().location
  tags: resourceGroup().tags
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
  properties: {
    networkAcls: {
      bypass: 'AzureServices'
      defaultAction: 'Deny'
      virtualNetworkRules: [
        {
          id: resourceId('klabbet-matrix-prod', 'Microsoft.Network/virtualNetworks/subnets', 'vnet-klabbet-matrix-prod-001', 'snet-klabbet-matrix-prod-001')
          action: 'Allow'
        }
      ]
    }
  }
}

resource fileService 'Microsoft.Storage/storageAccounts/fileServices@2025-06-01' = {
  parent: stor
  name: 'default'
}

resource synapseData 'Microsoft.Storage/storageAccounts/fileServices/shares@2025-06-01' = {
  parent: fileService
  name: 'synapse-data'
  properties: {
    accessTier: 'Hot'
  }
}

resource masData 'Microsoft.Storage/storageAccounts/fileServices/shares@2025-06-01' = {
  parent: fileService
  name: 'mas-data'
  properties: {
    accessTier: 'Hot'
  }
}

The network rule makes sure the data can only be reached from within the subnet. Since the configuration files contain secrets, this is necessary.

I believe it’s possible to replace the secrets in the config files with environment variables. In that case you could put the secrets in Azure Key Vault and have them injected when the service starts. This is worth exploring if you run this in production.
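If you go that route, App Service can resolve app settings directly from Key Vault with Key Vault references. A sketch of what an appSettings entry might look like – the vault and secret names here are hypothetical, and the app’s managed identity must be granted read access to the vault’s secrets:

```bicep
// Hypothetical vault and secret names; requires the app service's
// managed identity to have secret read access on the vault.
{
  name: 'SYNAPSE_DB_PASSWORD'
  value: '@Microsoft.KeyVault(SecretUri=https://kv-klabbet-matrix.vault.azure.net/secrets/db-password/)'
}
```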

PostgreSQL

I preferred setting up a hosted Azure Database for PostgreSQL. Partly because it’s much more performant than SQLite – a dedicated database offloads the application server a lot – and partly because Azure PostgreSQL gives you some nice features like backups and data encryption at rest.

You can supply your own keys for the encryption to make sure that Microsoft can’t read your data.

resource database 'Microsoft.DBforPostgreSQL/flexibleServers@2025-08-01' = {
  name: 'pgsql-klabbet-matrix-prod-001'
  location: resourceGroup().location
  tags: resourceGroup().tags
  sku: {
    name: 'Standard_B1ms'  // Burstable, 1 vCore, 2GB RAM - cheapest option ~$12/month
    tier: 'Burstable'
  }
  properties: {
    version:'18'
    administratorLogin: dbUsername
    administratorLoginPassword: dbPassword
    storage: {
      storageSizeGB: 32
    }
    backup: {
      backupRetentionDays: 7
      geoRedundantBackup: 'Disabled'
    }
    highAvailability: {
      mode: 'Disabled'
    }
    network: {
      delegatedSubnetResourceId: resourceId('klabbet-matrix-prod', 'Microsoft.Network/virtualNetworks/subnets', 'vnet-klabbet-matrix-prod-001', 'snet-klabbet-matrixdb-prod-001')
      privateDnsZoneArmResourceId: privateDnsZoneId
    }
  }
}

resource pgExtensions 'Microsoft.DBforPostgreSQL/flexibleServers/configurations@2025-08-01' = {
  parent: database
  name: 'azure.extensions'
  properties: {
    value: 'pg_trgm'
    source: 'user-override'
  }
}

resource synapseDB 'Microsoft.DBforPostgreSQL/flexibleServers/databases@2025-08-01' = {
  name: 'synapse'
  parent: database
  properties: {
    charset: 'UTF8'
    collation: 'en_US.utf8'
  }
}

resource masDB 'Microsoft.DBforPostgreSQL/flexibleServers/databases@2025-08-01' = {
  name: 'mas'
  parent: database
  properties: {
    charset: 'UTF8'
    collation: 'en_US.utf8'
  }
}

I create two databases on this server, one for Synapse and one for MAS. The pg_trgm extension is needed for these services to work.

App Service Plan

I only set up one compute instance and I think that is enough. This is the smallest (and cheapest) compute you can get Matrix to run on. It will cost you about $35 per month.

resource appServicePlan 'Microsoft.Web/serverfarms@2025-03-01' = {
  name: 'asp-klabbet-matrix-prod-001'
  location: resourceGroup().location
  tags: resourceGroup().tags
  sku: {
    name: 'B1'
    tier: 'Basic'
    capacity: 1
  }
  kind: 'linux'
  properties: {
    // must be true if linux
    reserved: true
  }
}

I never actually ran into any issues running on this compute. Matrix does eat up a lot of resources, but the service never felt sluggish. Instead, I was surprised by how responsive it was.

Matrix hogs all the available memory on my B1 instance and consumes quite a lot of CPU. Regardless, the service felt very snappy.

App Service

I set up Synapse, MAS and Element Web as App Services on the same App Service Plan. I will only show Synapse here; they are pretty much identical. If you want to see all of it, go to the GitHub repository and read the code.

resource appService 'Microsoft.Web/sites@2025-03-01' = {
  name: 'as-klabbet-matrixsynapse-prod-001'
  location: resourceGroup().location
  tags: resourceGroup().tags
  properties: {
    serverFarmId: appServicePlan.id
    httpsOnly: true
    siteConfig: {
      alwaysOn: true
      linuxFxVersion: 'DOCKER|matrixdotorg/synapse:latest'
      ftpsState: 'Disabled'
      appSettings: [
        {
          name: 'DOCKER_ENABLE_CI'
          value: 'true'
        }
        {
          name: 'WEBSITES_PORT'
          value: '8008'
        }
      ]
      azureStorageAccounts: {
        'synapse-data': {
          type: 'AzureFiles'
          accountName: stor.name
          shareName: 'synapse-data'
          accessKey: stor.listKeys().keys[0].value
          mountPath: '/data'
        }
      }
    }
  }
}

In a production scenario you would not set the Docker container to the latest tag, but to a specific version, so you control when and how the service updates.

Here I mount the storage account into the Docker container at the /data mount path. When Synapse starts it will look in this file share for homeserver.yaml, its configuration file.

I also set up a managed certificate and custom domain name for each app service. Go to the repo if you want to know how I did that.

Configuration

Once you have all the services up and running you need to configure both Synapse and MAS. You create configuration files and drop them in each file share, where they are read during startup.

Synapse

You start by generating a basic configuration file by invoking the Docker container:

docker run -it --rm \
    --mount type=volume,src=synapse-data,dst=/data \
    -e SYNAPSE_SERVER_NAME=matrix.klabbet.dev \
    -e SYNAPSE_REPORT_STATS=yes \
    matrixdotorg/synapse:latest generate

Then you need to update the configuration file and upload it to the file share where the app can read it. It should be named homeserver.yaml:

server_name: "matrix.klabbet.dev"
public_baseurl: "https://synapse.matrix.klabbet.dev/"
serve_server_wellknown: true
pid_file: /data/homeserver.pid
enable_login: true
admins:
  - "@mikael:matrix.klabbet.dev"
listeners:
  - port: 8008
    tls: false
    type: http
    x_forwarded: true
    bind_addresses: ['0.0.0.0']
    resources:
      - names: [client, federation]
        compress: false
database:
  name: psycopg2
  args:
    user: <dbusername>
    password: <dbpassword>
    database: synapse
    host: pgsql-klabbet-matrix-prod-001.postgres.database.azure.com
    port: 5432
    cp_min: 5
    cp_max: 10
  allow_unsafe_locale: true
log_config: "/data/matrix.klabbet.dev.log.config"
media_store_path: /data/media_store
registration_shared_secret: 
report_stats: false
macaroon_secret_key: 
form_secret: 
signing_key_path: "/data/matrix.klabbet.dev.signing.key"
trusted_key_servers:
  - server_name: "matrix.org"
matrix_authentication_service:
  enabled: true
  endpoint: https://auth.matrix.klabbet.dev/
  secret: 

There are some non-standard parts here. To get the database working with Azure Database for PostgreSQL you need to include allow_unsafe_locale: true.

Matrix Authentication Service (MAS)

You also need to generate a configuration file for MAS.

docker run ghcr.io/element-hq/matrix-authentication-service config generate > config.yaml

You get a configuration file much like this:

http:
  listeners:
  - name: web
    resources:
    - name: discovery
    - name: human
    - name: oauth
    - name: compat
    - name: graphql
    - name: assets
    binds:
    - address: '[::]:8080'
    proxy_protocol: false
  - name: internal
    resources:
    - name: health
    binds:
    - host: localhost
      port: 8081
    proxy_protocol: false
  trusted_proxies:
  - 192.168.0.0/16
  - 172.16.0.0/12
  - 10.0.0.0/10
  - 127.0.0.1/8
  - fd00::/8
  - ::1/128
  public_base: https://auth.matrix.klabbet.dev/
  issuer: https://auth.matrix.klabbet.dev/
database:
  host: pgsql-klabbet-matrix-prod-001.postgres.database.azure.com
  port: 5432
  username: <dbusername>
  password: <dbpassword>
  database: mas
  max_connections: 10
  min_connections: 0
  connect_timeout: 30
  idle_timeout: 600
  max_lifetime: 1800
email:
  from: '"Authentication Service" <root@localhost>'
  reply_to: '"Authentication Service" <root@localhost>'
  transport: blackhole
secrets:
  encryption: <encryptionsecret>
  keys:
  - key:
  - key:
  - key:
  - key:
passwords:
  enabled: true
  schemes:
  - version: 1
    algorithm: argon2id
  minimum_complexity: 3
matrix:
  kind: synapse
  homeserver: matrix.klabbet.dev
  secret: <sharedsecret>
  endpoint: "https://synapse.matrix.klabbet.dev"

Unless you have an e-mail server you need to configure transport: blackhole. It is preferred that users register with an e-mail address, but while setting up the server and the admin user you might need the following configuration:

account:
  password_registration_enabled: true
  password_registration_email_required: false

This makes sure that you can create an account without an e-mail address.

Upload the file to the MAS file share and name it config.yaml so that MAS finds it.

Summary

Installing Matrix on an Azure App Service Plan might not have been the best idea I’ve had, but it works! It actually works very well, and it saves me money: instead of paying $100 per month to run it on the cheapest Azure Kubernetes Service, I get away with $44.

What I like about this setup is that costs stay quite flat; they will not really increase over time. After one week of active usage we had reached 12 MB in the database and 8 MB in the storage account. The database has 35 GiB available and the storage account 100 TiB. It will take some time before we reach maximum capacity there ;D

This was a fun experiment. Things I would consider if doing this for a real production scenario:

  • Not running it in Azure if the idea is to be independent from tech giants 🤣
  • Using Kubernetes, as there are more resources on getting Matrix running there
  • Using an nginx reverse proxy for ingress, to avoid the server name and host being different
  • Installing Element Call as well, for video conferencing
  • Making better use of Azure Key Vault by injecting secrets as environment variables populated from AKV

This article was written without the use of generative AI.

My AZ-305 Designing Microsoft Azure Infrastructure Solutions Study Path

I’ve been talking for years about getting the Solution Architect credential, but I’ve never put aside the amount of time needed. For the latter half of this year I decided to take 20% of the time I usually spend on clients and spend it on myself instead, and the first goal was to take the AZ-305 exam.

Note: I cannot say anything about the exam itself, as you sign an NDA, but I can tell you about my study path and how I first failed and then succeeded.

First Try

I failed my first try at this exam, and from what I’ve gathered, that’s not uncommon. I spent about 36 hours of study time in the first round, and I focused on the study path that Microsoft supplies on their certification page.

This study path does not represent the knowledge you’re being tested on. I failed because I studied the wrong things: I got 634 points out of 1000, where 700 is the passing limit.

After failing I did a short retrospective with myself on what went wrong, found new resources to study and went at it again for another three weeks of intensive studying. I can be quite stubborn when my mind is set on something.

Second Try

I spent about 40 hours on my second round of studies. First of all I bought the MeasureUp AZ-305 Practice Test and did all 168 questions in 4 sittings. For every question, I pasted it into ChatGPT and we discussed every possible answer and why it was right or wrong. This way I used the test to find my knowledge gaps. It was also a great way to discover and remember the things I got wrong, instead of just skipping to the next question, and it gave me a better understanding of topics I’m not familiar with.

The practice questions can be questionable, but the act of going through and discussing them was most useful to me.

This was a great use case for AI. Even if ChatGPT wasn’t always right, it helped me remember, as I had to reason about the knowledge. I find that much better than just reading.

I should say that the MeasureUp test has questions close to the real exam, but some of them are infuriating, and I did find some that were plain wrong. While this sounds bad, getting angry is also a good way of remembering what you’re trying to study.

After identifying my knowledge gaps I did a couple of labs in Azure. I set up scenarios in my own Azure tenant, created resources and tried different things. This was very useful for resources and features that I don’t use in my day-to-day work.

  • Availability sets, creating virtual machines in sets, setting up Azure Load Balancer and testing fail-over
  • Availability zones, creating virtual machines in different zones
  • Virtual machine scale sets, setting up an autoscaling cluster of machines
  • Azure Site Recovery, setting up replication of a machine in a different region
  • Azure Backup, playing around with the different backup options
  • Azure SQL, where I set up different configurations of single Azure SQL Database: DTU tier, vCore tier, elastic pool and managed instance
  • Azure Policy and Initiatives, creating policies and applying them to my subscriptions

I wanted to play around more with Microsoft Entra ID, but most of the things I wanted to lab with require a P2 license, like conditional access, access reviews, PIM and ID Protection.

Another thing I did was watch John Savill’s study cram on YouTube. While it’s very high level and not detailed enough on its own to pass the exam, he sometimes mentioned things I didn’t know about, so I went ahead and looked them up. I watched this during my commute over a span of three weeks.

John Savill is the GOAT for making these study cram videos. I think it was good repetition of the basics before the exam.

The last thing I did was get the AZ-305 Exam Ref from Amazon. At first I thought it was a waste of money, because I didn’t think it would be delivered before the day of my exam, but it arrived early and I spent a couple of evenings reading it through.

While it doesn’t contain all the details you need to know, it’s still a very good and dense walkthrough of everything at a high level, and sometimes very detailed as well. I can recommend getting it if you’re struggling with the exam.

The Exam Ref has all the bullet points of what you need to know. Maybe not all the details, but it’s a good starting point.

With all this studying I was much more confident on my second try, and I finished with 844 points out of 1000, where 700 is the passing score.

Summary

I think this certification was quite hard, the hardest yet. In my previous certifications, Administrator and Developer, I felt quite at home because I use the technology in my daily job. This certification tests that you know a great deal about all of Azure, not only the parts you are comfortable with.

It took me about 80 hours of effective study time to learn everything I needed, and I don’t think anyone would pass without studying. Everyone has their own part of Azure they’re comfortable with, and this exam tests the whole platform.

Now I have the Administrator, the Developer and the Solution Architect certifications. The only one left that I’m interested in is the DevOps certificate so I guess I’ll do that next.

App Service Plan Random Restarts

I’m hosting a real-time system that is very dependent on low-latency throughput, and I’m doing it on Azure. In hindsight this might not have been the best choice, as you have no control over the PaaS services and only shallow insight into the IaaS services that Azure offers. When you’re writing a real-time system, deploy it in an environment where you control everything.

Last week the system started having interruptions. Randomly, it would stop working for 1-2 minutes and then go back to normal. First we thought it was the network, but after diagnosing the whole system we found that the App Service Plan was restarting, and this was causing the interruptions.

The memory graph shows that when an instance drops, a new one boots up.

There is no log of this, but you can see it in the App Service Plan metrics if you split Memory Percentage by instance. New instances start up as old ones are killed, and while the new instance is starting up, we drop connections and the real-time system stops working for 1-2 minutes.

In a normal system this wouldn’t be a problem, because all requests would move over to the instance that is still live and the users wouldn’t be affected. But we’re working with WebSockets, and they cannot be load balanced like that: once established, they need to be reconnected if the instance goes down.

So this was bad for us!

These kinds of issues are hard to troubleshoot because Azure App Service Plan is PaaS and you don’t have access to all the logs you need. But I found a tool: go into the Azure App Service, select Resource Health / Diagnose and solve problems, and search for Web App Restarted.

There are lots of diagnostic tools for Azure App Service if you know where to find them. This one shows web app restarts.

This confirms the issue but doesn’t really tell us why the instances are restarting. Asking ChatGPT for common reasons for App Service instance restarts, I got the following list:

  • App Crashes
  • Out of Memory
  • Application Initialization Failures
  • Scaling or App Service Plan Configuration
  • Health Check Failures
  • App Service Restarts (Scheduled or Manual)
  • Underlying Infrastructure Maintenance (by Azure)

The one that stood out to me was “Health Check Failures”, so I went into the Health check feature on my App Service and used “Troubleshoot”, but it said everything was fine. So I checked the requests to my /health endpoint, and they told a different story.

The health check fails a couple of times per day, and this seems to be the cause of the App Service instance restarts.

The health checks succeed 99.86% of the time, but those rare flukes cause the instance to be restarted: Azure App Service considers the instance unhealthy and restarts it.

To test my theory I turned off health checks on my Azure App Service, and the problem went away. After evaluating for 24 hours we had zero App Service instance restarts.

When I turned off health checks on Azure App Service, to test my theory, the problems with the restarts disappeared.

The problem is confirmed, but why are the health checks failing? Digging a little deeper I found the following error message:

Result: Health check failed: Key Vault
Exception: System.Threading.Tasks.TaskCanceledException: A task was canceled.

In my health checks I verify that the service has all the dependencies it needs to work; it cannot be healthy if Azure Key Vault is inaccessible. In this case Azure Key Vault returned an error 4 times during 24 hours, which caused the health check to fail and the instances to be rebooted.

Why would it fail? It could be anything. Maybe Microsoft was making updates to Azure Key Vault, maybe there was a short interruption to the network. It doesn’t really matter. What matters is that this check should not restart the App Service instances, because the restart is a bigger problem than Key Vault failing 4 checks out of 2880.

Liveness and Readiness

Health checks are a good thing. I wouldn’t want to run the service without them, but we cannot have them restarting the service every hour. So we need to fix this.

I know the concept of liveness and readiness from working with Kubernetes. I don’t know if it originated there, but that is where I learned it.

  • Liveness means that the service is up: it has started and responds to what is essentially a ping.
  • Readiness means that the service is ready to receive traffic.

What we could do is split the health checks into liveness checks and readiness checks. The liveness check would just return 200 OK, so that Azure App Service health checks have an endpoint for evaluating whether the service is running.

The readiness check would do what my health checks do today: verify that the service has all the dependencies required for it to work. I would point my availability checks at the readiness endpoint so that I get a monitoring alert if the service is not ready.

The health checks use the new liveness endpoint, which doesn’t verify the dependencies.
The availability check uses the new readiness endpoint to verify that all dependencies are up and running.
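The split can be sketched as two tiny handlers. This is an illustrative Python sketch with hypothetical probe callables – the real service is .NET, so only the shape carries over:

```python
def check_liveness() -> int:
    # Point the App Service health check here: it only proves the
    # process is up, so a flaky dependency never triggers a restart.
    return 200

def check_readiness(dependency_probes) -> int:
    # Point availability monitoring here: run every dependency probe
    # (Key Vault, database, ...) and report 503 if any of them fails.
    return 200 if all(probe() for probe in dependency_probes) else 503
```

With this split, a Key Vault blip makes readiness report 503 and raises an alert, but the platform’s liveness probe keeps returning 200 and the instance stays up.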

Developing Solutions for Microsoft Azure

Today I passed my AZ-204: Developing Solutions for Microsoft Azure exam and became an Azure Developer Associate. I’ve done some certifications in my days, but this was by far the hardest. The breadth of knowledge required – Azure SDKs, data storage, data connections, APIs, authentication, authorisation, compute, containers, deployment, performance and monitoring – combined with the extreme detail in the questions made this really hard. I didn’t think I had passed until I got my result.

These were the kinds of questions that were asked:

  • Case studies: Read up on a case study and answer questions on how to solve the client’s particular problems with Azure services. Questions like: what storage technology is appropriate, what service tier should you recommend, and so on.
  • Many questions about the capabilities of different services. Like: which event-passing service should you use if you need guaranteed FIFO (first-in, first-out)?
  • How to set up a particular scenario, like in what order you should create services to solve the problem at hand. Some of these questions were down to CLI commands, so make sure that you’ve dipped your toes into the Azure CLI.
  • Code questions where you need to fill in the blanks on how to connect and send messages on a service bus, or provision a set of services with an ARM template. You also get code questions where you answer questions about the result of the code.

Because of the huge area of expertise and the extreme detail of the questions, I don’t think you could study for and pass this exam without hands-on development experience. If I were to give advice on what to study, it would be:

  • Go through the free online preparation material. Make sure you remember the capabilities of each service, how they differ, and what features the higher pricing tiers enable. Those questions are guaranteed.
  • Do some exercises on connecting Azure Functions, Blob Storage, Service Bus, Queue Storage, Event Grid and Event Hubs. These were central in the exam.
  • Make sure you know how to manage authorisation to services like Blob Storage and the benefits of the different ways to do it. Know your Azure Key Vault, as the security questions emphasise it.

Be prepared that it is much harder than AZ-900: Microsoft Azure Fundamentals. Go slow and use up all the time that you get. Good luck!