Delegation in Canvas Apps

A couple of weeks ago I found an empty slot in my diary, and I (dangerously) thought “I know, I’ll brush up on my Canvas app skills!”.

In my role I find myself looking across multiple Dynamics 365 apps, Excel spreadsheets, and Power BI reports daily, and I set myself the task of bringing all of this together into one place so that I could access all of the data I need with one or two clicks instead of manually transforming data and keeping several browser tabs permanently open.

This was going great, until I saw the dreaded ‘delegation’ warning that all Canvas app novices will see very quickly in their career.

“Delegation warning. The Filter part of this formula might not work on large data sets.”

When you expand the warning, you get more detail about which part of the formula can’t be delegated to the data source.

What is Delegation?

Simply put, delegation is an instruction from the target application to the data source to carry out a query and return only the subset of results that the target application actually wants.

This means that we only ever receive the desired data in the target application, and in turn, performance is increased as a result.

When you compare the processing required in this scenario against retrieving every piece of data and then filtering it in the target application, you see a measurable performance increase by using delegation; without it, you’re also increasing technical debt by pulling data back into the target environment only to throw it away immediately.
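To make this concrete, here is a minimal Power Fx sketch (the Accounts table and ‘Address 1: City’ column are stand-ins for whatever data you’re working with). The condition is one Dataverse can evaluate itself, so the query runs at source and only the matching rows travel back to the app:

    // Delegable against Dataverse: the comparison is sent to the data source,
    // so only accounts whose city is London are returned to the app.
    Filter(Accounts, 'Address 1: City' = "London")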

Cause

The wording for this warning can be considered a little misleading. The warning is actually telling us that there will be a lack of delegation to the data source. In this instance, the data source does not have the ability to carry out the condition logic, and therefore the Canvas app has to carry out the query itself instead.

For example, Power Fx provides the ability to retrieve a day, month, or year value from a Date field, but Dataverse cannot do this! Dataverse can only query date ranges such as ‘on or before [Date]’! When querying a ‘month’ in this scenario, you would receive the delegation warning as the delegation cannot happen.

As a result, the full data set from the data source has to be retrieved by the target application, only for the target application to filter the data once it has all been received. This lowers the performance of the app, but it could be worse than that – if the data exceeds the app’s data row limit (500 records by default, 2,000 at most), everything beyond that limit is silently dropped, leaving you with incomplete results, no error, and a low quality solution.

Solution

The biggest lesson learned whilst working on delegation recently was from a colleague – there is always a workaround.

Whilst you can’t “fix” the warning with the same piece of code, you can use combinations of delegated conditional logic in order to achieve the same results.

A classic example steps back into using dates in Canvas apps. In Power Fx I can express “Month = 1”, but Dataverse only understands date ranges, so the Canvas app has to bring back the full data set to work out whether “Month = 1” holds. As a result, I can’t quite express “in January this year” using delegated logic, so instead I need to combine two ranges using something that Dataverse can recognise. In this example I would combine “Created On must be on or after 1st January 2021” with “Created On must be on or before 31st January 2021” to obtain the right data at source.
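As a rough sketch, assuming a Dataverse table called Accounts and its ‘Created On’ column, the two formulas below show the difference. The first raises the delegation warning; the second expresses the same intent as ranges that Dataverse can run at source:

    // Non-delegable: Month() cannot be translated into a Dataverse query,
    // so the app downloads rows first and filters them locally.
    Filter(Accounts, Month('Created On') = 1)

    // Delegable: the same "January 2021" intent expressed as a date range.
    // If 'Created On' also stores a time, use 'Created On' < Date(2021, 2, 1)
    // as the upper bound so records from later on the 31st are included.
    Filter(
        Accounts,
        'Created On' >= Date(2021, 1, 1) &&
        'Created On' <= Date(2021, 1, 31)
    )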

Some examples can be more complicated than this, but a top tip for Dataverse specifically is that if you can achieve it using Advanced Find, then you can be certain that the logic can be delegated!

Have you worked in this space before and found any cool workarounds? Leave a comment below!

Tip: Quickly Enable Migrated Power Apps Portal Configuration

Microsoft’s documentation goes to great lengths to explain how we can migrate Power Apps Portal data from one environment to another using the Configuration Migration Tool, but it doesn’t quite go as far as explaining how to re-point the already-provisioned portal to your newly migrated data after that first deployment.

Follow the below steps once you’ve moved your data in order to see your changes come to life!

1a. Locate via Dataverse

Navigate to Apps and find your Portal app from the list. Click on the three dots, and choose ‘Settings‘.

A screenshot of make.powerapps.com highlighting Apps and Administration.

Select the ‘Administration‘ option which will open a new tab.

1b. Locate via Power Platform Admin Centre

Navigate to the Resources tab which will expand to show a Portals option, and find your Portal app from the list.

A screenshot of the Power Platform admin centre, highlighting the Portal and Manage options.

Click on the three dots, and choose ‘Manage‘.

2. Update Portal Bindings

A screenshot of the Power Apps portals admin centre, showing the Update Portal Binding option.

Stay on the ‘Portal Details‘ tab and scroll down to ‘Update Portal Binding‘ and choose the newly migrated Website Record from the list.

Translating Unknowns into Tangible Requirements

For me, the most exciting part of a project is the challenge of figuring out exactly what a client is asking for based on a very short brief provided in an introductory call.

This challenge is increased in my industry when you move from Dynamics 365-based projects to pure Power Platform projects, because you move away from a functionally built system to a set of tools that enable you to build the capability yourself. Not only do we now have to qualify the tool, but we also need to qualify the business process, and the full data model, at an earlier stage than we typically used to.

For example, a “helpdesk replacement tool” screams Dynamics 365 Customer Service, and consultants in the industry typically understand the core operational processes before they speak to a customer. On the Power Platform, however, no two ‘self-serve chatbot’ projects would ever be the same, and there’s no functional process that you can align to this.

So how do we quantify projects with so many unknowns when we need to fully design the data model, the user interface, and the functional process? One way to start is to look for three themes:

  • Trends
  • Assumptions
  • Caveats

Trends

The first consideration I make is whether there are any repeatable components for any given high level requirement.

Whilst this doesn’t necessarily give us the full requirement ready to build, it does give us an idea of the size of the scope in contrast to a solution that is easier to estimate. Let’s take the idea of implementing a chat bot for a client on their website.

As a website user, I want to be able to engage with a chatbot, so that I can easily find out store opening times and current stock levels.

Within the industry I work in, we know that a configurable Power Virtual Agents for Teams solution that only uses Entities is relatively straightforward, and doesn’t require code. The interface used to build the solution is entirely controlled by Microsoft, so we also have confidence that it works! Let’s now put our original requirement into context by using known unknowns:

  • We know that the client cannot deploy this through Teams, but we don’t necessarily know exactly how to deploy it through a website that we don’t control just yet.
  • We are not being asked to build their website and we don’t know what their data source is, but we do know that we can take advantage of data and automation services that we can control to make this easier, perhaps Microsoft Dataverse with some sort of movement of data via Power Automate?

We have now broken down the requirement into tangible considerations, and we can justify risk and complexity based on what we do know and what we can control, so we should factor this into our estimate right from the beginning.

As a website user, I want to be able to engage with a chatbot, so that I can easily find out store opening times and current stock levels.

Trends:

1. Power Virtual Agents for intelligent chatbot functionality.

2. Power Automate to drive dynamic data interactions between end user and data source.

3. Dataverse to assist with controlling data where necessary.

Assumptions

Next up, assumptions. We are often taught that making assumptions is a bad thing, and in most cases that is correct, but assumptions can be extremely powerful when defining a requirement if used correctly.

Taking our earlier example of a chatbot being deployed via a client’s website, we really don’t want to be developing the website in unfamiliar territory, nor do we want to run into any bumps if their data source isn’t fit for purpose. For now, we can set assumptions against our requirement to portray what we would typically expect within the client’s landscape, and if any of these are found not to be true, then we can justify a change in direction for a requirement through a change of scope, estimate, and change request!

As a website user, I want to be able to engage with a chatbot, so that I can easily find out store opening times and current stock levels.


Assumptions:
1. Assumes that the client’s existing data model is fit for purpose, and if any changes should be made, the client will take responsibility for these.

2. Assumes that the solution can be deployed using an embedded HTML code snippet, as per Microsoft’s standard approach.

Caveats

And last but not least, we have caveats. Clients may see these as the supplier creating ‘get out of jail free’ cards, but in reality, these are to ensure that everyone involved understands what should happen in the event that one of these factors occurs. Caveats are usually based on assumptions, but can extend further than this to cover typical project factors too.

As a website user, I want to be able to engage with a chatbot, so that I can easily find out store opening times and current stock levels.


Caveats:
1. If the data source should change after delivery, the client will be responsible for a change request for any errors that may occur with this solution if they wish to continue using the functionality.

2. If the client’s website cannot support HTML snippets for any given reason, the project may need to be delivered via a Power Apps Portal, which would incur extra cost to ensure the delivery is built to the correct standard.

Summary

When I describe this way of working with my team, I reference a phrase that may be familiar to some – it’s about the journey, not the destination. Imagine you have a 100-mile journey to make with no map, digital or print. What would be your first move?

Success isn’t just the destination, or the solution in this case, it’s the route to it and the service provided along the way that counts. This continues to be a significant theme throughout the whole lifecycle of the project, and it can make or break the final engagement with the software.

What is Microsoft Power Platform?

Do you keep hearing about the Power Platform but don’t really know what it is? Well let me tell you everything you need to know in this short video!

In my role at QUANTIQ as the Power Platform Team Manager, I had the opportunity to join in with the video-first campaign where a number of us provide you with an overview of the Microsoft Cloud products we offer consultancy and services for. You can find the rest of the videos here, and if you want to find out more, please do get in touch!

P.S. – Hearing an excited 4-year-old shout “Daddy is on YouTube!!” was definitely the highlight of this campaign for me, there’s nothing like it!

Expect Dataverse Deployments To Fail First Time

Whilst the process of deployment hasn’t changed too much since the days of Dynamics CRM, one thing that has changed significantly is the volume of possible components that can be included in a solution file.

Not only is this due to an increase in readily-available functionality from Microsoft, but also to the ability for end users to install their own components, which in turn creates more dependencies on what we think is our small solution of configuration changes to be deployed from one environment to another. This can increase the number of failures that occur during delivery, and often the error shown to the end user isn’t very helpful.

A generic error provided by the Power Platform when trying to deploy a solution.

Solution deployment failures don’t have to be a problem, in fact, we should expect them.

In this blog post I will help you understand how to troubleshoot a failed deployment so that you can solve the issue in an informed way.

Step 1: Download A Code Editor

We want to ensure that the output from the failure is in a readable format, and for this we need a code editor that recognises XML formatted files. My preference as a functional consultant who needs to open the occasional file is Notepad++. It’s free, and it has an XML Tools plugin which allows you to ‘pretty format’ any XML files. You can also use Visual Studio or Visual Studio Code – I suspect some of you reading this will already have one of these installed!

Step 2: Download The Solution’s Log File

Whenever someone approaches me with a failed deployment, the first thing I ask them for is the log file. When you open this file in Notepad++, use ctrl+alt+shift+B, which will ‘pretty format’ your XML file. It’ll look something like this:

A screenshot of Notepad++ with XML Tools plugin installed. The file shown here is using 'pretty format' to make the code readable.

It looks difficult to decipher to the untrained eye, but we can quickly start to understand why the solution is failing with a few tips when we break down the file.

Step 3: Understand The Dependency

Let’s take a look at the first dependency, defined by the <MissingDependency> XML tags.

A snippet of code showing a missing dependency.

You’ll notice a <Required> line and a <Dependent> line which both include a Type. This, alongside the schema name, is the most important part of the dependency, as the two combined tell us what we’re looking for.

Fortunately we don’t need to remember all of the types as Microsoft provide a handy reference guide here.

We simply need to cross-reference the numbers in our dependency, and we now know that to complete the deployment we need to include the “Offering” entity (table) for the “Service” System Form.
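As a rough illustration only (the schema names below are invented and the exact attribute layout of the log can vary), a missing dependency entry looks something like this, with type 1 mapping to Entity and type 60 to System Form in Microsoft’s reference list:

    <MissingDependency>
      <!-- type="1" = Entity (table): the Offering table is required... -->
      <Required type="1" schemaName="new_offering" displayName="Offering" />
      <!-- ...by a System Form (type="60") that is included in the solution -->
      <Dependent type="60" displayName="Service" />
    </MissingDependency>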

Step 4: Modify Your Solution

We have two choices here:

  1. Remove the Service System Form from the solution, or,
  2. Add the Offering entity (table) into the solution.

In this particular instance it would make more sense to add the Offering into the solution, but sometimes you may challenge whether the component is really needed within your deployable solution, in which case, you’d remove the System Form.

Step 5: Rinse & Repeat

Not all dependencies will be resolved within one solution modification, but that’s OK, and you may need to repeat steps 3 & 4 multiple times before you have a solution file that can be successfully deployed. The key is to remember that failures can be expected, and that they don’t always have to be a problem!