Keeping Calm Whilst Troubleshooting Software Bugs

As the manager of a team of consultants, the configuration problems I troubleshoot are rarely my own these days, yet troubleshooting is still a theme that plays into every single working day.

When I started 10+ years ago, seeing an error would fill me with dread, sweat, and stress that I just couldn't shake. Today I see technical bugs as a bit of fun, a chance to solve the unknown! Some will think I'm strange, but everyone has their niche.

My thought process has changed dramatically over the years, and it's not something that you can teach; it all comes down to experience. I used to panic, particularly about the reaction of everyone around me:

  • I’m an awful consultant.
  • I bet the client/end user is going to be so angry.
  • How do I tell people that something is wrong?!
  • What if I don’t know how to fix it?

Today it's entirely different and it's all about the tech. Here are a couple of provocative statements to consider when troubleshooting technical issues, intended to slow down the pace of thought and to focus on the outcome. I can assure you there is a good explanation for each!

100% pass rates during testing are terrible.

If test scripts produce a pass rate of 100% then it is very likely that we haven’t considered all of the possible paths that the end user may take, and we might have focussed on the happy path too much.

We want to break things during testing, as it means that they’ll be fixed or known about with recommended workarounds by the time the feature moves to the production environment with real users.

Taking a Power Platform example, when testing a requirement such as ‘Entering data into the Contact Table’, perhaps we should consider:

  • What happens if I upload instead of creating from the Form? Are all rules respected?
  • What happens if I create the Contact from another record? Does the Form behave the same way as it would if I were creating the Contact standalone?
  • What if I create many versions of the same Contact to try and confuse the system? Does it change the way that the system behaves in other areas?

These tests may seem extreme, but remember, we can never guarantee that an end user will work in the way that the system is built, which often highlights a potential gap between UI and UX.

There are some fantastic examples out there describing this difference which you can find online. Think of the glass bottle of ketchup that we place on its lid when it's running out so that we can get every last drop, and how we used to bang on the bottom of the bottle or use a knife to help the flow through the thinner neck at the top. It's no coincidence that today's bottles are made of plastic with the lid at the bottom, a consistent width across the entire length of the bottle, and a squeezy mechanism to help with flow. These are all features that were created because of user experience rather than user interface.

Another notable example, one I can almost guarantee everyone reading this post has been guilty of: the great corner-cutting mud patches that we find in parks or woodlands where two paths meet at a 90° angle. The paths always look nice, but we are built to be efficient, and watching someone actually follow the path in this case can often look unusual. In our local area we've had five new-build estates pop up and they all feature non-linear paths. I wonder why?!

Work out who’s to blame.

Wait, what? I thought we were all on the same team here. You're absolutely right, but there are so many parts to any software delivery that you really do need to identify which supplier's technology is causing the issue.

One of the most interesting examples I saw of this recently was a scaling issue in a Power App that was being loaded through the Website tab in Teams (for good reason, it had an appended URL!). Outside of that context, the Power App would render fine with no functional or cosmetic issues whatsoever, but then it occurred to me that this is a four-level hierarchy of display settings!

  • The person accessing the app is using a desktop monitor with a resolution and scale.
  • Teams on the desktop also has its own scaling percentage.
  • The Website tab, just like web browsers, is mobile responsive.
  • The Power App also has its own scaling and ratios built in as well as orientation.

In this setting, we had control over the Website tab configuration and the Power App configuration, but we couldn't ask end users to use a specific monitor resolution and scale, nor could we change their Teams environment. Rather than 'fixing' the perceived issue, we had to work with it, and in this case the 'fix' that worked for us was the 'Lock Aspect Ratio' option in the Power App's Settings, so that it rendered the way we wanted irrespective of the other scaling factors we couldn't control.

Final Thought

Bugs will always occur in software delivery; we see it from every single software supplier in the world. So let's not be afraid of them, let's tackle them head on and ask ourselves some thought-provoking questions in the process!

How to convert UTC into Your Local Timezone in Canvas Apps

One of the technical challenges we have in the UK is that for half of the year we are in the UTC time zone that we’re all familiar with, and the other half we’re in British Summer Time (BST). Those lucky few that keep the same time zone all year don’t know how easy they have it!

It can be quite confusing, as some digital solutions (including Dynamics 365) list UTC and our local time as separate time zone options but label both as UTC, while others don't always make this distinction, and you may have seen data that you just submitted appear with a date stamp of '1 hour ago'. This is easily done if you're non-technical. Why would you ever consider having to change your time zone if you can already see 'UTC' in the dropdown?

This doesn’t have a major material impact until you’re working with date values without times, particularly if the solution you’re using only allows you to control the date entry from the front end, and not the time entry. The difficulty we face in this scenario is that an application could even show yesterday’s date!

Yesterday’s date? Are you sure?

Submitting data at 2pm during your workday doesn't cause too much of an issue; you might see the entry recorded as 1pm instead. But what if you submit a 'date only' value, or (hopefully you're not working at this time) submit at some point between 00:00 and 00:59?! In this instance, the application can often confuse the user and present the data back with yesterday's date instead!

How do I prevent this?

Fortunately we don't have any problems submitting data, as it will always be submitted in UTC and converted appropriately.

The issue we face occurs when we are trying to retrieve data from a data source, where (for example) the database stores the date as 30/07/22 00:00:00, but our Canvas App reads this from the data source as 29/07/22 23:00:00 due to the database storing our submitted date in UTC.
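
To illustrate, a minimal Power Fx sketch for converting a stored UTC value back into local time might look like the line below; varUtcDueDate is a hypothetical variable holding the UTC value, not something provided by any particular connector:

DateAdd(varUtcDueDate, -TimeZoneOffset(varUtcDueDate), Minutes)

For a UK user during BST, TimeZoneOffset returns -60, so this adds 60 minutes to the stored value and 29/07/22 23:00:00 becomes 30/07/22 00:00:00 again.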

I discovered this when using the Outlook Tasks Connector to pull today's To Do items into a Collection, where simply comparing dates against the Today() function wasn't enough.

Check out the example below:

DateAdd(DateTimeValue(DueDateTime.DateTime), -TimeZoneOffset(), Minutes) = Today()

“Take the DueDateTime.DateTime value, add the negative of my local time zone offset in minutes to convert it into local time, and then show me all of the To Do items where that newly calculated date is equal to today's date.”

Note: For this particular connector I needed to explicitly convert the value using DateTimeValue, but you don't need to do this for all Connectors.
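
Putting it all together, a hedged sketch of how the comparison could sit inside a Filter is shown below; colToDoItems is a hypothetical collection already populated from the connector, holding the DueDateTime field used above:

Filter(colToDoItems, DateAdd(DateTimeValue(DueDateTime.DateTime), -TimeZoneOffset(), Minutes) = Today())

Each item's due date is converted from UTC into local time before being compared with today's date.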

That's all. Fortunately Power Fx allows us to grab the time zone offset for the time zone I am currently in, but we must be aware that for time zones ahead of UTC this value is negative, and therefore we need to negate it in order to add the correct number of minutes. I'll be using this in every Canvas App I build now, particularly as I work in an organisation that spans multiple time zones!

Modify An Owner’s Connection References in Power Automate

No matter how amazing an organisation may be, unfortunately there will always be the possibility of someone leaving. When it comes to Power Automate, this means you can be stuck with a Cloud Flow whose original Owner has left the organisation, whose Connection References eventually error, and which then causes the failure of an automated process that may be critical to business systems.

Referenced Forever?!

At present, there is no way for you to delete the original Owner of a Cloud Flow even if you manage to establish yourself as a Co-Owner. Connection References cannot necessarily be deleted either!

Workaround

In this example I'll use the Centre of Excellence Starter Kit environment that I inherited from a previous colleague, and for demo purposes I'm going to modify the Dataverse Legacy Connector as it is currently in the right state for a demo.

Let’s make our Connection References valid, and eventually fix the Cloud Flow by following the below steps for each Connection Reference:

A screenshot of a Power Platform environment, looking at Connection References within the Default Solution.
Your screen should look similar to the above screenshot at this stage.
  • Navigate to the Default Solution in your environment and open Connection References, as shown in the screenshot above.
  • Open the Connection Reference that you wish to modify. Hint: Filter by Owner to get to your reference quickly if you have many to search through.
  • Click on Edit.
  • Select the Dropdown with the existing Connection and re-point it to an existing valid Connection or create a new one.
  • Repeat those same steps for every invalid Connection Reference.

But Wait!…

Now there are some caveats to this approach which you should consider during this process:

  1. This does not remove the Owner from the flow, but it stops the Owner's account from being used as the Connection behind a Connection Reference when your Cloud Flow uses a data source.
  2. In my instructions I asked you to navigate to the Default Solution. For the consultants among us, with great power comes great responsibility. Be careful here, and use the original Unmanaged solution if you can. In most circumstances, you will be presented with Managed solutions and will be forced to use the Default solution.
  3. To ensure that you can see the full scope of your solution and automations, you ideally need to be a System Administrator to complete this exercise.

Find & Use Microsoft To Do For Your Personal Account in Power Automate

Way before Microsoft had fully-fledged Outlook and Microsoft To Do apps for iOS and Android, there were two apps that integrated tightly with each other to form an absolute productivity machine: Sunrise and Wunderlist.

Tasks would show as 'All Day' items at the top of your calendar, with ticks next to each completed item as a frequent reminder of progress as you check your calendar for the seventeenth time during the working day.

A digitally produced image of Sunrise Calendar with Wunderlist Integration on an iPad
Sunrise Calendar with Wunderlist Integration on an iPad

Microsoft bought both of those products, and that's how we arrived at Microsoft's eventual Outlook Tasks replacement and the ability to add third-party calendars to Outlook with ease. Not all features were migrated easily, though, and I have always wanted a replacement for that integration but never found one.

By using the Power Platform, we now have the ability to bring the capabilities of personal Microsoft To Do together with Outlook and any other service, because it is all hidden within the Outlook Tasks Connector in Power Automate!

Simply search for the Outlook Tasks Connector when creating a flow, and once you've chosen your trigger or action, you'll be able to see your tasks.

A screenshot showing the selection of a Microsoft To Do list in Power Automate via the Outlook Tasks Connector
Selecting a Microsoft To Do list in Power Automate via the Outlook Tasks Connector

I'm unsure when exactly this feature became available for personal accounts, but Microsoft To Do for business accounts has been available for a while under its own Connector.

What’s the catch?

As with a lot of early Connectors that have since had iterative updates in Power Automate, not all actions are built consistently.

A screenshot showing a list of some of the available Actions within the Outlook Tasks Connector.
A list of some of the available Actions within the Outlook Tasks Connector.

We also have to bear in mind that Microsoft To Do and Outlook Tasks are built on entirely different architectures where functionality has merged over the years, and therefore there are several fields available that may not directly align to what you expect, particularly when trying to use the data you’ve received in another Connector.

Having said all of the above, once you have established the correct Dynamic Values and the correct Actions to use, the connector is extremely reliable and hasn’t failed me yet in any working examples.

References

Microsoft Docs: Outlook Tasks Connector

Microsoft Docs: Microsoft To Do (Business) Connector

Delegation in Canvas Apps

A couple of weeks ago I found an empty slot in my diary, and I (dangerously) thought “I know, I’ll brush up on my Canvas app skills!”.

In my role I find myself looking across multiple Dynamics 365 apps, Excel spreadsheets, and Power BI reports daily, and I set myself the task of bringing all of this together into one place so that I could access all of the data I need with one or two clicks instead of manually transforming data and keeping several browser tabs permanently open.

This was going great, until I saw the dreaded ‘delegation’ warning that all Canvas app novices will see very quickly in their career.

“Delegation warning. The Filter part of this formula might not work on large data sets.”

When you expand the warning, you get further detail, which leads nicely on to the obvious question:

What is Delegation?

Simply put, delegation is an instruction from the target application to the data source to carry out a query and return only the subset of results that the target application actually wants.

This means that we only ever receive the desired data in the target application, and performance is increased as a result.

When you compare the processing required in this scenario with retrieving every piece of data and then filtering it in the target application, you see a measurable performance increase by using delegation; without it, you are also pulling data back into the target application only to throw it away immediately.

Cause

The wording of this warning can be considered a little misleading. The warning is actually telling us that there will be a lack of delegation to the data source. In this instance, the data source does not have the capability to carry out the conditional logic, and therefore it needs to request that the Canvas app carries out the query instead.

For example, Power Fx provides the ability to retrieve a day, month, or year value from a Date field, but Dataverse cannot do this! Dataverse can only query date ranges such as ‘on or before [Date]’! When querying a ‘month’ in this scenario, you would receive the delegation warning as the delegation cannot happen.
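
As a hedged illustration (the Contacts table and 'Created On' column below are purely examples), a filter written like this would trigger the warning, because Month() cannot be translated into a Dataverse query:

Filter(Contacts, Month('Created On') = 1)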

As a result, the full data set from the data source has to be retrieved by the target application, only for the target application to filter the data once it has all been received. This lowers the performance of the app, but it could be worse than that: if the data exceeds the definition of a 'large data set', the rows beyond that limit are not returned at all, leaving you with incomplete results, no error, and a low-quality solution.

Solution

The biggest lesson I learned whilst working on delegation recently came from a colleague: there is always a workaround.

Whilst you can’t “fix” the warning with the same piece of code, you can use combinations of delegated conditional logic in order to achieve the same results.

A classic example steps back into using dates in Canvas Apps. In Power Fx I can express “Month = 1”, but Dataverse only allows date ranges so the Canvas App needs to bring back the full data set to work out whether “Month = 1”. As a result, I can’t quite express “in January this year” using delegated logic, so instead I need to combine two ranges using something that Dataverse can recognise. In this example I would combine “Created On must be on or after 1st January 2021”, and “Created On must be on or before 31st January 2021” to obtain the right data at source.
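
Expressed in Power Fx, the workaround could look something like the sketch below, again using the illustrative Contacts table; both conditions are simple date comparisons, which Dataverse can evaluate at source:

Filter(Contacts, 'Created On' >= Date(2021, 1, 1) && 'Created On' <= Date(2021, 1, 31))

Because 'Created On' also holds a time, you may prefer 'Created On' < Date(2021, 2, 1) as the upper bound so that records created later in the day on 31st January are still included.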

Some examples can be more complicated than this, but a top tip for Dataverse specifically is that if you can achieve it using Advanced Find, then you can be certain that the logic can be delegated!

Have you worked in this space before and found any cool workarounds? Leave a comment below!

Quickly Enable Migrated Power Apps Portal & Power Pages Configuration

Microsoft’s documentation goes to great lengths in order to explain how we can migrate Power Apps Portal data from one environment to another by using the Configuration Migration Tool, but it doesn’t quite go as far as explaining how to re-point the already-provisioned portal to your newly migrated data upon first deployment.

Follow the below steps once you’ve moved your data in order to see your changes come to life!

1a. Locate via Dataverse

Navigate to Apps and find your Portal app from the list. Click on the three dots, and choose ‘Settings‘.

A screenshot of make.powerapps.com highlighting Apps and Administration.

Select the ‘Administration‘ option which will open a new tab.

1b. Locate via Power Platform Admin Centre

Navigate to the Resources tab which will expand to show a Portals option, and find your Portal app from the list.

A screenshot of the Power Platform admin centre, highlighting the Portal and Manage options.

Click on the three dots, and choose ‘Manage‘.

2. Update Portal Bindings

A screenshot of the Power Apps portals admin centre, showing the Update Portal Binding option.

Stay on the ‘Portal Details‘ tab and scroll down to ‘Update Portal Binding‘ and choose the newly migrated Website Record from the list.

Translating Unknowns into Tangible Requirements

For me, the most exciting part of a project is the challenge of figuring out exactly what a client is asking for based on a very short brief provided in an introductory call.

This challenge is increased in my industry when you move from Dynamics 365 based projects to pure Power Platform projects, because you move away from a functionally built system to a set of tools that enable the capability. Not only do we now have to qualify the tool, but we also need to qualify the business process, as well as the full data model, at an earlier stage than we typically used to.

For example, a “helpdesk replacement tool” screams Dynamics 365 Customer Service, and consultants in the industry typically understand the core operational processes before they speak to a customer. On the Power Platform, however, no two ‘self-serve chatbot’ projects would ever be the same, and there’s no functional process that you can align to this.

So how do we quantify projects with so many unknowns when we need to fully design the data model, the user interface, and the functional process? One way to start is to look for three themes:

  • Trends
  • Assumptions
  • Caveats

Trends

The first consideration I make is whether there are any repeatable components within any given high-level requirement.

Whilst this doesn't necessarily give us the full requirement ready to build, it does give us an idea of the size of the scope when contrasted with a solution that we already know how to estimate. Let's take the idea of implementing a chatbot for a client on their website.

As a website user, I want to be able to engage with a chatbot, so that I can easily find out store opening times and current stock levels.

Within the industry I work in, we know that a configurable Power Virtual Agents for Teams solution that only uses Entities is relatively straightforward and doesn't require code. The interface used to build the solution is entirely controlled by Microsoft, so we also have confidence that it works! Let's now put our original requirement into context by using known unknowns:

  • We know that the client cannot deploy this through Teams, but we don’t necessarily know exactly how to deploy it through a website that we don’t control just yet.
  • We are not being asked to build their website and we don’t know what their data source is, but we do know that we can take advantage of data and automation services that we can control to make this easier, perhaps Microsoft Dataverse with some sort of movement of data via Power Automate?

We have now broken down the requirement into tangible considerations and we can justify risk and complexity based on what we do know and what we can control, so we should factor this into our estimate right from the beginning.

As a website user, I want to be able to engage with a chatbot, so that I can easily find out store opening times and current stock levels.

Trends:

1. Power Virtual Agents for intelligent chatbot functionality.

2. Power Automate to drive dynamic data interactions between end user and data source.

3. Dataverse to assist with controlling data where necessary.

Assumptions

Next up, assumptions. We are often taught that making assumptions is a bad thing, and in most cases that is correct, but assumptions can be extremely powerful when defining a requirement if used correctly.

Taking our earlier example of a chatbot being deployed via a client's website, we really don't want to be developing the website in unfamiliar territory, nor do we want to run into any bumps if their data source isn't fit for purpose. For now, we can set assumptions against our requirement to portray what we would typically expect within the client's landscape, and if any of these are found not to be true, then we can justify a change in direction for the requirement through a change of scope, estimate, and change request!

As a website user, I want to be able to engage with a chatbot, so that I can easily find out store opening times and current stock levels.


Assumptions:
1. Assumes that the client’s existing data model is fit for purpose, and if any changes should be made, the client will take responsibility for these.

2. Assumes that the solution can be deployed using an embedded HTML code snippet, as per Microsoft's standard approach.

Caveats

And last but not least, we have caveats. Clients may see these as the supplier creating ‘get out of jail free’ cards, but in reality, these are to ensure that everyone involved understands what should happen in the event that one of these factors occurs. Caveats are usually based on assumptions, but can extend further than this to cover typical project factors too.

As a website user, I want to be able to engage with a chatbot, so that I can easily find out store opening times and current stock levels.


Caveats:
1. If the data source should change after delivery, the client will be responsible for raising a change request to address any errors that occur with this solution if they wish to continue using the functionality.

2. If the client’s website cannot support HTML snippets for any given reason, the project may need to be delivered via a Power Apps Portal, which would incur extra cost to ensure the delivery is built to the correct standard.

Summary

When I describe this way of working with my team, I reference a phrase that may be familiar to some: it's about the journey, not the destination. Imagine you have a 100-mile journey to make with no map functionality, digital or print. What would be your first move?

Success isn’t just the destination, or the solution in this case, it’s the route to it and the service provided along the way that counts. This continues to be a significant theme throughout the whole lifecycle of the project, and it can make or break the final engagement with the software.

What is Microsoft’s Accessibility Insights for Web?

One of my favourite things about consulting in Power Apps Portals is that I am able to step back into the world of traditional web development temporarily, and I get to explore a whole host of tools to improve our solution.

Visual improvement tools are cool, but helping ourselves reach a wider audience whilst simultaneously improving inclusivity is even better! This is where Microsoft's Accessibility Insights for Web tool comes into play.

The tool can be used for any website, but in this blog post I’ll use a Power Apps Portal as an example.

Context

Accessibility in digital services is all about providing alternative navigational aids and component references for those with impairments, and these can be elements that are visible to all users or hidden from view. A couple of examples include:

  • Ensuring that the colour contrast between background colour and text that sits on top is significant enough to be considered easily readable. Using two similar colours may create difficulties for those with colour blindness.
  • Defining Tab Indexes in the website’s code to explain the order of your site’s navigation, so that anyone using a screen reader can access the components (such as a navigation bar with child links) in a logical order.

Regulations came into force in the late 2010s in many parts of the world; more specifically, in the UK, accessibility regulations for all public sector organisations came into force on 23rd September 2018. If making a website accessible isn't possible, the organisation needs to provide a suitable alternative.

As many of the largest suppliers of digital services now give you the ability to create your own content, whether that's social media or the Power Platform, they have also created tools to empower you to make your content accessible, as it would be impossible for the tech giants themselves to automatically make every single piece of digital content meet these standards.

Getting Results Quickly Using FastPass

To get started, you don't need a Microsoft account or even a login for your website. The tool can be run against any website to measure how close it is to common accessibility standards, or how far away. It is installed as a browser extension and is available for most modern browsers.

To run the tool, simply click the following icon within your browser’s navigation bar:

Accessibility Insights for Web icon that is visible in your browser's navigation bar. A blue heart with a white search icon.

The browser window will give you several options which are self-explanatory, but to get reasonable results fast, the ‘FastPass’ option works well enough straight away.

A notification will pop up alongside the report for automated checks which instantly gives you a visual summary of all of the issues that have been raised.

A screenshot of a web browser showing the Accessibility Insights for Web tool by Microsoft with two different errors.
This particular portal has two errors relating to HTML code.

And there you have it! Within a few minutes you have a list of potential issues to resolve, shown in Step 1, and if you're unfamiliar with the specific results you receive, there is plenty of information in the report itself or on the web.

A screenshot of the FastPass results, showing two issues on the homepage of the demo website, with visual indicators on the website itself.

If you keep the report open on this page, then as you expand each item in the report it will highlight exactly where on your original web page the issue is found, and it will explain how to fix it, avoiding lengthy research or development for anyone comfortable editing HTML.

Moving on to Step 2, this provides a way to understand how accessibility tools will behave on our website when using ‘Tab Stops’ to navigate the screen.

A screenshot of the tab index being recorded by the report on my demo website to tell us how a screen reader will navigate the website.
Tab Stops are flowing in a logical order on this website, moving from left to right before shifting down to the next interactive item on the page.

To enable this, simply press the 'Tab' key on the website linked to the report, and indicators will display for every element on the page that is included within the 'Tab Index'. As the correct order is subjective, the user is expected to decide whether the current order makes sense.

Full Assessment

Whilst FastPass is likely to be sufficient for most small websites, for larger audiences or those in the public sector you may want to consider running a full assessment. This is the second option available from your browser's extension, and it returns a significantly larger set of results in exchange for a longer running time.

A screenshot of the full Assessment feature being run against a website looking at every component for accessibility issues.
Full assessment being run on a website.

Once the report has been completed, you can navigate to any of the items on the left-hand side of the browser window, focussing on those marked with an X to identify where issues lie; once opened, the conventions used for displaying issues and recommendations follow the same pattern as the FastPass option.

Final Thoughts

Just because accessibility regulations aren't law for all websites, it doesn't mean that we shouldn't consider accessibility within our digital assets. For someone comfortable with HTML, running the tool (remember, it's free!) with the FastPass report and fixing a couple of issues on your homepage could take less than an hour, yet it could open up your site's usability to a whole host of new audience members, not only increasing your reach but also improving the perception of your services or product for those that need accessibility features everywhere.

When you’re next on social media or working with a website that you or your organisation owns, take a look at the accessibility features and you might open up a world of seemingly hidden features to make the digital experience better for all!

References

Microsoft's Accessibility Insights for Web

Useful Guidance & Tools for Digital Accessibility

Colour Hex