Model-Driven Apps

How do you detect a car engine anomaly by analyzing engine noise?

Following on from my article “Starting your exciting journey of Connected Field Service and Azure IoT Hub“, I started working on a practical scenario: measuring noise in your surroundings and generating alerts in #PowerPlatform. In this article I want to summarize all the resources required to implement such a scenario, along with my learnings. I hope this adds to the resources available in the community so you can use it as a walkthrough to implement a practical scenario.

In this article, you will see what architectural components are required to implement this simple scenario using Azure IoT and Connected Field Service. The focus is on what is happening under the hood of the platform. Luckily, the Connected Field Service application manages everything behind the scenes and you don’t need to worry much, but this walkthrough enables you to understand what options you have in such scenarios.

Scenario

The scenario is about connecting an MXChip IoT DevKit to your car (or any noisy place) and analyzing the noise level by recording the noise and sending it as a wave stream to an Azure IoT Hub. The IoT Hub passes the data to an #AzureFunction which calculates the noise level using a simple formula, and the function calls a #MicrosoftFlow to create alerts in #PowerPlatform. This can lead to an endless number of scenarios.

  • The function for calculating the noise level from a wave file is extremely simple as well; a minimal sketch of one way to compute it follows this list. There is plenty of scientific background which you can read here, here, here and here.
  • Calculating the noise level properly is not an easy task. There are many considerations involved, and if you want a real working model you will need to do proper audio analysis, which is beyond the scope of this demo.
  • It is possible, and desirable, to calculate the noise level on the device and send only the alerts to Azure IoT. This would reduce the traffic and the load on your Azure resources. However, for the sake of the experiment I am sending all the noise data to Azure and calculating the noise level in the Azure function.
  • In this demo, I am not listening to the noise all the time. Recording starts on a press of button A, and the noise data is sent to Azure on a press of button B. I made this change to the scenario to demonstrate working with buttons on the MXChip and also to reduce the traffic to Azure.
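To make the idea concrete, below is a minimal sketch (not the exact code of my function; the class and method names are just illustrative) of how a rough noise level in decibels can be derived from a buffer of 16-bit PCM samples using an RMS approach:

using System;

public static class NoiseLevel
{
    // Rough loudness estimate for a buffer of little-endian 16-bit PCM samples (illustrative only).
    public static double ToDecibels(byte[] pcmBuffer)
    {
        if (pcmBuffer == null || pcmBuffer.Length < 2) return double.NegativeInfinity;

        double sumOfSquares = 0;
        int sampleCount = pcmBuffer.Length / 2;

        for (int i = 0; i < sampleCount; i++)
        {
            // Each sample is a signed 16-bit integer, normalized to the range [-1, 1].
            short sample = BitConverter.ToInt16(pcmBuffer, i * 2);
            double normalized = sample / 32768.0;
            sumOfSquares += normalized * normalized;
        }

        double rms = Math.Sqrt(sumOfSquares / sampleCount);
        // 20 * log10(rms) gives a level in dBFS: 0 dB is full scale, quieter signals are more negative.
        return 20 * Math.Log10(rms);
    }
}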

Architecture

The architecture of this sample is very simple. I am using an IoT Hub and an Azure function to calculate and propagate the IoT events to the #PowerPlatform. On the device side, there is an Arduino application running which listens for noise and sends the recording to the IoT Hub.

A very comprehensive Connected Field Service architecture is shown in the diagram below, and it can be implemented simply by using the #ConnectedFieldService application. However, I just wanted to implement it in a simpler way. Full details of the #ConnectedFieldService architecture can be seen in this documentation.

Components

The logical diagram of components is demonstrated below:

Arduino App

This component is a very simple program which reads the input from the Audio sensor, ButtonA and ButtonB of the device and does the following:

  1. On startup, it initializes the device and gets ready to listen to surrounding noise. It also checks the connectivity to Azure.
  2. On press of ButtonA, it records the surrounding noise and stores the stream in a buffer.
  3. On press of ButtonB, it sends the stream in the buffer to Azure.

To implement this part of the application, you will need to take the following actions:

  1. Set up your MXChip device. Please refer to this link to start.
  2. Set up your Visual Studio environment. Please refer to this link.
  3. You will need to learn how to deploy your code to the MXChip device. The simplest way to upload your code to the device is to put the MXChip into configuration mode. This means that every time you want to upload updated code, press A (and keep pressing), then press reset (while still pressing A), then release reset (while still pressing A) and finally release A. Now you are ready to upload your code.
  4. If you want to debug your code in the device, you can refer to this link.

Here is my sample code:


#include "AZ3166WiFi.h"
#include "DevKitMQTTClient.h"
#include "AudioClassV2.h"
#include "stm32412g_discovery_audio.h"
#define MFCC_WRAPPER_DEFINED
#define MODEL_WRAPPER_DEFINED
//Constants and variables- Start//
enum AppState
{
APPSTATE_Init,
APPSTATE_Error,
APPSTATE_Recording,
APPSTATE_ButtonAPressed,
APPSTATE_ButtonBPressed
};
// variables will change:
static AppState appstate;
static int buttonStateA = 0;
static int buttonStateB = 0;
static bool hasWifi = false;
static bool hasIoTHub = false;
AudioClass &Audio = AudioClass::getInstance();
const int AUDIO_SIZE = 32000 * 3 + 45;
char *audioBuffer;
int totalSize;
int monoSize;
static char emptyAudio[AUDIO_CHUNK_SIZE];
RingBuffer ringBuffer(AUDIO_SIZE);
char readBuffer[AUDIO_CHUNK_SIZE];
bool startPlay = false;
void SendMessage(char *message)
{
// Send message to Azure
if (hasIoTHub && hasWifi)
{
char buff[512];
// replace the following line with your data sent to Azure IoTHub
snprintf(buff, 512, message);
if (DevKitMQTTClient_SendEvent(buff))
{
Screen.print(1, "Sent...");
}
else
{
Screen.print(1, "Failure...");
}
delay(2000);
}
else
{
// turn LED on-off after 2 seconds wait:
Screen.print("NO BUTTON DETECTED");
delay(1000);
Screen.clean();
}
}
void setup()
{
// put your setup code here, to run once:
memset(emptyAudio, 0x0, AUDIO_CHUNK_SIZE);
if (WiFi.begin() == WL_CONNECTED)
{
hasWifi = true;
Screen.print(1, "Running!!!");
if (!DevKitMQTTClient_Init(false, true))
{
hasIoTHub = false;
return;
}
hasIoTHub = true;
// initialize the pushbutton pin as an input:
pinMode(USER_BUTTON_A, INPUT);
pinMode(USER_BUTTON_B, INPUT);
appstate = APPSTATE_Init;
}
else
{
hasWifi = false;
Screen.print(1, "No Wi-Fi");
}
}
void loop()
{
// put your main code here, to run repeatedly:
Screen.clean();
// while(1)
{
// read the state of the pushbutton value:
buttonStateA = digitalRead(USER_BUTTON_A);
buttonStateB = digitalRead(USER_BUTTON_B);
if (buttonStateA == LOW && buttonStateB == LOW)
{
//SendMessage("A + B");
}
else if (buttonStateA == LOW && buttonStateB == HIGH)
{
// WAVE FORMAT
Screen.clean();
Screen.print(0, "start recordig");
record();
while (digitalRead(USER_BUTTON_A) == LOW && ringBuffer.available() > 0)
{
delay(10);
}
if (Audio.getAudioState() == AUDIO_STATE_RECORDING)
{
Audio.stop();
}
startPlay = true;
}
else if (buttonStateA == HIGH && buttonStateB == LOW)
{
// WAVE FORMAT
if (startPlay == true)
{
Screen.clean();
Screen.print(0, "start playing");
play();
while (ringBuffer.use() >= AUDIO_CHUNK_SIZE)
{
delay(10);
}
Audio.stop();
startPlay = false;
SendMessage(readBuffer);
}
else if (buttonStateA == HIGH && buttonStateB == HIGH)
{
Screen.clean();
}
}
delay(100);
}
}
void record()
{
Serial.println("start recording");
ringBuffer.clear();
Audio.format(8000, 16);
Audio.startRecord(recordCallback);
}
void play()
{
Serial.println("start playing");
Audio.format(8000, 16);
Audio.setVolume(80);
Audio.startPlay(playCallback);
}
void playCallback(void)
{
if (ringBuffer.use() < AUDIO_CHUNK_SIZE)
{
Audio.writeToPlayBuffer(emptyAudio, AUDIO_CHUNK_SIZE);
return;
}
int length = ringBuffer.get((uint8_t *)readBuffer, AUDIO_CHUNK_SIZE);
Audio.writeToPlayBuffer(readBuffer, length);
}
void recordCallback(void)
{
int length = Audio.readFromRecordBuffer(readBuffer, AUDIO_CHUNK_SIZE);
ringBuffer.put((uint8_t *)readBuffer, length);
}

Azure function

This is the simplest part of all. All you have to do is receive the stream and calculate the noise level. The calculation can be made much more sophisticated, but that is out of the scope of this article.


using IoTHubTrigger = Microsoft.Azure.WebJobs.EventHubTriggerAttribute;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.EventHubs;
using System;
using System.Linq;
using System.Net.Http;
using System.Text;
using Microsoft.Extensions.Logging;

namespace IoTWorkbench
{
    public static class IoTHubTrigger1
    {
        private static HttpClient client = new HttpClient();

        [FunctionName("IoTHubTrigger1")]
        public static void Run([IoTHubTrigger("%eventHubConnectionPath%", Connection = "eventHubConnectionString")]EventData message, ILogger log)
        {
            log.LogInformation($"C# IoT Hub trigger function processed a message: {Encoding.UTF8.GetString(message.Body.Array)}");

            // Read the first 16-bit sample and convert it to a rough decibel value.
            byte[] buffer = message.Body.ToArray();
            short sample16Bit = BitConverter.ToInt16(buffer, 0);
            double volume = Math.Abs(sample16Bit / 32768.0);
            double decibels = 20 * Math.Log10(volume);
            log.LogInformation(decibels.ToString());
        }
    }
}

Handshaking

In order for the device to send messages to the Azure function, the device must know the endpoint to which it should send the data. You can follow the steps in this link to register your device with the Azure function. It is all done using the Azure IoT Workbench.
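The Azure IoT Workbench takes care of the device registration for you, but if you ever want to create the device identity in the IoT Hub programmatically, a minimal sketch with the Azure IoT service SDK (the Microsoft.Azure.Devices package; the connection string and device id below are placeholders) looks roughly like this:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices;

public static class DeviceRegistration
{
    public static async Task RegisterAsync()
    {
        // Connection string from the IoT Hub's "Shared access policies" blade (e.g. iothubowner).
        string connectionString = "<your IoT Hub connection string>";
        var registryManager = RegistryManager.CreateFromConnectionString(connectionString);

        // Creates the device identity; the returned object carries the keys the DevKit needs to connect.
        Device device = await registryManager.AddDeviceAsync(new Device("my-mxchip-devkit"));
        Console.WriteLine($"Device key: {device.Authentication.SymmetricKey.PrimaryKey}");
    }
}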

References

https://docs.microsoft.com/en-us/dynamics365/field-service/developer/connected-field-service-architecture

Photo by Steinar Engeland on Unsplash

Smart Grid PCF for Dynamics 365

PCFs are one of the new hot cakes for Model-Driven and Canvas apps. I wanted to try something new with this custom control.

Thanks to Alex Shlega for his Checkbox List PCF and to Danish Naglekar for the PCF Builder XrmToolBox plugin. My new subgrid PCF “SmartGrid” will be released soon to the TDG Power Platform Bank after final testing :).

 

Power Platform 24 Live!

We recently completed the first-ever 24-hour event exclusively focused on the Power Platform – Power Platform 24. The Dynamics and Power Platform community has fantastic events literally across the globe. As amazing as these in-person events are, not everyone can attend as either a speaker or attendee. We wanted to remove geography as an obstacle!

Moving to a virtual format allows the team to include both speakers and attendees who may otherwise miss the opportunity to share with and learn from the community. For organizers, the added benefit is lower overhead because you don’t need to secure a venue, coordinate speaker travel, provide prizes, or feed anyone. All said, lots of pros for this virtual event format.

Our first event went off without a hitch! So maybe we had a very minor glitch or two, but the sessions were fantastic and the event ran like a well-oiled machine all because of amazing speakers and a group of dedicated organizers.

I personally learned much from the experience, so I wanted to share some thoughts for those that might want to organize a similar event. This post covers a bit about the approach to organizing the event and some tools used to run the show.

Organizing the Event

The event came together fairly quickly after the idea was thrown out in a group conversation: anyone interested in hosting a virtual Power Platform event? The response was of course, “Heck yeah!” The volunteers immediately began throwing around ideas. We had a lot to discuss but most of it boiled down to these main topics:

  • How do we choose speakers?
  • How do we host each presentation?
  • How do we register attendees?

Most of the organizers have some experience running in-person events, virtual events, or both, so we started with some best practices in mind. We also brought experience with a variety of tools from past events. This experience gave us a nice head start; we then simply needed to choose what works best for a virtual event spanning a full 24 hours!

Choosing speakers

The team chose sessionize.com as the platform for the call for speakers and for vetting the submissions. If you have not used the platform, definitely check it out. Sessionize offers excellent tools for both organizers and speakers to organize event submissions and manage sessions across multiple events. Another huge plus is that the service is free for free community events. Sessionize features alone could take up a full post!

Once the call for speakers closed, we used Sessionize to categorize, review, and rate submissions. The organizing team reviewed each of the more than 70 submissions, ranking them based on the information provided by the speaker. This was honestly one of the hardest parts of the process because we received so many excellent submissions.

We considered multiple tracks because of the number of submissions, meaning we could have run two or three concurrent hour-long sessions. This was our first 24-hour event, though, so we chose a single track of 24 one-hour sessions, starting at 8:30 AM EST and running through 8:30 AM the next day.

Hosting the event

This was the big decision: What platform do we use to host the event? We can all list a dozen virtual event platforms in just a few minutes, but that doesn’t actually make things easier. We ended up choosing Teams and a Teams Live Event. This makes sense as this is a Microsoft Power Platform event, but here is an excellent article that Purvin Patel shared which helped make our decision: Produce a live event using Teams. This article outlines how to set up a Team Live Event and details around Producer roles.

The Teams Live event setup means assigning users to a producer role where they can monitor and control the live stream and Q&A channel, manage the event notes, and start/stop the event. Another important feature is the ability to record each session. This may not be a requirement for other events, but we wanted to provide recordings for both attendees and speakers. This is an excellent feature, but the limitation of 4 hours per recording is something to keep in mind if you choose this platform. We needed to keep this limitation in mind when scheduling the sessions and producers.

Using a Teams Live event requires Office 365 and Teams licenses. Fortunately, the XrmVirtual crew is already delivering live events using Teams, so they offered to run the event for us. We now had a chosen platform, so we needed to decide how to run the sessions.

Delivering the Sessions

We broke the 24 hours into six blocks of 4 hours and took volunteers as producers for each segment, which worked perfectly with the 4-hour cap on recording. A producer was logged in with the speaker during each session to handle connection issues, answer or raise questions, and manage transitions between speakers.

This meant that we posted six 4-hour Teams Live events that ran in sequence. Once these were established, individual invites were sent to each speaker with a link for the correct block of time. This was all handled by the XrmVirtual team, and I felt it worked out great as both a speaker and a producer. It was easy for me, but I know it took a lot of time to set up!

At the start of each session, a producer logged in to Teams with the correct account, shared any slides that were required, kicked off the session, and began recording. The speaker could then just share their screen and deliver the session. Once the session was complete, the producer shut down the event to end the recording while the next producer was already up and running with the next speaker.

Registering event attendees

Registering event attendees seems pretty important, so why is it last in the list?

Well, our solution for registering users for the event was pretty simple: we didn’t register users. Fortunately, the Teams Live event platform allows users to connect without prior registration and post questions anonymously. We had no need to track any user info, manage cancellations, etc. Attendees could jump on to catch a session and disconnect when done.

This could be an issue with different virtual delivery platforms, but it did not seem to be an issue for us. I believe we averaged about 100 attendees per session, which is a pretty nice number. I’ve had in-person sessions with only 5 people, so 100 is pretty nice! We had some excellent questions from attendees, which really added to the delivery. And of course, attendees who missed the live session can jump online and view the recorded sessions on demand!

Testing, 1…2…

One practice that made this event run so smoothly was… practice! The week prior to the event, the XrmVirtual team set up test sessions to ensure speakers could connect without issue. Each speaker jumped on to the Teams event, shared their screen, and tested their audio. It sounds simple, and it was, but it saved us from potential issues on the day of the event.

We also made sure that producers understood how the Teams setup operates. The XrmVirtual team provided a new account from their Office organization for each 4-hour block. Each account was granted producer rights on their respective Teams Live events. Having enough accounts is another item to consider if you choose a Teams Live event as a platform.

I was not the one that set up the Teams Live event for all of the sessions, but from an end user perspective, I found this event went smoothly and the Teams platform is fairly easy to use.

Thanks once again!

I will call out the organizing team here in case you want to reach out and say thanks! For me, I wanted to say thanks once again to the organizing team for gathering and vetting the speakers, setting up the infrastructure, communicating with speakers and attendees, taking time to act as producers (at really crazy hours!), processing all of the recorded videos, and advertising the event.

Thanks for simply giving up a chunk of your free time to make this event happen.

Julie Yack
David Yack
Joel Lindstrom
Aiden Kaskela
Beth Burrell
Michael Ochs
Sarah Jelinek

Everyone on the team pitched in, but I think a few special shout-outs are in order – thanks to Julie for owning the meetings and technical bits with the producer setup and being online for 16 or so hours monitoring the event in real time. And thanks to David Yack for spending his weekend breaking down all of the videos and hosting them for our viewing pleasure.

And thanks to all of the speakers that gave up their time to plan and provide some excellent sessions for the community! Check out the full list of speakers at the Power Platform 24 site! You can check out the recorded events now that they have been posted here!

I am looking forward to another Power Platform 24 event… Keep an eye out for the next event announcement!

Improve efficiency of Call centers using Dynamics 365 and Azure cognitive services

Photo by Hrayr Movsisyan on Unsplash

I am fascinated by the sophistication of Azure services and how they help us improve our solutions and extend the way we can solve customer problems. Recently I had a requirement to implement a Dynamics 365 solution to enable a call center to capture cases while their operators are offline.

One solution was to provide a self-service portal where customers could log cases when call center operators are offline. But in this case the customer was looking for something very quick to implement, with the ability to link incoming cases to their call center channel and derive some reporting from it.

Approach

I started looking at Azure services to see how I could use Azure Cognitive Services and speech recognition to solve this requirement, and as always, Azure did not disappoint. In this post I would like to share my experience with you and take you through the steps you would need to create such a solution. Of course, the possibilities are endless. However, this post will give you a starting point to begin your journey.

I have seen solutions where telephony systems send voice recordings of callers as an email attachment to a queue in CRM. The CRM then converts that queue item to a case and attaches the voice recording as a note to the case. The challenge with this solution is that the call center operators have to open attachments manually and write the description of the case after listening to the audio file. This means their time is spent on inefficient activities when it could be utilized in better ways.

Another problem with this approach is the size of attachments. As time goes by, audio attachments increase the database size, impacting the maintenance of the solution.

Scenario

Our scenario is based on the fact that call center agents are not working 24 hours a day.

While agents are offline, customers should still be able to contact the call center and record voice messages that are used to create cases.

We will use the following components:

  1. Azure Blob storage to receive recorded audio files from the telephony system.
  2. Azure Cognitive Services to listen to the recorded audio files and translate the content into text. The audio file stays in Azure Blob storage (which is cheaper than CRM database storage).
  3. An Azure function (with an Azure Blob binding) to recognize the text from the audio file and extract the case description.
  4. Dynamics 365 Web API to create a case in CRM using the description extracted by Azure Cognitive Services. We can also add blob metadata, like the filename, to case properties.
Solution Architecture

The full source code is available at GitHub

However, the main code snippet that performs the conversion is below:

public static async Task<string> RecognitionWithPullAudioStreamAsync(string key, string region, Stream myBlob, ILogger log)
{
    // Creates an instance of a speech config with specified subscription key and service region.
    // Replace with your own subscription key and service region (e.g., "westus").
    var config = SpeechConfig.FromSubscription(key, region);

    string finalText = string.Empty;
    var stopRecognition = new TaskCompletionSource<int>();

    // Create an audio stream from a wav file.
    // Replace with your own audio file name.
    using (var audioInput = Helper.OpenWavFile(myBlob))
    {
        // Creates a speech recognizer using audio stream input.
        using (var recognizer = new SpeechRecognizer(config, audioInput))
        {
            // Subscribes to events.
            recognizer.Recognizing += (s, e) =>
            {
            };

            recognizer.Recognized += (s, e) =>
            {
                if (e.Result.Reason == ResultReason.RecognizedSpeech)
                {
                    finalText += e.Result.Text + " ";
                }
                else if (e.Result.Reason == ResultReason.NoMatch)
                {
                    log.LogInformation($"NOMATCH: Speech could not be recognized.");
                }
            };

            recognizer.Canceled += (s, e) =>
            {
                log.LogInformation($"CANCELED: Reason={e.Reason}");
                if (e.Reason == CancellationReason.Error)
                {
                    log.LogInformation($"CANCELED: ErrorCode={e.ErrorCode}");
                    log.LogInformation($"CANCELED: ErrorDetails={e.ErrorDetails}");
                    log.LogInformation($"CANCELED: Did you update the subscription info?");
                }
                stopRecognition.TrySetResult(0);
            };

            recognizer.SessionStarted += (s, e) =>
            {
                log.LogInformation("\nSession started event.");
            };

            recognizer.SessionStopped += (s, e) =>
            {
                log.LogInformation("\nSession stopped event.");
                log.LogInformation("\nStop recognition.");
                stopRecognition.TrySetResult(0);
            };

            // Starts continuous recognition. Uses StopContinuousRecognitionAsync() to stop recognition.
            await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);

            // Waits for completion.
            // Use Task.WaitAny to keep the task rooted.
            Task.WaitAny(new[] { stopRecognition.Task });

            // Stops recognition.
            await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);

            return finalText;
        }
    }
}
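For completeness, the entry point that ties this method to the blob container is a blob-triggered function. A minimal sketch is below; the container path, connection setting, app setting names and class name are assumptions rather than the exact code from the repository, and it assumes the recognition method above lives in the same class:

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class VoicemailFunctions
{
    [FunctionName("VoicemailToCase")]
    public static async Task Run(
        // Fires whenever the telephony system drops a new recording into the container.
        [BlobTrigger("voicemails/{name}", Connection = "AzureWebJobsStorage")] Stream myBlob,
        string name,
        ILogger log)
    {
        string key = Environment.GetEnvironmentVariable("SpeechKey");
        string region = Environment.GetEnvironmentVariable("SpeechRegion");

        // Reuse the recognition method shown above to turn the recording into text.
        string description = await RecognitionWithPullAudioStreamAsync(key, region, myBlob, log);
        log.LogInformation($"Blob {name} transcribed to: {description}");

        // Next step: create the case in Dynamics 365 via the Web API (see the considerations below).
    }
}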

Important considerations:

  1. [This point is optional, if you use the Web API to create cases in CRM] You will need to use a multi-tenant configuration if your Azure Function tenant and the tenant in which your CRM API is registered are different. If they are the same tenant, you can use a single-tenant configuration. A minimal sketch of the Web API call itself appears after this list.
  2. The input file from the telephony to Azure blob must be in a specific format. The required format specification is:
     • File Format: RIFF (WAV)
     • Sampling Rate: 8000 Hz or 16000 Hz
     • Channels: 1 (mono)
     • Sample Format: PCM, 16-bit integers
     • File Duration: 0.1 seconds < duration < 60 seconds
     • Silence Collar: > 0.1 seconds

 

  3. You can use the ffmpeg tool to convert your recording to this specific format. For your testing, you can download and use the tool as below:
Download ffmpeg from this link.
Use the command: ffmpeg -i "<source>.mp3" -acodec pcm_s16le -ac 1 -ar 16000 "<output>.wav"
  4. My sample on GitHub covers input as one single chunk of audio. However, if you wish to have continuous streaming, you will need to work with the StartContinuousRecognitionAsync method.
  5. The Azure function should be configured with a blob trigger.
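As mentioned in the first consideration, the last hop is creating the case through the Dynamics 365 Web API. A minimal sketch of that call is below; the organization URL and the account GUID are placeholders, and acquiring the OAuth access token is outside the scope of this snippet:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class CaseCreator
{
    private static readonly HttpClient client = new HttpClient();

    public static async Task CreateCaseAsync(string accessToken, string description)
    {
        // The incident entity set holds Cases; the organization URL is a placeholder.
        var request = new HttpRequestMessage(HttpMethod.Post,
            "https://yourorg.crm.dynamics.com/api/data/v9.1/incidents");

        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        request.Headers.Add("OData-MaxVersion", "4.0");
        request.Headers.Add("OData-Version", "4.0");

        // A case must be linked to a customer; the account GUID here is a placeholder.
        string json = "{ \"title\": \"Voicemail case\", "
                    + "\"description\": \"" + description.Replace("\"", "'") + "\", "
                    + "\"customerid_account@odata.bind\": \"/accounts(00000000-0000-0000-0000-000000000000)\" }";
        request.Content = new StringContent(json, Encoding.UTF8, "application/json");

        HttpResponseMessage response = await client.SendAsync(request);
        response.EnsureSuccessStatusCode();
    }
}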

PowerApps Licensing Cheat Sheet

During the App in a Day sessions I’ve held, there were often questions regarding the type of license that is required, based on the options/features that are part of the built solution. Since an image is worth a thousand words, here’s how I’ve been able to easily demonstrate the basic differences between the Office 365 license and the PowerApps P1 and P2 licenses. It’s not about all the nitty-gritty details, but the basics are there. When in doubt, always refer to Microsoft’s official documentation!

 

Custom views on a lookup won’t work without the name field

Quick tip here, when setting a custom view on a lookup in the form designer:

pic1

Always ensure that the name attribute is on the view too, even if it won’t be displayed.

pic2

I had noticed that my search results were not working at all when I typed in the lookup field. After an hour of troubleshooting, I added the name field onto the view and voila! Search, and the control in general, started behaving.

Solution Layering

I’ve recently noticed the Solution Layers button but knew next to nothing about its functionality. It was added to my ever-growing list of, “Ok, I need to check that out when I have some time!” While on a call this past week, the Solution Layers feature came up. After a brief overview on the call and some poking around afterwards, it looks to be a useful feature for developers, business analysts, and administrators.

What are Solution Layers?

Solution Layers is not some hidden, mystery feature.  Microsoft has done a great job recently with their online documentation and the article titled View solution layers includes a nice quick explanation of Solution layers:

  • Let you see the order in which a solution changed a component.
  • Let you view all properties of a component within a specific solution, including the changes to the component.
  • Can be used to troubleshoot dependency or solution-layering issues by displaying change details for a component that was introduced by a solution change.

So the Solution Layers tool offers insight into system components and their relationships to Solution deployments. The significant bit here to me is that it shows changes to the component and when the installation or updates were introduced.

Where do I find Solution Layers?

When you select a Solution component, such as an Entity, Process, or Web Resource, or sub component such as an Entity Form or Attribute, you will now see a button labeled Solution Layers.

For example, I opened the Power Apps Checker solution in a recently provisioned demo environment.  Expanding the Entities, we can see the button on the Analysis Result Detail Entity. Drilling into the Forms list, we see the tool button available with the Information main Form.  

Solution Layers for the Analysis Result Detail Entity
Solution Layers for the Analysis Result Detail Entity Information Form

If you open the Solution Layers dialog for the Analysis Result Detail Entity, we can see a one-item list of Solutions. This is a list of the Solutions to which this Entity is related.

Entity level Solution Layers

Select the Solution listed and you can view the Analysis Result Detail Entity details that are related to the Solution.

Analysis Result Detail Entity Solution Layer Details

This view provides the list of the changed properties for the Entity when the Solution was imported in the first Changed Properties ‘tab’, and the full list of Entity properties in the All Properties tab. If we open the Information Form for this Entity, we see very similar information: a single Solution and the detailed changes of the selected Entity Form for that Solution import.

We only see one item at both the Entity and Entity Form levels because this Entity and all of its components are unique to this Solution. We can also see the list of Changed Properties is the same as the list of All Properties. This tells us that the Analysis Result Detail Entity was installed with the Power Apps Checker solution and has not been affected by any other Solution installs.

That is some nice information, but not especially useful. The Solution Layers component really shines when we look at Entities that can be impacted by other solution imports. For example, a system Entity like Contact can be impacted by many different Solutions on your system. Or you may have a custom Entity being deployed as part of a product or an ongoing project that will see regular changes, whether through major Solution releases or hotfix-style solution deployments.

Contact is a popular Entity

If we open a different solution that contains the Contact Entity, we see the real power behind this tool. If we open the solution named Sales Navigator for Dynamics 365 Unified Interface that comes with my demo environment, and view the Contact Entity Solution Layers, we see some immediate differences.

Contact Solution Layers Detail – lots of changes!

The Contact Entity has been changed by 21 separate Solutions. The first at the bottom of the list is System, but at the top we see Active as the latest. This means that the Entity or one or more Entity sub components were updated with each of these 21 Solution imports. So, how do we see more detail on all of these Entity changes?

Deltas!

If we dig deeper into the Solution components, we can see more granular detail of the changes. We can drill into the Contact Forms list for this Solution and open the Contact Form Solution Layers dialog.

In this view, we can see that the Contact Form has been updated by 11 different Solution Imports. But what has been changed? Open up a solution from the list to find out:

Contact Form Solution Layers Detail

In this view under Changed Properties, we can see detailed changes that were made with the Solution import. In this example, we see the underlying Form JSON value was updated, and if you scroll a bit, you will also see that the Form XML was updated. With other value types, such as numbers or Boolean values, it’s easy to see the changed value.

For more complex types like Form JSON or XML, you can compare the differences to the previous Solution Layer value. Simply open the previous Solution Layer from the list and view the property value under the All Properties view using a standard text diff tool such as WinDiff or Visual Studio.

Why is this a big deal?

Dynamics 365 CE and the Power Platform with CDS now have a built-in method for change tracking across the various layers of solution components. I include the Power Platform here because when you view an Entity from a Model-Driven Power App, you have the option of switching to Classic View. In Classic View, you can view the Solution Layers exactly as if you were working within a Dynamics 365 CE solution.

This can be incredibly useful when troubleshooting issues or just managing your own deployments. With solid DevOps practices in place, you should be able to view content like this using source code control tools. But if you are working on a project for which those practices were not well established, I can see this feature as a huge help for developers, business analysts, or system administrators.

I recommend reviewing the article listed above and playing around with the feature. For example, check out changes to solution components like Workflows where you can view the changes to the underlying XAML that contains the workflow logic.

I will be looking into it in more detail myself because I can see the possibility for some nice tools built around this capability!

How to create a resource in Azure: Cognitive Services & PowerApps Part one

So you’ve seen all this discussion around Cognitive Services and now want a piece for yourself – you open your computer, you down a coffee and now you’re thinking “what now”?

Well don’t worry, I’ve got you!

Let’s start with Text Analytics, crack open Azure (https://portal.azure.com/#home) and select “Create a resource”:

Then in the search bar search for “Text” and then select “Text Analytics”:

Then select “Create”:

Then enter the following: Name, Subscription type, Location (where the resource will be held), Pricing tier and Resource group allocation (for filtering and searching). Once happy, select “Create” and you’ll be notified when the resource is ready.

Once it is completed it will be available via the completion notification message (via link) or in “All Resources” (from the navigation pane on the left-hand side of the screen); look for “TextAnalytics” or whatever you named your resource. From here you can view your “Keys”, or if you need the endpoint URL, select “Overview” and you’ll see a URL under the category of “Endpoint”, e.g. https://uksouth.api.cognitive.microsoft.com/text/analytics/v2.0 (the URL only changes depending on the location you selected, e.g. https://[location].api.cognitive.microsoft.com/text/analytics/v2.0).
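Before wiring the resource into a PowerApp, it can be handy to confirm the key and endpoint actually respond. A minimal sketch against the v2.0 sentiment operation of the endpoint shown above (the key, region and sample text are placeholders) might look like this:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class TextAnalyticsSmokeTest
{
    public static async Task RunAsync()
    {
        var client = new HttpClient();
        // Key from the "Keys" blade; the base URL comes from the "Overview" blade of the resource.
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your key>");

        string body = "{ \"documents\": [ { \"id\": \"1\", \"language\": \"en\", \"text\": \"I love this service!\" } ] }";
        var content = new StringContent(body, Encoding.UTF8, "application/json");

        // Same base URL as in the post, with the sentiment operation appended.
        HttpResponseMessage response = await client.PostAsync(
            "https://uksouth.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment", content);

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}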

Now to create the PowerApps data source – find part two here.

Hack4Good – My First Hackathon

Hack4Good Group Photo

TL;DR

I’ll warn you – this is a long read! To summarise though – this Community is beyond awesome and the Hack4Good event just proved that we can genuinely change the world.

The Hype

When TDG announced that there was to be a hackathon in London, with the focus of it being the Non-Profit/Charity sector, I was straight in there on the registration (after which Mrs H was then informed that I was booked in – easier to ask forgiveness than seek permission).

This was to be my first ever hackathon (a year ago I hadn’t even HEARD of hackathons) and it ticked so many boxes for me. For those who don’t know, it’s not hacking in the sense of breaking into systems etc – this is all about using software and platforms that are at your disposal to hack together a solution to a scenario within a given time limit. The most innovative, practical, deliverable, and potential-filled solution would win the day.

When the emails started to come out Chris asked (in typical CAPS LOCK STYLE) if I would lead a team. Me being me, I jumped at the chance – in for a penny, in for a pound.

And so the excitement began. Weeks turned into days, and my poor family and friends got fed up with hearing how stoked I was. When I saw the list of other team leaders, and saw the people who were on my team, I started to question my credentials. There were so many legends of the community involved – people I look up to, and follow with eagerness and anticipation.

The Buildup

At 5:30am on Saturday 16th February, loaded with snacks and tech, I headed towards the railway station. Nerves meeting with excitement, doubts meeting determination.

Arriving just before 8am, I was struck by what a strange space, on first impressions, the Microsoft Reactor in London is. A fully stocked drinks area with the stereotypical caffeine overload available, a games area, and then a large open space with tables and a large video screen. It almost seemed spartan in its simplicity.

As everyone started to arrive, and we set up our various laptops and devices, that open space suddenly became this hive of technology and potential.

Hugs and Hellos were dished out with abandon, and cries of “It’s so good to meet you at last” were deafening in their abundance. I moved from person to person and finally got to meet people who I’d talked to online or who I’d been following for ages. I was even surprised to find people who wanted to meet me!

The Morning

With typical fervour and energy the trio of Chris Huntingford, Kyle Hill and William Dorrington (who had come over for the start despite having removal lorries outside his front door!) kicked off the day.

A surprise video message from James Phillips, Corporate Vice-President of Microsoft, impressed upon all of us just how much the community is noticed by Microsoft and raised the expectations of all in the room another notch. If our dials were at 11 before that video, they were at 12 afterwards – and even Spinal Tap didn’t get to 12!

I’ll be honest at this point and admit that I can’t remember who presented exactly what and when – my mind was a maelstrom of ideas and planning.

The engaging Architect and Storyteller Alex Rijnoveanu (@WomanVsTech) from Microsoft delivered enthusiasm and encouragement.

The very funny, and trying-not-to-be-as-sweary, Sarah Critchley (@crmcat) presented in a way that only she could – with an idea about helping out stray cats using PowerApps and other bits.

m-hance presented alongside Solent Mind, and I related to what they did in a huge way because of the work I see in my day job at St. Andrew’s Healthcare. It was a sobering presentation in many ways, but it also opened our eyes to “the art of the possible”.

Saurabh Pant and Sameer Bhangar had flown in from Microsoft (yes, all the way from Seattle) just for this event and then threw away their planned roadmap presentation to give us all a major pep talk and stir us up even more. I have to say that the funniest thing was their very friendly (and also slightly sweary) rant about how much they had heard about Samit Saini in the past year! In so doing, it just served to show us all just what was possible – those who knew Samit’s journey smiled and laughed, and those who didn’t had their eyes opened to a new level of potential.

Quantiq presented some of the work they had done with the Leonard Cheshire charity and also gave a glimpse of their toolkit for healthcare, and the ideas kept flowing. As I looked around at the other teams I could see people taking notes, typing away, and whispering to each other. This hackathon was going to be competitive, but boy, was it going to deliver some amazing results.

I’ll apologise now to all the presenters as I haven’t done you justice in my few words, and I may have mangled your presentations up, but believe me when I say that all the presentations hit home with all of us listening. Those presentations took our plans, determination, and enthusiasm up to levels you just wouldn’t believe if you weren’t there!

Let The Hacking Commence

With a final presentation to lay down the rules of engagement, and also to make it clear that stopping for lunch was most definitely not an option, the starter’s gun was fired and the 4.5 hours of planning, building, and preparing began.

The buzz in the room was electric as each team discussed and planned out their scenario, grabbing whiteboards and white space to map out what a solution could look like.

I’ll be writing more about the Team White proposal in the coming days, as there is more to come from that, but we settled on a solution that would utilise so much of the stack but would be able to be modularised and deployed as a “solution-in-a-box” approach.

With my amazing team of Penny, Josh, Denis and Raj we set about building Microsoft Forms, PowerApps, Dynamics 365 solutions, Flows, and the concept of the HoloLens. Oh yes, Gadget King Raj had brought us a HoloLens – and that just expanded the possibilities for us. We weren’t looking at gimmicks and tech-for-tech’s-sake, we were looking at a genuinely life-changing solution using some amazing software and hardware.

With a soundtrack of some amazing 80’s rock being pumped out (and yes, thanks Chris for Rickrolling us!), everyone was doing something. If you could have harnessed the energy in that room at that point you would have been able to power half of London.

Floor walkers popped by each of the teams, each one listening and absorbing before offering advice, help, suggestions and more – but what was even more amazing was that the teams were all talking to each other. You read that right, the teams all talked to each other.

There was sharing of scenarios, encouragement, suggestions for improvement or additions, and helping hands. This was a competition that was like no other. This was a competition in which we ALL wanted to see every team achieve their goals. I’m a mildly (ok, seriously) competitive person at times and yet there was no sense of barging past each other to reach the finish line. This was collaboration and cooperation in competition towards a common goal.

The Winners

And with 4 and a half hours gone in the blink of an eye, the race was run. It was time to do the 5(ish) minute speed-dating presentation of the solutions.

As each team stepped up and shared I really do not know how I held it together. These were genuine scenarios, delivered with innovative solutions, and by passionate people.

Every last one.

We all watched, applauded and cheered. None of us could separate the competition. Judging was going to be tough, and so it proved.

With our hosts waffling as much as possible whilst the judges adjudicated, we all sat wondering just who it would be. We all wanted to win, but we all knew that whoever did win was fully deserving of it.

With the decision made, the announcement came that Team Grey (who had flown over from Germany to take part!) had won, with an app for rounding up as you ordered food or transport and donating the difference to your charity of choice. Writing that makes it sound simplistic, but if you think of the implications of it you soon realise that it has massive potential.

It Is NOT Over!

The final speeches and thank you’s were made, the applause leaving hands feeling rather raw and sore, but this isn’t the end. Every proposition in the room has legs, and every person in the room knew that this couldn’t stop just because the clock had run down.

Saturday saw the start of something, the spark that starts a fire. We all felt it, and reading all the posts on Twitter and LinkedIn etc. after the event just reaffirms that determination.

We saw not a glimpse, but rather a bright shining beacon of the power of the community. I go on and on (and on) about Community but what happened in that room on Saturday, with just a part of the enthusiastic and passionate community present, just proved what we can all achieve if we put our minds to it.

Here at TDG we have the Community Collaboration Portal for working on community projects together, and there’s the Power Platform Bank for making solutions available, and then there’s all the social media channels out there as well.

Let’s turn this spark into a raging fire of change. Let’s use our collective skills to build new solutions to old problems.

Oh, and let’s do this again real soon!

 

 

 

New Admin Center – PREVIEW – System Settings Walk through

PLEASE NOTE: THIS POST IS BASED ON A PUBLIC PREVIEW FEATURE.

The video will give you a good idea of what is currently available within the new Admin Center preview system settings within the PowerApps user interface. Microsoft are always adding, updating and improving. This video is no holds barred. If a screen doesn’t work, it doesn’t work… But better to know than not to know. Check it out here…

You can download the PDF version of the presentation HERE. This will give users/admins the ability to map the paths in the new UI to the current navigation paths in the classic UI.

I really hope this helps someone out in the quest for PowerApps knowledge and appreciation.

As always, please contact me or leave a comment if you need more information.

UnBoxing the Power Platform Model-Driven Form Designer Preview

I saw recently on Twitter that the WYSIWYG Model-Driven form editor is in preview and I thought, “HEY, why not do a video without ever having opened or tested this functionality?”… SO that’s exactly what I did. It’s a little bit all over the show, and as I find stuff I get more excited. I had to edit out a cough a few times, but other than that, this is raw footage of a totally new experience for me (gulp).

My opinion is that this is a great preview and I think it is really going to make config a lot easier when it’s complete. Yes, there are things missing, but it’s a PREVIEW. It was fun to test it and I’ll be doing my best to use it as often as I can so I can get used to it.