Improve call center efficiency using Dynamics 365 and Azure Cognitive Services

Photo by Hrayr Movsisyan on Unsplash

I am fascinated by the sophistication of Azure services and how they help us improve our solutions and extend the ways we can solve customer problems. Recently I had a requirement to implement a Dynamics 365 solution to enable a call center to capture cases while their operators are offline.

One solution was to provide a self-service portal where customers could log cases while call center operators are offline. But in this case the customer was looking for something very quick to implement, with the ability to link incoming cases to their call center channel and derive some reporting from it.

Approach

I started looking at Azure services to see how I could use Azure Cognitive Services and speech recognition to solve this requirement, and as always Azure did not disappoint. In this post I would like to share my experience with you and walk you through the steps you would need to create such a solution. Of course, the possibilities are endless; however, this post will give you a starting point to begin your journey.

I have seen solutions where telephony systems send voice recordings of callers as email attachments to a queue in CRM. The CRM then converts the queue item to a case and attaches the voice recording as a note on the case. The challenge with this solution is that call center operators have to open the attachments manually and write the case description after listening to the audio file. This means their time is spent on inefficient activities when it could be utilized in better ways.

Another problem with this approach is the size of the attachments. As time goes by, audio attachments increase the database size, impacting the maintenance of the solution.

Scenario

Our scenario is based on the fact that call center agents do not work 24 hours a day.

While agents are offline, customers should still be able to contact the call center and record voice messages that are turned into cases.

We will use the following components:

  1. Azure Blob Storage to receive recorded audio files from the telephony system.
  2. Azure Cognitive Services to listen to the recorded audio files and transcribe the content to text. The audio file stays in Azure Blob Storage (which is cheaper than CRM database storage).
  3. An Azure Function (with an Azure Blob binding) to recognize the text from the audio file and extract the case description.
  4. The Dynamics 365 Web API to create a case in CRM using the description extracted by Azure Cognitive Services. We can also map blob metadata such as the file name to case properties.
Solution Architecture

The full source code is available on GitHub.

However, the main code snippet that performs the conversion is below:

public static async Task<string> RecognitionWithPullAudioStreamAsync(string key, string region, Stream myBlob, ILogger log)
{
    // Creates an instance of a speech config with the specified subscription key and service region.
    // Replace with your own subscription key and service region (e.g., "westus").
    var config = SpeechConfig.FromSubscription(key, region);

    string finalText = string.Empty;
    var stopRecognition = new TaskCompletionSource<int>();

    // Create an audio stream from the WAV blob.
    using (var audioInput = Helper.OpenWavFile(myBlob))
    {
        // Creates a speech recognizer using audio stream input.
        using (var recognizer = new SpeechRecognizer(config, audioInput))
        {
            // Subscribes to events.
            recognizer.Recognizing += (s, e) =>
            {
            };

            recognizer.Recognized += (s, e) =>
            {
                if (e.Result.Reason == ResultReason.RecognizedSpeech)
                {
                    finalText += e.Result.Text + " ";
                }
                else if (e.Result.Reason == ResultReason.NoMatch)
                {
                    log.LogInformation($"NOMATCH: Speech could not be recognized.");
                }
            };

            recognizer.Canceled += (s, e) =>
            {
                log.LogInformation($"CANCELED: Reason={e.Reason}");
                if (e.Reason == CancellationReason.Error)
                {
                    log.LogInformation($"CANCELED: ErrorCode={e.ErrorCode}");
                    log.LogInformation($"CANCELED: ErrorDetails={e.ErrorDetails}");
                    log.LogInformation($"CANCELED: Did you update the subscription info?");
                }
                stopRecognition.TrySetResult(0);
            };

            recognizer.SessionStarted += (s, e) =>
            {
                log.LogInformation("\nSession started event.");
            };

            recognizer.SessionStopped += (s, e) =>
            {
                log.LogInformation("\nSession stopped event.");
                log.LogInformation("\nStop recognition.");
                stopRecognition.TrySetResult(0);
            };

            // Starts continuous recognition. StopContinuousRecognitionAsync() stops it below.
            await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);

            // Waits for completion. Task.WaitAny keeps the task rooted.
            Task.WaitAny(new[] { stopRecognition.Task });

            // Stops recognition.
            await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);

            return finalText;
        }
    }
}
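The snippet above only returns the transcript. Creating the case itself is then a small Web API call. Here is a minimal, hedged sketch of that step; "accessToken", "blobName", "description" and the organization URL are assumptions (it also needs System.Net.Http, System.Net.Http.Headers, System.Text and Newtonsoft.Json.Linq):

// Minimal sketch: create a case (incident) through the Dynamics 365 Web API.
// Assumptions: "accessToken" was already acquired for the CRM API,
// "description" holds the text returned by RecognitionWithPullAudioStreamAsync,
// and "blobName" / the org URL are placeholders.
using (var http = new HttpClient())
{
    http.BaseAddress = new Uri("https://yourorg.crm.dynamics.com/api/data/v9.1/");
    http.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", accessToken);

    var caseRecord = new JObject
    {
        ["title"] = "Voicemail: " + blobName,   // blob metadata mapped to a case property
        ["description"] = description           // transcript from the Speech service
    };

    var response = await http.PostAsync(
        "incidents",
        new StringContent(caseRecord.ToString(), Encoding.UTF8, "application/json"));
    response.EnsureSuccessStatusCode();
}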

Important considerations:

  1. [This point is optional if you use the Web API to create cases in CRM] You will need to use a multi-tenant configuration if your Azure Function tenant and the tenant in which your CRM API is registered are different. If they are the same tenant, you can use a single-tenant configuration.
  2. The input file from the telephony system to Azure Blob Storage must be in a specific format. The required format specification is:

     File Format: RIFF (WAV)
     Sampling Rate: 8000 Hz or 16000 Hz
     Channels: 1 (mono)
     Sample Format: PCM, 16-bit integers
     File Duration: 0.1 seconds < duration < 60 seconds
     Silence Collar: > 0.1 seconds

 

  3. You can use the ffmpeg tool to convert your recording to this specific format. For your testing, you can download and use the tool as follows:
Download ffmpeg from this link.
Use the command: ffmpeg -i "<source>.mp3" -acodec pcm_s16le -ac 1 -ar 16000 "<output>.wav"
  4. My sample on GitHub covers input as one single chunk of audio. However, if you wish to handle continuous streaming, you will need to build on the StartContinuousRecognitionAsync method.
  5. The Azure Function should be configured with a Blob trigger; a minimal sketch of the trigger signature follows below.
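For item 5, a sketch of such a Blob-triggered function signature could look like the following; the container name "voicemails" and the app-setting names are assumptions:

[FunctionName("ProcessVoicemail")]
public static async Task Run(
    [BlobTrigger("voicemails/{name}", Connection = "AzureWebJobsStorage")] Stream myBlob,
    string name,
    ILogger log)
{
    // Key and region for the Speech service, read from application settings (assumed names).
    string key = Environment.GetEnvironmentVariable("SpeechKey");
    string region = Environment.GetEnvironmentVariable("SpeechRegion");

    string description = await RecognitionWithPullAudioStreamAsync(key, region, myBlob, log);
    log.LogInformation($"Transcribed blob {name}: {description}");
    // ...create the case via the Web API using the description...
}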

Remove Flow Ribbon Button

Hey D365’ers! Welcome to {{ quirk.works }}, a blog series where we try to solve a Dynamics 365 or Power Platform problem with an unconventional solution. For the first part of the series, let’s try to remove the Flow ribbon button.

So you want to remove the Flow ribbon button?

Yes sir, we don’t want our users to see it, as per our requirements document.

Tried removing it with Ribbon Workbench?

Ribbon buttons are easy to hide/show using the Ribbon Workbench. Just select the ribbon button, right-click, and click Hide.

Also hide the rest of the ribbon button’s sub-menu.

And there you go. What? It’s not hidden, is it?

Here’s a {{ quirk.works }}

There is a quirk with how the Flow ribbon button can be removed.

Click the cog icon on the upper right side of the screen and click Advanced Settings.

Go to Settings > Administration

Go to System Settings.

Open the Customization tab. There you have it: select No for the option Show Microsoft Flow on forms and in the site map. Click the OK button and we’re all set.

Conclusion

Not every ribbon button can be hidden using the Ribbon Workbench; sometimes you have to look outside the XrmToolBox. Just playing with some puns, no offense meant to these great tools, as I admire the awesome work from Scott and Tanguy. Until our next {{ quirk.works }}, stay tuned D365’ers. I’ll keep you posted.

 

How to upload PowerApps Audio into SharePoint?

Problem: Recently I had a requirement to upload recorded audio from PowerApps to SharePoint.

The requirement seems straightforward if you have basic knowledge of PowerApps and Microsoft Flow, doesn’t it? But there is a catch (let’s get to that in a bit :-)).

Just for the completeness of this blog, I will reiterate certain hows.

How to record and listen to audio in PowerApps?

In PowerApps we have a media control named Microphone to record audio, and another media control named Audio to listen to the recorded audio.

Further on how to do that:

In the Microphone control’s (I named it MyMic) OnStop property, collect the recorded audio as follows:

ClearCollect(collInspectionAudio, MyMic.Audio);

In the Audio control’s Media property, place the collection you used to collect the recorded audio earlier, as follows:

First(collInspectionAudio).Url

We now know how to record and listen to audio within PowerApps. Now let’s discuss the actual challenge: how to upload it to SharePoint.

Analysis

We use Microsoft Flow to upload to SharePoint. In my Microsoft Flow it looks straightforward to pass the audio as First(collInspectionAudio).Url to my SharePoint file content, as shown below.

When I execute the PowerApp, and hence my Flow, it creates an audio file in my SharePoint, but the file doesn’t play. When I look at the outcome of the Flow, I find it strange, because the file content looks like this:

When I drilled down further into the problem by checking the data types in the Flow run, I realized that the Create file action of the SharePoint connector expects binary format, whereas PowerApps sends its audio content in byte format.

Solution:

So the only way around this is to accept a byte parameter from PowerApps and use it to upload the file to SharePoint.

For that, I used the Outlook connector to send an email. When I ask for attachment parameters in PowerApps, it can send them in byte format.

Note: If you have a requirement to email the audio file along with uploading it to SharePoint, then that’s amazing. If you don’t have such an emailing requirement, then another slight hack is required.

  1. Create a condition which never becomes true (in my case, 100 equals 200, which is never true ;-)).
  2. In the true branch, call the Outlook connector and ask for parameters for Attachment Name and Attachment Content.
  3. In the false branch, call the SharePoint connector and create a file using the same parameters from Step 2.

Final working Microsoft Flow looks like this:

Warm Regards,

Pavan Kumar Garlapati

 

 

How to Enable the PowerApps AI Builder

Hey, Will here – just a quick post. I’ve had quite a few people come up to me asking how to enable the new “AI Builder” for Power Apps in their environment – which strikes me as slightly strange, as it comes enabled by default. However, here is how you do it.

 

  1. Go to: https://admin.powerplatform.microsoft.com/ 
  2. Then using the navigation pane select “Environments” then select the environment you wish to turn the ‘AI Builder’ on for:
  3. Then select “Settings”:
  4. Then select “Features”:
  5. Then enable the PowerApps AI Builder:

Now go forth and build something AWESOME!

Converting Dynamics’s Geolocation To SQL Geolocation Using Microsoft Flow And Azure Function

Background

One of the awesome features of the Azure Search service is the ability to search information based on location. Azure Search processes, filters, and displays geographic locations. It enables users to explore data based on the proximity of a search result to a physical location. This feature is powered by the SQL Server Geolocation data type. Since SQL Server 2008, developers have been able to store geospatial data in SQL Server using Geolocation fields, which allow querying data with location-based queries. To enable the Azure Search service to search within CRM accounts and contacts, I had to push my account and contact searchable information to a SQL Server hosted in Azure. To copy the information from Dynamics to Azure SQL Server, I used Microsoft Flow. Everything worked well except copying the CRM longitude and latitude to SQL Server.

The problem

The problem with copying longitude and latitude into a SQL Server Geolocation field is compatibility: when you try to insert plain longitude and latitude values into a Geolocation field, you encounter a casting error.

The solution

The solution I used to tackle this problem makes use of an Azure Function to convert the longitude and latitude to the Geolocation type and return the response before the Insert action in the Flow. See the steps below:

  1. Step 1 is self-explanatory.
  2. The “CC Contact” step extracts the contact name (or any lookup name property) from a lookup.
  3. The “Http” step calls the Azure Function to convert the CRM longitude and latitude to a SQL Geolocation value.
  4. The “Insert Row” step inserts our data into a SQL Server row.
Microsoft Flow

The Azure Function

The Azure Function is very simple. You will need to import the Microsoft.SqlServer.Types NuGet package and use the code below:
using System;
using System.IO;
using Microsoft.AspNetCore.Mvc;
using Microsoft.SqlServer.Types;
using Newtonsoft.Json;

// Inside the function's Run method:
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
Coordinates data = JsonConvert.DeserializeObject<Coordinates>(requestBody);
SqlGeography point = data.GetGeography();
return (ActionResult)new OkObjectResult($"{point}");

public class Coordinates
{
    public double Longitude { get; set; }
    public double Latitude { get; set; }

    public SqlGeography GetGeography()
    {
        try
        {
            // SRID 4326 = WGS 84, the coordinate system used by GPS.
            return SqlGeography.Point(Latitude, Longitude, 4326);
        }
        catch (Exception ex)
        {
            // Log ex and handle the exception, then rethrow without losing the stack trace.
            throw;
        }
    }
}
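For a quick test, the function can be called with a simple JSON body whose property names match the Coordinates class above (the function URL and route are placeholders):

POST https://<your-function-app>.azurewebsites.net/api/<FunctionName>
{ "Latitude": -37.8136, "Longitude": 144.9631 }

The response body is the well-known-text form of the point, e.g. POINT (144.9631 -37.8136), which the Flow’s Insert Row step can then pass to the SQL Geolocation column.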

 

 

Implementing Enterprise Search In Power Platform

Photo by Anthony Martino on Unsplash
Providing good search capabilities is a key feature in modern business applications, supporting usability and end-user satisfaction. We have seen the search capabilities of the Dynamics platform evolve from “Quick Find” and “Advanced Find” to “Relevance Search”. The goal of the platform’s search features has been to help users find the relevant information they need in the quickest and easiest form. These search features are out-of-the-box and easy to enable, configure and use. As the platform progresses to offer richer features and enables users to search better, the demand for richer and better search techniques grows, and we see instances where the platform cannot meet user demands with its out-of-the-box capabilities. Before going further into advanced search scenarios, you can read about the platform’s out-of-the-box search capabilities in this official documentation. In this article I share why we may decide to implement a custom search solution for our Dynamics solution using the Azure Search service.
In enterprise implementations, business applications are not the only systems used in the organization. We often see that call center agents and sales representatives need to obtain their required information from various systems to serve customers. Searching every system is a cumbersome job for users and may cause setbacks in end-user adoption. Integrating Dynamics with Azure Search consolidates search operations in one specialized search service, with the ability to connect to various data sources and apply modern search techniques to find the most relevant data. A practical example of this scenario can be seen in one of my recent experiences, where the organization’s users had to search for user information in CRM, SharePoint, Sybase and a pool of CSV files.

Customized Search experience

To facilitate user adoption, customized search techniques are highly favorable. In all modern search engines we see “autocomplete”, “suggestions” and “highlighting” features, which can be added to your Dynamics solution’s search experience. Displaying search results with support for “document preview”, “document opening in a customized container”, “facets”, “filtering” and “sorting” are examples that enhance your Dynamics solution’s capabilities.

Customized Search Behavior

The true power of search is demonstrated when different pieces of information are linked together to make sense of a bigger picture. Extracting words and sentences from documents (including images and PDF files), and extracting key phrases, people names, location names, languages and other custom entities with the help of AI, are other unique features you can add to your Dynamics solution’s search capabilities. Another amazing search capability you can have in your Dynamics implementation is the ability to search based on geolocation information, i.e. you can search for all your partner network from CRM or get the location of your field service force. The beauty of implementing your own enterprise search lies in the fact that you can search information across your data stores and link it using AI to generate knowledge and better insight into your data.

Customized Search Result

Another reason for customized search in your Dynamics solution is the ability to refine your search result profile. When you use AI in your search, the system gives you the power to see how relevant search results are to your search keywords. Knowing this, you can refine your search profiles to generate different results for the same keywords. This way you train the AI engine to work better for you and enable users to get more accurate search results.
Architecture

Dynamics can be integrated with the Azure Search service along the following patterns:

 

  1. Integration through web resources: these web resources host a web application acting as a client to the search service. The web resource can be an HTML file or an iFrame hosted on forms. The important points in this approach are to ensure cross-origin settings are configured in the client application and to write your HTML securely and according to best practices.
  2. Integration through custom Power Platform controls: you may build your own custom control which sends REST requests to Azure Search and displays the results by consuming the REST responses. The custom control can call the Azure Search service using actions or direct REST calls.
  3. Azure Search works based on indexes, and your first step is to push your CRM searchable data to Azure Search indexes. This can be done using Microsoft Flow, Azure Logic Apps, custom solutions or Azure Data Factory. I have used all of these tools in my implementations, and you can opt for any of them based on your requirements.
  4. Once the data is in your data store, you can create your indexes in Azure Search. You can go for separate indexes for each data source or combine multiple data sources in one index. Each approach has its own requirements, which will need to be met either in your client web application or in a separate Azure compute resource. Once indexing is done, you can use the Azure Search REST API directly, or put Azure API Management in front of it to expose your search service to your Dynamics solution. A minimal query sketch follows below.
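To give a feel for that REST surface, here is a minimal, hedged sketch of querying an index from C#; the service name, index name, query key and api-version are all placeholder assumptions:

// Minimal sketch of querying an Azure Search index over REST.
// "my-service", "my-index" and the api-key are placeholders; the
// api-version shown was current at the time of writing and may differ.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class SearchClientSketch
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            http.DefaultRequestHeaders.Add("api-key", "<your-query-key>");

            var url = "https://my-service.search.windows.net/indexes/my-index/docs" +
                      "?api-version=2019-05-06&search=contoso&highlight=name&$top=10";

            var json = await http.GetStringAsync(url);
            Console.WriteLine(json); // JSON payload containing the matching documents
        }
    }
}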
Summing this all up, you can see that as business application products get more sophisticated and organizations move from data to big data, engineers must look for innovative approaches to implement Dynamics solutions. Microsoft Azure, along with the Dynamics platform, offers solution architects the tools necessary to design such solutions.

Use FetchXml with Flow CDS list records steps

Yesterday I noticed that there is a new CDS connector in my environment. It’s called the CDS Current Environment connector. With the current environment connector, you don’t select your environment – it just uses the environment in which the flow resides. And my favorite thing about the new connector: it lets you use FetchXML queries in List Records steps!
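As an illustration, a hypothetical FetchXML query like the one below could be pasted into the step’s Fetch Xml Query field (the entity and attribute names are just examples):

<fetch top="50">
  <entity name="contact">
    <attribute name="fullname" />
    <attribute name="emailaddress1" />
    <filter>
      <condition attribute="statecode" operator="eq" value="0" />
    </filter>
  </entity>
</fetch>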

Farewell ODATA!

Flow, HTTP Actions, and Files

I am working on a new presentation sample project and I wanted to test invoking an HTTP request from a Flow. Specifically, I want to invoke a Function App from a Flow using an HTTP Flow Action. In my sample, I will kick this off when a new Note is created with an Attachment.

To quickly test calling the HTTP action, I used an existing Function App sample that I had worked on a few weeks ago: a small Function App I put together to test populating a PDF template using CRM data.

Poking around with Function Apps

This sample is creatively named CRMToPDF because it retrieves a record from CRM and populates a fillable PDF form from the CRM record using iText, returning the updated PDF for download. Pretty simple in terms of code, but it was a nice proof of concept for the iText libraries (more on that in another post!).

Since this Function App returns the PDF file as the response, I was curious how Flow would handle it. Could I “download” a file from a URL in Flow and attach it to an email using the Outlook email Action?

You bet I can!

With a few short steps, I was able to grab the resulting file and attach it to an email. This isn’t a huge surprise since so many Flow connectors already deal with moving files around. But it surprised me how simple it was to accomplish what I wanted to do.

So the Flow I created is really simple:

  • Trigger on a new Note
  • Invoke an HTTP Flow Action
  • Email the resulting PDF to myself

Here are the initial Flow steps, a Trigger and HTTP Action:

Trigger on new Note record, call HTTP GET

The trigger is on ALL Notes, so this would definitely change in the real world. And the HTTP GET only includes my Function App authorization key. In a real example, I would pass in additional parameters, such as the Note ID or Object ID, as an additional query parameter or as part of the request body.

The Outlook Email Action looks like this:

New Email using the HTTP Response

You can see that this Action is pretty straightforward. It’s just an email to myself from the owner of the Note. In Dynamics 365 CE, this means that the System User had to enable sending emails on their behalf, which is just a value under Personalization Settings. I just filled in a few bits of other info, like the Body and Subject.

The important part for me here is setting up the Attachment: set the Attachment Name to “CRM2PDF.pdf” and the Attachment Content to the Body of the HTTP Response.

That’s it. Yep. That’s all!

I first started looking at Flow a bit last year and wrote a short post about moving a document from Dynamics 365 CE to SharePoint, Flow Examples: Note attachment to SharePoint. That turned out to be relatively straightforward and a really cool Flow, but it had a few quirks, like converting the Note document body using the base64ToBinary function.

When I started looking at this sample, I expected some similar required steps, but setting the Body as the Attachment Content just worked. I put this entire Flow together in about 15 minutes, and it worked on the first try! (As a developer, I NEVER expect it to work on the first try!)

This tells me that the Flow engine is aware of the content type being returned by the HTTP GET and can handle it properly when moving between the actions. The Actions know how to work with the files between the source HTTP Action and the next Outlook email Action.

That sounds like another obvious comment, but it makes me happy as a developer not having to do any kind of manipulation or parsing or other coding magic! For an idea of what is being returned from the HTTP action, we can look at the Flow Test logs for the HTTP GET Action and open the Outputs:

This isn’t super complex JSON for most developers: HTTP response code, several headers, the filename, etc. But for non-developers, this could present an impossible roadblock. With the Flow designer and this huge library of existing Actions, a non-developer can point their Flow at a service endpoint and move files about without a single line of code.

That’s some powerful stuff.

Microsoft Flow – How to move SharePoint Online list items to folders

Let’s say you have a SharePoint list with folders organized by continent, and you also have a choice column with the continent name. You want to automatically move all list items with the chosen continent to the specific continent folder.

The first idea would be to use Microsoft Flow, but there is no out-of-the-box action that moves list items to folders. At the time of writing this post, there is an action to move files but nothing for list items. Even when thinking about using the SharePoint REST API with the ‘Send an HTTP request to SharePoint’ action, there is no endpoint to move list items to a folder.

This is not clear in the Microsoft documentation, but the same REST API endpoint and action that are available for files can actually be used for list items, through the item’s file property. This post shows how to use the SharePoint REST API to move items with Microsoft Flow, so that as soon as an item is created or edited, it is moved to the right location.

Basically, this Flow will do the following after being triggered:

1. Get the current list root folder path

2. Get the current item file name and current path

3. Build the new file path

4. Check if the path is actually new, and if so, move the item

To better understand the following steps, knowledge of the SharePoint REST API and Microsoft Flow expressions is helpful.

To start building the flow, the trigger used is ‘When an item is created or modified’.

In this example, variables are used to store the list title, destination folder name and site URL. Because the same values will be used in different actions, it is better practice not to leave them hard-coded. For example, if this flow is exported to another environment or copied for use on a different list, the changes will be minimal: only the initial variables and the trigger, instead of updating a bunch of actions with hard-coded values.

The next step is to get the list folder URL using the SharePoint REST API, via the ‘Send an HTTP request to SharePoint’ action with the GET method. The action was renamed to ‘GetRootFolder’ so it is easier to access its output later in expressions. All the variable actions in this example are renamed as well, just for easier maintenance. A sketch of this request is below.
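A hedged sketch of the GetRootFolder request, assuming the ListTitle variable from above:

Method: GET
Uri: _api/web/lists/getByTitle('@{variables('ListTitle')}')/RootFolder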

After this action, a new variable is initialized to store the list root folder URL with the following formula (use expressions to access the JSON content returned by the SharePoint REST API; with the body function and the action name you can access the content):

body('GetRootFolder')['d']['ServerRelativeUrl']

The next action gets the current list item’s file server-relative URL using the SharePoint REST API, as was done for the list root folder. In this case, the REST endpoint needs to be explicitly told to load the ‘FileRef’ and ‘FileLeafRef’ properties. A sketch of this request is below.
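A hedged sketch of this GetItem request, assuming the trigger provides the item ID:

Method: GET
Uri: _api/web/lists/getByTitle('@{variables('ListTitle')}')/items(@{triggerBody()?['ID']})?$select=FileRef,FileLeafRef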

The following variables are used to build the new list item folder URL, so the formulas below are used to assign values to the proper variables:

Item file name (set with the current file name only, used to build the final destination path):

body('GetItem')['d']['FileLeafRef']

Item Url (set with the current item server-relative URL, using the expression):

body('GetItem')['d']['FileRef']

New item Url (Root Folder/New Folder/Item File Name):

concat(variables('RootFolder'),'/',variables('MoveToFolderName'),'/',variables('ItemFileName'))

Then, the ‘Send an HTTP request to SharePoint’ action is used again, to call the files REST endpoint and move the item to the new location (yes, the same endpoint used for files).

It is validated that the file’s new folder path is different from the current one:

If it is, then a call to the File/moveTo API method is made in order to do the move:

This time a POST request is sent, because this call actually requests a change on the endpoint, which will take care of the item move (the action is also renamed to ‘MoveItem’). A sketch of this request is below.
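A hedged sketch of the MoveItem request, assuming the ItemUrl and NewItemUrl variables set above (flags=1 allows overwriting an existing file):

Method: POST
Uri: _api/web/getFileByServerRelativeUrl('@{variables('ItemUrl')}')/moveTo(newUrl='@{variables('NewItemUrl')}',flags=1)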

After this last action is called, the item is moved to the proper folder.

The final flow will look like this:

With this flow, regardless of where the items are saved, they will be moved to the proper folder after being created or edited.

Hack4Good – My First Hackathon

Hack4Good Group Photo

TL;DR

I’ll warn you – this is a long read! To summarise though – this Community is beyond awesome and the Hack4Good event just proved that we can genuinely change the world.

The Hype

When TDG announced that there was to be a hackathon in London, with the focus being the non-profit/charity sector, I was straight in there on the registration (after which Mrs H was informed that I was booked in – easier to ask forgiveness than seek permission).

This was to be my first ever hackathon, a year ago I hadn’t even HEARD of hackathons, and it ticked so many boxes for me. For those who don’t know, it’s not hacking in the sense of breaking into systems etc – this is all about using software and platforms that are at your disposal to hack together a solution to a scenario within a given time limit. The most innovative, practical, deliverable, and potential-filled solution would win the day.

When the emails started to come out Chris asked (in typical CAPS LOCK STYLE) if I would lead a team. Me being me, I jumped at the chance – in for a penny, in for a pound.

And so the excitement began. Weeks turned into days, and my poor family and friends got fed up of hearing how stoked I was. When I saw this list of other team leaders, and saw the people who were on my team, I started to question my credentials. There were so many legends of the community involved – people I look up to, and follow with eagerness and anticipation.

The Buildup

At 5:30am on Saturday 16th February, loaded with snacks and tech, I headed towards the railway station. Nerves meeting with excitement, doubts meeting determination.

Arriving just before 8am I was struck by just how, on first impressions, the Microsoft Reactor in London is a strange space. Fully stocked drinks area, with stereotypical caffeine overload available, games area, and then a large open space with tables and a large video screen. It almost seemed spartan in its simplicity.

As everyone started to arrive, and we set up our various laptops and devices, that open space suddenly became this hive of technology and potential.

Hugs and Hellos were dished out with abandon, and cries of “It’s so good to meet you at last” were deafening in their abundance. I moved from person to person and finally got to meet people who I’d talked to online or who I’d been following for ages. I was even surprised to find people who wanted to meet me!

The Morning

With typical fervour and energy the trio of Chris Huntingford, Kyle Hill and William Dorrington (who had come over for the start despite having removal lorries outside his front door!) kicked off the day.

A surprise video message from James Phillips, Corporate Vice-President of Microsoft, impressed upon all of us just how much the community is noticed by Microsoft and raised the expectations of all in the room another notch. If our dials were at 11 before that video, they were at 12 afterwards – and even Spinal Tap didn’t get to 12!

I’ll be honest at this point and admit that I can’t remember who presented exactly what and when – my mind was a maelstrom of ideas and planning.

The engaging Architect and Storyteller Alex Rijnoveanu (@WomanVsTech) from Microsoft delivered enthusiasm and encouragement.

The very funny, and trying-not-to-be-as-sweary, Sarah Critchley (@crmcat) presented in a way that only she could – with an idea about helping out stray cats using PowerApps and other bits.

m-hance presented alongside Solent Mind, and I related to what they did in a huge way because of the work I see in my day job at St. Andrew’s Healthcare. It was a sobering presentation in many ways, but it also opened our eyes to “the art of the possible”.

Saurabh Pant and Sameer Bhangar had flown in from Microsoft (yes, all the way from Seattle) just for this event, and then threw away their planned roadmap presentation to give us all a major pep talk and stir us up even more. I have to say that the funniest thing was their very friendly (and also slightly sweary) rant about how much they had heard about Samit Saini in the past year! In so doing, it just served to show us all what was possible – those who knew Samit’s journey smiled and laughed, and those who didn’t had their eyes opened to a new level of potential.

Quantiq presented some of the work they had done with the Leonard Cheshire charity and also gave a glimpse of their toolkit for healthcare, and the ideas kept flowing. As I looked around at the other teams I could see people taking notes, typing away, and whispering to each other. This hackathon was going to be competitive, but boy was it going to deliver some amazing results.

I’ll apologise now to all the presenters as I haven’t done you justice in my few words, and I may have mangled your presentations up, but believe me when I say that all the presentations hit home with all of us listening. Those presentations took our plans, determination, and enthusiasm up to levels you just wouldn’t believe if you weren’t there!

Let The Hacking Commence

With a final presentation to lay down the rules of engagement, and also to make it clear that stopping for lunch was most definitely not an option, the starter’s gun was fired and the 4.5 hours of planning, building, and preparing began.

The buzz in the room was electric as each team discussed and planned out their scenario and then grabbing whiteboards and white space to map out what a solution could look like.

I’ll be writing more about the Team White proposal in the coming days, as there is more to come from that, but we settled on a solution that would utilise much of the stack and could be modularised and deployed as a “solution-in-a-box”.

With my amazing team of Penny, Josh, Denis and Raj, we set about building Microsoft Forms, PowerApps, Dynamics 365 solutions, Flows, and a HoloLens concept. Oh yes, Gadget King Raj had brought us a HoloLens – and that just expanded the possibilities for us. We weren’t looking at gimmicks and tech-for-tech’s-sake; we were looking at a genuinely life-changing solution using some amazing software and hardware.

With a soundtrack of some amazing 80’s rock being pumped out (and yes, thanks Chris for Rickrolling us!), everyone was doing something. If you could have harnessed the energy in that room at that point you would have been able to power half of London.

Floor walkers popped by each of the teams each one listening and absorbing before offering advice, help, suggestions and more – but what was even more amazing was that the teams were all talking to each other. You read that right, the teams all talked to each other.

There was sharing of scenarios, encouragement, suggestions for improvement or additions, and helping hands. This was a competition that was like no other. This was a competition in which we ALL wanted to see every team achieve their goals. I’m a mildly (ok, seriously) competitive person at times and yet there was no sense of barging past each other to reach the finish line. This was collaboration and cooperation in competition towards a common goal.

The Winners

And with 4 and a half hours gone in the blink of an eye, the race was run. It was time to do the 5(ish) minute speed-dating presentation of the solutions.

As each team stepped up and shared I really do not know how I held it together. These were genuine scenarios, delivered with innovative solutions, and by passionate people.

Every last one.

We all watched, applauded and cheered. None of us could separate the competition. Judging was going to be tough, and so it proved.

With our hosts waffling as much as possible whilst the judges adjudicated, we all sat wondering just who it would be. We all wanted to win, but we all knew that whoever did win was fully deserving of it.

With the decision made, the announcement came that Team Grey (who had flown over from Germany to take part!) had won, with an app for rounding up as you ordered food or transport and donating the difference to your charity of choice. Writing that makes it sound simplistic, but if you think of the implications you soon realise that it has massive potential.

It Is NOT Over!

The final speeches and thank-yous were made, the applause leaving hands feeling rather raw and sore, but this isn’t the end. Every proposition in the room has legs, and every person in the room knew that this couldn’t stop just because the clock had run down.

Saturday saw the start of something: the spark that starts a fire. We all felt it, and reading all the posts on Twitter and LinkedIn after the event just reaffirms that determination.

We saw not a glimpse, but rather a bright shining beacon of the power of the community. I go on and on (and on) about Community but what happened in that room on Saturday, with just a part of the enthusiastic and passionate community present, just proved what we can all achieve if we put our minds to it.

Here at TDG we have the Community Collaboration Portal for working on community projects together, there’s the Power Platform Bank for making solutions available, and then there are all the social media channels out there as well.

Let’s turn this spark into a raging fire of change. Let’s use our collective skills to build new solutions to old problems.

Oh, and let’s do this again real soon!

 

 

 

IoT Button: Process automation with Microsoft Flow using NodeMCU and Arduino IDE

 

In this article an IoT button will be developed and applied to the scenario of maintaining a coffee machine, using Microsoft Flow. However, it can easily be adapted to any other scenario or application.

Requirements

  • Access to Microsoft Flow or Azure Logic Apps
  • Arduino IDE
  • NodeMCU development board
  • Push Button
  • 1 x 330 Ω resistor
  • 1 x 1M Ω resistor
  • Jumpers
  • Breadboard
  • Micro USB cable

Set up the Microsoft Flow Environment

1) Microsoft Flow portal

Access Microsoft Flow, log in and click “My Flows”.

 

2) Create from blank

Click “Create from blank” to create a new workflow.

 

3) Request/Response

Give your Flow a name. Select the trigger “Request/Response”.

4) Method GET

In “Advanced Options”, choose the method “GET”.

 

5) Add an action

Click “Add an action” to add a new action.

 

6) Send an email

Choose the action “Office 365 Outlook – Send an email”.

7) Create Flow

Complete all required fields (as you wish), and then click “Create Flow”.

8) HTTP GET URL

Then copy and save the HTTP GET URL:

https://prod-32.westus.logic.azure.com:443/workflows/<ID>/triggers/manual/paths/invoke?api-version=2016-06-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<ID>

 

Hardware Setup

1) Building a circuit on a breadboard

Build the circuit like the one shown below.

Software

The ESP8266 NodeMCU comes with firmware that lets you program the chip with the Lua scripting language. But if you are already familiar with the Arduino way of doing things, you can also use the Arduino IDE to program the ESP. In this tutorial we’ll use the Arduino IDE.

 

Arduino IDE setup

1) Package ESP8266

Download the IDE and install it. Open the IDE, choose File -> Preferences, and in “Additional Boards Manager URLs” insert the URL “http://arduino.esp8266.com/stable/package_esp8266com_index.json”, then click “OK”. After these steps the download will start automatically. Once it is finished, restart the IDE.

 

Software Setup

Download the attached file “MicrosoftFlow_IoT_JoaoLucindo.zip” and replace the values:

  • SSID with your wireless network name
  • PASSWORD with your wireless network password
  • HOST with the part of the HTTP GET URL before :443, without the scheme (in this case “prod-32.westus.logic.azure.com”)
  • URL with the part of the HTTP GET URL after :443 (in this case “/workflows/<ID>/triggers/manual/paths/invoke?api-version=2016-06-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<ID>”)

 By doing that, the final code will be:

#include <ESP8266WiFi.h>
//Include the SSL client
#include <WiFiClientSecure.h>

// NodeMCU pin mapping reference:
// D0=16, D1=5, D2=4, D3=0, D4=2, D5=14, D6=12, D7=13, D8=15, D9=3, D10=1

int inPin = 16;  // pushbutton connected to digital pin D0 (GPIO16)
int val = 0;     // variable to store the read value

char ssid[] = "<SSID>";          // your network SSID (name)
char password[] = "<PASSWORD>";  // your network key

//Add a SSL client
WiFiClientSecure client;

void setup() {
  pinMode(inPin, INPUT);  // sets the pushbutton pin as input

  Serial.begin(115200);

  // Set WiFi to station mode and disconnect from an AP if it was previously connected
  WiFi.mode(WIFI_STA);
  WiFi.disconnect();
  delay(100);

  // Attempt to connect to the WiFi network:
  Serial.print("Connecting Wifi: ");
  Serial.println(ssid);
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    Serial.print(".");
    delay(500);
  }

  Serial.println("");
  Serial.println("WiFi connected");
  Serial.println("IP address: ");
  IPAddress ip = WiFi.localIP();
  Serial.println(ip);
}

void MicrosoftFlow() {
  char host[] = "prod-32.westus.logic.azure.com";

  if (client.connect(host, 443)) {
    Serial.println("connected");

    String URL = "/workflows/<ID>/triggers/manual/paths/invoke?api-version=2016-06-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<ID>";

    Serial.println(URL);

    // Fire the Flow trigger with a plain HTTP GET request
    client.println("GET " + URL + " HTTP/1.1");
    client.print("Host: "); client.println(host);
    client.println("User-Agent: arduino/1.0");
    client.println("");
  }
}

void loop() {
  val = digitalRead(inPin);  // read input value
  delay(200);

  if (val == HIGH) {
    MicrosoftFlow();
    delay(1000);
    setup();  // reconnect WiFi before the next press
  }
}

 

Now you can compile and upload the code from your computer to the device. You can see the result (after pressing the push button) in the picture below.


Download package for this from the Power Platform Bank – direct link here: LINK