How to detect car engine anomalies by analyzing engine noise?

Following my article on “Starting your exciting journey of Connected Field Service and Azure IoT Hub“, I started working on a practical scenario: measuring noise in your surroundings and generating alerts in #PowerPlatform. In this article I summarize all the resources required to implement such a scenario, along with my learnings. I hope this adds to the resources available in the community, so you can use it as a walkthrough to implement a practical scenario.

In this article, you will see what architectural components are required to implement this simple scenario using Azure IoT and Connected Field Service. The focus is on what happens under the hood of the platform. Luckily, with the Connected Field Service application everything is managed behind the scenes and you don’t need to worry much, but this walkthrough enables you to understand what options you have in such scenarios.

Scenario

The scenario is about connecting an MXChip IoT DevKit to your car (or any noisy place) and analyzing the noise level by recording the noise and sending it, in the form of a WAV stream, to an Azure IoT Hub. The Azure IoT Hub passes the data to an #Azurefunction which calculates the noise level using a simple formula, and the function calls a #MicrosoftFlow to create alerts in #PowerPlatform. This opens the door to endless scenarios.

  • The formula for calculating the noise level from a WAV file is extremely simple as well. There is plenty of scientific material on the topic, which you can read here, here, here and here.
  • That said, calculating a real noise level is not an easy task. There are many considerations involved, and if you want a real working model you will need to dig into audio analysis, which is beyond the scope of this demo.
  • It is possible, and often desirable, to calculate the noise level on the device and send only the alerts to Azure IoT. This reduces the traffic and the load on your Azure resources. However, for the sake of the experiment, I am sending all the noise data to Azure and calculating the noise level in the Azure function.
  • In this demo, I am not listening to the noise all the time. I start recording on a press of button A and send the noise data to Azure on a press of button B. I made this change to the scenario to demonstrate working with buttons on the MXChip and also to reduce the traffic to Azure.

Architecture

The architecture of this sample is very simple. I am using an IoT Hub and an Azure function to calculate and propagate the IoT events to the #PowerPlatform. On the device side, an Arduino application listens for noise and sends the recorded audio to the IoT Hub.

The diagram below shows a very comprehensive architecture for a connected field service, which can simply be implemented using the #ConnectedFieldService application. However, I wanted to implement it in a simpler way. Full details of the #ConnectedFieldService architecture can be found in this documentation.

Components

The logical diagram of components is shown below:

Arduino App

This component is a very simple program which reads the input from the microphone, ButtonA and ButtonB of the device and does the following:

  1. On startup, it initializes the device and gets ready to listen to surrounding noise. It also checks the connectivity to Azure.
  2. On press of ButtonA, it records the surrounding noise and stores the stream in a buffer.
  3. On press of ButtonB, it sends the stream in the buffer to Azure.

To implement this part of the application, you will need to take the following actions:

  1. Set up your MXChip device. Please refer to this link to start.
  2. Set up your Visual Studio environment. Please refer to this link.
  3. You will need to learn how to deploy your code to the MXChip device. The simplest way to upload your code to the device is to put your MXChip into configuration mode: every time you want to upload updated code, press and hold button A, press and release the reset button (while still holding A), and then release A. Now you are ready to upload your code.
  4. If you want to debug your code on the device, you can refer to this link.

Here is my sample code:


#include "AZ3166WiFi.h"
#include "DevKitMQTTClient.h"
#include "AudioClassV2.h"
#include "stm32412g_discovery_audio.h"
#define MFCC_WRAPPER_DEFINED
#define MODEL_WRAPPER_DEFINED
//Constants and variables- Start//
enum AppState
{
APPSTATE_Init,
APPSTATE_Error,
APPSTATE_Recording,
APPSTATE_ButtonAPressed,
APPSTATE_ButtonBPressed
};
// variables will change:
static AppState appstate;
static int buttonStateA = 0;
static int buttonStateB = 0;
static bool hasWifi = false;
static bool hasIoTHub = false;
AudioClass &Audio = AudioClass::getInstance();
const int AUDIO_SIZE = 32000 * 3 + 45;
char *audioBuffer;
int totalSize;
int monoSize;
static char emptyAudio[AUDIO_CHUNK_SIZE];
RingBuffer ringBuffer(AUDIO_SIZE);
char readBuffer[AUDIO_CHUNK_SIZE];
bool startPlay = false;
void SendMessage(char *message)
{
// Send message to Azure
if (hasIoTHub && hasWifi)
{
char buff[512];
// replace the following line with your data sent to Azure IoTHub
snprintf(buff, 512, message);
if (DevKitMQTTClient_SendEvent(buff))
{
Screen.print(1, "Sent...");
}
else
{
Screen.print(1, "Failure...");
}
delay(2000);
}
else
{
// turn LED on-off after 2 seconds wait:
Screen.print("NO BUTTON DETECTED");
delay(1000);
Screen.clean();
}
}
void setup()
{
// put your setup code here, to run once:
memset(emptyAudio, 0x0, AUDIO_CHUNK_SIZE);
if (WiFi.begin() == WL_CONNECTED)
{
hasWifi = true;
Screen.print(1, "Running!!!");
if (!DevKitMQTTClient_Init(false, true))
{
hasIoTHub = false;
return;
}
hasIoTHub = true;
// initialize the pushbutton pin as an input:
pinMode(USER_BUTTON_A, INPUT);
pinMode(USER_BUTTON_B, INPUT);
appstate = APPSTATE_Init;
}
else
{
hasWifi = false;
Screen.print(1, "No Wi-Fi");
}
}
void loop()
{
// put your main code here, to run repeatedly:
Screen.clean();
// while(1)
{
// read the state of the pushbutton value:
buttonStateA = digitalRead(USER_BUTTON_A);
buttonStateB = digitalRead(USER_BUTTON_B);
if (buttonStateA == LOW && buttonStateB == LOW)
{
//SendMessage("A + B");
}
else if (buttonStateA == LOW && buttonStateB == HIGH)
{
// WAVE FORMAT
Screen.clean();
Screen.print(0, "start recordig");
record();
while (digitalRead(USER_BUTTON_A) == LOW && ringBuffer.available() > 0)
{
delay(10);
}
if (Audio.getAudioState() == AUDIO_STATE_RECORDING)
{
Audio.stop();
}
startPlay = true;
}
else if (buttonStateA == HIGH && buttonStateB == LOW)
{
// WAVE FORMAT
if (startPlay == true)
{
Screen.clean();
Screen.print(0, "start playing");
play();
while (ringBuffer.use() >= AUDIO_CHUNK_SIZE)
{
delay(10);
}
Audio.stop();
startPlay = false;
SendMessage(readBuffer);
}
else if (buttonStateA == HIGH && buttonStateB == HIGH)
{
Screen.clean();
}
}
delay(100);
}
}
void record()
{
Serial.println("start recording");
ringBuffer.clear();
Audio.format(8000, 16);
Audio.startRecord(recordCallback);
}
void play()
{
Serial.println("start playing");
Audio.format(8000, 16);
Audio.setVolume(80);
Audio.startPlay(playCallback);
}
void playCallback(void)
{
if (ringBuffer.use() < AUDIO_CHUNK_SIZE)
{
Audio.writeToPlayBuffer(emptyAudio, AUDIO_CHUNK_SIZE);
return;
}
int length = ringBuffer.get((uint8_t *)readBuffer, AUDIO_CHUNK_SIZE);
Audio.writeToPlayBuffer(readBuffer, length);
}
void recordCallback(void)
{
int length = Audio.readFromRecordBuffer(readBuffer, AUDIO_CHUNK_SIZE);
ringBuffer.put((uint8_t *)readBuffer, length);
}

Azure function

This is the simplest part of all. All you have to do is receive the stream and calculate the noise level. The calculation can be made very sophisticated, but that is out of scope for this article.


using IoTHubTrigger = Microsoft.Azure.WebJobs.EventHubTriggerAttribute;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.EventHubs;
using Microsoft.Extensions.Logging;
using System;
using System.Linq;
using System.Net.Http;
using System.Text;

namespace IoTWorkbench
{
    public static class IoTHubTrigger1
    {
        private static HttpClient client = new HttpClient();

        [FunctionName("IoTHubTrigger1")]
        public static void Run([IoTHubTrigger("%eventHubConnectionPath%", Connection = "eventHubConnectionString")] EventData message, ILogger log)
        {
            byte[] buffer = message.Body.ToArray();
            log.LogInformation($"C# IoT Hub trigger function processed a message of {buffer.Length} bytes");

            // Skip the 44-byte WAV header, then compute the RMS of the 16-bit PCM
            // samples and convert it to decibels relative to full scale.
            const int headerSize = 44;
            double sumOfSquares = 0;
            int sampleCount = 0;
            for (int i = headerSize; i + 1 < buffer.Length; i += 2)
            {
                short sample16Bit = BitConverter.ToInt16(buffer, i);
                double normalized = sample16Bit / 32768.0;
                sumOfSquares += normalized * normalized;
                sampleCount++;
            }
            double rms = Math.Sqrt(sumOfSquares / Math.Max(sampleCount, 1));
            double decibels = 20 * Math.Log10(rms);

            log.LogInformation(decibels.ToString());
        }
    }
}
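The function above only logs the decibel level; the alert leg is not shown. A minimal sketch of how the function could call the Flow, assuming a hypothetical FLOW_URL application setting that holds the Flow's HTTP trigger URL (add using System.Threading.Tasks; and reuse the class's static HttpClient):

// Sketch only: posts the calculated noise level to a Microsoft Flow HTTP trigger.
// FLOW_URL is a hypothetical app setting holding the Flow's "When a HTTP request is received" URL.
private static async Task NotifyFlowAsync(double decibels, ILogger log)
{
    string flowUrl = Environment.GetEnvironmentVariable("FLOW_URL");
    var payload = new StringContent($"{{ \"noiseLevel\": {decibels} }}", Encoding.UTF8, "application/json");
    HttpResponseMessage response = await client.PostAsync(flowUrl, payload);
    log.LogInformation($"Flow responded with {response.StatusCode}");
}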

Handshaking

In order for the device to send messages to the Azure function, it must know the endpoint to which it should send the data. You can follow the steps in this link to register your device with the Azure function. It is all done using the Azure IoT Workbench.
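Once the device is registered, you can sanity-check the hub connection from a desktop before flashing the device. A minimal sketch using the .NET device SDK (NuGet package Microsoft.Azure.Devices.Client); the connection string is a placeholder you copy from the portal:

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class DeviceSmokeTest
{
    static async Task Main()
    {
        // Placeholder: the device connection string from your IoT Hub.
        string connectionString = "HostName=<your-hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>";
        DeviceClient device = DeviceClient.CreateFromConnectionString(connectionString, TransportType.Mqtt);

        // One test event: it should arrive on the hub's built-in endpoint
        // and fire the IoTHubTrigger function shown earlier.
        await device.SendEventAsync(new Message(Encoding.UTF8.GetBytes("test-noise-payload")));
        Console.WriteLine("Test message sent.");
    }
}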

References

https://docs.microsoft.com/en-us/dynamics365/field-service/developer/connected-field-service-architecture

Photo by Steinar Engeland on Unsplash

How to resolve “error Executing the api /eventhubs” in #MicrosoftFlow?

While trying to connect Microsoft Flow to Azure Event Hubs, you cannot retrieve the event hub name and instead you get the error “Error Executing the api /eventhubs”. The Event Hubs connector in Flow allows you to connect to an event hub using connection strings and to get notified as soon as a new event is available in the hub. However, there are certain things you will need to know.

Event Hub Namespace vs. Event Hub

An Event Hubs namespace provides a unique scoping container, referenced by its fully qualified domain name, in which you create one or more event hubs; event hubs live inside an Event Hubs namespace. Both the namespace and the event hub have their own connection strings, which can be used to access these resources. However, it is important to know that the Microsoft Flow connector for Event Hubs accepts the Event Hubs namespace's connection string rather than the event hub resource's connection string.

Error “Executing the api /eventhubs”

You will see the error below when you try to use an event hub resource's connection string.

The solution is to use the Event Hubs namespace's connection string.

To confirm whether your connection string is associated with your Event Hubs namespace or with a specific event hub, make sure the connection string doesn’t have the EntityPath parameter. If you find this parameter, the connection string is for a specific Event Hub “entity” and is not the correct string to use with your logic app.
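If you want to check this programmatically rather than by eye, a tiny hedged C# snippet is enough (the connection string shown is a made-up example):

// Sketch: a namespace-level connection string has no EntityPath;
// an entity-level one ends with something like ";EntityPath=myhub".
string connectionString = "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>";

bool isNamespaceLevel = !connectionString.Contains("EntityPath=");
Console.WriteLine(isNamespaceLevel
    ? "Namespace-level string: OK for the Flow connector."
    : "Entity-level string: use the namespace connection string instead.");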

Reference

https://docs.microsoft.com/en-us/azure/connectors/connectors-create-api-azure-event-hubs

Improve efficiency of Call centers using Dynamics 365 and Azure cognitive services

Photo by Hrayr Movsisyan on Unsplash

I am fascinated by the sophistication of Azure services and how they help us improve our solutions and extend the ways we can solve customer problems. Recently I had a requirement to implement a Dynamics 365 solution to enable a call center to capture cases while their operators are offline.

One solution was to provide a self-service portal where customers could log cases while call center operators are offline. But in this case the customer was looking for something very quick to implement, with the ability to link incoming cases to their call center channel and derive some reporting from it.

Approach

I started looking at Azure services to see how Azure Cognitive Services and speech recognition could help me solve this requirement, and as always, Azure did not disappoint. In this post I would like to share my experience with you and take you through the steps you would need to create such a solution. Of course, the possibilities are endless; however, this post will give you a starting point to begin your journey.

I have seen solutions where telephony systems send voice recordings of callers as email attachments to a queue in CRM. The CRM then converts each queue item to a case and attaches the voice recording as a note on the case. The challenge with this solution is that call center operators have to open the attachments manually and write the case description after listening to the audio file. Their time is spent on inefficient activities when it could be utilized in better ways.

Another problem with this approach is the size of the attachments. As time goes by, audio attachments increase the database size, impacting the maintenance of the solution.

Scenario

Our scenario is based on the fact that call center agents are not working 24 hours a day.

While agents are offline, customers should still be able to contact the call center and record voice messages that create cases.

We will use the following components:

  1. Azure Blob storage to receive recorded audio files from the telephony system.
  2. Azure Cognitive Services to listen to the recorded audio files and translate the content to a text message. The audio file stays in Azure Blob storage (which is cheaper than CRM database storage).
  3. An Azure function (with an Azure Blob binding) to recognize the text from the audio file and extract the case description.
  4. The Dynamics 365 Web API to create a case in CRM using the description extracted by Azure Cognitive Services (a sketch of this call follows the code snippet below). We can also add blob metadata, such as the file name, to case properties.
Solution Architecture

The full source code is available on GitHub.

However, the main code snippet that performs the conversion is below:

// Usings required by this snippet; the method lives in the function class in the GitHub sample.
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.Extensions.Logging;

public static async Task<string> RecognitionWithPullAudioStreamAsync(string key, string region, Stream myBlob, ILogger log)
{
    // Creates an instance of a speech config with the specified subscription key and service region.
    // Replace with your own subscription key and service region (e.g., "westus").
    var config = SpeechConfig.FromSubscription(key, region);
    string finalText = string.Empty;

    var stopRecognition = new TaskCompletionSource<int>();

    // Create an audio stream from the incoming WAV blob.
    // Helper.OpenWavFile comes with the sample on GitHub.
    using (var audioInput = Helper.OpenWavFile(myBlob))
    {
        // Creates a speech recognizer using audio stream input.
        using (var recognizer = new SpeechRecognizer(config, audioInput))
        {
            // Subscribes to events.
            recognizer.Recognizing += (s, e) =>
            {
            };

            recognizer.Recognized += (s, e) =>
            {
                if (e.Result.Reason == ResultReason.RecognizedSpeech)
                {
                    finalText += e.Result.Text + " ";
                }
                else if (e.Result.Reason == ResultReason.NoMatch)
                {
                    log.LogInformation($"NOMATCH: Speech could not be recognized.");
                }
            };

            recognizer.Canceled += (s, e) =>
            {
                log.LogInformation($"CANCELED: Reason={e.Reason}");
                if (e.Reason == CancellationReason.Error)
                {
                    log.LogInformation($"CANCELED: ErrorCode={e.ErrorCode}");
                    log.LogInformation($"CANCELED: ErrorDetails={e.ErrorDetails}");
                    log.LogInformation($"CANCELED: Did you update the subscription info?");
                }
                stopRecognition.TrySetResult(0);
            };

            recognizer.SessionStarted += (s, e) =>
            {
                log.LogInformation("\nSession started event.");
            };

            recognizer.SessionStopped += (s, e) =>
            {
                log.LogInformation("\nSession stopped event.");
                log.LogInformation("\nStop recognition.");
                stopRecognition.TrySetResult(0);
            };

            // Starts continuous recognition. Use StopContinuousRecognitionAsync() to stop recognition.
            await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);

            // Waits for completion. Use Task.WaitAny to keep the task rooted.
            Task.WaitAny(new[] { stopRecognition.Task });

            // Stops recognition.
            await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);

            return finalText;
        }
    }
}
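Once the transcription is available, the case itself is created through the Dynamics 365 Web API. A minimal sketch (the org URL is a placeholder, token acquisition is omitted, and in real code the description should be JSON-escaped or serialized properly; the GitHub sample has the full flow):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static async Task CreateCaseAsync(string description, string accessToken)
{
    using (var http = new HttpClient())
    {
        http.BaseAddress = new Uri("https://<your-org>.crm.dynamics.com/api/data/v9.1/");
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        // "incidents" is the entity set behind Cases in Dynamics 365.
        var payload = new StringContent(
            "{ \"title\": \"Case from voice message\", \"description\": \"" + description + "\" }",
            Encoding.UTF8, "application/json");

        HttpResponseMessage response = await http.PostAsync("incidents", payload);
        response.EnsureSuccessStatusCode();
    }
}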

Important considerations:

  1. [This point is optional if you use the Web API to create cases in CRM.] You will need to use a multi-tenant configuration if your Azure function's tenant and the tenant in which your CRM API is registered are different. If they are the same tenant, you can use a single-tenant configuration.
  2. The input file from the telephony system to Azure Blob storage must be in a specific format. The required format specification is:

Property          Value
File format       RIFF (WAV)
Sampling rate     8000 Hz or 16000 Hz
Channels          1 (mono)
Sample format     PCM, 16-bit integers
File duration     0.1 seconds < duration < 60 seconds
Silence collar    > 0.1 seconds

  3. You can use the ffmpeg tool to convert your recording to this specific format. For your testing, you can download and use the tool as below:
Download ffmpeg from this link.
Use the command: ffmpeg -i “<source>.mp3” -acodec pcm_s16le -ac 1 -ar 16000 “<output>.wav”
  4. My sample on GitHub covers input in one single chunk of audio. However, if you wish to have continuous streaming, you will need to build on the StartContinuousRecognitionAsync method.
  5. The Azure function should be configured with a blob trigger, as sketched below.
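A minimal sketch of that trigger wiring (the container name “recordings” is an assumption; rename it to match your storage account):

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class VoiceMailProcessor
{
    [FunctionName("VoiceMailProcessor")]
    public static void Run(
        [BlobTrigger("recordings/{name}")] Stream myBlob, // fires for each new audio file
        string name,
        ILogger log)
    {
        log.LogInformation($"Processing recording {name} ({myBlob.Length} bytes)");

        // Hand the stream to RecognitionWithPullAudioStreamAsync (shown above),
        // then create the case with the Web API sketch.
    }
}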

IoT Button: Process automation with Microsoft Flow using NodeMCU and Arduino IDE


In this article we will develop an IoT button applied to the scenario of maintaining a coffee machine, using Microsoft Flow. However, it can easily be adapted to any other scenario or application.

Requirements

  • Access to Microsoft Flow or Azure Logic Apps
  • Arduino IDE
  • NodeMCU development board
  • Push Button
  • 1 x 330 Ω resistor
  • 1 x 1M Ω resistor
  • Jumpers
  • Breadboard
  • Micro USB cable

Setup Microsoft Flow Environment

1) Microsoft Flow portal

Access Microsoft Flow, log in and click “My Flows”.

1.png

2) Create from blank

Click “Create from blank” to create a new workflow.

2.png

3) Request/Response

Give a name to your Flow. Select the Trigger “Request/Response”.

3.png

4) Method GET

In “Advanced Options”, choose “Method GET”.

4.png

5) Add an action

Click “Add an action” to add a new action.

5.png

6) Send an email

Choose the action “Office 365 Outlook – Send an email”.

6.png

7) Create Flow

Complete all required fields (as you wish), and then click “Create Flow”.

ingles 1.png

8) HTTP GET URL

Then copy and save the HTTP GET URL:

https://prod-32.westus.logic.azure.com:443/workflows/<ID>/triggers/manual/paths/invoke?api-version=2016-06-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<ID>

8.png

Hardware Setup

1) Building the Circuit on a Breadboard

Build the circuit like the one shown below.

MicrosoftFlow-LogicApps-Button-Frittzing Project_bb.png

Software

The ESP8266 NodeMCU comes with firmware that lets you program the chip using the Lua scripting language. But if you are already familiar with the Arduino way of doing things, you can also use the Arduino IDE to program the ESP. In this tutorial we’ll use the Arduino IDE.

Arduino IDE setup

1) Package ESP8266

Download the IDE and install it. Open the IDE, choose File -> Preferences, insert the URL “http://arduino.esp8266.com/stable/package_esp8266com_index.json” in “Additional Boards Manager URLs”, and then click “OK”. After these steps, the download will start automatically. Once it is finished, restart the IDE.

9.png

Software Setup

Download the attached file “MicrosoftFlow_IoT_JoaoLucindo.zip” and replace the values:

  • SSID with your wireless network name
  • PASSWORD with your wireless network password
  • HOST with the part of the HTTP GET URL before :443 (in this case “https://prod-32.westus.logic.azure.com”)
  • URL with the part of the HTTP GET URL after :443 (in this case “/workflows/<ID>/triggers/manual/paths/invoke?api-version=2016-06-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<ID>”)

 By doing that, the final code will be:

#include <ESP8266WiFi.h>
#include <WiFiClientSecure.h>  // SSL client

// NodeMCU pin map (GPIO numbers):
// D0=16, D1=5, D2=4, D3=0, D4=2, D5=14, D6=12, D7=13, D8=15, D9=3, D10=1

int inPin = 16;  // pushbutton connected to digital pin D0 (GPIO16)
int val = 0;     // variable to store the read value

char ssid[] = "<SSID>";          // your network SSID (name)
char password[] = "<PASSWORD>";  // your network key

// SSL client
WiFiClientSecure client;

void setup() {
  pinMode(inPin, INPUT);  // sets the pushbutton pin as input

  Serial.begin(115200);

  // Set WiFi to station mode and disconnect from an AP if it was
  // previously connected
  WiFi.mode(WIFI_STA);
  WiFi.disconnect();
  delay(100);

  // Attempt to connect to the WiFi network:
  Serial.print("Connecting Wifi: ");
  Serial.println(ssid);
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    Serial.print(".");
    delay(500);
  }

  Serial.println("");
  Serial.println("WiFi connected");
  Serial.println("IP address: ");
  IPAddress ip = WiFi.localIP();
  Serial.println(ip);
}

// Call the Microsoft Flow HTTP trigger over SSL.
void MicrosoftFlow() {
  char host[] = "prod-37.westus.logic.azure.com";

  if (client.connect(host, 443)) {
    Serial.println("connected");

    String URL = "/workflows/<ID>/triggers/manual/paths/invoke?api-version=2016-06-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<ID>";
    Serial.println(URL);

    client.println("GET " + URL + " HTTP/1.1");
    client.print("Host: "); client.println(host);
    client.println("User-Agent: arduino/1.0");
    client.println("");
  }
}

void loop() {
  val = digitalRead(inPin);  // read the pushbutton state
  delay(200);

  if (val == HIGH) {
    MicrosoftFlow();
    delay(1000);
    setup();  // re-run setup to reset the Wi-Fi/SSL client state before the next press
  }
}

Now you can compile and upload the code from your computer to the device. You can see the result (after pressing the push button) in the picture below.

12 - Copy.png

Download package for this from the Power Platform Bank – direct link here: LINK

Streaming Data Sets into Dynamics 365 Customer Engagement

In this post, we are going to look at the challenge of displaying streaming data sets directly on a Dynamics 365 Customer Engagement form. While there already exists a way to embed Power BI dashboards and reports within Dynamics 365 Customer Engagement, these are not at the form level. To see how to do this currently, have a look here. When followed, you should observe results similar to the following, where a dashboard is initially displayed and you can then click through to the underlying report(s):
 
 
What you’ll notice from this is that these are personal dashboards that lack the ability to be contextually filtered. So to resolve this, we are going to create a Web Resource that can display a contextual (and streaming) dashboard on a Dynamics 365 Customer Engagement form!
 
To get started, let’s have a look at what this will look like architecturally:
 
 
From the architecture, you should notice that we need to create a custom HTML Web Resource that serves as a placeholder for the Power BI dashboard. When the form loads, we use JavaScript to process the incoming parameters, which can include both configuration and contextual data based on the record (form) that the Web Resource is rendered on. The JavaScript then calls a reusable Dynamics 365 Action that consumes the incoming parameters before calling a Dynamics 365 plugin. This plugin is necessary as it helps us execute a token exchange with Azure Key Vault based on the currently logged-in user. The token is then used to retrieve a specific secret containing the configuration needed to render the Power BI report contextually, and in an authenticated state, back on the Dynamics 365 Customer Engagement form.
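To make those moving parts concrete, here is a minimal sketch of the plugin shell. The Key Vault call is represented by a hypothetical helper (GetKeyVaultSecret), since the exact token-exchange code depends on your app registration; only the IPlugin plumbing is standard:

using System;
using Microsoft.Xrm.Sdk;

public class PowerBiConfigPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        // Input parameter supplied by the Action that the form's JavaScript calls.
        var secretName = (string)context.InputParameters["SecretName"];

        // Hypothetical helper: performs the token exchange for the current user
        // and reads the Power BI embed configuration from Azure Key Vault.
        string embedConfig = GetKeyVaultSecret(secretName, context.InitiatingUserId);

        // Hand the configuration back to the caller for rendering in the Web Resource.
        context.OutputParameters["EmbedConfig"] = embedConfig;
    }

    private static string GetKeyVaultSecret(string name, Guid userId)
    {
        throw new NotImplementedException("Key Vault token exchange goes here.");
    }
}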
 
Simultaneously, the Power BI dashboard receives a continuous stream of data from an MX Chip (IoT device) connected to an Azure IoT Hub. This stream is provided through the Stream Analytics service, which continually processes the incoming data and can send it as an output directly to Power BI before it is visualised. For reference, the Stream Analytics job should look something like this:
 
 
You will notice that there is a dedicated Power BI output in the above, and that we have limited the Stream Analytics job to look only for our MX Chip device. We also need a bit of DAX to format the incoming IoTAlert data to be a bit more readable. Examples of the incoming data, the DAX, and the Power BI configs are below:
 
 
As a result, we should now be able to see the streaming data set on the Dynamics 365 Customer Engagement form, after a bit of Power BI visualisation magic, as follows:
 
 
As we have parameterised the initial Web Resource on the form, this dashboard can pre-filter visuals should we wish, and can also easily be embedded on the form and record type of your choosing! The following video demonstrates the complete pattern in action: