Trial Environments for Power Apps Training

Want to get hands-on with the Power Platform but don’t have access to an environment? You can sign up for a free trial instance that provides 30 days of access to the various Power Platform components. Once set up, you can build Canvas or Model Driven Power Apps, Power Automate Flows, or a Power Apps Portal.

If you want guided training, Microsoft offers a series of events called App In A Day (AIAD). Power Platform AIAD partner-led training sessions have become very popular, and for good reason – they provide a hands-on introduction to the Power Platform led by excellent trainers.  And it’s all free!  If you can’t attend a session in person, you can just download the labs from this link and work at your own pace. Whether attending or working at your own pace, a Trial environment is usually required if you don’t have access to a Power Platform sandbox.

This post will provide some instructions for setting up your Office 365 and Power Platform trial environment. This is not an overly complex process, but it can be a bit confusing, depending on your level of experience.

You may also take on the role of instructor, whether for the AIAD or for internal team training. This means ensuring your attendees have an environment where they can work on their labs or other projects.

One option here is to have each attendee set up their own trial. Again, not overly complex, but it will depend on the audience and the time available. So another option is pre-deploying a series of environments for your attendees.

So we will first take a look at the steps to set up individual Office 365 and Power Platform trials. From there, we can look at how to set up a batch of 20 additional trial environments for a group of attendees.

Creating Office 365 PowerApps Trial Environments

The AIAD Trainer package includes instructions on setting up the free Trials. You will first set up an Office 365 E3 Trial and then add a Power Apps Plan 2 Trial to the new subscription. When I delivered the session, I reworked the notes a bit because the process had changed since they were originally written and some of the attendees were a bit confused. These notes also encourage attendees to use a non-work email address for the trial. This ensures they have the appropriate access to work the labs and protects against inadvertent changes to work subscriptions. The updated steps are:

Sign Up For Office 365 E3 Trial account at the following link – Office 365 E3 Trial

Welcome!
Welcome, let’s get to know you

We recommend you not use your work Office 365 email address to avoid confusion.  For example, I used an Outlook.com email address.  This email will be used to send you the new account details.

Create your user ID

Choose a unique name for your account. For a real instance, this would be your company name.  For our labs, you might choose something like admin and aiadrestontest:

User ID and password

You will receive a validation code to… Prove. You’re. Not. A. Robot.

Prove it!

Once you complete this step, you should receive an email with your account info.

Log in to your new Office Trial account and use the following link to set up the PowerApps trial – PowerApps Plan 2 Trial Offer

Check out

Here you just need to choose Try now!

Order receipt

From here, click the link to the Users page. It may take a moment or two to load.

Active users

Select the menu next to the key icon by clicking the vertical ellipses and choose Manage Product Licenses.

Licenses and Apps

In the Account details pane to the right, select Licenses and Apps.  Check the Microsoft PowerApps Plan 2 item and choose Save.

You should now be able to log in to the Power Apps portal at https://make.powerapps.com/home

Create additional Users and Environments

If you are just looking for an environment for your personal testing, you are all set! You have a new Office 365 and Power Apps trial with 25 available licenses. However, if you would like to set up multiple environments, you have more work to do. You could have each attendee repeat the steps above either before the session or as part of the lab prep. This can be challenging for some and may take time out of your training session.

Another option is to build out new Environments for each of the 24 additional licenses provided with this trial. You would need to do the following (a rough PowerShell sketch of these steps follows the list):

  • In the Office Admin center, create a new User
  • Assign the Office and Power Apps licenses
  • In the Power Apps admin portal, create a new Environment
  • Assign the new User admin rights to the Environment
  • Provision a new Database for the environment
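
If you want to script these manual steps yourself, here is a rough PowerShell sketch of the same flow. It assumes the MSOnline and Microsoft.PowerApps.Administration.PowerShell modules are installed; the UPN, password, license SKU name, and region below are placeholders you will need to adjust for your own trial tenant.

# Connect as the trial tenant admin (assumes Install-Module MSOnline and
# Install-Module Microsoft.PowerApps.Administration.PowerShell have been run)
Connect-MsolService
Add-PowerAppsAccount

# 1. Create a new User in the tenant (UPN and password are placeholders)
$upn = "user1@aiadrestontest.onmicrosoft.com"
New-MsolUser -UserPrincipalName $upn -DisplayName "User 1" -UsageLocation "US" -Password "Password!" -ForceChangePassword $false

# 2. Assign the Office and Power Apps licenses (SKU names vary by tenant; check Get-MsolAccountSku)
Set-MsolUserLicense -UserPrincipalName $upn -AddLicenses "aiadrestontest:ENTERPRISEPACK"

# 3. Create a new Trial Environment in the Power Platform
$env = New-AdminPowerAppEnvironment -DisplayName "User1-Dev" -LocationName "unitedstates" -EnvironmentSku Trial

# 4. Assign the new User admin rights to the Environment
$user = Get-MsolUser -UserPrincipalName $upn
Set-AdminPowerAppEnvironmentRoleAssignment -EnvironmentName $env.EnvironmentName -RoleName EnvironmentAdmin -PrincipalType User -PrincipalObjectId $user.ObjectId

# 5. Provision a new CDS Database for the Environment
New-AdminPowerAppCdsDatabase -EnvironmentName $env.EnvironmentName -CurrencyName USD -LanguageName 1033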

This can be time consuming, so the AIAD Trainer pack also includes some PowerShell scripts to automate the steps above. The script creates new user accounts with random names, assigns licenses, and creates a new Environment and CDS database for each. Once created, your attendees can simply log in with the new user credentials and work on their own labs or sample projects. I would love to find the original author to give them credit, since it’s a big time saver and also a great example of some of the ALM capabilities.

The script does some really cool stuff, but it came with a few issues, so I’ve updated it. When I first attempted to run the scripts, I was unable to authenticate to my new account. This was because of a few updates to the Power Platform and was relatively easy to fix.

Over the last few months, I’ve made some additional updates to address some issues and to make it easier to use. For example, you can pass the user name and password rather than being prompted each time. The script will now only create Trial environments by default, instead of the original Trial and Production Pairs. Recent changes cause failures when creating more than 10 Production environments, likely because of storage limitations being enforced.

I’ve also added exception handling. In the original script, errors were being swallowed inside of one of the loops. While testing, I thought the script was just running super slowly, but it was actually in a loop continuing to throw errors. I kicked off the script on one environment and stepped away from my machine, not realizing that I was basically hitting the server over and over again with exceptions. The cool thing? Microsoft watches for these errors!

The embarrassing bit? I know this because the engineering team saw the name of the account, realized that it was for AIAD, and reached out to the AIAD management team, who contacted me. The engineering team wanted to know what the heck I was doing! I felt a bit silly, but they were all very nice about it. I explained that I was running this script and I shared it with them, and they offered some advice on updates.

It turns out that I was seeing errors because, before running any automated PowerShell scripts against a new tenant, you must log in at least once to the https://make.powerapps.com site. The engineering team said that they plan to address this in the future, but for now, you must log in at least once before running the PowerShell script.

You can find the script, and a document with the steps outlined in the previous section, on GitHub at http://github.com/jamesnovak/setupaiad. I’ve included some notes on the ReadMe page outlining the command line parameters and their usage. Running the script should be fairly straightforward even if you are not a developer:

  • Download the entire repository, including child folder scripts
  • Open PowerShell and change directory to the download folder
  • Run the SetupAIAD.ps1 with the relevant parameters

For example, using the account described in the Trial instructions:

.\SetupAIAD.ps1 -TargetTenant 'aiadrestontest' -UserName 'admin' -Password 'password' -TenantRegion 'US' -NewUserPassword 'Password!' -UserCount 20 -MaxRetryCount 3 -SleepTime 5

In this example, the script would create 20 user accounts, user1 through user20. Each account will have an Office and Power Apps license assigned, and login credentials like user1@aiadrestontest.onmicrosoft.com with Password! as the password. Once the users log in, they will see an Environment corresponding to their username, such as User1-Dev, with a provisioned CDS database. The new user should have administrative access to the Environment. I’ve found this to be a nice time saver when setting up many accounts at once!

NOTE: If you re-run the script, it will DELETE the existing environments. I have not added logic to check for existing accounts and add only the missing ones. That might be a nice feature for a future version, but for now, my intention with this script is a clean setup. This method also restarts the 30 day Trial period on the Environments.

I have a few additional parameters to add, such as whether to create Production environments. You can make some simple updates to the script to enable this now, but you may hit limits at 10 user accounts.

I plan on keeping this script and attached document up to date, but if you have any other suggestions, feel free to add an Issue or drop me an email!

Summing up

We now have instructions on how to create new Office 365 and Power Apps Trials that provide full access to Power Platform components. We can also quickly create batches of 20 or so logins at a time on these new trials with a PowerShell script.

So if you have a group of 40 people attending an AIAD session or an internal training session, you can create two Trials and run the PowerShell script twice. Last week, I was providing some training for a group and created 80 accounts (4 trials, 4 runs of the script) in about 20-25 minutes. Much faster than manually setting all this up!

I hope this helps you kick off your own AIAD session or internal Power Platform brown bag sessions for your team! As always, any comments, questions, and suggestions are welcome.

How to resolve “error Executing the api /eventhubs” in #MicrosoftFlow?

While trying to connect Microsoft Flow to Azure Event Hub, you cannot retrieve the Event Hub name and instead you get an “Error Executing the api /eventhubs” error. The Event Hub connector in Flow allows you to connect to an event hub using connection strings and get notified as soon as a new event is available in the hub. However, there are certain things you will need to know.

Event Hub Namespace vs. Event Hub

An Event Hubs namespace provides a unique scoping container, referenced by its fully qualified domain name, in which you create one or more event hubs. So Event Hubs live inside an Event Hubs namespace. Both the Event Hubs namespace and each Event Hub have their own connection string, which can be used to access these resources. However, it is important to know that the Microsoft Flow connector for Event Hubs accepts the Event Hubs namespace’s connection string rather than the Event Hub resource’s connection string.

Error “Executing the api /eventhubs”

You will see the below error while trying to use the Event Hub resource’s connection string.

The solution is to use Event Hub Namespace’s connection string.

To confirm whether your connection string is associated with your Event Hubs namespace or with a specific event hub, make sure the connection string doesn’t have the EntityPath parameter. If you find this parameter, the connection string is for a specific Event Hub “entity” and is not the correct string to use with the connector. An example of each form is shown below.
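
For illustration, the two connection string shapes look roughly like this (the namespace, policy names, and keys are placeholders); the Flow connector needs the first form, without EntityPath:

Event Hubs namespace connection string (works with the connector):
Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>

Event Hub entity connection string (produces the error above):
Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=MyPolicy;SharedAccessKey=<key>;EntityPath=myeventhub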

Reference

https://docs.microsoft.com/en-us/azure/connectors/connectors-create-api-azure-event-hubs

Power Platform 24 Live!

We recently completed the first-ever 24-hour event exclusively focused on the Power Platform – Power Platform 24. The Dynamics and Power Platform community has fantastic events literally across the globe. As amazing as these in-person events are, not everyone can attend as either a speaker or attendee. We wanted to remove geography as an obstacle!

Moving to a virtual format allows the team to include both speakers and attendees who may otherwise miss the opportunity to share with and learn from the community. For organizers, the added benefit is lower overhead because you don’t need to secure a venue, coordinate speaker travel, provide prizes, or feed anyone. All said, lots of pros for this virtual event format.

Our first event went off without a hitch! So maybe we had a very minor glitch or two, but the sessions were fantastic and the event ran like a well-oiled machine, all because of amazing speakers and a group of dedicated organizers.

I personally learned much from the experience, so I wanted to share some thoughts for those that might want to organize a similar event. This post covers a bit about the approach to organizing the event and some tools used to run the show.

Organizing the Event

The event came together fairly quickly after the idea was thrown out in a group conversation: anyone interested in hosting a virtual Power Platform event? The response was of course, “Heck yeah!” The volunteers immediately began throwing around ideas. We had a lot to discuss but most of it boiled down to these main topics:

  • How do we choose speakers?
  • How do we host each presentation?
  • How do we register attendees?

Most of the organizers have some experience running in-person events, virtual events, or both, so we started with some best practices in mind. We also brought experience with a variety of tools from past events. This experience gave us a nice head start; we then simply needed to choose what works best for a virtual event spanning a full 24 hours!

Choosing speakers

The team chose sessionize.com as the platform for the call for speakers and for vetting the submissions. If you have not used the platform, definitely check it out. Sessionize offers excellent tools for both organizers and speakers to organize event submissions and manage sessions across multiple events. Another huge plus is that the service is free for free community events. Sessionize features alone could take up a full post!

Once we locked down our call for speakers, we used Sessionize to categorize, review, and rate submissions. The organizing team reviewed each of the more than 70 submissions, ranking them based on the information provided by the speaker. This was honestly one of the hardest parts of the process because we received so many excellent submissions.

We considered multiple tracks because of the number of submissions, meaning we could run two or three concurrent hour-long sessions. But since this was our first 24-hour event, we chose a single track of 24 one-hour sessions, starting at 8:30 AM EST and running through 8:30 AM the next day.

Hosting the event

This was the big decision: what platform do we use to host the event? We can all list a dozen virtual event platforms in just a few minutes, but that doesn’t actually make things easier. We ended up choosing Teams and a Teams Live Event. This makes sense as this is a Microsoft Power Platform event, but here is an excellent article that Purvin Patel shared which helped make our decision: Produce a live event using Teams. This article outlines how to set up a Teams Live Event and the details around Producer roles.

The Teams Live Event setup means assigning users to a producer role where they can monitor and control the live stream, handle the Q&A channel, manage the event notes, and start/stop the event. Another important feature is the ability to record each session. This may not be a requirement for other events, but we wanted to provide recordings for both attendees and speakers. This is an excellent feature, but the limitation of 4 hours per recording is something to keep in mind if you choose this platform. We needed to keep this limitation in mind when scheduling the sessions and producers.

Using a Teams Live event requires Office 365 and Teams licenses. Fortunately, the XrmVirtual crew is already delivering live events using Teams, so they offered to run the event for us. We now had a chosen platform, so we needed to decide how to run the sessions.

Delivering the Sessions

We broke the 24 hours into six blocks of 4 hours and took volunteers as producers for each segment, which worked perfectly with our 4-hour cap on recording. A producer was logged in with the speaker during the session to handle connection issues, answer or raise questions, and manage transitions between speakers.

This meant that we posted six 4-hour Teams Live Events that ran in sequence. Once these were established, individual invites were sent to each speaker with a link for the correct block of time. This was all handled by the XrmVirtual team, and I felt it worked out great as both a speaker and a producer. It was easy for me, but I know it took a lot of time to set up!

At the start of each session, a producer logged in to Teams with the correct account, shared any slides that were required at the time, kicked off the session, and began recording. The speaker could then just share their screen and deliver the session. Once the session was complete, the producer shut down the event to end the recording while the next producer was already up and running with the next speaker.

Registering event attendees

Registering event attendees seems pretty important, so why is it last in the list?

Well, our solution for registering users for the event was pretty simple: we didn’t register users. Fortunately, the Teams Live Event platform allows users to connect without prior registration and post questions anonymously. We had no need to track any user info, manage cancellations, etc. Attendees could jump on to catch a session and disconnect when done.

This could be an issue with different virtual delivery platforms, but it did not seem to be an issue for us. I believe we averaged about 100 attendees per session, which is a pretty nice number. I’ve had in-person sessions with only 5 people, so 100 is pretty nice! We had some excellent questions from attendees, which really added to the delivery. And of course, attendees who missed the live session can jump online and view the recorded sessions on demand!

Testing, 1…2…

One practice that made this event run so smoothly was… practice! The week prior to the event, the XrmVirtual team set up test sessions to ensure speakers could connect without issue. Each speaker jumped on to the Teams event, shared their screen, and tested their audio. It sounds simple, and it was, but it saved us from potential issues on the day of the event.

We also made sure that producers understood how the Teams setup operates. The XrmVirtual team provided a new account from their Office organization for each 4-hour block. Each account was granted producer rights on its respective Teams Live Event. Having enough accounts is another item to consider if you choose a Teams Live Event as a platform.

I was not the one that set up the Teams Live event for all of the sessions, but from an end user perspective, I found this event went smoothly and the Teams platform is fairly easy to use.

Thanks once again!

I will call out the organizing team here in case you want to reach out and say thanks! For me, I wanted to say thanks once again to the organizing team for gathering and vetting the speakers, setting up the infrastructure, communicating with speakers and attendees, taking time to act as producers (at really crazy hours!), processing all of the recorded videos, and advertising the event.

Thanks for simply giving up a chunk of your free time to make this event happen.

Julie Yack
David Yack
Joel Lindstrom
Aiden Kaskela
Beth Burrell
Michael Ochs
Sarah Jelinek

Everyone on the team pitched in, but I think a few special shout outs are in order – thanks to Julie for owning the meetings and technical bits with the producer setup and being online for 16 or so hours monitoring the event in real time. And thanks to David Yack for spending his weekend breaking down all of the videos and hosting them for our viewing pleasure.

And thanks to all of the speakers that gave up their time to plan and provide some excellent sessions for the community! Check out the full list of speakers at the Power Platform 24 site! You can check out the recorded events now that they have been posted here!

I am looking forward to another Power Platform 24 event… Keep an eye out for the next event announcement!

How to call #webapi from #PowerPlatform #Portals

In almost all of my portal projects I get a question from my clients: “How do we call an external API from #portals?” This question has been so common that I decided to write about my experience on this topic, which might be helpful for the community. This post will focus on two main areas:

  1. The available options to integrate #portals with external APIs
  2. A step by step guide to one of the least discussed options, using the OAuth Implicit Grant flow, and how I created a simple demo for one of my customers

Scenario

I would like to give some business context to this scenario. Any enterprise solution requires integration and interaction of multiple systems, and #portals could be one of them. Imagine a scenario where a customer is looking for a quote on a product in the company portal. In this case the #portal is required to bring quote details from a CPQ (Configure, Price, Quote) system to the portal. In another scenario, a #portal is required to integrate with a core banking system to get the customer’s latest balances. In these scenarios and similar ones, we will require the #portal to integrate with an external API to get information.

In order to enable such integrations, the #portal must be able to make calls in a secure way as most of the internal systems require authentication before anything can happen. So what are the options available?

Solutions

Since #powerplatform #portals are tightly integrated with the #powerplatform, in most cases the integration is done through the #powerplatform itself. However, the integration through the #powerplatform has three flavors.

  1. The first option is creating actions in the platform which communicate with the external API and manage the requests and responses, then calling those actions through a workflow, where the workflow is triggered using Entity Form or Entity List events.

Portal Integration with Web Api using Actions

  2. The second option is to use #MicrosoftFlow to encapsulate the Workflow and Action part in a Flow. The benefit of this solution is that you won’t need to write code (in most cases, but not guaranteed) to call the #webapi.

Portal Integration using Flow

  3. The above two options use the #PowerPlatform to facilitate the integration, and all calls are routed through the platform. However, going through the server is not always feasible. There are situations in which you would like to make client-side calls from JavaScript using Ajax from #portals to the external API. The main concern in these scenarios is authentication, and the solution provided by the platform is “Oauth Implicit Grant Flow”. If you would like to learn more about the “Oauth Implicit Grant Flow” beyond the #PowerPlatform, you can read more here.

There are concerns over the OAuth Implicit Grant flow, and the recommendation is to use the “OAuth code grant flow”. According to the OAuth working group, “It is generally not recommended to use the implicit flow (and some servers prohibit this flow entirely). In the time since the spec was originally written, the industry best practice has changed to recommend that public clients should use the authorization code flow with the PKCE extension instead.” Microsoft is aware of this restriction; however, it is believed the OAuth implicit grant flow is still OK to use.

I have proposed an idea to implement the Oauth code grant flow in this IDEA. Please vote for it.

Now, getting back to the topic: how to integrate.

Portal Integration with Oauth Implicit Grant Flow

In this scenario, no server-side calls are required. Complete documentation is available here. However, the documentation is not very helpful if you want to do things quickly, since there is a learning curve involved. The OAuth 2.0 implicit grant flow supports endpoints that a client can call to get an ID token. Two endpoints are used for this purpose: authorize and token. I will not go into the details of these calls and I assume you already know what they are.

So here is what you will have to do:

  1. Create your web API. You can download the sample API from this GitHub project. This website is no different than any MVC website, so you can create your own with Web APIs.
  2. Next, register your application in Azure Active Directory. This is a free service which you can use to provide authentication to your web API. Step by step details of the registration process are at this link. The REDIRECT URL must be the direct link to the page you created in step #2. You will need to note the following after this step:

    – Client ID
    – Redirect URL

  3. Let’s say you have a Quote page in your portal and you would like to place a button on the portal page to get quotations from your internal website. You will have to put custom HTML in a “Content Page” (not the main page) of the portal. This custom HTML will be used to add a QUOTE button to the portal and also retrieve the quotation using custom JavaScript code.
<h2>The QUOTE BUTTON</h2>

<button type="button" onclick="callAuthorizeEndpoint()">Give me a Quote!</button>

<script>
  //Remove this line to avoid State validation
  $.cookie("useStateValidation", 1);

  function callAuthorizeEndpoint(){
    //Used for State validation
    var useStateValidation = $.cookie("useStateValidation");
    var appStateKey = 'p07T@lst@T3';
    var sampleAppState = {id:500, name:"logic"};
    //Replace with Client Id Registered on CRM
    var clientId = "CLIENT ID OBTAINED FROM AZURE ACTIVE DIRECTORY";
    //Replace with Redirect URL registered on CRM
    var redirectUri = encodeURIComponent("https://MYPORTAL.powerappsportals.com/REDIRECT_PAGE/");
    //Authorize Endpoint
    var redirectLocation = `/_services/auth/authorize?client_id=${clientId}&redirect_uri=${redirectUri}`;
    //Save state in a cookie if State validation is enabled
    if(useStateValidation){
      $.cookie(appStateKey, JSON.stringify(sampleAppState));
      redirectLocation = redirectLocation + `&state=${appStateKey}`;
      console.log("Added State Parameter");
    }

    //Redirect
    window.location = redirectLocation;
  }
</script>


  4. Modify the source code in the web API website to use the Client ID and Redirect URL in its startup page.
public virtual Task ValidateIdentity(OAuthValidateIdentityContext context)
{
    try
    {
        if (!context.Request.Headers.ContainsKey("Authorization"))
        {
            return Task.FromResult<object>(null);
        }

        // Retrieve the JWT token in Authorization Header
        var jwt = context.Request.Headers["Authorization"].Replace("Bearer ", string.Empty);
        var handler = new JwtSecurityTokenHandler();
        var token = new JwtSecurityToken(jwt);
        var claimIdentity = new ClaimsIdentity(token.Claims, DefaultAuthenticationTypes.ExternalBearer);
        var param = new TokenValidationParameters
        {
            ValidateAudience = false, // Make this false if token was generated without clientId
            ValidAudience = "CLIENT ID", // Replace with Client Id Registered on CRM. Token should have been fetched with the same clientId.
            ValidateIssuer = true,
            IssuerSigningKey = _signingKey,
            IssuerValidator = (issuer, securityToken, parameters) =>
            {
                var allowed = GetAllowedPortal().Trim().ToLowerInvariant();

                if (issuer.ToLowerInvariant().Equals(allowed))
                {
                    return issuer;
                }
                throw new Exception("Token Issuer is not a known Portal");
            }
        };

        SecurityToken validatedToken = null;
        handler.ValidateToken(token.RawData, param, out validatedToken);
        var claimPrincipal = new ClaimsPrincipal(claimIdentity);
        context.Response.Context.Authentication.User = claimPrincipal;
        context.Validated(claimIdentity);
    }
    catch (Exception exception)
    {
        System.Diagnostics.Debug.WriteLine(exception);
        return null;
    }
    return Task.FromResult<object>(null);
}
  5. The next step is to use custom HTML on the Redirect PAGE so that you can make the call to the Web API using the token obtained in this step.
function getResultInUrlFragment(hash){
    if(hash){
        var result = {};
        hash.substring("1").split('&').forEach(function(keyValuePair){
            var arr = keyValuePair.split('=');
            // Add to result, only the keys with values
            arr[1] && (result[arr[0]] = arr[1]);
        });
        return result;
    }
    else{
        return null;
    }
}

//Validate State parameter
//Returns true for valid state and false otherwise
function validateState(stateInUrlFragment){
    if(!stateInUrlFragment){
        console.error("State Validation Failed. State parameter not found in URL fragment");
        return false;
    }

    // State parameter in URL Fragment doesn't have a corresponding cookie.
    if(!$.cookie(stateInUrlFragment)){
        console.error("State Validation Failed. Invalid state parameter");
        return false;
    }
    return true;
}

var useStateValidation = $.cookie("useStateValidation");
var appState = null;

//Fetch the parameters in Url fragment
var authorizeEndpointResult = getResultInUrlFragment(window.location.hash);

//Validate State
if(useStateValidation){
    if(!validateState(authorizeEndpointResult.state)){
        authorizeEndpointResult = null;
    }
    else{
        appState = $.cookie(authorizeEndpointResult.state);
        console.log("State: " + appState);
    }
}

//Display token
if(authorizeEndpointResult){
    var data = authorizeEndpointResult.token;
    console.log("Token:" + data);
    $.ajax({
        type: "GET",
        url: "https://URL_TO_THE_WEB_API.azurewebsites.net/api/external/ping",
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        headers: {
            Accept: "text/plain; charset=utf-8",
            "Authorization": "Bearer " + data
        },
        success: function (data) {
            alert(JSON.stringify(data));
            console.log(data);
        }, //End of AJAX Success function
        failure: function (data) {
            alert(data.responseText);
        }, //End of AJAX failure function
        error: function (data) {
            alert(data.responseText);
        } //End of AJAX error function
    });
}

I hope this post helps you a bit to make your portals connect to the outside world!

Improve efficiency of Call centers using Dynamics 365 and Azure cognitive services

Photo by Hrayr Movsisyan on Unsplash

I am fascinated by the sophistication of Azure services and how they help us improve our solutions and extend the way we can solve customer problems. Recently I had a requirement to implement a Dynamics 365 solution to enable a call center to capture cases while their operators are offline.

One solution was to provide a self-service portal where customers can log cases when call center operators are offline. But in this case the customer was looking for something very quick to implement, with the ability to link incoming cases to their call center channel and derive some reporting based on it.

Approach

I started looking at Azure services to see how I could use Azure Cognitive Services and speech recognition to help me solve this requirement, and as always, Azure did not disappoint. In this post I would like to share my experience with you and take you through the steps needed to create such a solution. Of course the possibilities are endless; however, this post will give you a starting point to begin your journey.

I have seen solutions where telephony systems send voice recordings of callers as an email attachment to a queue in CRM. The CRM then converts that queue item to a case and attaches the voice recording as a note to the case. The challenge with this solution is that the call center operators have to open attachments manually and write the description of the case after listening to the audio file. This means their time is spent on inefficient activities, whereas it could be utilized in better ways.

Another problem with this approach is the size of attachments. As time goes by, audio attachments will increase the database size, impacting the maintenance of the solution.

Scenario

Our scenario is based on the fact that call center agents are not working 24 hours a day.

While agents are offline, customers should still be able to contact the call center and record voice messages that are used to create cases.

We will use the following components:

  1. Azure Blob storage to receive recorded audio files from the telephony system.
  2. Azure Cognitive Services to listen to the recorded audio files and translate the content to a text message. The audio file will be saved in Azure Blob storage (which is cheaper than CRM database storage).
  3. An Azure Function (with an Azure Blob binding) to recognize the text from the audio file and extract the case description.
  4. The Dynamics 365 Web API to create a case in CRM using the description extracted by Azure Cognitive Services. We can also add blob metadata, like the filename, to case properties. (A rough sketch of this Web API call follows the architecture diagram below.)
Solution Architecture
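
As a point of reference for step 4 above, here is a rough PowerShell sketch of the Web API call that creates the case. This is not the code from the GitHub sample; it simply assumes you already have an OAuth access token for the Dynamics instance and the GUID of the customer account (all values below are placeholders):

$orgUrl = "https://yourorg.crm.dynamics.com"
$token  = "<access token obtained from Azure AD>"

# Case fields: title and description would come from the text extracted by Cognitive Services
$case = @{
    title       = "Voicemail case"
    description = "Text extracted from the recorded audio"
    "customerid_account@odata.bind" = "/accounts(00000000-0000-0000-0000-000000000000)"
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri "$orgUrl/api/data/v9.1/incidents" `
    -Headers @{ Authorization = "Bearer $token"; "OData-MaxVersion" = "4.0"; "OData-Version" = "4.0" } `
    -ContentType "application/json" -Body $case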

The full source code is available at GitHub

However, the main code snippet to perform conversion is below:

public static async Task<string> RecognitionWithPullAudioStreamAsync(string key, string region, Stream myBlob, ILogger log)
{
    // Creates an instance of a speech config with specified subscription key and service region.
    // Replace with your own subscription key and service region (e.g., "westus").
    var config = SpeechConfig.FromSubscription(key, region);

    string finalText = string.Empty;

    var stopRecognition = new TaskCompletionSource<int>();

    // Create an audio stream from a wav file.
    // Replace with your own audio file name.
    using (var audioInput = Helper.OpenWavFile(myBlob))
    {
        // Creates a speech recognizer using audio stream input.
        using (var recognizer = new SpeechRecognizer(config, audioInput))
        {
            // Subscribes to events.
            recognizer.Recognizing += (s, e) =>
            {
            };

            recognizer.Recognized += (s, e) =>
            {
                if (e.Result.Reason == ResultReason.RecognizedSpeech)
                {
                    finalText += e.Result.Text + " ";
                }
                else if (e.Result.Reason == ResultReason.NoMatch)
                {
                    log.LogInformation($"NOMATCH: Speech could not be recognized.");
                }
            };

            recognizer.Canceled += (s, e) =>
            {
                log.LogInformation($"CANCELED: Reason={e.Reason}");

                if (e.Reason == CancellationReason.Error)
                {
                    log.LogInformation($"CANCELED: ErrorCode={e.ErrorCode}");
                    log.LogInformation($"CANCELED: ErrorDetails={e.ErrorDetails}");
                    log.LogInformation($"CANCELED: Did you update the subscription info?");
                }

                stopRecognition.TrySetResult(0);
            };

            recognizer.SessionStarted += (s, e) =>
            {
                log.LogInformation("\nSession started event.");
            };

            recognizer.SessionStopped += (s, e) =>
            {
                log.LogInformation("\nSession stopped event.");
                log.LogInformation("\nStop recognition.");
                stopRecognition.TrySetResult(0);
            };

            // Starts continuous recognition. Uses StopContinuousRecognitionAsync() to stop recognition.
            await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);

            // Waits for completion.
            // Use Task.WaitAny to keep the task rooted.
            Task.WaitAny(new[] { stopRecognition.Task });

            // Stops recognition.
            await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);

            return finalText;
        }
    }
}

Important considerations:

  1. [This point is optional, if you use the Web API to create cases in CRM] You will need to use a multi-tenant configuration if your Azure Function tenant and the tenant in which your CRM API is registered are different. If your Azure Function tenant and the tenant in which your CRM API is registered are the same, you can use a single-tenant configuration.
  2. The input file from the telephony system to the Azure blob must be in a specific format. The required format specification is:
  • File Format: RIFF (WAV)
  • Sampling Rate: 8000 Hz or 16000 Hz
  • Channels: 1 (mono)
  • Sample Format: PCM, 16-bit integers
  • File Duration: 0.1 seconds < duration < 60 seconds
  • Silence Collar: > 0.1 seconds

  3. You can use the ffmpeg tool to convert your recording to this specific format. For your testing, you can download and use the tool as below:
     Download ffmpeg from this link.
     Use the command: ffmpeg -i "<source>.mp3" -acodec pcm_s16le -ac 1 -ar 16000 "<output>.wav"
  4. My sample in GitHub covers input in one single chunk of audio. However, if you wish to have continuous streaming, you will need to implement the StartContinuousRecognitionAsync method.
  5. The Azure Function should be configured with a Blob trigger.

Remove Flow Ribbon Button

Hey D365’ers! Welcome to {{ quirk.works }}, a blog series where we try to solve a Dynamics 365 or Power Platform problem with an unconventional solution. For the first part of the series, let’s try to remove the Flow ribbon button.

So you want to remove the Flow ribbon button?

Yes sir, we don’t want our users to see it as per our requirements document.

Tried removing it with Ribbon Workbench?

Ribbon buttons are easy to hide/show using the Ribbon Workbench. Just select the ribbon button, right click, and click Hide.

Also hide the rest of the ribbon button’s sub-menu.

And there you go. What? It’s not hidden, is it?

Here’s a {{ quirk.works }}

There is a quirk with how the Flow ribbon button can be removed.

Click the cog icon on the upper right side of the screen and click Advanced Settings.

Go to Settings > Administration

Go to System Settings.

Open the Customization tab. There you have it, select No from the option Show Microsoft Flow on forms and in the site map. Click the OK button and we’re all set.

Conclusion

Not every ribbon button can be hidden using the Ribbon Workbench; sometimes you have to look outside of the XrmToolBox. Just playing with some puns, no offense meant for these great tools, as I admire the awesome work from Scott and Tanguy. Until our next {{ quirk.works }}, stay tuned D365’ers. I’ll keep you posted.

How to upload PowerApps audio to SharePoint?

Problem: Recently I had a requirement to upload recorded audio from PowerApps to SharePoint.

The requirement seems straightforward if you have basic knowledge of PowerApps and Microsoft Flow, doesn’t it? But there is a catch (we’ll get to that shortly :-))

Just for the completeness of this blog, I will reiterate certain Hows.

How to record and listen to the audio in PowerApps?

In PowerApps we have a media control named Microphone to record the audio, and another media control named Audio to listen to the recorded audio.

Here is how to do that:

In the Microphone control’s (I named it MyMic) OnStop property, collect the recorded audio as follows:

ClearCollect(collInspectionAudio, MyMic.Audio);

In the Audio control’s Media property, place the collection you used to collect the recorded audio earlier, as follows:

First(collInspectionAudio).Url

We now know how to record and listen to the audio within PowerApps. Now let’s discuss the actual challenge: how to upload it to SharePoint.

Analysis

We use Microsoft Flow to upload to SharePoint, and in my flow it looks straightforward to pass the audio as First(collInspectionAudio).Url to my SharePoint file content, as shown below.

When I execute the PowerApp, and hence my flow, it creates an audio file in my SharePoint; however, it does not play. When I looked at the outcome of the flow, I found it strange because the file content looks as below:

When I drilled further into the problem by checking the data types in Flow studio, I realized that the Create file action of the SharePoint connector expects Binary format, whereas PowerApps sends its audio content in Byte format.

Solution:

So the only way around this is to accept a Byte parameter from PowerApps and use it to upload the file to SharePoint.

For that, I used the Outlook connector to send an email. When I ask for attachment parameters in PowerApps, it can send them in Byte format.

Note: If you have a requirement to send an email of the audio file along with uploading it to SharePoint, then that’s amazing. If you don’t have such an emailing requirement, then another slight hack is required.

  1. Create a condition which never becomes true (in my case, 100 equals 200, which never becomes true ;-)).
  2. In the true section, call the Outlook connector and ask for parameters for Attachment Name and Attachment Content.
  3. In the false section, call the SharePoint connector and create a file using the same parameters from Step 2.

Final working Microsoft Flow looks like this:

Warm Regards,

Pavan Kumar Garlapati

 

 

How to Enable the PowerApps AI Builder

Hey, Will here – just a quick post. I’ve had quite a few people come up to me asking how to enable the new “AI Builder” for Power Apps on their environment, which strikes me as slightly strange as it comes enabled by default. However, here is how you do it.

 

  1. Go to: https://admin.powerplatform.microsoft.com/ 
  2. Then using the navigation pane select “Environments” then select the environment you wish to turn the ‘AI Builder’ on for:
  3. Then select “Settings”:
  4. Then select “Features”:
  5. Then enable the PowerApps AI Builder:

Now go forth and build something AWESOME!

Converting Dynamics’ Geolocation To SQL Geolocation Using Microsoft Flow And Azure Function

Background

One of the awesome features of the Azure Search service is the ability to search information based on location. Azure Search processes, filters, and displays geographic locations. It enables users to explore data based on the proximity of a search result to a physical location. This feature is powered by the SQL Server Geolocation data type. Since SQL Server 2008, developers have been able to store geospatial data in SQL Server using Geolocation fields. Geolocation fields allow querying data with location-based queries. To enable the Azure Search service to search within CRM accounts and contacts, I had to push my account and contact searchable information to a SQL Server hosted in Azure. To copy information from Dynamics to Azure SQL Server, I used Microsoft Flow. Everything worked well except copying the CRM Longitude and Latitude to SQL Server.

The problem

The problem with copying longitude and latitude to a SQL Server Geolocation field is compatibility. When you try to insert longitude and latitude values into a Geolocation field, you encounter a casting error.

The solution

The solution I used to tackle this problem is to make use of an Azure Function that converts the Longitude and Latitude to the Geolocation type and returns the response before the Insert action in the flow. See the steps below:

  1. Step 1 is self-explanatory.
  2. The step “CC Contact” extracts the Contact name (or any lookup name property) from a lookup.
  3. The “Http” step calls the Azure Function to convert the CRM Longitude and Latitude to a SQL Geolocation value.
  4. The “Insert Row” step inserts our data into the SQL Server row.
Microsoft Flow

The Azure Function

The Azure Function is a very simple function. You will need to import the Microsoft.SqlServer.Types NuGet package and use the below code:
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
Coordinates data = JsonConvert.DeserializeObject<Coordinates>(requestBody);
SqlGeography point = data.GetGeography();
return (ActionResult)new OkObjectResult($"{point}");

public class Coordinates
{
    public double Longitude { get; set; }
    public double Latitude { get; set; }

    public SqlGeography GetGeography()
    {
        try
        {
            return SqlGeography.Point(Latitude, Longitude, 4326);
        }
        catch (Exception ex)
        {
            // Log ex and handle exception
            throw ex;
        }
    }
}
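
For reference, the “Http” step in the flow posts a body that this function deserializes into the Coordinates class; a minimal example body (the values here are just placeholders) looks like:

{
  "Latitude": 47.6062,
  "Longitude": -122.3321
}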

 

 

Implementing Enterprise Search In Power Platform

Photo by Anthony Martino on Unsplash
Providing good search capabilities is a key feature in modern business applications to support usability and end user satisfaction. We have seen how the search capabilities of the Dynamics platform have evolved from providing “Quick Search” and “Advanced Find” to “Relevance Search”. The goal of the platform search features has been to support users in finding the relevant information they need in the quickest and easiest form. These search features are out-of-the-box and easy to enable/configure/use. As the platform progresses to offer richer features to users and enable them to search better, the demand for richer and better search techniques grows, and we see instances where the platform cannot meet user demands with its out-of-the-box capabilities. Before going further into advanced search scenarios, you can read about the platform’s out-of-the-box search capabilities in this official documentation. In this article I share why we may decide to implement a search solution for our Dynamics solution using the Azure Search service.
In enterprise implementations, business applications are not the only systems used in the organization. We often see call center agents and sales representatives needing to obtain their required information from various systems to service customers. Searching every system is a cumbersome job for users, which may cause setbacks in end-user adoption. Integrating Dynamics with Azure Search offers consolidation of search operations into one specialized search service, with the ability to connect to various data sources and apply modern search techniques to find the most relevant data. A practical example of this scenario can be seen in one of my recent experiences, where the organization’s users had to search for user information in CRM, SharePoint, Sybase, and a pool of CSV files.

Customized Search experience

To facilitate more user adoption, using customized search techniques is highly favorable. In all modern search engines, we see the use of “Auto complete”, “Suggestions” and “Highlighting” features, which can be added to the Dynamics solution’s search experience. Displaying search results with support for “Document Preview”, “Document Opening in a customized container”, “Facets”, “Filter” and “Sorting” are examples that enhance your Dynamics solution’s capabilities.

Customized Search Behavior

The true power of search is demonstrated when different pieces of information are linked together to make sense of a bigger picture. Extracting words and sentences from documents, including images and PDF files, and extracting key phrases, people names, location names, languages and other custom entities with the help of AI is another unique feature that you can add to your Dynamics solution’s search capabilities. Another amazing search capability you can have in your Dynamics implementation is the ability to search based on geolocation information, i.e. you can search across your partner network from CRM or get the location of your field service force. The beauty of implementing your own enterprise search lies in the fact that you can search information in your data stores and link it using AI to generate knowledge and better insight into your data.

Customized Search Result

Another need for customized search in your Dynamics solution is the ability to refine your search result profile. When you use AI in your search, the system gives you the power to see how relevant search results are to your search keywords. By knowing this, you can refine your search profiles to generate a different result for the same keywords. This way you train the AI engine to work better for you and enable users to get more accurate search results.
Architecture

Dynamics can be integrated with the Azure Search service in the following patterns:

 

  1. Integration through web resources: These web resources will host a web application acting as a client to the search service. The web resource can be an HTML file or an iFrame hosted on forms. The important point in this approach is to ensure cross-origin settings in the client application and to write your HTML in a secure way, according to best practices.
  2. Integration through custom Power Platform controls. You may build your own custom control which sends REST requests to Azure Search and displays the results by consuming the REST responses. The custom control can call the Azure Search service using Actions or direct REST calls.
  3. Azure Search works based on indexes, and your first step is to push your CRM searchable data to Azure Search indexes. This can be done using Microsoft Flow, Azure Logic Apps, custom solutions or Azure Data Factory. I have used all of these tools in my implementations, and you can opt for any of them based on your requirements.
  4. Once the data is in your data store, you can create your indexes in Azure Search. You can go for separate indexes for each data source or combine multiple data sources into one index. Each approach has its own requirements, which will need to be met either in your client web application or in a separate Azure compute resource. Once indexing is done, you can use the Azure Search REST API directly, or use Azure API Management to expose your search service to your Dynamics solution (a minimal query sketch follows this list).
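
As a rough sketch of what a direct REST query looks like from PowerShell (the service name, index name, api-key, and field names below are all placeholders for your own Azure Search resources):

# Query an existing Azure Search index for the term "contoso"
$service = "my-search-service"
$index   = "crm-accounts"
$apiKey  = "<query api-key from the Azure portal>"

$uri = "https://$service.search.windows.net/indexes/$index/docs?api-version=2020-06-30&search=contoso"
$result = Invoke-RestMethod -Uri $uri -Headers @{ "api-key" = $apiKey }

# Each hit is returned in the value array along with its relevance score
$result.value | Select-Object name, city
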
Summing this all up, you can see that as business application products get more sophisticated and organizations move from data to big data, engineers must look for innovative approaches to implementing Dynamics solutions. Microsoft Azure, along with the Dynamics platform, offers the necessary tools for solution architects to design such solutions.

Use FetchXml with Flow CDS list records steps

Yesterday I noticed that there is a new CDS connector in my environment. It’s called the CDS Current Environment connector. With the current environment connector, you don’t select your environment; it just uses the environment in which the flow resides. And my favorite thing about the new connector: it lets you use FetchXml queries for List records steps! An example query is shown below.
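
For example, a simple FetchXml query like the one below (the entity and attribute names are just an illustration) can be pasted into the Fetch Xml Query parameter of the List records step:

<fetch top="10">
  <entity name="account">
    <attribute name="name" />
    <attribute name="telephone1" />
    <filter>
      <condition attribute="statecode" operator="eq" value="0" />
    </filter>
    <order attribute="name" />
  </entity>
</fetch>
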

Farewell ODATA!