Develop 1 Limited Blog | Microsoft Dynamics 365 Solutions

Microsoft Dynamics 365 Solutions
  1. Have you noticed recently that when you run npm install on your PCF projects, you get a high severity vulnerabilities error (or maybe you were spammed by the GitHub 🤖 dependabot like I was)?
    Luckily, it's not necessarily a reason to panic! 😅

    As of the time of writing (14th April 2023), there is a vulnerability in the xml2js package, which pcf-scripts depends on, so if you run npm audit, you will see something like:

    # npm audit report
    
    xml2js  <0.5.0
    Severity: high
    xml2js is vulnerable to prototype pollution  - https://github.com/advisories/GHSA-776f-qx25-q3cc
    No fix available
    node_modules/xml2js
      pcf-scripts  *
      Depends on vulnerable versions of xml2js
      node_modules/pcf-scripts
      pcf-start  *
      Depends on vulnerable versions of xml2js
      node_modules/pcf-start
    
    3 high severity vulnerabilities

    This error is not as scary as it sounds. The good news is that the pcf-scripts package is only used at build time and not at run time. The xml2js package doesn't affect the functionality or security of your PCF control at all (unless you are using it in your own code, of course!) since it is not included in your final PCF bundle.js when used by the pcf-scripts package. 🙌

    So how do you fix this? 🤔

    Well until the owner of the xml2js package releases a new version or the pcf-scripts package is updated not to require it, there isn't anything you can do!

    Since pcf-scripts is included in the devDependencies section of the package.json and is only used for development purposes, the way to determine if you have any issues that will impact your PCF bundle.js is to run the command:

    npm audit --omit=dev

    This will check only the packages that are in the dependencies section, and you should get the message:

    found 0 vulnerabilities

    Congratulations! 🥳

  2. One of the new features now supported in PCF (Power Apps Component Framework) code components is 'object outputs'. A PCF component has a manifest file that defines the input/output properties that it accepts, and each of those properties has a data type. Until now, only data types supported by Dataverse (e.g. Decimals, Lookups, Choices, etc.) could be used. Object-typed property support was introduced in version 1.19.4 of pcf-scripts (see the release notes). They have not yet made it into the control manifest schema documentation, but I expect that to follow soon once the component samples have been updated.

    With object-typed output properties, we can now specify a schema for the output record which will then be picked up in Power Fx. Amongst the scenarios that this unlocks are:

    • Performing calculations or calling APIs, and then returning the results as an object output with a static schema. The output property can then have arrays and nested objects that will be visible in Canvas Apps at design time.
    • Raising the OnChange event and providing a Record as an output, similar to the built-in Selected property, using a dynamic schema definition.

    For the second scenario, imagine you have a grid, and when the user selects a command in a row, you want to output both the event type and the source row, without needing to provide a key that must then be used to look up the record. For this scenario, we will take the input schema of the dataset being passed to the grid (e.g. Accounts or Contacts) and map it to an object output schema for a property named EventRow. When the schema of the input dataset changes, the schema of the output property also changes to match.

    Define the output property in the ControlManifest.Input.xml

    For each object output property, there must be a dependent Schema property that will be used by Canvas Apps to display the auto-complete on the object. We add two properties, the output and the schema:

    <property name="EventRow" display-name-key="EventRow" of-type="Object" usage="output"/>
    <property name="EventRowSchema" display-name-key="EventRowSchema" of-type="SingleLine.Text" usage="bound" hidden="true"/>

    Now we must also indicate that the Schema property is used as the schema for the EventRow property by adding the following below inside the control element:

    <property-dependencies>
        <property-dependency input="EventRowSchema" output="EventRow" required-for="schema" />
    </property-dependencies>

    Notice that the property-dependency element joins the EventRowSchema and EventRow properties together to be used to determine the schema as indicated by required-for="schema".
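
    Once you rebuild, the generated ManifestTypes.d.ts should include the two new properties in IOutputs. As a rough sketch (the exact generated output may differ):

    export interface IOutputs {
        EventRow?: object;
        EventRowSchema?: string;
    }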

    Define the JSON Schema

    In our example, whenever the input dataset changes, we must update the output schema to reflect the same schema so that we can see the same properties. The output schema is defined using the json-schema format.

    To use the JSON schema types, we can add the DefinitelyTyped node module using:

    npm install --save @types/json-schema

    Once this has been installed, you can import the JSONSchema4 type (import { JSONSchema4 } from 'json-schema';) and use it to describe the output schema by adding the following to your index.ts:

    private getInputSchema(context: ComponentFramework.Context<IInputs>) {
        const dataset = context.parameters.records;
        const columnProperties: Record<string, any> = {};
        dataset.columns
            .filter((c) => !c.isHidden && (c.displayName || c.name))
            .forEach((c) => {
                const properties = this.getColumnSchema(c);
                columnProperties[c.displayName || c.name] = properties;
            });
        this.columnProperties = columnProperties;
        return columnProperties;
    }
    private getColumnSchema(column: ComponentFramework.PropertyHelper.DataSetApi.Column): JSONSchema4 {
        switch (column.dataType) {
            // Number Types
            case 'TwoOptions':
                return { type: 'boolean' };
            case 'Whole.None':
                return { type: 'integer' };
            case 'Currency':
            case 'Decimal':
            case 'FP':
            case 'Whole.Duration':
                return { type: 'number' };
            // String Types
            case 'SingleLine.Text':
            case 'SingleLine.Email':
            case 'SingleLine.Phone':
            case 'SingleLine.Ticker':
            case 'SingleLine.URL':
            case 'SingleLine.TextArea':
            case 'Multiple':
                return { type: 'string' };
            // Other Types
            case 'DateAndTime.DateOnly':
            case 'DateAndTime.DateAndTime':
                return {
                    type: 'string',
                    format: 'date-time',
                };
            // Choice Types
            case 'OptionSet':
                // TODO: Can we return an enum type dynamically?
                return { type: 'string' };
            case 'MultiSelectPicklist':
                return {
                    type: 'array',
                    items: {
                        type: 'number',
                    },
                };
            // Lookup Types
            case 'Lookup.Simple':
            case 'Lookup.Customer':
            case 'Lookup.Owner':
                // TODO: What is the schema for lookups?
                return { type: 'string' };
            // Other Types
            case 'Whole.TimeZone':
            case 'Whole.Language':
                return { type: 'string' };
        }
        return { type: 'string' };
    }

    As you can see, each dataverse data type is mapped across to a JSON schema equivalent. I am still trying to establish the correct schema for complex objects such as Choices and Lookups, so I'll update this post when I find out more, but I expect that some of them such as Choice columns may not be possible.

    Output the schema

    Since the input schema can change at any time, we add the following to detect if it has changed, and then call notifyOutputChanged if it has:

    private updateInputSchemaIfChanged() {
        const newSchema = JSON.stringify(this.getInputSchema(this.context));
        if (newSchema !== this.inputSchema) {
            this.inputSchema = newSchema;
            this.eventRow = undefined;
            this.notifyOutputChanged();
        }
    }

    Inside updateView, we then simply make a call to this to detect the change. I've not worked out a way of detecting the change other than comparing the old and new schema. It would be good if there were a flag in the context.updatedProperties array but there does not seem to be one as far as I can find.
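
    As a minimal sketch (assuming a standard, non-virtual control that keeps a reference to the context), updateView might look something like this, with the rendering omitted:

    public updateView(context: ComponentFramework.Context<IInputs>): void {
        // Keep a reference for getInputSchema/getOutputObjectRecord
        this.context = context;
        // Compare the current dataset schema with the last one we output,
        // and call notifyOutputChanged if it has changed
        this.updateInputSchemaIfChanged();
        // ...render the grid as normal
    }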

    Generate the output record object to match the schema

    In our example, each time the selection changes we raise the OnChange event and output the row that was selected (similar to the Selected property that raises the OnSelect event). In order to do this, we have to map the selected record onto an object that has the properties that the schema defines:

    private getOutputObjectRecord(row: ComponentFramework.PropertyHelper.DataSetApi.EntityRecord) {
        const outputObject: Record<string, string | number | boolean | number[] | undefined> = {};
        this.context.parameters.records.columns.forEach((c) => {
            const value = this.getRowValue(row, c);
            outputObject[c.displayName || c.name] = value;
        });
        return outputObject;
    }
    private getRowValue(
        row: ComponentFramework.PropertyHelper.DataSetApi.EntityRecord,
        column: ComponentFramework.PropertyHelper.DataSetApi.Column,
    ) {
        switch (column.dataType) {
            // Number Types
            case 'TwoOptions':
                return row.getValue(column.name) as boolean;
            case 'Whole.None':
            case 'Currency':
            case 'Decimal':
            case 'FP':
            case 'Whole.Duration':
                return row.getValue(column.name) as number;
            // String Types
            case 'SingleLine.Text':
            case 'SingleLine.Email':
            case 'SingleLine.Phone':
            case 'SingleLine.Ticker':
            case 'SingleLine.URL':
            case 'SingleLine.TextArea':
            case 'Multiple':
                return row.getFormattedValue(column.name);
            // Date Types
            case 'DateAndTime.DateOnly':
            case 'DateAndTime.DateAndTime':
                return (row.getValue(column.name) as Date)?.toISOString();
            // Choice Types
            case 'OptionSet':
                // TODO: Can we return an enum?
                return row.getFormattedValue(column.name) as string;
            case 'MultiSelectPicklist':
                return row.getValue(column.name) as number[];
            // Lookup Types
            case 'Lookup.Simple':
            case 'Lookup.Customer':
            case 'Lookup.Owner':
                // TODO: How do we return Lookups?
                return (row.getValue(column.name) as ComponentFramework.EntityReference)?.id.guid;
            // Other
            case 'Whole.TimeZone':
            case 'Whole.Language':
                return row.getFormattedValue(column.name);
        }
    }

    Again, I am unsure of the shape that is needed to support lookups and choice columns, so I am simply mapping them to numbers and strings at this time. 

    We can now use this to output the record when the selection changes:

    this.eventRow = this.getOutputObjectRecord(dataset.records[ids[0]]);
    this.notifyOutputChanged();
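
    For context, here is a rough sketch of how this might sit inside a selection-changed callback raised by the grid component (the onSelect name and the ids parameter are illustrative):

    onSelect = (ids: string[]): void => {
        const dataset = this.context.parameters.records;
        if (ids.length > 0) {
            // Map the first selected row onto the schema-shaped output object
            this.eventRow = this.getOutputObjectRecord(dataset.records[ids[0]]);
            this.notifyOutputChanged();
        }
    };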

    In the getOutputs, we then simply add:

    public getOutputs(): IOutputs {
        return {
            EventRowSchema: this.inputSchema,
            EventRow: this.eventRow,
        } as IOutputs;
    }

     

    Implement getOutputSchema

    Notice that above we output both the selected record and its schema. If the schema has changed, Power Apps then calls the getOutputSchema method. This is where the actual JSON schema is returned and used by Power Apps:

    public async getOutputSchema(context: ComponentFramework.Context<IInputs>): Promise<Record<string, unknown>> {
        const eventRowSchema: JSONSchema4 = {
            $schema: 'http://json-schema.org/draft-04/schema#',
            title: 'EventRow',
            type: 'object',
            properties: this.getInputSchema(context),
        };
        return Promise.resolve({
            EventRow: eventRowSchema,
        });
    }

    The result

    Once this is done and published, your component will now have a new EventRow property, inheriting the same schema as the input record - with the caveat that Choices and Lookups will be strings, rather than complex types.

    If you had bound the grid to Accounts, the EventRow property might look similar to:

    You can grab the code for this example from GitHub: https://github.com/scottdurow/PCFDynamicSchemaOutputExample 

    This functionality takes us one step closer to parity with the first-party controls in canvas apps - I'm just now waiting for custom events to be supported next!

     

  3. Delegation of queries in Canvas Apps/Custom Pages has long been a troublesome topic and we are always looking out for the triangle of doom, or the double blue underline of eternal stench (well, it is Halloween soon!)

    I try to keep a close eye on the connector delegation support table in the official documentation for any changes and additions. Part of what I love about the Power Platform is that new features are constantly being released, often without fanfare! 

    Here is the current delegation support at the time of writing (for posterity from the docs):

    Item                            | Number [1] | Text [2] | Choice | DateTime [3] | Guid
    Filter                          | Yes        | Yes      | Yes    | Yes          | Yes
    Sort                            | Yes        | Yes      | Yes    | Yes          | -
    SortByColumns                   | Yes        | Yes      | Yes    | Yes          | -
    Lookup                          | Yes        | Yes      | Yes    | Yes          | Yes
    =, <>                           | Yes        | Yes      | Yes    | Yes          | Yes
    <, <=, >, >=                    | Yes        | Yes      | No     | Yes          | -
    In (substring)                  | -          | Yes      | -      | -            | -
    In (membership) (preview)       | Yes        | Yes      | Yes    | Yes          | Yes
    And/Or/Not                      | Yes        | Yes      | Yes    | Yes          | Yes
    StartsWith                      | -          | Yes      | -      | -            | -
    IsBlank                         | Yes [4]    | Yes [4]  | No [4] | Yes [4]      | Yes
    Sum, Min, Max, Avg [5]          | Yes        | -        | -      | No           | -
    CountRows [6] [7], CountIf [5]  | Yes        | Yes      | Yes    | Yes          | Yes

     

    The Caveats are important - especially around aggregation limits:

    1. Numeric with arithmetic expressions (for example, Filter(table, field + 10 > 100) ) aren't delegable. Language and TimeZone aren't delegable.

    2. Doesn't support Trim[Ends] or Len. Does support other functions such as Left, Mid, Right, Upper, Lower, Replace, Substitute, etc.

    3. DateTime is delegable except for DateTime functions Now() and Today().

    4. Supports comparisons. For example, Filter(TableName, MyCol = Blank()).

    5. The aggregate functions are limited to a collection of 50,000 rows. If needed, use the Filter function to select 50,000 rows or fewer.

    6. CountRows on Dataverse uses a cached value. For non-cached values where the record count is expected to be under 50,000 records, use CountIf(table, True).

    7. For CountRows, ensure that users have appropriate permissions to get totals for the table.

    Old 'in' delegation limit

    The really exciting addition to this table is the mention of 'In (membership)'. It is currently marked as preview but can be used in the latest version of canvas studio.

    Previously, if you had written a formula to get all the accounts that had a primary contact of A or B it might look like:

    Set(varInFilter, 
        [
            First(Contacts).Contact, 
            Last(Contacts).Contact
        ]);
    
    ClearCollect(colAccounts,
        Filter(Accounts, 'Primary Contact'.Contact in varInFilter)
    );
    

    In this situation, previously you would have been presented with the delegation warnings:

    When you execute the query you would have seen the warning:

    The reason is that the query executed against Dataverse would be:

    /api/data/v9.0/accounts?
    $select=accountid,dev1_AccountStatus,primarycontactid,_dev1_accountstatus_value,_primarycontactid_value
    

    Here there are no filters that are sent to the server to filter by the primary contact, so the delegation limit will be hit.

    The new 'In' server-side delegation!

    With the new behaviour, if you are using version 3.22102.32 or later (See all versions), the 'in' operator is now delegable to Dataverse. This means you will see no warning:

    And inside the monitor, you see a clean delegated query!

    This is because the filtering is now performed on the server using the OData query:

    /api/data/v9.0/accounts?
    $filter=(primarycontactid/contactid eq... or primarycontactid/contactid eq...)&$select=accountid,primarycontactid,_primarycontactid_value
    

    The key part here is that the primarycontactid is filtered using the OR query. This is great news because we will no longer hit that delegation limit.

    Those troublesome polymorphic relationships

    One of the constant challenges in Power Fx is the support for polymorphic relationships in Dataverse when performing delegated queries. This new support is no exception, unfortunately. If you were to write the following formula you would still hit the delegation limit:

    ClearCollect(colcontacts,
        Filter(Contacts, AsType('Company Name',[@Accounts]).Account in varInFilter)
    )

    I'm going to be keeping an eye out for this to be supported in the future and I'll let you know! 

    Check out my video showing this new 'in' delegation when used with the Creator Kit! 

    @ScottDurow

  4. My free tutorial course on writing Dataverse web resources has been available for over a year now, and it has had over 1000 people enrol! The course uses version 1 of dataverse-ify, and of course over that time I've been working on version 2 which is currently available in beta.

    What is Dataverse-ify?

    Dataverse-ify aims to simplify calling the Dataverse WebAPI from TypeScript inside model-driven apps, Single Page Applications (SPAs) and integration tests running inside VSCode/Node. It uses a small amount of metadata generated using dataverse-gen, together with a set of early-bound types, to make it easy to interact with Dataverse tables and columns using an API similar to the IOrganizationService you may be used to in C#.

    You can use the new version by adding @2 on the end of the node modules:

    For example:

    npx dataverse-auth@2
    npx dataverse-gen@2
    npm install dataverse-ify@2
    

    Soon, I will be removing the beta tag and publishing it so that it will install by default. There are a few breaking changes detailed in the Upgrading readme, but I will be publishing more samples including a Single Page Application that uses dataverse-ify even where the Xrm.WebApi is not available.

    I wanted to give you a peek at one of the features that I am really excited about in version 2 - support for ExecuteMultiple with batch and change set support. A batch allows you to send multiple requests in a single request, and change sets allow you to send multiple requests that will be executed as a transaction. This can give your client-side code a performance boost and makes it easy to execute a single change set where, if one request fails, they all fail. Custom API requests can even be wrapped up in executeMultiple!

    Imagine that you have a Command Bar button that calls a JavaScript function from a grid that needs to make an update to a column on all of the selected records, and then wait for a triggered flow to run, as indicated by the updated column being reset. The updates can be wrapped up in an ExecuteMultiple batch rather than being made by lots of Update requests.

    Create the top-level function

    When a command bar calls a JavaScript function it can return a Promise if there is asynchronous work being performed. In our case, we don't want the model-driven app to wait until our flows are run, so we can use Promise.resolve on an internal function to 'fire and forget' a long-running task:

    static async CreateProjectReportTrigger(entityIds: string[]): Promise<void> {
      // Fire and forget the internal command so it does not cause a ribbon action timeout
      Promise.resolve(ProjectRibbon.CreateProjectReportTriggerInternal(entityIds));
    }

    Create the Internal function and initialize the metadata cache

    Inside the internal function, we first need to set the metadata that was created using dataverse-gen - this provides dataverse-ify with some of the information it needs to work out the data types of columns that are not present in the WebApi responses. We also create a random value to update the column that will trigger the flow:

    setMetadataCache(metadataCache);
    const requestCount = entityIds.length;
    const trigger = "trigger" + Math.random().toString();

    Make the update using executeMultiple (this is not C# remember, it's TypeScript!)

    This is where the magic happens - we can create an array of UpdateRequest objects using the entityIds provided to the function from the Command Bar:

    // Trigger the flow for each selected project (using a batch)
    const updates = entityIds.map((id) => {
      return {
        logicalName: "Update",
        target: {
          logicalName: dev1_projectMetadata.logicalName,
          dev1_projectid: id,
          dev1_reportgenerationstatus: trigger,
        } as dev1_Project,
      } as UpdateRequest;
    });
    const serviceClient = new XrmContextDataverseClient(Xrm.WebApi);
    await serviceClient.executeMultiple(updates);

    You can see that the updates array is simply passed into executeMultiple which then will bundle them up inside a $batch request. If you wanted to, you can run the updates inside a transaction by simply wrapping the batch inside an array:

    await serviceClient.executeMultiple([updates]);

    This array could actually contain multiple change sets which each would run independently inside a transaction.
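
    For example, a sketch that splits the updates above into two hypothetical change sets (the split point is arbitrary), each executed in its own transaction within the same batch:

    // Each inner array is a change set that runs in its own transaction
    const changeSet1 = updates.slice(0, 10);
    const changeSet2 = updates.slice(10);
    await serviceClient.executeMultiple([changeSet1, changeSet2]);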

    So the resulting function would be:

    static async CreateProjectReportTriggerInternal(entityIds: string[]): Promise<void> {
      // Update a column on the selected records, to trigger a flow
      try {
        setMetadataCache(metadataCache);
        const requestCount = entityIds.length;
        const trigger = "trigger" + Math.random().toString();
        // Trigger the flow for each selected project (using a batch)
        const updates = entityIds.map((id) => {
          return {
            logicalName: "Update",
            target: {
              logicalName: dev1_projectMetadata.logicalName,
              dev1_projectid: id,
              dev1_reportgenerationstatus: trigger,
            } as dev1_Project,
          } as UpdateRequest;
        });
        const serviceClient = new XrmContextDataverseClient(Xrm.WebApi);
        await serviceClient.executeMultiple(updates);
        // Monitor the result
        const query = `<fetch aggregate="true">
          <entity name="dev1_project">
          <attribute name="dev1_projectid" alias="count_items" aggregate="countcolumn" />
          <filter>
              <condition attribute="dev1_reportgenerationstatus" operator="eq" value="${trigger}" />
          </filter>
          </entity>
      </fetch>`;
        let complete = false;
        do {
          const inProgressQuery = await serviceClient.retrieveMultiple(query, { returnRawEntities: true });
          complete = inProgressQuery.entities.length === 0;
          if (!complete) {
            const inProgressCount = inProgressQuery.entities[0]["count_items"] as number;
            complete = inProgressCount === 0;
            // Report status
            Xrm.Utility.showProgressIndicator(`Generating Reports ${requestCount - inProgressCount}/${requestCount}`);
            await ProjectRibbon.sleepTimeout(2000);
          }
        } while (!complete);
        Xrm.Utility.closeProgressIndicator();
      } catch (e) {
        Xrm.Utility.closeProgressIndicator();
        Xrm.Navigation.openErrorDialog({ message: "Could not generate reports", details: JSON.stringify(e) });
      }
    }
    
    static sleepTimeout(ms: number): Promise<void> {
      return new Promise((resolve) => setTimeout(resolve, ms));
    }

    This code adds polling for the number of records where the flow has not yet run and reset the dev1_reportgenerationstatus attribute, reporting progress until everything is complete, or showing an error dialog if something goes wrong.

    The batch request would look similar to:

    --batch_1665710705198
    Content-Type: application/http
    Content-Transfer-Encoding: binary
    
    PATCH /api/data/v9.0/dev1_projects(2361e495-1419-ed11-b83e-000d3a2ae2ee) HTTP/1.1
    Content-Type: application/json
    
    {"dev1_reportgenerationstatus":"trigger0.39324146578062336","@odata.type":"Microsoft.Dynamics.CRM.dev1_project"}
    --batch_1665710705198
    Content-Type: application/http
    
    PATCH /api/data/v9.0/dev1_projects(e8184b63-1823-ed11-b83d-000d3a39d9b6) HTTP/1.1
    Content-Type: application/json
    
    {"dev1_reportgenerationstatus":"trigger0.39324146578062336","@odata.type":"Microsoft.Dynamics.CRM.dev1_project"}
    
    --batch_1665710705198--

    The code can obviously be improved by adding a timeout and better error reporting - but this shows the general idea of using executeMultiple with dataverse-ify version 2.
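
    For example, a minimal sketch of a timeout guard around the polling loop (the 2-minute limit is arbitrary):

    const timeoutMs = 2 * 60 * 1000;
    const started = Date.now();
    do {
        // ...existing polling logic from above...
        if (!complete && Date.now() - started > timeoutMs) {
            throw new Error("Timed out waiting for the report generation to complete");
        }
    } while (!complete);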

    There are lots of other improvements in version 2 - so if you've used version 1 please do give version 2 a go whilst it's in beta and report any issues inside GitHub.

    In my next post on version 2, I'll show you how to call a Custom API using a batch and changeset. If you want to peek before then - take a look at the tests for version 2 - they give lots of examples of its use.

    @ScottDurow

     

  5. One of the constant challenges we face when writing canvas apps and custom pages using Power Fx is ensuring that the queries we use are always delegable. Not all operators are delegable to the server when using Filter or Sort, which can sometimes create a performance challenge. Furthermore, some types of queries, such as group-by and complex link-entity queries, are not possible. Wouldn't it be great to be able to run a FetchXML query and then use the results in your app? In this post, I'll show you a pattern that allows you to do just that using the new ParseJSON Power Fx function.

    Creating the query mechanism

    We will use a Cloud Flow to perform the FetchXml query, which will be called from Power Fx. This side-steps any delegation issues since we know that the query is always run on the server.

    1. Create a new instant cloud flow (I will call it 'PowerApp: FetchXml Query'). Select Power Apps as the trigger.

    2. Add a Dataverse List Rows action, and then in the Table name and Fetch Xml Query parameters, add Ask in Power Apps under Dynamic Content.

    3. Add a Respond to Power Apps action, add a string output, and enter the expression:

    outputs('List_rows')?['body']['value']

    Your flow should look similar to the following:

    Perform query using Power Fx

    Imagine that you wanted to find all the distinct accounts that had at least one contact with an activity that is due in the current month. This would be difficult using standard Power Fx since you would need to perform multiple queries, possibly causing performance issues. 

    1. First we create the query using the Accounts view in a model-driven app. The query looks like the following:

    2. Now we can use Download FetchXML to get the query. You can also use the awesome FetchXML builder from Jonas.

    3. Inside your canvas app, enable the ParseJSON feature inside Settings:

    NOTE: You will need to save and reload your app for this setting to take effect.

    4. Under the Power Automate tab, add the FetchXml Query Cloud Flow so that it is available to our app.

    5. Inside your app, make a call to the following Power Fx. This could be inside a Screen OnVisible, or a button:

    // Get all accounts with at least one contact that has an activity due this month
    UpdateContext({ctxAccountsThisMonth:
        ForAll(
            Table(
                ParseJSON('PowerApp:FetchXmlQuery'.Run("
                 <fetch version='1.0' output-format='xml-platform' mapping='logical' no-lock='true' distinct='true'>
                    <entity name='account'>
                        <attribute name='name' />
                        <attribute name='accountid' />
                        <filter type='and'>
                        <condition attribute='statecode' operator='eq' value='0' />
                        <condition attribute='industrycode' operator='not-null' />
                        </filter>
                        <link-entity alias='accountprimarycontactidcontactcontactid' name='contact' from='contactid' to='primarycontactid' link-type='outer' visible='false'>
                        </link-entity>
                        <link-entity name='contact' alias='aa' link-type='inner' from='parentcustomerid' to='accountid'>
                        <link-entity name='activitypointer' alias='ae' link-type='inner' from='regardingobjectid' to='contactid'>
                            <filter type='and'>
                            <condition attribute='scheduledend' operator='this-month' />
                            </filter>
                        </link-entity>
                        </link-entity>
                    </entity>
                    </fetch>
                 ",
                    "accounts").results
                    )
                ),
                {
                    accountid:GUID(Value.accountid),
                    name:Text(Value.name)
                }
            )
        })
    
    

    This code simply calls the Fetch XML Query and then maps the results into a collection. It picks out each attribute from the results and converts it to the correct data type (e.g. Text or number).

    You can now use this data in your app! Each time you call this flow, the query will be re-evaluated without any caching, so be careful how many times you call it in order to minimise API entitlement consumption. If you show the results in a gallery and want to join them back to the standard Power Fx Accounts data source, you could use:

    LookUp(Accounts, Account = Gallery1.Selected.accountid).'Account Name'

    Performing aggregate group by queries from your canvas app

    Imagine that you wanted to show the total number of accounts per industry code inside your app. You could easily use a Power BI query for this - however, there are times when you need the data to be native to the app.

    Using the awesome FetchXML builder, you might create the Fetch XML to look similar to:

    <fetch aggregate='true'>
      <entity name='account'>
        <attribute name='industrycode' alias='group' groupby='true' />
        <attribute name='industrycode' alias='industrycode_count' aggregate='count' />
      </entity>
    </fetch>

    If you use this in your Power Fx in a similar way to the code above, you will find that the Cloud Flow gives an error similar to:

    An ODataPrimitiveValue was instantiated with a value of type 'Microsoft.Xrm.Sdk.OptionSetValue'. ODataPrimitiveValue can only wrap values which can be represented as primitive EDM types.

    The reason behind this is that the Dataverse connector for Flow does not know how to interpret the metadata being returned from the aggregate query since each row is not an account, but an aggregate row. You will get a similar error if you try and group by a Lookup:

    An ODataPrimitiveValue was instantiated with a value of type 'Microsoft.Xrm.Sdk.EntityReference'. ODataPrimitiveValue can only wrap values which can be represented as primitive EDM types.

    To work around this we must create a 'primitive' data type column to group by. So for an OptionSet/Choice, we would create a numeric column that contains the Choice value, and for a Lookup, we would create a String column that contains the Lookup GUID. We can then group by these columns. I call these 'materialized columns'. To do this, you could create either a Plugin or a Cloud Flow. The Cloud Flow should be triggered when a record is created or updated, and then update the two materialized 'primitive' columns.

    1. First create the Cloud flow to be triggered when an account is created or updated:

    2. Add a second step, Dataverse Update a row, and set the following:

    Row Id: triggerOutputs()?['body/accountid']

    Account Status ID (the custom string column): triggerOutputs()?['body/_dev1_accountstatus_value']

    Industry Code Value (the custom number column): triggerOutputs()?['body/industrycode']

    3. After you save your flow, when you update your accounts you should find that the two custom materialized columns will contain the primitive version of the Choice and Lookup columns. This now allows us to perform grouping/aggregate queries inside a flow.

    4. Inside your app, add the following Power Fx to a screen OnVisible:

    Concurrent(
    // Get accounts by industry code
    UpdateContext({ctxCountByIndustry:
        ForAll(
            Table(
                ParseJSON('PowerApp:FetchXmlQuery'.Run("
                    <fetch aggregate='true'>
                        <entity name='account'>
                            <attribute name='dev1_industrycodevalue' alias='group' groupby='true' />
                            <attribute name='dev1_industrycodevalue' alias='industrycode_count' aggregate='count' />
                        </entity>
                    </fetch>",
                    "accounts").results
                    )
                ),
                {
                    group:Value(Value.group),
                    industrycode_count:Value(Value.industrycode_count)
                }
            )
        })
    ,
    // Get the industry code name/value pairs for the current language
    UpdateContext({ctxIndustryCodes:
        ForAll(
            Table(
                ParseJSON('PowerApp:FetchXmlQuery'.Run("
                    <fetch>
                    <entity name='stringmap'>
                            <attribute name='stringmapid' />
                            <attribute name='attributevalue' />
                            <attribute name='displayorder' />
                            <attribute name='value' />
                            <filter>
                            <condition attribute='attributename' operator='eq' value='industrycode' />
                            <condition attribute='objecttypecode' operator='eq' value='1' />
                            <condition attribute='langid' operator='eq' value='1033' />
                            </filter>
                        </entity>
                    </fetch>",
                    "stringmaps").results
                    )
                ),
                {
                    attributevalue:Value(Value.attributevalue),
                    value:Text(Value.value),
                    displayorder:Value(Value.displayorder)
                }
            )
        })
    );
    

    There are two queries here: the first is the aggregate query returning the accounts grouped by industry code (the custom materialized column); the second returns the name/value pairs for the industry code choice column in the current language, since we can no longer use the standard enum (Industry (Accounts)) that Power Fx provides us. The Power Fx enums are text-based only and you cannot get access to the Choice numeric value.

    Notice that the two queries are performed inside a Concurrent function so that they run in parallel and complete as quickly as possible.

    5. We can now show the results inside a Chart by binding the Chart.Items to :

    AddColumns(Filter(ctxCountByIndustry,!IsBlank(group)),"IndustryName",LookUp(ctxIndustryCodes,attributevalue = group).value)

    Note: The IndustryName column is added so that the chart can show the Choice Text value instead of the numeric value that we used to group by. The result might look something like this:

    So that's it. I hope that eventually these kinds of queries will be possible natively using Power Fx without the need to call a Cloud Flow - maybe even allowing Dataverse to be queried using the SQL endpoint.

    Hope this helps!

    @ScottDurow

     

  6. If you want to add a button to a command bar to perform an update on multiple records in a grid, it is easy to create a formula that results in slow performance caused by multiple grid refreshes. This post outlines the most performant way of applying updates to multiple records from a Power Fx command button.

    Step 1 - Add your button

    Inside the modern command bar editor, add a button to the Main Grid or Sub Grid command bars of the Account entity.

    Step 2 - Set the Visibility rule

    Any buttons on a grid that apply to selected records will only become visible if you provide a visibility rule.

    Select the Visibility property, and select Show on condition from formula.

    In the formula bar at the top, enter the following:

    !IsEmpty(Self.Selected.AllItems)

    Step 3 - Add the command formula

    Select the Action of the button, and select Run formula.

    In the formula bar at the top, enter the following:

    // Parallel Updates
    Patch(
        Accounts, 
        ForAll(Self.Selected.AllItems, 
            {
                Account:ThisRecord.Account,
                'Credit Hold':'Credit Hold (Accounts)'.Yes 
            }
        )
    )

    Note: If you are adding a button for a different entity, you will need to change the table name (Accounts) and primary key column name (Account).

    Step 4 - Save and Publish, then Play your app!

    The changes will take a short while to appear in your app. You will see a message similar to the following when the changes are ready:

    Parallel vs Sequential Patches

    If you used the following formula, it would result in multiple grid refreshes since the Patches will be done in sequence.

    // Sequential Updates
    ForAll(Self.Selected.AllItems, 
        Patch(
            Accounts, 
            ThisRecord, 
            { 'Credit Hold':'Credit Hold (Accounts)'.No }
        )
    );

    The parallel version above performs as many simultaneous updates as the browser will allow, since browsers limit the number of HTTP requests that can be in flight at once.
    You can see in the timeline below that only 6 parallel requests are in progress at any one time.

    Despite this, the technique is considerably more efficient than performing the updates sequentially, and the grid will only be refreshed once, instead of once for each record updated.

     

  7. The Power Apps team have had a long-standing quality tool called the Solution Checker. This was built into the Dataverse solution management interface and the ALM Power Platform Build Tools, and there is even a PowerShell version. The challenge with the Solution Checker has always been that by the time you find out about the issues, it's often too late: the solution has been committed to source control and is already in the CI/CD pipeline. Furthermore, the results were often false positives due to the way that TypeScript 'transpiles' into JavaScript.

    Now the Power Apps team have released an ESLint plugin that gives you warnings and errors as you write your code (if you have the eslint extension installed), or when you run ESLint at the command line.

    I've previously written on using ESLint in your development environment - and this is a really welcome addition to the rule set. 

    At this time, the rule set is most applicable to Web Resource development, with the majority of rules describing deprecations; however, it is certainly worth adding them to PCF projects as well (alongside the other recommended React rules).

    To install, use:

    npm install @microsoft/eslint-plugin-power-apps --save-dev

    To add to your existing .eslintrc.json file, add @microsoft/power-apps to the plugins section:

    "plugins": [
       ...
        "@microsoft/power-apps"
    ],

    Then add each of the rules to the rules section with the level you want to enforce: 

    "rules": {
      "@microsoft/power-apps/avoid-2011-api": "error",
      "@microsoft/power-apps/avoid-browser-specific-api": "error",
      "@microsoft/power-apps/avoid-crm2011-service-odata": "warn",
      "@microsoft/power-apps/avoid-crm2011-service-soap": "warn",
      "@microsoft/power-apps/avoid-dom-form-event": "warn",
      "@microsoft/power-apps/avoid-dom-form": "warn",
      "@microsoft/power-apps/avoid-isactivitytype": "warn",
      "@microsoft/power-apps/avoid-modals": "warn",
      "@microsoft/power-apps/avoid-unpub-api": "warn",
      "@microsoft/power-apps/avoid-window-top": "warn",
      "@microsoft/power-apps/do-not-make-parent-assumption": "warn",
      "@microsoft/power-apps/use-async": "error",
      "@microsoft/power-apps/use-cached-webresource": "warn",
      "@microsoft/power-apps/use-client-context": "warn",
      "@microsoft/power-apps/use-global-context": "error",
      "@microsoft/power-apps/use-grid-api": "warn",
      "@microsoft/power-apps/use-navigation-api": "warn",
      "@microsoft/power-apps/use-offline": "warn",
      "@microsoft/power-apps/use-org-setting": "error",
      "@microsoft/power-apps/use-relative-uri": "warn",
      "@microsoft/power-apps/use-utility-dialogs": "warn"
    }

    I would really like to see a recommended set of rules that can be used like other ESLint rule sets, so when more rules are added we don't need to explicitly add them to the config. If you agree - please consider upvoting my suggestions!

    My .eslintrc looks something like the following for a Web Resource project:

    {
      "parser": "@typescript-eslint/parser",
      "env": {
        "browser": true,
        "commonjs": true,
        "es6": true,
        "jest": true,
        "jasmine": true
      },
      "extends": [
        "plugin:@typescript-eslint/recommended",
        "plugin:prettier/recommended",
        "prettier",
        "plugin:sonarjs/recommended"
      ],
      "parserOptions": {
        "project": "./tsconfig.json"
      },
      "plugins": [
        "@typescript-eslint",
        "prettier",
        "@microsoft/power-apps",
        "sonarjs"
      ],
      "ignorePatterns": [
        "**/generated/*.ts"
      ],
      "rules": {
        "prettier/prettier": "error",
        "eqeqeq": [
          2,
          "smart"
        ],
        "prettier/prettier": "error",
        "arrow-body-style": "off",
        "prefer-arrow-callback": "off",
        "linebreak-style": [
          "error",
          "windows"
        ],
        "quotes": [
          "error",
          "double"
        ],
        "semi": [
          "error",
          "always"
        ],
        "@microsoft/power-apps/avoid-2011-api": "error",
        "@microsoft/power-apps/avoid-browser-specific-api": "error",
        "@microsoft/power-apps/avoid-crm2011-service-odata": "warn",
        "@microsoft/power-apps/avoid-crm2011-service-soap": "warn",
        "@microsoft/power-apps/avoid-dom-form-event": "warn",
        "@microsoft/power-apps/avoid-dom-form": "warn",
        "@microsoft/power-apps/avoid-isactivitytype": "warn",
        "@microsoft/power-apps/avoid-modals": "warn",
        "@microsoft/power-apps/avoid-unpub-api": "warn",
        "@microsoft/power-apps/avoid-window-top": "warn",
        "@microsoft/power-apps/do-not-make-parent-assumption": "warn",
        "@microsoft/power-apps/use-async": "error",
        "@microsoft/power-apps/use-cached-webresource": "warn",
        "@microsoft/power-apps/use-client-context": "warn",
        "@microsoft/power-apps/use-global-context": "error",
        "@microsoft/power-apps/use-grid-api": "warn",
        "@microsoft/power-apps/use-navigation-api": "warn",
        "@microsoft/power-apps/use-offline": "warn",
        "@microsoft/power-apps/use-org-setting": "error",
        "@microsoft/power-apps/use-relative-uri": "warn",
        "@microsoft/power-apps/use-utility-dialogs": "warn"
      }
    }

     

    Note: This uses the prettier and sonarjs plugins that you will also need to install.

    I also add the following scripts to my package.json:

    "lint": "eslint **/*.ts",
    "lint:fix": "npm run lint -- --fix",

    These then allow you to run the rules across the entire project and automatically fix them where possible using npm run lint:fix.

    I really love how the solution package has evolved into this set of pro-coder-friendly ESLint rules. I can't wait to see how it develops with more rules over time.

    Read more on the official blog post from Microsoft.

    Take a look at the npm package information for eslint-plugin-power-apps.

  8. React version 18 has recently been pushed to npm, which is great if all of your components support it; however, if you are working with Fluent UI then you may stumble across the following error:

    npm ERR! code ERESOLVE
    npm ERR! ERESOLVE unable to resolve dependency tree
    npm ERR!
    npm ERR! While resolving: my-app20@0.1.0
    npm ERR! Found: @types/react@18.0.8
    npm ERR! node_modules/@types/react
    npm ERR!   @types/react@"^18.0.8" from the root project
    npm ERR!
    npm ERR! Could not resolve dependency:
    npm ERR! peer @types/react@">=16.8.0 <18.0.0" from @fluentui/react@8.67.2
    npm ERR! node_modules/@fluentui/react
    npm ERR!   @fluentui/react@"*" from the root project
    npm ERR!
    npm ERR! Fix the upstream dependency conflict, or retry
    npm ERR! this command with --force, or --legacy-peer-deps
    npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
    npm ERR!
    npm ERR! See C:\Users\...\AppData\Local\npm-cache\eresolve-report.txt for a full report.
    
    npm ERR! A complete log of this run can be found in:
    npm ERR!     C:\Users\...\AppData\Local\npm-cache\_logs\....-debug-0.log
    
    

    This might happen if you are doing either of the following:

    1. When creating a standard PCF project using pac pcf init and then using npm install react followed by npm install @fluentui/react
    2. Using create-react-app with the standard typescript template, followed by npm install @fluentui/react

    In both cases, the error occurs because once React 18 is installed, Fluent UI will not install, since it requires a React version less than 18. The Fluent UI team are working on React 18 compatibility, but I do not know how long it will be until Fluent UI supports React 18.

    These kinds of issues often crop up when node module dependencies are set to automatically take the newest major version of packages.

    How to fix the issue?

    Fundamentally the fix is to downgrade the version of React and the related libraries before installing Fluent UI:

    pac pcf init

    If you are using standard controls - you might consider moving to virtual controls.
    Doing this actually requires a specific version of React and Fluent UI to be installed and so there is no issue.
    Check out my blog post on how to convert to a virtual control and install the specific versions required.

    Alternatively, if you are installing react after using pac pcf init with a standard control you can install version 17 specifically using:

    npm install react@17 react-dom@17 @types/react@17 @types/react-dom@17

    After you've done that, you can install fluent as usual using:

    npm install @fluentui/react@latest

    create-react-app

    Create-react-app is a command-line utility that is commonly used to quickly create a React app - and is often used for testing purposes when building PCF components. Now that React 18 has been released, using create-react-app will also install React 18. The scripts and templates have all been updated accordingly.

    Unfortunately, you can't use an older version of create-react-app that used the older version of react (e.g. npx create-react-app@5.0.0) because you will receive the error:

    You are running `create-react-app` 5.0.0, which is behind the latest release (5.0.1).
    We no longer support global installation of Create React App.

    The Fluent UI team are actually working on a create-react-app template for Fluent that specifically installs React 17 - but until then you will need to follow these steps:

    1. Use create-react-app as usual:
      npx create-react-app my-app --template typescript
    2. After your app has been created use:
      cd my-app
      npm uninstall react react-dom @testing-library/react @types/react @types/react-dom
      npm install react@17 react-dom@17 @testing-library/react@12 @types/react@17 @types/react-dom@17
    3. Since the latest template is designed for React 18, you will need to make some minor modifications to index.tsx:
      Replace import ReactDOM from 'react-dom/client'; with import ReactDOM from 'react-dom';
      Replace the following code:
      const root = ReactDOM.createRoot(
        document.getElementById('root') as HTMLElement
      );
      root.render(
        <React.StrictMode>
          <App />
        </React.StrictMode>
      );
      With the code:
      ReactDOM.render(
        <React.StrictMode>
          <App />
        </React.StrictMode>,
        document.getElementById('root')
      );
      This is required because the createRoot API (imported from react-dom/client) only exists in React 18; React 17 still uses ReactDOM.render.

    Once Fluent UI has been updated to support React 18, these steps will not be required - however, if you are using Virtual Controls, then until the platform is updated, your controls will continue to need to use React 16.8.6.

    Hope this helps!

    @ScottDurow

  9. The long-awaited 'virtual control' feature is finally in preview which means you can start to try converting your controls to be virtual - but what does this actually mean?

    What are virtual code component PCF controls?

    Virtual controls are probably better named React code components since this is their defining feature. Using them has the following benefits:

    1. Uses the host virtual DOM - The code component is added natively to the hosting app's 'virtual DOM' instead of creating its own. This has performance benefits when you have apps that contain many code components. See more about the React virtual DOM: Virtual DOM and Internals – React (reactjs.org)
    2. Shared libraries - When using React and Fluent UI (the best practice for creating code components), the libraries are bundled into the code component's bundle.js using webpack. If you have many different types of code components on your page, each with its own bundled version of these libraries, it can lead to a heavy footprint, even when using path-based imports. With shared libraries, you can re-use the existing React and Fluent UI libraries that are already made available by the platform and reduce the memory footprint.

    You can create a new virtual control to see this in action using the Power Platform CLI with:

    pac pcf init -ns SampleNamespace -n VirtualControl -t field -npm -fw react
    

    The key parameter is -fw react which indicates to use the new virtual control template:

    But how do you convert your existing code-components to virtual controls?

    If you have a code component that uses React and Fluent UI today, then you can follow the steps below to convert them and benefit from the points above. If you would prefer a video of how to do this you can check out my youtube tutorial on react virtual controls.

    1. Set control-type to virtual

    Inside the ControlManifest.Input.xml, update the control-type attribute on the control element from standard to virtual.

    For example, from:

    <control namespace="SampleNamespace" constructor="CanvasGrid" version="1.0.0" display-name-key="CanvasGrid" description-key="CanvasGrid description" control-type="standard" >

    to:

    <control namespace="SampleNamespace" constructor="CanvasGrid" version="1.0.0" display-name-key="CanvasGrid" description-key="CanvasGrid description" control-type="virtual" >

    2. Add platform-library references

    Again, inside the ControlManifest.Input.xml, locate the resources element and add the platform libraries for React and Fluent. This tells the platform that the component needs these libraries at runtime.

    <resources>
          <code path="index.ts" order="1"/>
          <platform-library name="React" version="16.8.6" />
          <platform-library name="Fluent" version="8.29.0" />
    </resources>
    

    Note: It is important to ensure that the version of React and Fluent that you are using is supported by the platform.

    3. Ensure you are using the same version of Fluent and React as the platform

    To ensure you are using the correct versions of React and Fluent you can uninstall your previous ones and then add the specific version referenced above:

    npm uninstall react react-dom @fluentui/react
    npm install react@16.8.6 react-dom@16.8.6 @fluentui/react@8.29.0

    Note: If you are using deep path-based imports of Fluent, check that you are using the root library exports as I describe in a previous post - this is to ensure the exports will be picked up correctly.
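
    For example, use the root lib export rather than a deep component path:

    // Root library export - resilient to changes in the package's internal folder structure
    import { CommandBar } from '@fluentui/react/lib/CommandBar';
    // rather than a deep path such as '@fluentui/react/lib/components/CommandBar'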

    4. Implement ComponentFramework.ReactControl

    The key part of the change to index.ts is that we must now implement the new interface ComponentFramework.ReactControl<IInputs, IOutputs> instead of ComponentFramework.StandardControl<IInputs, IOutputs>.

    Locate the class implementation in index.ts and update the implements interface to be:

    export class YourControlName implements ComponentFramework.ReactControl<IInputs, IOutputs>

    5. Update the signature of updateView

    The old method signature of updateView returned void, but now you must return a ReactElement so that it can be added to the virtual DOM of the parent app. Update the signature to be:

    updateView(context: ComponentFramework.Context<IInputs>): React.ReactElement

    6. Remove ReactDOM.render

    Since we are using the virtual DOM of the parent app, we no longer need to use ReactDOM. You will normally have code similar to:

    ReactDOM.render(React.createElement(MyComponent), this.container);

    Replace this now with simply:

    return React.createElement(MyComponent);
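
    Putting steps 5 and 6 together, a minimal sketch of the converted updateView might look like this (MyComponent and its props are placeholders for your own component):

    public updateView(context: ComponentFramework.Context<IInputs>): React.ReactElement {
        // Return the element - the hosting app adds it to its own virtual DOM
        return React.createElement(MyComponent, {
            // ...props mapped from context.parameters as needed
        });
    }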

    7. Remove calls to unmountComponentAtNode

    Previously you would have had to unmount the React virtual DOM elements in the code component's destroy method. Locate the destroy method and remove the line:

    ReactDOM.unmountComponentAtNode(this.container);

    8. Make sure you are using the latest version of the Power Apps CLI

    To ensure that your Power Apps CLI supports virtual controls, make sure it is updated to the latest version. I recommend doing this using the VSCode extension if you are not already using it, and removing the old MSI-installed version. You will also need to run npm update pcf-scripts pcf-start to grab the latest npm modules that support React virtual controls!

    That's it!

    It really is that simple. If you now use npm start watch you'll see your component rendered, but the bundle.js size will be smaller and when you deploy, it'll be faster in apps that contain many components.

    Check out the official blog post about this feature for more info.

    Hope this helps!

    @ScottDurow

     

  10. If you are using Fluent UI in your code components (PCF) you probably are also using path-based imports to reduce your bundle size. This technique ensures that when you build your code component, the bundle doesn't include the entire Fluent UI library, but instead just the components that you need. With the recent update to Fluent UI, you might receive an error similar to the following:

    ERROR in ./somefile.tsx 
    Module not found: Error: Package path ./lib/components/CommandBar is not exported from package C:\src\CommandBar\node_modules\@fluentui\react (see exports field in C:\demo4\CommandBar2\node_modules\@fluentui\react\package.json)

    This is probably caused by your paths pointing to a folder that is not included in the new explicit export paths that have been added to the Fluent UI react package.

    To ensure that you maintain compatibility with each update to the Fluent UI library, instead of using:

    import { CommandBar } from '@fluentui/react/lib/components/CommandBar';

    You should instead use:

    import { CommandBar } from '@fluentui/react/lib/CommandBar';

    See more information in the docs: Best practices for code components - Power Apps | Microsoft Docs

    That's all for now!

    @ScottDurow

  11. One of the biggest causes of unexpected bugs in canvas apps is the delegation of queries. For instance, if you want to sort by the owner of an account, you can use the Power Fx query:

    Sort(Accounts,'Created By'.'Full Name', Ascending)

    You will get a delegation warning on this since the sorting will only happen in memory and not on the server. This means if you have the delegation limit set to 500, only the first 500 records will be sorted instead of sorting the entire dataset on the server side. This issue may not show up whilst you are developing the app against a small dataset, but once deployed to production with more records, the app may not behave as expected.

    Perhaps more concerning, if you are using data-shaping to add a column (AddColumns) and then sorting on that column, the delegation warning will not even show up.

    When using PCF (code-components) inside canvas apps, a nice feature is that we have much more control over the paging/filtering/sorting/linking of Dataverse queries. This is part of the declarative vs imperative story (but that’s for another time).


    When binding a dataset to a Dataverse connector, we can use OData-style operators to manipulate the query, rather than the Power Fx Sort and Filter functions with their associated delegation challenges.
    E.g.

        onSort = (name: string, desc: boolean): void => {
            const sorting = this.context.parameters.records.sorting;
            while (sorting.length > 0) {
                sorting.pop();
            }
            this.context.parameters.records.sorting.push({
                name: name,
                sortDirection: desc ? 1 : 0,
            });
            this.context.parameters.records.refresh();
        };
    
        onFilter = (name: string, filter: boolean): void => {
            const filtering = this.context.parameters.records.filtering;
            if (filter) {
                filtering.setFilter({
                    conditions: [
                        {
                            attributeName: name,
                            conditionOperator: 12, // Does not contain Data
                        },
                    ],
                } as ComponentFramework.PropertyHelper.DataSetApi.FilterExpression);
            } else {
                filtering.clearFilter();
            }
            this.context.parameters.records.refresh();
        };


    This is awesome since we don’t need to worry about any delegation provided that the query can be translated into an OData query.

    But…watch out

    If you have a grid that performs a dynamic sort operation using each column name, and the user sorts on the account.createdby column which is of type Lookup.Simple - you might think that this would be a matter of using the column name:

    this.context.parameters.records.sorting.push({
        name: "createdby",
        sortDirection: desc ? 1 : 0,
    });

    After all, createdby is the column name that is given to us by Power Apps in the dataset metadata:

    {
      "name": "createdby",
      "displayName": "createdby",
      "order": 5,
      "dataType": "Lookup.Simple",
      "alias": "createdby",
      "visualSizeFactor": 1,
      "attributes": {
        "DisplayName": "Created By",
        "LogicalName": "createdby",
        "RequiredLevel": -1,
        "IsEditable": false,
        "Type": "Lookup.Simple",
        "DefaultValue": null
      }
    }

    Strangely, this does not cause any errors at run-time, and the data is actually sorted so it looks like it's working - but on closer examination, the query that is sent to the server is:

    accounts?$orderby=createdby+desc&$select=accountid,name,_createdby_value

    Seems legit? But the response is actually an exception:

    The $orderby expression must evaluate to a single value of primitive type.

    The reason is that the createdby logical name is not what the WebApi expects when sorting; instead, it expects the name _createdby_value.
    What appears to be happening is that after the query fails, canvas apps fall back to performing the sort in-memory in a non-delegated fashion - but this is not reported in an obvious way. The only indicators are the network trace and the somewhat confusing errorMessage on the dataset object of 'Invalid array length'.


    To get around this, we can't pass the column name that is used in our dataset; instead, we must use the OData name that the WebApi expects:

    this.context.parameters.records.sorting.push({
        name: "_createdby_value",
        sortDirection: desc ? 1 : 0,
    });
    

    This seems slightly unusual until you remember that we are simply hitting the OData endpoint - bringing us back into the imperative world with a bit of a bump! Remember, it won't do all of the fancy caching and performance optimizations that Power Fx does for you!
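
    If your grid sorts dynamically on whichever column the user picks, one approach is to check the column's dataType in the dataset metadata and switch lookup columns over to their _logicalname_value form before pushing the sort. This is only a sketch under that assumption - the helper and handler names are illustrative:

        // Returns the name the WebApi expects in $orderby for a dataset column.
        // Lookup columns must be sorted using their OData name: _<logicalname>_value
        private getSortColumnName = (dataset: ComponentFramework.PropertyTypes.DataSet, columnName: string): string => {
            const matches = dataset.columns.filter((c) => c.name === columnName);
            const isLookup = matches.length > 0 && matches[0].dataType.indexOf("Lookup") === 0;
            return isLookup ? "_" + columnName + "_value" : columnName;
        };

        onSortColumn = (name: string, desc: boolean): void => {
            const dataset = this.context.parameters.records;
            dataset.sorting.splice(0, dataset.sorting.length); // clear any existing sort
            dataset.sorting.push({
                name: this.getSortColumnName(dataset, name),
                sortDirection: desc ? 1 : 0,
            });
            dataset.refresh();
        };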


    Hope this helps,
    @ScottDurow

  12. At some point over the last few months, a change was introduced to the Power Platform CLI such that if you have the ESLint VS Code extension installed, after using pac pcf init you may see an error in VS Code:

    • 'ComponentFramework' is not defined.eslint(no-undef)

    This might look something like this in the index.ts file:

    The reason for this is that the pac pcf init template now includes an .eslintrc.json; however, it is configured to use JavaScript rules rather than TypeScript ones.

    To fix this, you simply need to edit the .eslintrc.json file.

    Find the extends section and replace the standard ruleset with:

    "extends": [
            "eslint:recommended",
            "plugin:@typescript-eslint/recommended"
        ]

    You also might see some other odd errors such as:

    • Unexpected tab character.eslint(no-tabs)
    • Mixed spaces and tabs.eslint(no-mixed-spaces-and-tabs)

    The reason for this is that the template used to create the index.ts file contains a mix of tabs and spaces for indentation. ESLint is warning about this - so you can either add the following lines to the top of the file, or change the indentation to spaces using the Format Document command.

    /* eslint-disable no-mixed-spaces-and-tabs */
    /* eslint-disable no-tabs */

    Hope this helps!

  13. You might have seen the announcement about Modernized Business Units in Microsoft Dataverse.

    I made a video on it as well to show you the opportunities that it opens up when designing Microsoft Dataverse security models for both model-driven and canvas apps.

    In summary, the change can be broken down into two parts:

    1. You can now assign a Security Role from a business unit outside of the user's own business unit - this allows users to access records in different business units as though they were a member of that business unit. This could replace more traditional approaches that might have previously involved sharing and/or team membership.
    2. Records can now have an owning business unit that is different from the business unit of the Owning Users/Team. This means that when users move between business units, there are potentially fewer scenarios where you need to re-assign ownership of those records, and the user can maintain access to their records without complex workarounds.

    Check out my video and the official docs for more info.

    Whilst I was exploring this new feature it occurred to me that this was perhaps the way that Security Role assignment and the Owning Business Unit were always meant to work from the start. Here are my reasons:

    1. The owningbusinessunit field has always been there in table definitions - but shrouded in mystery!
      1. It was automatically updated by the platform to match the business unit of the owning user or team.
      2. You couldn't add this field to forms.
      3. You couldn't always use it in view search criteria because it was set to non-searchable for some entities - but searchable for others.
      4. There was always mystery surrounding this field since there were limitations to its use - although if you wrote a plugin, you could set it inside the plugin pipeline to a value different from the owning user/team's business unit - but with unknown consequences!
      5. Alex Shlega even wrote a blog post about this mysterious field a few years ago.
    2. Security Roles have always been business unit specific:
      1. If you have ever had to programmatically add a Security Role to a user - you'll have had to first find the specific Security Role for the user's Business Unit since each Security Role created is copied for every single business unit - with a unique id for each.
      2. When moving a user between Business Units, their Security Roles were removed, because they were specific to the old business unit (This is now changing with this new feature thankfully!)
      3. I can't be 100% certain - but I have some dim-and-distant memory that when using a beta version of CRM 4.0 or maybe CRM 2011, there was the option to select the business unit of a Security Role when editing it. As it is today, you can't do this and will receive the message 'Inherited roles cannot be updated or modified':

        Now that would introduce some interesting scenarios where you could vary the privileges of the same role inside different business units.

    Maybe I dreamt that last point - but it certainly seems that whoever originally designed the data model for Business Units and Security Role assignment wanted to allow for users to have roles assigned from different business units - or at least supporting varying role privileges across different business units. Or maybe, it was a happy coincidence that the data model already supported this new feature!
    I wonder if there is anyone who worked on the original code who can comment!

    @ScottDurow

     

  14. As you know, I'm 'super excited'* about the new Power Fx low-code Command Bar buttons (First Look Video) (Ribbon Workbench compared to Power Fx) - especially the ease with which you can update multiple records in a grid. Allowing the user to select a number of records on a grid and then performing an operation on each in turn would previously have taken plenty of pro-code TypeScript/JavaScript, but can now be turned into a simple ForAll expression. 

    * That one's for you @laskewitz 😊

    The one thing that always gets left out - Error Handling!

    It's easy to assume that every operation will always succeed - but my motto is always "if it can fail, it will fail"!

    Just because we are writing low-code, doesn't mean we can ignore 'alternative flows'. With this in mind, let's add some error handling to our updates. 

    Step 1 - Add your Grid Button

    Using the Command Bar editor, add a button to an entity grid and use the formula similar to :

    With({updated:ForAll(Self.Selected.AllItems As ThisRecord, 
        If(Text(ThisRecord.'Credit Hold')="No", 
            Patch(Accounts, ThisRecord, { 'Credit Hold':'Credit Hold (Accounts)'.Yes });"Updated",
            "Skipped"
        )
    )},
    With({
        updatedCount: CountRows(Filter(updated,Value="Updated")),
        skippedCount: CountRows(Filter(updated,Value="Skipped"))
        },
        Notify("Updated " & Text(updatedCount) & " record(s) [" & Text(skippedCount) & " skipped ]");
    ));

    In this example code, we are updating the selected Account records and marking them as on 'Credit Hold' - but only if they are not already on hold. 

    Imagine if we had some logic that ran inside a plugin when updating accounts that performed checks and threw an error if the checks failed. This code would silently fail and the user would not know what had happened. To work around this we can use the IsError function and update the code accordingly:

    With({updated:ForAll(Self.Selected.AllItems As ThisRecord, 
        If(Text(ThisRecord.'Credit Hold')="No", 
            If(IsError(Patch(Accounts, ThisRecord, { 'Credit Hold':'Credit Hold (Accounts)'.Yes }))=true,"Error","Updated"),
            "Skipped"
        )
    )},
    With({
        updatedCount: CountRows(Filter(updated,Value="Updated")),
        errorCount: CountRows(Filter(updated,Value="Error")),
        skippedCount: CountRows(Filter(updated,Value="Skipped"))
        },
        Notify("Updated " & Text(updatedCount) & " record(s) [ " & Text(errorCount) & " error(s) "  & Text(skippedCount) & " skipped ]");
    ));

    Save and Publish your Command Button. This creates/updates a component library stored in the solution that contains your model-driven app.

    Step 2 - Open the component library and enable 'Formula-level error management'

    Since we are using the IsError function we need to enable this feature inside the component library. This along with the IfError function can be used to check for errors when performing Patch operations.

    Inside your solution, edit the command bar Component Library (it will end with _DefaultCommandLibrary), then select Settings and toggle on the Formula-level error management feature:

     

     Make sure you save, then publish your component library.

    Step 3 - Re-open the Command Bar Editor and publish

    After editing the Component library, it seems to be necessary to always re-publish inside the command bar editor (you will need to make a small change so that the editor enables the Save and publish button). You will also need to refresh/reload your model-driven app to ensure the new command bar button is picked up.

    Done! You should now have error handling in your command bar buttons 😊

    Hope this helps,

    @ScottDurow

    P.S. SUMMIT NA 2021 is next week! I can't believe it! I'll be speaking about custom pages and Power FX command buttons - if you are able, come and check out my session on Next Gen Commanding.

     

  15. If you have canvas apps that use code components then you will be used to the hard link between the namespace of the code component and the canvas apps that use it. Also, if you have your canvas apps in a solution, then there are now solution dependencies added for the code components used to ensure that they are installed before you import the solution to a target environment. You can read more about code component ALM in the Microsoft docs.

    How do we swap out components easily?

    Occasionally, you may need to change the namespace of a control (or perhaps change a property name) but it is very time-consuming to remove the control from all the apps, then re-add it after the update. This is especially painful if you have lots of Power Fx code referencing it.

    Power Apps CLI to the rescue

    I have long been an advocate of storing everything as code - and canvas apps are no exception here. The once-experimental canvas app 'PASopa' tool (Power Apps as source code) is now all grown up and is included in the Power Platform CLI. 😊

    In this post, I am going to show you how to take a solution, unpack it, update the code components, and then re-pack it so that the apps use the new components. This is made possible largely by canvas apps now being unpacked into Power Fx.

    Step 1 - Export your solution

    Export your solution that contains the canvas apps - but don't include the code components. Let's imagine you have two apps that both use two code components and the solution is named MyAppsSolution.zip.

    Step 2 - Unpack your solution

    Make sure you have the Power Platform CLI installed - I find it really easy to use the Power Platform Tools VSCode extension.

    Using PowerShell (assuming you are in the root folder that contains the solution zip):

    pac solution unpack --zipfile MyAppsSolution.zip --folder Solution

    This will create a folder named Solution that contains all the unpacked elements. There will also be a folder named CanvasApps that contains the msapp files and the corresponding metadata for your apps.

    Step 3 - Unpack the canvas apps to Power Fx

    Now we can unpack our two apps into the Power Fx source that we will need to edit. There are lots of files created in addition to the Power Fx source code; these contain metadata about the elements used in the app.

    Using PowerShell (assuming you are in the root folder that contains the solution zip):

    $dir=Get-Location
    Get-ChildItem -Recurse -force -Path $dir | where-object { $_.Extension -eq '.msapp' } | ForEach-Object { 
        pac canvas unpack --msapp $_.FullName --sources "CanvasAppsSrc\$($_.BaseName)"
    }

    You will now have the Power Fx source for each of your apps under the CanvasAppsSrc folder!

    There is an Entropy folder created for round-trip rebuilding, but we can safely delete these folders using PowerShell:

    Get-ChildItem -Recurse -Directory -Path $dir | where-object { $_.BaseName -eq 'Entropy' } | ForEach-Object { 
        Write-Host "Removing Entropy Folder $($_.FullName)"
        Remove-Item $_.FullName -Force -Recurse
    }

    Step 4 - Search/Replace

    This step is the trickier part - we need to find all the references to the code component's old namespace/solution prefix and replace them with the new one. This is easier if you have names that are unique!

    Open up the root folder in VSCode and use the global search and replace function (with case sensitivity turned on) so you can preview what you are doing. You will find the replacements needed in json, yaml, and xml files.

    Step 5 - Rename files

    When changing the namespace/publisher prefix of your code components, it is also necessary to rename the resource files that are part of your code components since each file is prefixed with the namespace. Additionally, the app resource file names are prefixed with the solution publisher.

    You can use the following PowerShell:

    $oldnamespace = "Samples";
    $newnamespace = "MyNamespace"
    $oldpublisher = "samples_";
    $newpublisher = "myprefix_"
    
    Get-ChildItem -Recurse -force -Path $dir | where-object { $_.Name.StartsWith($oldnamespace) } | ForEach-Object {
        rename-item -LiteralPath $_.FullName $_.FullName.Replace($oldnamespace,$newnamespace) 
    }
    Get-ChildItem -Recurse -force -Path $dir | where-object { $_.Name.StartsWith($oldpublisher) } | ForEach-Object {
        rename-item -LiteralPath $_.FullName $_.FullName.Replace($oldpublisher,$newpublisher) 
    }

    Step 6 - Repacking

    You can now re-pack the apps and the solution!

    Get-ChildItem -Recurse -force -Path $dir | where-object { $_.Extension -eq '.msapp' } | ForEach-Object { 
        pac canvas pack --msapp $_.FullName --sources "CanvasAppsSrc\$($_.BaseName)"
    }
    
    pac solution pack --zipfile UpdatedSolution.zip --folder Solution

    Step 7 - Update code components & Import solution

    Ensure that you have published the new versions of your re-built code components to your environment, making sure to increment the control versions. It is very important to increment the version of the code components so that the canvas apps will detect the new version when they are opened after the update. This will force the update to the new code-component resource files.

    You can now import the UpdatedSolution.zip into your environment.

    When you open each canvas app you will get the upgrade component prompt and your app will be using the new code components under the updated namespace!

    If you updated the code-component resource files then theoretically you could perform the same on a managed solution and remove the need to open and republish each app, but I've not tried this!

    Hope this helps!

    @ScottDurow

     

  16. If you are developing code components (PCF) for canvas apps you'll be used to using the 'Get more components' panel. When adding the code-component to the canvas app, occasionally you will receive the somewhat unhelpful error:

    Couldn't import components

     There are no more details provided in the expanded down arrow area:

    I'm putting this here for my future self (who always seems to forget about this issue) and anyone else who comes across this error.

    The cause is usually that you have a property-set with the same name as a built-in property or one of your own (e.g. Enabled/Selected).

    The resolution is simply to prefix your property-set names with the name of the data-set:

    <data-set name="Items"... >
    <property-set name="ItemsSelected" .. />
    <property-set name="ItemsEnabled" .. />
    </data-set>
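
    Once renamed, the property-set values are still read from each dataset record in the usual way - something like this sketch (the names match the manifest snippet above, and this is assumed to run inside updateView or similar with access to context):

        // Reading the renamed property-set values from the bound dataset
        const dataset = context.parameters.Items;
        dataset.sortedRecordIds.forEach((id) => {
            const record = dataset.records[id];
            const selected = record.getValue("ItemsSelected");
            const enabled = record.getValue("ItemsEnabled");
            // ...use the values when rendering the row
        });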

    Hope this helps!

     

     

  17. One of the longest-standing mottos of the Power Platform has been ‘no cliffs’. This somewhat odd phrase has definitely stood the test of time due to its simplicity and its powerful message. That is - you shouldn’t ever find yourself developing solutions using the Power Platform’s ‘low-code’ features and suddenly hitting an impassable ‘cliff’. Instead, there are multiple avenues forward using various degrees of ‘pro-code’ features. I think the message also hints at being able to start as a small app and grow into a larger enterprise one without hitting cliffs – although this story is still unfolding.

    In parallel to the Power Platform ‘no-cliffs’ story is the saga of the 1st party Dynamics 365 apps that sit on top of the Dataverse and Power Platform (tale of three realms). Originally, Dynamics 365 was the platform, but since the two have mostly separated, they are on somewhat separate development cycles. That said, of course, many features are built by the Power Platform team for a specific need of the Sales/Service/Marketing apps – or at least, they use the features before they are publicly available. This creates a rather nice testing ground and promotes a component based architecture – rather than the original monolithic platform approach of the Dynamics CRM days.

    Black-box user interfaces

    But here’s the thing. With each release of the Dynamics 365 apps come user interface features that are beautiful, but alas painfully out of reach of pure Power Platform apps unless you want to do some awesome pro-coding!

    There have been some amazing steps made recently that makes it much easier to create beautiful low-code model-driven app user interfaces:

    1. App convergence - custom pages
    2. Power Fx commanding
    3. Model-driven side panels
    4. Fluent UI controls in canvas apps
    5. PCF code components in canvas apps

    These awesome features allow us to create beautiful low-code user interfaces and then extend them using pro-code components. Using PCF code components inside Custom Pages makes it possible to create some really complex user interfaces using libraries like React and Fluent UI – but it’s certainly not low-code!

     

    Take the new Deal Manager in 2021 Wave 2 of the Dynamics 365 Sales App. It has a rather juicy-looking user interface. Underneath all that beauty is some awesome productivity functionality such as opportunity scoring & custom metrics.

    My point? I would love it if the platform allowed us to build low-code user interfaces with ease and efficiency to look just like this – or at least similar. If you have the appetite and the willingness to build/support custom user interfaces, then the components used by the 1st party apps should be there to use in custom apps instead of having to revert to pro-code. The primary reasons to buy licenses for the 1st party apps should be about the functionality and features that it provides. The user interface should be provided by the platform. Furthermore, if we wanted to customise the 1st party user interface, they should be easily extendable. The Deal Manager currently is one monolithic closed Power Apps Component Framework control that has very limited customisability.

    I would love for the ethos of the 1st party apps to be about delivering as much functionality as possible using low-code features rather than reverting to pro-code – this would benefit both the Platform and its customers.

    Extendable using the low-code platform

    I hope in future releases there will be more focus on component-based user interfaces where screens like the 1st party Deal Manager screens are actually composite screens built using mostly standard components that are provided by the platform – with only very exceptional 1st party specific user interfaces being inside PCF components.

    This would make these screens editable/extendable by customers if desired instead of the black-box that they mostly are today. If a completely different user interface is required that looks similar, then the same components should be able to be added using low-code to glue them together.

    This is needed so that we don’t return to the days when it was very common for ‘out of the box’ features to be black-boxes and not usable or extendable by customizers.

    Starting from Primitives is hard to maintain

    Canvas Apps often get referred to as ‘pixel perfect’ apps. You can certainly build beautiful user interfaces using primitives such as lines, rectangles, galleries, and buttons. But this comes at a cost. The Dataverse for Teams sample apps are visually stunning – but if you look at how they are written you will see what I mean. Starting from primitives in this way to build apps that look like model-driven apps is very complex and hard to maintain. The trouble is that it starts to undermine the benefits of a low-code platform when you have apps of this level of complexity. When we build business apps, we need components that shield us from most of the complexity and leave us to write the functionality needed. This is what app convergence is really about – being able to have the best of both worlds - model-driven and canvas apps:

    1. Build complex metadata-driven user interfaces quickly that are consistent and easy to maintain
    2. Create custom layouts using drag-and-drop components with Power Fx code to glue functionality together.

    So, at what point can we say that the apps have converged?

    I don’t think converged apps means the end of stand-alone canvas apps – there is always going to be a time and place for ‘pixel-perfect’ low-code apps that have a very custom look and feel. Instead, let’s hope we see canvas components such as forms, editable grids, command bars, tab controls & model-driven charts that can be glued together with Power Fx to create beautiful custom composed pages, so we can focus on building business features rather than having to hand-craft every grid, command bar and form. This is not a new idea - check out my app convergence predictions from back in 2019 (Why do we have 2 types of apps) where I describe my idea of being able to ‘unlock’ a model-driven page so that it turns into a canvas page, composed with all the model-driven app components – now wouldn’t that be cool!

     UG Summit NA

    Those nice people at Dynamic Communities have asked me to be a Community Ambassador for SUMMIT NA 2021. This is going to be an exciting in-person event (fingers crossed) where I'll be speaking about custom pages and Power FX command buttons - it would be great to see you there if you are at all able to travel to Houston, Texas. I will be talking more about app convergence and doing some demos of where we are today - App Convergence is Almost Here with Custom Pages in Model-Driven Apps.

    If you are in North America and feel comfortable with an in-person event, you can register using the promo code Durow10 to get a 10% discount if you've not already done so! 

    @ScottDurow

     

     

     

     

  18. Until recently I didn’t consider attending in-person events at all. I saw calls for speakers for live events - but I just moved on. Sound familiar? I have been avoiding the idea of in-person events altogether!

    Thanks to the nudge from a few folks, I realised that now is the time to re-boot – and suddenly I realised how much I’ve missed attending this kind of in-person event. Somehow, I’d convinced myself that virtual was all we will ever need. Back in 2020, I was booked to attend Community Summit in Europe just before the world closed down, and like many of you, was left out of pocket when the event was cancelled. This naturally left a bit of a sour taste, and in-person events became tainted by the scramble that followed. Of course, I presented at the virtual UG summit, but it really didn’t feel that different from any of the other user group virtual meetings that I regularly attend. 

    Was it the end?

    Was this the end of the Dynamics Community UG events as we knew them? Here is a photograph from the archives - a UG meeting back in 2011 held at the Microsoft Reading UK Offices. Can you spot a very fresh-faced me? Can you spot yourself?! Anyone else you know?

     Crowd of people at 2011 CRMUG meeting - Scott is looking fresh-faced and young!

    Here is a picture of Simon Whitson with me, trying to re-create the updated UG logo!

     Scott and Simon are chest bumping to look like the Dynamics Community logo that is two arcs with a circle.

    Looking back at past community summits has reminded me how important these events are to being part of the community, connecting, learning and sharing together in person - something that I now realise I had forgotten about amongst the focus on running virtual events.

    ⏩Fast-forward to today, and I’m so excited to be attending the Community Summit NA on October 12-15 that is being held in Houston, Texas. Having been part of community summits in Europe and helping run the UK community days for so many years, these types of events have played a huge part in my professional development.

    I moved to Canada last year, and so now it’s my opportunity to attend the North American version of this yearly community event. I think it’s also time to start planning some face-to-face meetings with the Vancouver Power Apps User Group

    Mileage may vary 

    I realise that there are some people that are still not ready (or even able) to attend in-person meetings, and I really don’t want to offend anyone. I especially feel for those that are entering into yet another lock-down. If this is you, then please accept my sincerest apologies for even talking about in-person events. I think the success and benefits we have enjoyed from virtual events during these difficult months will mean that they will continue to be successful alongside in-person ones.

    So what can we learn? 🤔

    Community events should have a true sense of ‘belonging’, where we can all come together, connect, and learn from one another. We have learnt so much over these past months about how to be more inclusive and accessible during events, and I’m looking forward to applying this experience to make in-person community events even better! Here are a few topics that I’ve started to think about:

    • Medic Stations – how can we make these more approachable and less intimidating? Not everyone wants to engage in this way – how can we offer alternatives so you can get your questions answered without having to compete with others and wait in line?
    • Breakout sessions – are these as accessible as they could be? Is the language always inclusive? Q&A time can sometimes be such a rush at the end, especially when there is another session starting afterwards. Is the session description an accurate reflection of the content so that attendees can easily identify if it’s for them or not?
    • Expo halls – these can be crazy loud, busy and sometimes intimidating. How can it be made easier to navigate the maze to find what you want? Is there a way to find a time that is quieter so that everyone can get the most out of what is on offer?
    • R & R – When we are at home, we can usually find space to relax and recharge. Conversely, in-person events can be so exhausting, with moving from room to room whilst our brains are being crammed full of new information, not to mention the social anxiety that you might feel. How can breaks be made to count more? Are refreshments provided to suit everyone’s needs?
    • Hybrid Sessions  - Should all sessions be streamed for those who are unable to make the physical event - or is it sufficient to have the sessions recorded and then available on-demand later?

    Perhaps you have more ideas – I would love to hear them - do get in touch and let me know!

    See you there?

    Those nice people at Dynamic Communities have asked me to be a Community Ambassador for SUMMIT NA 2021. This is going to be an exciting in-person event (fingers crossed) where I'll be speaking about custom pages and Power FX command buttons - it would be great to see you there if you are at all able to travel to Houston, Texas. Here are my sessions:

    If you are in North America, do you feel comfortable yet with an in-person event? You can also register using the Promo code Durow10 to get a 10% discount if you've not already registered!

    The most important part is that they are offering a 100% money-back guarantee should the event be cancelled due to COVID-19 - and naturally there is a significant focus on health and safety.

    Looking forwards to meeting IRL 😊

    @ScottDurow

  19. As a follow-on to my last post on adding custom page dialogs to model-driven forms, in this post, I'm going to show you how you can easily do the same using the Next Gen Commanding Buttons.

    Ribbon Workbench Smart Buttons have two parts:

    • Smart Button Manifest - the manifest file included in the smart button solution that defines the templates that are picked up by the Ribbon Workbench
    • JavaScript library - the actual run-time JavaScript library that is called by the smart buttons themselves

    Since the JavaScript library is just a normal web resource, it can also be called from a Commanding V2 button, because you can use JavaScript actions as well as Power Fx actions!

    1. Ensure you have the latest version of the smart button solution installed and add a custom page to your model-driven app as described by my last post.

    2. Edit the command bar using the new command bar editor.

    3. Add a new button to the command bar and select Run Javascript instead of Run formula for the Action parameter.

    4. Use the + Add Library button. Search for smartbuttons and select the dev1_/js/SmartButtons.ClientHooks.js library.

    5. Set the function parameter to be SmartButtons.ClientHooks.SmartButtons.OpenDialog

    6. Add the following parameters:

    Parameter 1 (On a Grid): SelectedControlSelectedItemIds

    Parameter 1 (On a Form): PrimaryItemIds

    Parameter 2: The unique name of your custom page

    Parameter 3: The width of your dialog (or zero for a sidebar)

    Parameter 4: The height of your dialog (or zero for a side-bar)  

    Parameter 5:  The title of your dialog

    Parameter 6: PrimaryControl

    Parameter 7: SelectedControl

    Parameter 1 is a dynamic property that passes the id of the currently selected record - Parameters 6 & 7 give the code a context to use when calling the Client API.
    Once this is done you will see something like the following:

    Commanding V2 designer with show dialog smart button

    7. If you are adding a button to a grid, you will also need to set Visibility to Show on condition from formula with the expression:

    CountRows(Self.Selected.AllItems)=1

    This will ensure that the button is only shown when a single record is selected in the grid.

    8. Save and Publish

    ...and that's all there is to it! Using Smart Buttons in the Ribbon Workbench has the advantage that it will set up much of this for you and only ask you for the parameters needed, but the new commanding designer is so easy to use it makes using the Smart Button library really straightforward. 

    P.S. There is a bug that will be fixed by Microsoft in the coming weeks where commanding v2 JavaScript buttons do not show up correctly on forms.

    See more at community Summit NA!

    Those nice people at Dynamic Communities have asked me to be a Community Ambassador for SUMMIT NA 2021. This is going to be an exciting in-person event (fingers crossed) where I'll be speaking about custom pages and Power FX command buttons - it would be great to see you there if you are at all able to travel to Houston, Texas. Can't wait to show you all the cool new features. You can also register using the Promo code Durow10 to get a 10% discount if you've not already registered!

    @ScottDurow 😊

  20. Now that custom pages are released (in preview), we are one step closer to the convergence towards a single app type that has the best of model-driven apps and canvas apps.

    Previously, I had released a Ribbon Workbench smart button that allowed opening a canvas app as a dialog via a command bar button. With the latest release of the smart buttons solution you can add a button to open a custom page as a dialog box or sidebar. This creates a really native feel to the dialog since it's included inside the page rather than an embedded IFRAME, and the good news is that it's really easy to upgrade from the previous Canvas App dialog smart button!

    Demo of Custom Page Dialog

    Step 1: Add a custom page to your solution

    Open make.preview.powerapps.com and open the solution that contains your model-driven app.

    Inside the solution, select  + New -> App -> Page. 

    Add Page

    The page editor will open, which is essentially a single Screen canvas app. In this example, I create a dialog to update the Credit Hold flag on a record and add some notes to the Description. In order to do this, we need to get a reference to the record that the dialog is being run on. Inside the App OnStart event, add the following code:

    Set(varRecordId, If(
        IsBlank(Param("recordId")),
        GUID("d2f2d759-8ef8-eb11-94ef-0022486dba07"),
        GUID(Param("recordId"))
        ));
    Set(varSelectedRecord, LookUp(Accounts, Account = varRecordId))

    Notice that there is a hard-coded GUID there - this is simply for testing purposes when running inside the designer. You can replace it with the GUID of a record in your dev environment - or you could use First(Accounts) to get a test record. When the dialog is opened inside the app, the recordId parameter will contain the GUID of the current record.

    The size of the screen needs to be adjusted to accommodate the borders of the dialog - so edit the Screen Width and Height properties to be:

    Height: Max(App.Height, App.MinScreenHeight)-24
    Width: Max(App.Width, App.MinScreenWidth)-10

    Now we can add a root container with a width and height set to Parent.Width & Parent.Height - this will result in a responsive layout. You can then add child layout containers that hold the dialog controls. The layout might look like the following:

    Custom Page designer

    Notice the nested horizontal and vertical layout containers which work great for creating responsive layouts. This is especially important because we want our dialog to work both as a popup modal dialog as well as a sidebar dialog. The sidebar dialog will fill the available height and so our dialog content should also expand to fill the available height. 

    We can show the name of the selected account by using the variable we set in the App OnStart, setting the Text of a label to the expression:

    Concatenate("Are you sure you want to submit the account '",varSelectedRecord.'Account Name',"' for credit check?")

    The Cancel button can close the dialog using:

    Back()

    Note: This is slightly different from a canvas app smart button that would call Exit().

    The Confirm button can run the expression:

    Patch(Accounts,varSelectedRecord,{'Credit Hold':'Credit Hold (Accounts)'.Yes,Description:txtNotes.Value});
    Back();

    This will simply update the credit hold and description columns for the selected record and then close the dialog.

    You can download my example dialog from here - https://github.com/scottdurow/RibbonWorkbench/raw/master/SmartButtonsUCI/SampleCustomPageSmartButton.zip 

    When you Save and Publish your custom page, it will be given a unique name that we will use when creating the smart button:

    Custom Page Unique Name

    Unfortunately, you can't copy this unique name from the solution editor, but in the next step once it is added to the app designer it can be selected and copied!

    Step 2: Add a custom page to the app

    The custom page preview allows you to add the custom page to the app in the model-driven app navigation, but we can also add it to the app without it being visible. This is required to enable opening the custom page as a dialog.

    Open your model-driven app in the preview editor (Open in preview) and select Pages -> Add Page -> Custom (preview) -> Next -> Use an existing Page

    Select the page you created in step 1. Uncheck Show in navigation, and then click Add.

    You can now easily copy the unique name of the custom page that you'll need in the next step when adding the smart button.

    Unique Name in App Designer

    You now need to Save and Publish the app.

    Note: You will need to Save and Publish each time you make a change to your custom page.

    Step 3: Install the latest smart button solution

    You will need the latest smart buttons solution – https://github.com/scottdurow/RibbonWorkbench/releases

    Step 4: Add dialog smart button

    When you open the Ribbon Workbench for the environment that the Smart Button solution and Canvas App are installed into, you can then drop the ‘Open Dialog’ button on either a Form, SubGrid, or Home Grid.

    Set the smart button properties to be:

    Title: The text to display on the button
    Dialog Url/Custom Page Unique name: The unique name copied from the app designer. E.g. dev1_pageaccountsubmitforcreditcheck_e748f
    Width: 470
    Height: 350 (or zero to show the dialog as a sidebar)
    Dialog Title: The name to show at the top of the dialog. E.g. Credit Check 

    Now you just need to save and publish and that's it!

    Note: You might need to enable Wave 2 2021 depending on which release your environment is on. I have seen some environments not work correctly when using custom pages due to the recordId parameter not being correctly passed to the custom page.

    Migrating from canvas app dialog smart buttons

    If you have been using the canvas app dialog smart button approach, then you can very easily migrate to this custom page technique by performing the following:

    1. Create a custom page as described above, but copy and paste the screen contents from your existing canvas app. It's cool that you can copy and paste controls between custom pages and canvas apps!
    2. Update the layout to use the new responsive containers.
    3. Add the custom page to your model-driven app.
    4. Update the Open Dialog smart button with the unique name of the custom page instead of the canvas url.

    Remember that this feature is still in preview and does not work inside the native mobile/tablet apps at this time. You can read more about how this smart button works in the docs topic: Navigating to and from a custom page using client API (preview).
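
    Under the covers, opening a custom page as a dialog uses the Client API described in that docs topic. For reference, here is a rough sketch of calling it directly from your own JavaScript rather than via the smart button (this is not the Smart Button implementation - the page name, entity, and sizes are placeholders):

        declare const Xrm: any; // provided by the model-driven app at runtime

        // Opens a custom page as a centered dialog using Xrm.Navigation.navigateTo
        function openCustomPageDialog(pageName: string, recordId: string): void {
            const pageInput = {
                pageType: "custom",
                name: pageName, // the unique name copied from the app designer
                entityName: "account",
                recordId: recordId,
            };
            const navigationOptions = {
                target: 2,   // 2 = open as a dialog
                position: 1, // 1 = centered dialog, 2 = side pane
                width: { value: 470, unit: "px" },
                height: { value: 350, unit: "px" },
                title: "Credit Check",
            };
            Xrm.Navigation.navigateTo(pageInput, navigationOptions).then(
                () => { /* the dialog has been closed */ },
                (error: unknown) => { console.error(error); }
            );
        }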

    In my next post, I'll show you how to do this using the Commanding V2 designer rather than the Ribbon Workbench!

    See more at community Summit NA!

    Those nice people at Dynamic Communities have asked me to be a Community Ambassador for SUMMIT NA 2021. This is going to be an exciting in-person event (fingers crossed) where I'll be speaking about custom pages and Power FX command buttons - it would be great to see you there if you are at all able to travel to Houston, Texas. Can't wait to show you all the cool new features. You can also register using the Promo code Durow10 to get a 10% discount if you've not already registered!

  21. [Update June 2022] The modern command designer is now GA!

    Power Fx command bar buttons (Commanding V2) is the latest exciting feature to be released into preview by the Power Platform team! Check out Casey's blog post and my first look video where I show how amazingly easy it is to add complex functionality to your model-driven command bars!

    The Ribbon Workbench marked its 10-year anniversary this year and so it's fitting that the new Power Fx command buttons for model-driven apps have been released. This exciting new feature is part of the continued journey of converging the goodness of both model-driven apps and canvas apps into a single app that gives the best of both worlds! In this post, I'll identify some of the differences in functionality. This initial release provides the foundation for Power Fx and as you'll see there are still gaps - but I am confident that the functionality will be developed over the coming months.

    Key Design differences

    The Ribbon Workbench (and the underlying RibbonXml that supports it) has many legacy components that are baggage from the days when there was a Ribbon rather than a command bar. Things like Groups, Tabs & Templates have no meaning in the command bar as we see it today. For this reason, the new Power Fx command buttons have greatly simplified the model for customizing the model-driven app command bar.

    Here are some of the key differences in the design:

    • Buttons, Commands & Visibility Rules are linked - In the Ribbon Workbench, you would create a button and then associate it with a command. With Power Fx commands, the button, command, and visibility rules are all linked together as a single unit.
    • Localized Labels are part of the solution translations - In the Ribbon Workbench, button label translations were part of the RibbonXml, whereas with Power Fx commands you can use the standard export/import translations feature for the solution to provide translations.
    • Customizations are deployed via separate solution components - In the Ribbon Workbench, your command buttons were deployed via the entity/table solution component. With Power Fx commands, you add the Component Library to your solution to deploy command buttons. 
    • No need for a solution import - Since the Power Fx commands are deployed using Component Libraries, there is no need for the lengthy export/unpack/update/rezip/import cycle that happens when you publish from inside the Ribbon Workbench. This makes working with the Power FX Command buttons much quicker!
    • Power Fx solution packager required to see command details - When exporting the solution that contains the Command Component Libraries, the expressions are inside the .msapp files. To see the details, you will need to use the new Power Fx Solution Packager functionality to extract into yaml files and add this to source control. The great news is that canvas app unpacking/packing is now included in the PAC CLI.

    You can still use JavaScript Commands!

    Possibly one of the most important aspects of the new commanding feature is that you can still call your existing JavaScript for commands (but not Enable rules at this time). Why is this important? Because it makes the path to migrate to Version 2 commands easier where the functionality is not yet possible in Power Fx expressions.

    Common Requirements

    The following table shows common requirements that I find needed when customizing the command bar using the Ribbon Workbench. You'll see that there are still gaps that will require the Ribbon Workbench for the time being - but these will be addressed over time.

    For each common requirement, the Ribbon Workbench approach is shown first, followed by the Commanding V2 equivalent:

    • Hide existing OOTB button
      Ribbon Workbench: Hide Action
      Commanding V2: Not yet available

    • Move existing OOTB button
      Ribbon Workbench: Customize Button and drag to the new location
      Commanding V2: Not yet available

    • Change label/icon of existing OOTB button
      Ribbon Workbench: Customize Button and edit properties
      Commanding V2: Not yet available

    • Change command of existing OOTB button
      Ribbon Workbench: Customize Command and edit actions
      Commanding V2: Not yet available

    • Pass CommandValueId to JavaScript Context when the same command is used on multiple buttons
      Ribbon Workbench: Set CommandValueId property
      Commanding V2: Not applicable since the command is not separate from the button

    • Update a form value and then save the record
      Ribbon Workbench: 'QuickJS' Smart Button or custom JavaScript. The PrimaryControl parameter provides the event context which can be used to access the form context.
      Commanding V2:
      Patch(Accounts,Self.Selected.Item,{'Credit Hold':'Credit Hold (Accounts)'.Yes});
      Note: The form is automatically saved and refreshed!

    • Update/Create a related record
      Ribbon Workbench: 'QuickJS' Smart Button or custom JavaScript that uses the WebApi and then calls refresh on the formContext provided by a PrimaryControl parameter.
      Commanding V2: Update related record:
      Patch(Accounts,Self.Selected.Item,{'Description':"edit"});
      Create related record (an additional data source must be added to the component library):
      Patch(Tasks,Defaults(Tasks),{Regarding:Self.Selected.Item,Subject:Concatenate("Test",Text(Now()))});
      Note: The form is automatically refreshed!

    • Add buttons to a flyout button
      Ribbon Workbench: Use the Flyout or SplitButton toolbox control with a MenuSection
      Commanding V2: [UPDATE] Now available at GA!

    • Dynamically populate a flyout button (e.g. from a WebApi call)
      Ribbon Workbench: Use the PopulateQueryCommand with a Custom JavaScript Action
      Commanding V2: Not yet available

    • Add buttons to the Application Ribbon so that they appear in multiple locations (including the global command bar)
      Ribbon Workbench: Add the Application Ribbon to the solution loaded into the Ribbon Workbench. The entity type can be used in an EntityRule to show buttons for multiple entities.
      Commanding V2: [UPDATE] Although you cannot add to the Global Command Bar, you can change the scope of buttons to appear across apps or across all tables. There is no designer support at this time for changing scope.

    • Run a command on multiple selected records on a grid
      Ribbon Workbench: Use a Custom JavaScript Command that accepts SelectedControlSelectedItemIds as a parameter - and then iterate over the array, performing an action for each.
      Commanding V2: New! To apply an update to multiple selected records use something similar to:
      Patch(
          Accounts, 
          ForAll(Self.Selected.AllItems, 
              {
                  Account:ThisRecord.Account,
                  'Credit Hold':'Credit Hold (Accounts)'.Yes 
              }
          )
      )
      Note: This will ensure that the records are updated in parallel, instead of one at a time.

    • Display a blocking wait spinner whilst a long-running task is in progress
      Ribbon Workbench: Use showProgressIndicator inside Custom JavaScript.
      Commanding V2: Not yet available in Power Fx command expressions

    • Run a workflow
      Ribbon Workbench: 'Run Workflow' Smart Button or custom JavaScript.
      Commanding V2: Trigger a workflow on change of a form field

    • Run a report
      Ribbon Workbench: 'Run Report' Smart Button or custom JavaScript.
      Commanding V2: Use a Custom JavaScript function - you can call the Smart Button JavaScript

    • Run a cloud flow
      Ribbon Workbench: 'Run Webhook' Smart Button or custom JavaScript.
      Commanding V2: Use a Custom JavaScript function - you can call the Smart Button JavaScript, or open a custom page using Navigate() (or JavaScript) and then run the flow from there.

    • Open a popup custom page dialog from a button
      Ribbon Workbench: 'Open Dialog' Smart Button linked to a Canvas App
      Commanding V2: Use a Custom JavaScript function - you can call the Smart Button JavaScript or use your own JavaScript. There is also Confirm() that allows you to present a simple Yes/No style popup.

    Visibility Rules

    Perhaps the biggest gap in functionality at this time is in the area of visibility rules:

    • Show button only for a specific Form
      Ribbon Workbench: Use a Custom JavaScript Enable Rule or add the RibbonXml to the FormXml
      Commanding V2: Use a visibility rule similar to:
      'My custom table (Forms)'.'My custom form'=true
      UPDATE: This no longer works and currently there is no alternative.

    • Show button only for a specific View
      Ribbon Workbench: Use a Custom JavaScript Enable Rule
      Commanding V2: Use a visibility rule similar to:
      'My custom table (Views)'.'My custom view'=true
      UPDATE: This no longer works and currently there is no alternative.

    • Show button only for a specific App
      Ribbon Workbench: Use a Custom JavaScript EnableRule that returns true for specific app unique names
      Commanding V2: Commands are added to specific apps. [Update] You can change the scope of buttons to appear across apps or across all tables. There is no designer support at this time for changing scope.

    • Show a button only when online/offline
      Ribbon Workbench: Use the CrmOfflineAccessStateRule
      Commanding V2: Not yet available

    • Show a button based on the user's security privileges
      Ribbon Workbench: Use the RecordPrivilegeRule or MiscellaneousPrivilegeRule
      Commanding V2: Use DataSourceInfo() to determine if the user has access to a specific table. Use RecordInfo() to determine access to a specific record.

    • Show a button based on certain entity metadata (e.g. IsActivity)
      Ribbon Workbench: Use the EntityPropertyRule in a Display Rule
      Commanding V2: Some information is available from DataSourceInfo().

    • Show a button only for existing or read-only records
      Ribbon Workbench: Use the FormStateRule in a Display Rule
      Commanding V2: [UPDATE] Use a visibility expression similar to:
      Self.Selected.State = FormMode.New
      You can also determine if the record is 'Dirty' using: Self.Selected.Unsaved = true

    • Show a button only when at least one record is selected in the grid
      Ribbon Workbench: Use the SelectionCountRule inside an EnableRule
      Commanding V2: Visibility expression:
      CountRows(Self.Selected.AllItems)>0
      OR 
      !IsEmpty(Self.Selected.AllItems)
      I prefer the CountRows version because it's more consistent with other situations like this next one.

    • Show a button only when a single record is selected in the grid
      Ribbon Workbench: Use the SelectionCountRule inside an EnableRule
      Commanding V2: Visibility expression:
      CountRows(Self.Selected.AllItems)=1

    • Show a button based on a form field value
      Ribbon Workbench: ValueRule inside an EnableRule. refreshRibbon must be called inside the onchange event of the form field.
      Commanding V2: Visibility expression:
      Self.Selected.Item.'Credit Hold'='Credit Hold (Accounts)'.Yes
      NOTE: refreshRibbon still must be called if you want the button to show/hide when the field is changed.
      Currently, there is an issue when using optionsets/status reasons like this where you will need to cast to a String and compare using:
      Text(Self.Selected.Item.'Credit Hold')="Yes"

    • Show a button only when a related record column has a specific value
      Ribbon Workbench: Use a Custom JavaScript EnableRule that performs a WebApi query.
      Commanding V2: Self.Selected.Item.'Parent Account'.'Credit Hold'='Credit Hold (Accounts)'.Yes

    • Show a button when a form value matches a complex expression
      Ribbon Workbench: Use a Custom JavaScript EnableRule that performs a WebApi query or uses the provided formContext.
      Commanding V2: StartsWith(Self.Selected.Item.'Account Name',"a")

    • Show a button when there are a specific number of related records matching a query
      Ribbon Workbench: Use a Custom JavaScript EnableRule that performs a WebApi query.
      Commanding V2: CountRows(Self.Selected.Item.Contacts)>0

    Summary

    I will come back to this page and update it as new features are unlocked. You can also read more in the official documentation. As you'll see from the tables above, there are some gaps (especially with Enable/Display rules) but I have no doubt that they will be filled 'in the fullness of time'. The ease with which you can create complex Power Fx expressions to perform logic that would have previously required some complex JavaScript is very exciting and will unlock many scenarios that were previously off-limits to low-code app makers.

    @ScottDurow

    [Updated 2 June 2022]

  22. [Update June 2022] The modern command designer is now GA!


    Power Fx command bar buttons in model-driven apps is the latest exciting feature to be released into preview by the Power Platform team! Check out my first look video and Casey’s blog post.

    This post shows you the steps to follow to add a command bar button on a model-driven form that creates a related task for the account record, and to only show the button when the Credit Hold flag is set to Yes. This would normally require custom JavaScript and the Ribbon Workbench, but it can now be accomplished with a simple expression!

    1. Open the new Model Driven App editor

    First, we must open the new model-driven app editor to access the command bar editor.

    1. Create a new model-driven app and add the account table.
    2. Open the solution that contains the model-driven app using the preview editor (make.preview.powerapps.com)
    3. Using the context menu on the model-driven app, select Edit -> Edit in preview
    4. This will open the new app designer preview. Eventually, this will be the default experience.

    Open Preview Editor

    2. Edit the command bar

    Once the app designer has opened we can edit the command bar on the account table. We will create a form button to create a new task for the selected account.

    1. Inside the Pages panel, select the Account Page context menu -> Edit command bar (preview).
    2. Select the Main form command bar to edit.
    3. The command bar editor will open.

    Edit Command Bar

    3. Add Power Fx Command Button

    The command bar editor will show all the buttons configured for the account main form. Some of these buttons will not be visible by default but are still displayed in the editor. This is very much like the Ribbon Workbench. The existing buttons are considered V1 buttons and cannot be edited at this time.

    1. Select New command.
    2. In the Command properties panel on the right, set the Label and Icon of the button.

    Note: You can also upload your own svg rather than selecting from the out-of-the-box icons available.

    Add Command

    4. Set Visibility Expression

    This is where Power Fx starts to make an appearance!

    1. In the Visibility section, select Show on condition from formula at the bottom (you may need to scroll down).
    2. Notice the Expression drop-down now shows Visible rather than OnSelect.
    3. Enter the expression:
      Self.Selected.Item.'Credit Hold'='Credit Hold (Accounts)'.Yes

      Note:
      You can also use navigation properties to access related records in these kinds of expressions!
    4. Save and Publish and then close the editor window.

    Setting Visibility

    5. Open Component Library and add a data source

    So that we can add a new task, we must add the Tasks data source connection much like we would in a canvas app.

    1. In the solution editor, select Component libraries and then open the CommandV2 component library that should have been created.
    2. Once the editor has opened, select Data in the left-hand panel.
    3. Select Add data.
    4. Select the Tasks table from the Current environment connector.

    Add Task Datasource

    6. Close Component Library to release the lock

    When you open a component library, a lock is taken out to prevent it from being edited in multiple sessions. We must close the editor to release the lock.

    1. Select File -> Save.
    2. Select Publish -> Publish this version.
    3. Select Close.

    Closing Component Library

    7. Add OnSelect Expression to create a task

    Now we can add the Power Fx expression to create the new task related to the account record.

    1. Open the command bar editor again using Edit command bar (preview) from inside the model-driven app editor.
    2. Select the Main Form again.
    3. Select the Credit Check button.
    4. In the OnSelect expression enter:
      Patch(Tasks,Defaults(Tasks),{Regarding:Self.Selected.Item,Subject:"Credit Check Follow Up"});
      Notify("Credit task created",NotificationType.Success);
    5. Select Save and Publish.
    6. Select Play to open the model-driven app.

    Adding Command

    8....and the result!

    Once the model-driven app opens, you can open an account record and see the Credit Check button appear only when the Credit Hold column is set to Yes.

    Selecting the button will create a new task related to the current record! Notice that the form is automatically refreshed to show the new record created inside the related records.

    Note: If you wanted to make the button appear as soon as Credit Hold is set to Yes, you would need to add a call to refreshRibbon inside the form field's OnChange event.
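    The handler itself is a one-liner. Here is a minimal sketch of the kind of OnChange handler you would register on the Credit Hold field (the function name is just illustrative):

    function onCreditHoldChange(executionContext) {
        // Re-evaluate the command bar rules so the new button shows/hides immediately
        var formContext = executionContext.getFormContext();
        formContext.ui.refreshRibbon();
    }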

    To add this functionality using the Ribbon Workbench would have required JavaScript and would be considerably more complex. The new Power Fx command buttons unlock many customizations for low-code app makers!

    There are still some requirements that are not yet possible to implement using the new Power Fx Commanding, where you will need to continue to use the Ribbon Workbench. One example of this is the more complex display/enable rules you could create such as visibility depending on the user's security privileges - but I am hopeful that these gaps will be filled in the 'fullness of time' 😊 

    Watch out for more posts from me on Power Fx commands!

    @ScottDurow

  23. If you are building code components for Power Apps (PCF) you might be using msbuild to build cdsproj projects:

    msbuild /p:configuration=Release

    This works well on Windows, and requires either Visual Studio or Build Tools for Visual Studio with the .NET build tools workload installed.

    What if you don't want to install Visual Studio, or you are not running on Windows? The good news is that you can still develop and build code components (or run a build inside a non-Windows automated build pipeline) using the .NET Core equivalent:

    dotnet build -c release

    To get developing cross-platform, you would use the following:

    1. Power Platform Extension for Visual Studio Code (This is in preview right now, but is a cross-platform alternative to the Power Platform CLI MSI installer that only works on Windows)
    2. .NET 5.x SDK

    Once you have installed these, you can use both the pac CLI and dotnet build from a terminal right inside VSCode.
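    For example, an inner development loop from the VSCode terminal might look something like this (the control and project names are purely illustrative, and the cdsproj is assumed to have been created with pac solution init and pac solution add-reference):

    pac pcf init --namespace dev1 --name MyControl --template field
    npm install
    # build the solution cdsproj without Visual Studio or msbuild
    dotnet build -c release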

    Happy cross-platform PCF developing!

  24. If you are using the latest versions of the PowerApps CLI then much of the implementation now uses the new dotnetcore DataverseServiceClient. You may find that you occasionally get the following error when performing pac pcf operations:

    The request channel timed out while waiting for a reply after 00:02:00. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout.

    Previously we could solve this by adding the configuration  MaxCrmConnectionTimeOutMinutes  - but since the move to the Dataverse Service Client, the key has now changed to  MaxDataverseConnectionTimeOutMinutes. We can see this from the source code in GitHub.

    To increase the timeout on the PowerApps CLI PCF operations to 10 minutes you need to:

    1. Locate the file for the latest version of the Power Apps CLI that will be at a location similar to: C:\Users\YourProfileName\AppData\Local\Microsoft\PowerAppsCLI\Microsoft.PowerApps.CLI.1.6.5

    2. Edit the file \tools\pac.exe.config

    3. Add the following underneath the startup element:

    <appSettings>
      <add key="MaxDataverseConnectionTimeOutMinutes" value="10"/>
    </appSettings>

    Note: The value is in minutes!

    4. Save

    5. Ensure you are using the latest version of the Power Apps CLI by using:

    pac install latest
    pac use latest

    Now you should no longer receive a timeout when using pac pcf push ! 🚀

  25. A hot area of investment from the Dataverse product team in Wave 1 2021 has been the Relevance search experience. https://docs.microsoft.com/en-us/powerapps/user/relevance-search

    Quick Actions

    Part of this new search experience brings the command bar to the inline search results as well as the search results screen.

    What's really cool is that you can customize these command bar buttons using the Ribbon Workbench. The relevance search can have up to 3 buttons visible when you hover over a record, and then an additional 2 actions in the overflow (maximum of 5 command buttons).

    The search experience picks up commands from the HomePage Grid command bar, and this is where we can apply our customizations using the Ribbon Workbench.

    Adding new buttons

    To add a custom button to the Search Experience - both the drop-down and the search results grids, follow these steps:

    1. Create a temporary solution that contains just the entities you wish to add a command button to. Don’t include any additional components for performance reasons (see https://ribbonworkbench.uservoice.com/knowledgebase/articles/169819-speed-up-publishing-of-solutions)
    2. Drag a button onto the Home Command Bar and add your new command.

      Note: Here I am using the Quick JS Smart Button (https://ribbonworkbench.uservoice.com/knowledgebase/articles/896958-smart-buttons) but you can add whatever you want!

    3. To your command, add a new Enable Rule:
      Id:Mscrm.ShowOnQuickAction
      UnCustomised (IsCore):
      True

      Important: The IsCore property tells the Ribbon Workbench that this rule is an out-of-the-box rule that we don’t need to provide an implementation for in our customizations.



      Note: You can also use Mscrm.ShowOnGridAndQuickAction if you want the button to appear both on the Home Page grid AND on the search results.
    4. At the time of writing, it seems that custom SVG icon web resources are not supported and so your button will appear with a blank icon. To get around this you can either leave the modern icon blank (your button will be assigned the default icon) or you can use a Modern icon name from one of the other out-of-the-box icons.

     

    5. Publish and wait (Yes, I’d make it quicker if I could!)

    Removing existing out of the box buttons

    Perhaps you don't want to show an existing out-of-the-box button on the search results, or you want to make space for your own. You can do this using another specific EnableRule Mscrm.ShowOnGrid:

    1. Find the button you want to remove from the quick actions (e.g. Assign Button)
    2. Right Click -> Customize Command
    3. Add the Enable Rule Mscrm.ShowOnGrid and again set ‘IsCore’ to true
      The Mscrm.ShowOnGrid enable rule tells the command bar to only show the command on the home page and not the search results.
    4. Set IsCore to true for all the other out-of-the-box Enable & Display Rules that were added when you customized the command (e.g. Mscrm.AssignSelectedRecord).
    5. Publish!
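    For reference, the effect of steps 3 and 4 on the customized command in the RibbonDiffXml is roughly the following. This is only a hedged sketch - the command Id and the rules shown are illustrative and will differ depending on the button you customize:

    <CommandDefinition Id="Mscrm.AssignSelectedRecord">
      <EnableRules>
        <!-- existing out-of-the-box rule, referenced by Id only because IsCore=true -->
        <EnableRule Id="Mscrm.AssignSelectedRecord" />
        <!-- added rule that restricts the command to the home page grid -->
        <EnableRule Id="Mscrm.ShowOnGrid" />
      </EnableRules>
      <DisplayRules />
      <Actions>
        <!-- the existing out-of-the-box action is left unchanged -->
      </Actions>
    </CommandDefinition>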

    The Result!

    Once you've added the new button, and hidden the existing one, you'll see the changes to the command bar after doing a hard refresh on your App in the browser:

    Pretty cool! For more info about the Enable Rules used by Relevance Search, see https://docs.microsoft.com/en-us/power-platform/admin/configure-relevance-search-organization#configure-quick-actions

    Hope this helps!

  26. If you are creating Cloud Flows in Solutions today, you are using Connection References. Although they are listed as ‘Preview’, there really isn’t an alternative: when you create a new Cloud Flow, a connection reference is automatically created for you.

    Connection References are essentially a ‘pointer’ to an actual connection. You include the Connection Reference in your solution so that it can be deployed, and then once imported, you can wire up the Connection Reference to point to a real connection. This means that you can deploy solutions with Flows that do not actually know the details of the connection, and without the need to edit the flows after deployment.

    Here is how it all fits together:

     

    The big benefit that Connection References bring us is avoiding having to edit a flow after deployment to ‘fix’ connections. Previously, if you had 10 flows, you would have to fix each of those flows. With Connection References, you only have to ‘fix’ the Connection References used by the flows.

     

    You can find all the connection references that do not have an associated connection using the following query:

    <fetch>
      <entity name="connectionreference" >
        <attribute name="connectorid" />
        <attribute name="connectionreferenceid" />
        <attribute name="statecode" />
        <attribute name="connectionid" />
        <filter>
          <condition attribute="connectionid" operator="null" />
        </filter>
      </entity>
    </fetch>
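    If you want to run this check from PowerShell, a minimal sketch using the same Microsoft.Xrm.Data.PowerShell module that the full script later in this post uses might look like this (the query is simply the FetchXML above):

    $conn = Get-CrmConnection -InteractiveMode
    $fetch = '<fetch><entity name="connectionreference">' +
             '<attribute name="connectionreferenceid" /><attribute name="connectionid" />' +
             '<filter><condition attribute="connectionid" operator="null" /></filter>' +
             '</entity></fetch>'
    # Any records returned are connection references that still need to be connected
    $unconnected = (Get-CrmRecordsByFetch -conn $conn -Fetch $fetch).CrmRecords
    Write-Host "Unconnected connection references: $($unconnected.Count)"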

    When you import a solution with Connection Reference into a target environment using the new solution import experience, you will be prompted to link to an existing or create a new connection for any newly imported connection references. If they have previously been imported, then they are simply re-used.

    However, we want to automate our deployments...

    Editing Connection References and Turning on Flows using a Service Principal

    So, what about in an ALM automatic deployment scenario? 

    At this time, importing a solution using a Service Principal in ALM (e.g. using the Power Platform Build Tools) leaves your flows turned off, since the connection references are not linked to connections.

    You can easily connect your connection references and then turn on a flow programmatically (see at the end of this post for the full PowerShell script):

    # Set the connection on a connection reference:
    Set-CrmRecord -conn $conn -EntityLogicalName connectionreference -Id $connectionreferenceid -Fields @{"connectionid" = $connectorid }
    
    # Turn on a flow
    Set-CrmRecordState -conn $conn -EntityLogicalName workflow -Id $flow.workflowid -StateCode Activated -StatusCode Activated

    …but if you try to do this using a Service Principal, you will get an error similar to:

    Flow client error returned with status code "BadRequest" and details "{"error":{"code":"BapListServicePlansFailed","message":"{\"error\":{\"code\":\"MissingUserDetails\",\"message\":\"The user details for tenant id … and principal id …' doesn't exist

    Suggested Solution

    My current approach to this (until we have official support in the Power Platform Build Tools) is something like this. Imagine the scenario where a feature branch introduced a new Flow, where there previously had been none – let us run through how this works with Connection References.

    1. Adding a new Cloud Flow to the solution

    1. When you add a new Cloud Flow to a Solution, the Connection References that it uses are also added automatically. If you are adding an existing Flow that was created in a different solution, you will need to remember to add the Connection References it uses manually.
    2. Key Point: Connection References do not include any Connection details – they are only a placeholder that will point to an actual Connection via the connectionid attribute.

    2. Solution Unpacked into a new branch

    1. The solution is unpacked and committed to the feature branch.
    2. The Feature branch eventually results in a Pull Request that is then merged into a release branch.
    3. The Connection Reference shows up in the PR unpacked solution:
    4. The new Flow also shows up in the pull request unpacked solution. Notice that the connection reference is referenced via the connectionReferenceLogicalName setting in the Flow json.

    3. Build & Release

    1. When the Pull Request is merged, the Build Pipeline will run automatically.
    2. When the CI Build has run, the Flow will be packed up into the solution.zip – so you can then deploy it to your target environments.

    4. Connection Reference - Set Connections

    1. Once the release has completed – the solution will be deployed.
    2. Key Point: At this stage, the Flow(s) are turned off because the Connection Reference is not yet wired up to a Connection.
    3. Of course, if you were importing this solution by hand, you would be prompted to connect the unconnected connection references.

      This is what the Flow and Connection Reference will look like in the solution explorer:

      Note:
      Connection References always show the Status of 'Off' - even if they are wired to a connection!

    4. The Owner of the Connection Reference and Flow is the Application User SPN that is used by the Power Platform Build Tools
    5. If you try and turn on a flow that uses any connection other than the Current Environment Dataverse connector, you’ll get a message similar to:

    Failed to activate component(s). Flow client error returned with status code "BadRequest" and details "{"error":{"code":"XrmConnectionReferenceMissingConnection","message":"Connection Id missing for connection reference logical name 's365_sharedoffice365_67cb4'."}}".

    5. Turning on Flows 

    1. At this time there is no way of editing a connection reference from inside a managed solution, so you need to create a new solution and add the managed Connection References to it.
    2. Once inside the new solution, you can edit the Connection References and either create a new connection or select an existing one.

    3. This will only need to be done once on the first deployment. Once the connection is created and linked to the connection reference it will remain after further deployments. 
    4. If you have already created the connection, you can programmatically set the connection on the connection reference if needed using the following (you will need to be impersonating an actual user rather than using the SPN - see below).
      # Set the connection on a connection reference:
      Set-CrmRecord -conn $conn -EntityLogicalName connectionreference -Id $connectionreferenceid -Fields @{"connectionid" = $connectorid }
    5. Note: Interestingly, you can actually turn on a Cloud Flow that only uses the Current Environment connector without actually connecting your connection references – this is done automatically for you. For the purposes of this scenario, let’s assume that we also have other connectors in use, such as the Office 365 Connector.

    6. Key Point - Turning flows back on after subsequent deployments

    The challenge now is that subsequent automated ALM deployments of this solution using the Service Principal will turn the flows off again. The connection references will stay connected, but the flows will be off. Furthermore, as mentioned above, you can't use the Service Principal to edit connection references or turn flows on, so we need to impersonate a real user (I hope this will be fixed in the future). To do this, we can use the Power Apps Admin PowerShell cmdlets to get the user who created the connections in use (manually, above) and impersonate this user to turn the flows on.

    Here is the full PowerShell script that you can add to your build or release pipeline:

    $connectionString = 'AuthType=ClientSecret;url=$(BuildToolsUrl);ClientId=$(BuildToolsApplicationId);ClientSecret=$(BuildToolsClientSecret)'
    # Login to PowerApps for the Admin commands
    Write-Host "Login to PowerApps for the Admin commands"
    Install-Module  Microsoft.PowerApps.Administration.PowerShell -RequiredVersion "2.0.105" -Force -Scope CurrentUser
    Add-PowerAppsAccount -TenantID '$(BuildToolsTenantId)' -ApplicationId '$(BuildToolsApplicationId)' -ClientSecret '$(BuildToolsClientSecret)' -Endpoint "prod"
    
    # Login to PowerApps for the Xrm.Data commands
    Write-Host "Login to PowerApps for the Xrm.Data commands"
    Install-Module  Microsoft.Xrm.Data.PowerShell -RequiredVersion "2.8.14" -Force -Scope CurrentUser
    $conn = Get-CrmConnection -ConnectionString $connectionString
    
    # Get the Orgid
    $org = (Get-CrmRecords -conn $conn -EntityLogicalName organization).CrmRecords[0]
    $orgid =$org.organizationid
    
    # Get connection references in the solution that are connected
    Write-Host "Get Connected Connection References"
    $connectionrefFetch = @"
    <fetch>
        <entity name='connectionreference' >
        <attribute name="connectionreferenceid" />
        <attribute name="connectionid" />
        <filter><condition attribute='connectionid' operator='not-null' /></filter>
        <link-entity name='solutioncomponent' from='objectid' to='connectionreferenceid' >
            <link-entity name='solution' from='solutionid' to='solutionid' >
            <filter>
                <condition attribute='uniquename' operator='eq' value='$(BuildToolsSolutionName)' />
            </filter>
            </link-entity>
        </link-entity>
        </entity>
    </fetch>
    "@;
    $connectionsrefs = (Get-CrmRecordsByFetch  -conn $conn -Fetch $connectionrefFetch -Verbose).CrmRecords
    
    # If there are no connection references that are connected then exit
    if ($connectionsrefs.Count -eq 0)
    {
        Write-Host "##vso[task.logissue type=warning]No Connection References that are connected in the solution '$(BuildToolsSolutionName)'"
        Write-Output "No Connection References that are connected in the solution '$(BuildToolsSolutionName)'"
        exit(0)
    }
    
    $existingconnectionreferences = (ConvertTo-Json ($connectionsrefs | Select-Object -Property connectionreferenceid, connectionid)) -replace "`n|`r",""
    Write-Host "##vso[task.setvariable variable=CONNECTION_REFS]$existingconnectionreferences"
    Write-Host "Connection References:$existingconnectionreferences"
    
    # Get the first connection reference connector that is not null and load it to find who it was created by
    $connections = Get-AdminPowerAppConnection -EnvironmentName $conn.EnvironmentId  -Filter $connectionsrefs[0].connectionid
    $user = Get-CrmRecords -conn $conn -EntityLogicalName systemuser -FilterAttribute azureactivedirectoryobjectid -FilterOperator eq -FilterValue $connections[0].CreatedBy.id 
    
    # Create a new Connection to impersonate the creator of the connection reference
    $impersonatedconn = Get-CrmConnection -ConnectionString $connectionString
    $impersonatedconn.OrganizationWebProxyClient.CallerId = $user.CrmRecords[0].systemuserid
    
    # Get the flows that are turned off
    Write-Host "Get Flows that are turned off"
    $fetchFlows = @"
    <fetch>
        <entity name='workflow'>
        <attribute name='category' />
        <attribute name='name' />
        <attribute name='statecode' />
        <filter>
            <condition attribute='category' operator='eq' value='5' />
            <condition attribute='statecode' operator='eq' value='0' />
        </filter>
        <link-entity name='solutioncomponent' from='objectid' to='workflowid'>
            <link-entity name='solution' from='solutionid' to='solutionid'>
            <filter>
                <condition attribute='uniquename' operator='eq' value='$(BuildToolsSolutionName)' />
            </filter>
            </link-entity>
        </link-entity>
        </entity>
    </fetch>
    "@;
    
    $flows = (Get-CrmRecordsByFetch  -conn $conn -Fetch $fetchFlows -Verbose).CrmRecords
    if ($flows.Count -eq 0)
    {
        Write-Host "##vso[task.logissue type=warning]No Flows that are turned off in '$(BuildToolsSolutionName)."
        Write-Output "No Flows that are turned off in '$(BuildToolsSolutionName)'"
        exit(0)
    }
    
    # Turn on flows
    foreach ($flow in $flows){
        Write-Output "Turning on Flow:$(($flow).name)"
        Set-CrmRecordState -conn $impersonatedconn -EntityLogicalName workflow -Id $flow.workflowid -StateCode Activated -StatusCode Activated -Verbose -Debug
    }
    
    
    

    Managing connection details

    Since your script will need to run in release pipelines as well as against branch build environments, I use variable groups to define the connection details.

    Something like this.

    Note: The name is in the format, branch-environment-<BRANCH NAME>

    So then in a YAML pipeline, you can bring in the details you want to use for the specific branch using:

    variables:
    - group: branch-environment-${{ variables['Build.SourceBranchName'] }}

    When you use the script in a Release pipeline, you can simply add the right Variable Group for environments you are deploying to:

    Summary

    1. When you first deploy your solution with connection references, they must be connected (manually through the Solution Explorer, or programmatically by updating connectionid) before the flows that use them can be turned on.
    2. This connection reference connecting cannot be done by a service principal - the deployment script will need to impersonate a non-application user.
    3. One approach is to use the user that created the connection references to get the user to impersonate - this way you don't need to manually specify the user for each environment. If you have multiple users involved in connection reference authentication, you will likely need to impersonate the user for each connection.
    4. After each subsequent deployment, you will need to turn on the flows again. This also needs to be performed using impersonation.
    5. You can set up variable groups that will be dynamically picked using the current branch (for build pipelines) or the release environment.
    6. I hope at some point, this kind of operation will be supported by the Power Platform Build Tools out of the box.

     

  27. With the recent announcement of the experimental PowerApps Solution Packager, we now have a much better way of managing Canvas Apps in your source code repository. This moves us much closer to a better ALM story for the whole of the Power Platform so that my top 3 principles can be followed:

    1. Everything as code – The single point of truth for all artifacts (solution metadata, apps, code, data, versioning, releases) should be under source control.
    2. Environments as cattle, not pets – When the entire solution is under source control, environments can be built and deleted for specific purposes – e.g. features, experiments, testing. I wrote a recent post on this.
    3. Define your Branching Strategy – A branching strategy describes how features/releases are represented as branches. Each time a new feature (or group of linked features) is worked on, it should happen in a source code branch. Code can be merged between each branch when ready to be integrated, built and released. Branches would normally have an associated Power Platform environment to allow you to work in parallel to other changes without the risk of introducing changes that should not be released. The gitflow branching strategy is a great starting point to use.

    The Power Apps Solution Packager (pasopa) brings us closer to the 'everything as code' mantra - by unpacking a Canvas App into source code files that allow you to merge in changes from other branches and then re-pack. Eventually, this will make its way into the Power Apps CLI and Solution Packager.
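    From memory, the usage of the experimental command line tool is along these lines - treat this as a sketch and check the PowerApps-Language-Tooling repo for the exact syntax; the app and folder names are illustrative:

    # unpack a .msapp into editable source files
    pasopa -unpack MyApp.msapp src\MyApp

    # re-pack the source files back into a .msapp
    pasopa -pack MyApp.msapp src\MyApp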

    Here are a couple of videos I've done on the subject of the PowerApps Solution Packager:

     

     

     

  28. If you were thinking that Power Apps Canvas Apps and Dataverse for Teams Canvas Apps are just the same – but with a different license and container – well whilst it is mostly true, there is a very big difference:
    Dataverse for Teams uses a completely different set of Out of the Box controls. They are based on the Fluent UI library.
    This post will hopefully save someone the time that I’ve spent investigating why a very common UI design pattern doesn’t work in Dataverse for Teams.

    The Toggle Pattern

    A common pattern in Canvas Apps is to bind a variable to the Default property of a Toggle Button, and then use the OnChange event to fire some code when it is changed. This is a very common solution to the problem that components cannot raise events at the time of writing.
    Imagine a scenario where you have a Component that renders a button, that when selected it should raise an event on the hosting screen.
    The common pattern is to toggle an output property from a custom component, and then bind the output to a variable – that is in turn bound to a toggle button. When the variable is toggled, it then raises the OnChecked event on the toggle button so you can perform the logic you need. This does seem like a hack – but it is the only mechanism I know of to respond to events from inside components.
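    In classic Canvas Apps, the wiring for this pattern looks roughly like the following Power Fx sketch (the component, control and property names are purely illustrative):

    // Inside the component: the button's OnSelect toggles a boolean variable
    Set(varEventFired, !varEventFired)
    // ...and the component exposes an output property, e.g. cmpToolbar.EventFired = varEventFired

    // On the hosting screen: a (hidden) classic toggle is bound to the component output
    // tglListener.Default = cmpToolbar.EventFired

    // tglListener.OnCheck and tglListener.OnUncheck act as the 'event handler'
    Notify("The button inside the component was clicked")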

    I hope that at some point we will see custom events being able to be defined inside components – but for now, the workaround remains.
    So, the app looks something like this:

    Fluent UI Controls not only look different - they behave differently!

    The problem is that inside Dataverse for Teams, the standard controls have been replaced with the new Fluent UI based controls, and with that, there is a subtle difference.

    The default property has been replaced by a new set of properties that are control specific (e.g. Checked, Value, Text, etc.). With this change, the change events are only fired when the user initiates the event – and not when the app changes the value.

    So in Dataverse for Teams, the App looks very similar, but with the Checked property rather than Default:

    This results in the OnChecked event not being fired and as such, the pattern no longer works.

    If you look carefully, you'll see, in Dataverse for Teams, the label counter only increments when the toggle button is checked but not when the button is clicked. This is because the OnChecked event is not triggered by the varToggle variable being changed by the component.

    I really love the Fluent UI controls in Dataverse for Teams - especially with the awesome responsive layout controls - but this drawback is very limiting if you are used to writing Power Apps Canvas Apps. I hope that we will see an update soon that will remove this limitation from Dataverse for Teams Apps.

    Work Around

    Update 2021-02-10: There is a workaround to this - you can enable 'classic' controls - this then gives you the choice between using the Fluent UI OR the classic Toggle control. By using the classic control you then get the OnChecked event being raised!

     

  29. One of the most requested features of Model-Driven Apps ‘back in the day’ was to edit the popup dialog boxes that do actions such as Closing Opportunities or Cases. These were ‘special’ dialogs that had a fixed user interface.

    There were a few workarounds that involved either using dialogs (now deprecated) or a custom HTML web resource.

    More recently, the ability to customize the Opportunity Close dialog was introduced (https://docs.microsoft.com/en-us/dynamics365/sales-enterprise/customize-opportunity-close-experience) however this is very limited in what you can actually do.

    Canvas Apps are a great way of creating tailored specific purpose user interfaces and are great for this kind of popup dialog type action. If only there was a way to easily open a Canvas App from a Model-Driven Command Bar. Well, now there is!

    Open Dialog

    Open Dialog Smart Button

    I’ve added a new smart button that allows you to easily provide the URL to the Canvas App to use as a dialog and pass the current record or selected record in a grid.

    Step 1. Create a Canvas App Dialog

    Your Canvas App will be responsible for performing the logic that your users need. The information that is passed to it is in the form of the record Id and logical name parameters. You can grab these values in the Canvas App startup script and then load the record that you need:

    Set(varRecordId, If(
        IsBlank(Param("recordId")),
        GUID("780bb51e-961e-ea11-a810-000d3ab39933"),
        GUID(Param("recordId"))
        ));
    
    Set(varRecordLogicalName, Param("recordLogicalName"));
    
    Set(varSelectedRecord, LookUp(Accounts, Account = varRecordId))

    Replace the GUID with the id of a record you want to use as a test when running inside the Canvas App Studio.

    For any buttons that perform actions on the data, or a cancel button that just closes the dialog, simply use the Exit() function:

    // Do some stuff
    Patch(Accounts,varSelectedRecord,{
        'Invoice Date':dpkInvoiceDate.SelectedDate
    });
    Exit();

    The smart button listens for the result of the Exit() function to close the dialog.

    One of the challenges of adding a Canvas App to a Model-Driven app is styling it to look like the out of the box Model-Driven App dialogs. I have created a sample app that you can import and then use as a template - https://github.com/scottdurow/RibbonWorkbench/blob/master/SmartButtonsUCI/SampleDialogSolution_unmanaged.zip

    Step 2. Publish and grab the App Url.

    Publish your Canvas App in a solution, and then grab the App Url from the details. Select the … from the Canvas App and then select ‘Details’

    Get App Url

    Then copy just the Url of the App that is displayed:

    You could create an environment variable to hold this, similar to the WebHook smart button (http://develop1.net/public/post/2020/09/11/environment-variables-in-smart-buttons), since the URL of the Canvas App will be different in each environment you deploy to.

    Note: Make sure you share your Canvas App with the users that are going to be using your Model-Driven App! (https://docs.microsoft.com/en-us/powerapps/maker/model-driven-apps/share-embedded-canvas-app)

    Step 3. Install the Smart Buttons solution

    You will need the latest smart buttons solution – https://github.com/scottdurow/RibbonWorkbench/releases

    Step 4. Open the Ribbon Workbench and add the buttons

    When you open the Ribbon Workbench for the environment that the Smart Button solution and Canvas App is installed into, you can then drop the ‘Open Dialog’ button on either a Form, SubGrid, or Home Grid.

    The properties for the Smart Button might look something like:

    Note: I've used an environment variable reference in the Dialog Url parameter - but equally, you could just paste the URL of your canvas app in there if you didn't want to deploy to multiple environments such that the app URL would be different.

    And that's it!

    It’s really that simple. Now you will have a dialog that allows you to take actions on records from forms or grids using a Canvas App. The data is then refreshed after the dialog is closed.

    Mobile App Support

    At this time, due to cross-domain restrictions inside the Power Apps Mobile App, this technique will not work. The user will simply be presented with a login message, but the button will not do anything. If you would like to unblock this scenario – please vote this suggestion up!  https://powerusers.microsoft.com/t5/Power-Apps-Ideas/Support-Canvas-App-modal-popup-inside-Model-Driven-Mobile-App/idi-p/704962#M31952

    Let me know how you get on over on GitHub - https://github.com/scottdurow/RibbonWorkbench/issues 

    @ScottDurow

  30. There is a new kid in town! Not long after the Power Apps Build Tools for Azure Dev Ops were released out of beta under the new name of Power Platform Build Tools (https://marketplace.visualstudio.com/items?itemName=microsoft-IsvExpTools.PowerPlatform-BuildTools), the new set of GitHub actions for Power Platform ALM have been released in public preview (https://powerapps.microsoft.com/en-us/blog/github-actions-for-the-power-platform-now-available-in-preview/). They can be used in your workflows today and will be available in the GitHub Marketplace later in the year.

    Since Microsoft acquired GitHub for $7.5 Billion back in 2018 there has been a growing amount of investment – it seems that parity with Azure Dev Ops is inevitable before long. The CI/CD story in the open-source world has been served by products such as Octopus Deploy for a long time, but one of the investments Microsoft have made is in the area of GitHub actions (https://github.blog/2019-08-08-github-actions-now-supports-ci-cd/)

    GitHub Actions for Power Platform ALM

    Actions and Workflows give you YAML build pipelines with a set of hosted build agents. This provides a significant step towards some degree of parity with Azure Pipelines.

    With the public preview of the Power Platform GitHub actions, we can go some way towards moving our CI/CD pipeline to GitHub. At this time, not all of the Azure Dev Ops Power Platform Build Tools tasks are supported yet – with the most notable omissions being the Solution Checker and the environment management tasks.

    Power Platform Build Tools → GitHub Power Platform Actions

    • WhoAmI → who-am-i
    • Power Platform Checker → (no equivalent action yet)
    • Power Platform Import Solution → import-solution
    • Power Platform Export Solution → export-solution
    • Power Platform Unpack Solution → unpack-solution
    • Power Platform Pack Solution → pack-solution
    • Power Platform Publish Customizations → (no equivalent action yet)
    • Power Platform Set Solution Version → (no equivalent action yet)
    • Power Platform Deploy Package → (no equivalent action yet)
    • Power Platform Create Environment → (no equivalent action yet)
    • Power Platform Delete Environment → (no equivalent action yet)
    • Power Platform Backup Environment → (no equivalent action yet)
    • Power Platform Copy Environment → (no equivalent action yet)
    • (no Build Tools equivalent) → branch-solution
    • (no Build Tools equivalent) → clone-solution


    An interesting addition to the GitHub actions is the branch-solution action, which I think is intended to be used when you want a new pro-code or low-code environment to match a GitHub branch so that you can ‘harvest’ the solution xml from any changes automatically. I look forward to seeing documentation on the best practices surrounding this action.

    There are two missing features that I would really like to see in the actions:

    1. Client Secret Authentication
    2. Cross-Platform Support

    When do we move from Azure Dev Ops then?

    Not yet! Personally, I feel the biggest gap is the maturity around release management in GitHub actions. Azure Dev Ops allows you to create multi-stage deployments with approval gates that can be driven from the output of a specific build, whereas GitHub actions require you to manage this using release tags and branch merging or external integrations.

    Example

    You can see an example of the new GitHub actions at work in my NetworkView PCF control repo (https://github.com/scottdurow/NetworkViewPCF)

    Each time a pull request is merged into the master branch, the PCF control is built, the solution packaged and a release created.

    Since the solution contains more than just the PCF control (forms too!), I have a folder called solution_package that contains the solution as unpacked by the Solution Packager. After the PCF control is built, a script is then used to copy the bundle.js into the solution package and update the version of the artefacts. Then the solution is built using the microsoft/powerplatform-actions/pack-solution@latest action. I chose to use a node script rather than PowerShell/PowerShell Core so that eventually it will be easier to be cross-platform once the Power Platform tools are also cross-platform.
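    The packing step in that workflow looks roughly like this - a hedged sketch only, since the action input names are taken from the microsoft/powerplatform-actions repo and the file/folder names here are illustrative rather than copied from the real build.yml:

    - name: Pack solution
      uses: microsoft/powerplatform-actions/pack-solution@latest
      with:
        solution-folder: solution_package
        solution-file: out/NetworkViewPCF_managed.zip
        solution-type: Managed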

    You can take a look at the build yaml here - https://github.com/scottdurow/NetworkViewPCF/blob/dev/.github/workflows/build.yml 

    @ScottDurow

  31. A very common request I've had for the Ribbon Workbench Smart Button solution is to be able to configure the WebHook/FlowUrl using an Environment Variable. Environment Variables are small pieces of information that can vary between environments without needing a customization update. This way you can have different endpoints for each environment without making customization changes.

    As of Version 1.2.435.1 you can now put an environment variable (or combination of) into the FlowUrl smart button parameter:

    This screenshot assumes you have added an environment variable to your solution with the schema name dev1_FlowUrl
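    For example, with that schema name, the FlowUrl parameter value would simply be:

    {%dev1_FlowUrl%}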

    The Url is in the format {%schemaname%}. Adding the environment variable to the solution would look like:

    The really awesome part of environment variables is that you are prompted to update them when you import to a new environment inside the new Solution Import experience that came with Wave 1 2020.

    If you have any feedback or suggestions for Smart Buttons, please head over to the Github project page.

    @ScottDurow

  32. A situation I see very frequently is where there is a ‘special’ PowerApps environment that holds the master unmanaged customizations. This environment is looked after for fear of losing the ability to deploy updates to production since with managed solutions you can’t re-create your unmanaged environment. Sometimes, a new partner starts working with a customer only to find that they have managed solutions in production with no corresponding unmanaged development environment.

    I’m not getting into the managed/unmanaged debate – but let’s assume that you are following the best practices outlined by the PowerApps team themselves “Managed solutions are used to deploy to any environment that isn't a development environment for that solution”[1]+[2]

    There is a phrase that I often use (adapted from its original use [3]):

    “Treat your environments like cattle, not pets”

    This really resonates with the new PowerApps environment management licensing where you pay for storage and not per-environment. You can create and delete environments (provided you are not over DB storage capacity) with ease.

    If you store your master unmanaged solution in an environment – and only there – then you will start to treat it like a pet. You’ll stroke it and tend to its every need. Soon you’ll spend so much time on pet-care that you’ll be completely reliant on it, but it’ll also be holding you back.

    There is another principle I am very vocal about:

    “Everything as code”

    This is the next logical step from “Infrastructure as code” [4]

    In the ‘everything as code’ world, every single piece of the configuration of your development environment is stored as code in source control, such that you can check-out and build a fully functioning unmanaged development environment that includes:

    1. Solution Customisations as XML
    2. Canvas Apps as JSON
    3. Flows as JSON
    4. Workflows as XAML
    5. Plugins as C#
    6. JS Web resources as TypeScript
    7. Configuration data as Xml/JSON/CSV
    8. Package Deployer Code
    9. Test Proxy Stub Code for external integrations
    10. Scripts to deploy incremental updates from an older version
    11. Scripts to extract a solution into its respective parts to be committed to source control
    12. Scripts to set up a new development environment
      1. Deploy Test Proxy Stub Services
      2. Build, Pack and deploy a solution to a new environment
      3. Deploy Reference Data
      4. Configure Environment Variables for the new environment

    There are still areas of this story that need more investment by the PowerApps teams, such as connector connection management and noisy diffs – but even if there are manual steps, the key is that everything that is needed is there in source control. If you lose an environment, it’s not a disaster – it’s not like you have lost your beloved pet.
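    As an example, the ‘extract a solution into its respective parts’ script from the list above can be little more than a thin wrapper around the Solution Packager - a minimal sketch, with illustrative solution and folder names:

    # export the unmanaged solution from the dev environment first, then:
    SolutionPackager.exe /action:Extract /zipfile:MySolution.zip /folder:src\MySolution

    # and to rebuild the zip from source control:
    SolutionPackager.exe /action:Pack /zipfile:out\MySolution.zip /folder:src\MySolution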

    The advantage of combining these two principles is that every single time you make a change to any aspect of an environment, it is visible in the changeset and Pull Request.

    If you are working on a new feature, the steps you’d take would be:

    1. Create a new branch for the Issue/Bug/User Story
    2. Checkout the branch locally
    3. Create a new development PowerApps environment and deploy to it using the build scripts
    4. Develop the new feature
    5. Use the scripts to extract and unpack the changes
    6. Check that your changeset only contains the changes you are interested in
    7. Commit the changes
    8. Merge your branch into the development/master branch (depending on the branching strategy you are using)
    9. Delete your development environment

    Using this workflow, you can even be working on multiple branches in parallel provided there won’t be any significant merge conflicts when you come to combine the work. Here is an example of a branching strategy for a hotfix and two parallel feature development branches:

    The most common scenarios I see for merge conflicts are RibbonXml, FormXml, and ViewXml – editing these elements is now supported – and so you can manage merge conflicts inside your code editor! Canvas Apps and Flows are another story – there really isn’t an attractive merge story at this time and so I only allow a single development branch to work on Canvas Apps, Flows, and Workflows at any one time.

    If you think you have pet environments, you can still keep them around until you feel comfortable letting go, but I really recommend starting to herd your environments and get everything extracted as code. You’ll not look back.

    @ScottDurow

    References:

    [1] ALM Basics - https://docs.microsoft.com/en-us/power-platform/alm/basics-alm

    [2] Solution Concepts - https://docs.microsoft.com/en-us/power-platform/alm/solution-concepts-alm

    [3] Pets vs Cattle - http://cloudscaling.com/blog/cloud-computing/the-history-of-pets-vs-cattle/

    [4] Infrastructure as Code - https://en.wikipedia.org/wiki/Infrastructure_as_code

    [5]  ALM with the PowerPlatform - https://docs.microsoft.com/en-us/power-platform/alm/

    [6] ALM for Developers - https://docs.microsoft.com/en-us/power-platform/alm/alm-for-developers

    [7] Supported Customization Xml Edits - https://docs.microsoft.com/en-us/power-platform/alm/when-edit-customization-file

    [9] Healthy ALM - https://docs.microsoft.com/en-us/power-platform/alm/implement-healthy-alm

  33. Linters have been around for ages - it all started back in 1978 apparently - but they have now become a mainstay of modern JavaScript and TypeScript programming.

    Writing code without a linter is like writing an essay without using a spell checker! Sure, there may be some superhumans who can write their code perfectly without linting - but I’m not one of them!

    Much has been written about linting since 1978 and there are plenty of opinions! For me there are two parts:

    1. Enforcing semantic code rules such as not using var in TypeScript or using let when it could be const because the value doesn’t change. These rules are designed to help you trap bugs as early as possible and enforce best practices.
    2. Formatting rules - such as not mixing tabs and spaces and adding spaces before and after keywords.

    For TypeScript, we can enforce rules using eslint - and automatically format our code using prettier.
    There is a whole raft of style rules that can then be applied for different libraries such as React.

    This post shows you how to set up linting quickly and easily for a TypeScript PCF project that uses React.

    Create your PCF project

    Create your pcf project using your CLI/IDE of choice:
    I use:

    pac pcf init --namespace dev1 --name pcflint --template field
    npm install react react-dom @fluentui/react
    
    yo pcf --force
    

    Install ESLint, Prettier and the plugins

    Prettier is great for formatting your code, but doesn’t really do any of the semantic code checks. So the configuration we are going to create uses prettier as a plugin from within eslint. This means when you run eslint, not only will it warn on and attempt to fix semantic issues, it’ll also tidy up the formatting for you using prettier.

    npm install eslint --save-dev
    

    You can use the bootstrapper if you want - but this can lead to a configuration that you don’t really want:

    npx eslint --init
    
    1. Next up is installing prettier (https://prettier.io/docs/en/install.html);
    npm install --save-dev --save-exact prettier
    

    We use the --save-exact as recommended by the project because sometimes formatting rules can change slightly and you don’t suddenly want your source control diffs to include formatting differences.

    2. Now install the plugins and configurations needed for our rules:
    npm install --save-dev @typescript-eslint/eslint-plugin @typescript-eslint/parser eslint-plugin-react eslint-config-prettier eslint-plugin-prettier
    
    3. Next we configure eslint to call prettier when it is run (https://prettier.io/docs/en/integrating-with-linters.html) - this uses eslint-plugin-prettier
      Create a file named .eslintrc.json:
    {
        "parser": "@typescript-eslint/parser",
        "env": {
            "browser": true,
            "commonjs": true,
            "es6": true,
            "jest": true,
            "jasmine": true
        },
        "extends": [
            "plugin:@typescript-eslint/recommended",
            "plugin:prettier/recommended",
            "plugin:react/recommended",
            "prettier",
            "prettier/@typescript-eslint",
            "prettier/react"
        ],
        "parserOptions": {
            "project": "./tsconfig.json"
        },
        "settings": {
            "react": {
              "pragma": "React",
              "version": "detect"
            }
          },
        "plugins": [
            "@typescript-eslint",
            "prettier"
        ],
        "rules": {
            "prettier/prettier": "error"
        },
        "overrides": [
            {
              "files": ["*.ts"],
              "rules": {
                "camelcase": [2, { "properties": "never" }]
              }
            }
          ]
    }

    Note:

    1. There is an override rule to allow non-camelCase property names since we often use Pascal-cased SchemaNames from CDS.
    2. There is support for jest and jasmine tests.

    Now configure the prettier rules by creating a file called .prettierrc.json

    {
      "semi": true,
      "trailingComma": "all",
      "singleQuote": false,
      "printWidth": 120,
      "tabWidth": 2,
      "endOfLine":"auto"
    }
     

    Let the magic happen!

    There are two ways to get eslint to do its job:

    1. Run from the command line
    2. Use a VSCode extension.

    Note: Both approaches will require you to have setup eslint and prettier already

    Run from the command line:

    1. You will need to globally install eslint:
    npm install -g eslint
    
    2. After that you can add a script to your package.json:
    "scripts": {
     ...
      "lint": "eslint ./**/*.ts --fix"
    },
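    You can then lint (and auto-fix) the whole project from the terminal with:

    npm run lint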
    

    Run from inside VSCode

    This is my day-to-day use of eslint.

    1. Install the eslint VSCode extension - https://github.com/Microsoft/vscode-eslint
    2. lint issues will show up via a code-lens - the details show up using Ctrl-.
    3. You can auto-format your code using Alt-SHIFT-P

    I really recommend getting linting into your workflow early on – because you don’t want to enable it later and then find you have 1000’s of issues to wade through!
    @ScottDurow

     
  34. It's been over a year since I last blogged about DateTimes and nearly a decade since I first blogged on the subject (CRM DateTimes) – so it’s well overdue that I update you on how DateTimes work with PCF.

    My last post on the subject was when the ‘Timezone independent’ and ‘Date Only’ behaviours were introduced - DateTimes - It’s never the last word.

    This made the time zone handling of dates much easier if you needed to store absolute date/times – however, there are always times when you need to store a date that is dependent on the user’s time zone (e.g. the date/time a task is completed, etc.)

    In PCF, it would have been nice if the time zone element of the date was handled for us – but unfortunately not!

    There are 3 places where we have to consider datetime behaviours in PCF:

    • Field Controls

      • Inbound dates - When PCF calls updateView()

      • Outbound dates - When PCF calls getOutputs()

    • Dataset Controls - Inbound dates

    Field Controls - Inbound dates

    When PCF passes our component a date as a bound property on the context via the updateView method, the date is provided as both a formatted date string and a raw Date object.

    I have a record with the dateAndTimeField property bound to a DateTime field that has the User Local DateTime behaviour.

    I can get the two values as follows:

    • Raw - parameters.dateAndTimeField.raw

    • Formatted - parameters.dateAndTimeField.formatted

    There are two time zones I can vary, firstly the CDS User Settings (I have it set to GMT+8) and my local browser time zone. In the following table, I vary the browser time zone and keep the CDS time zone constant.

    The formatted date is formatted using my CDS user settings – YYYY/MM/DD HH:mm

    Local Time Zone: GMT
      CDS UTC: 2020-05-10T04:30:00Z
      Raw: 2020-05-10 05:30:00 GMT+0100
      Formatted: 2020/05/10 12:30

    Local Time Zone: GMT-3
      CDS UTC: 2020-05-10T04:30:00Z
      Raw: 2020-05-10 02:30:00 GMT-0200
      Formatted: 2020/05/10 12:30

    Local Time Zone: GMT+8
      CDS UTC: 2020-05-10T04:30:00Z
      Raw: 2020-05-10 12:30:00 GMT+0800
      Formatted: 2020/05/10 12:30

    You’ll notice that the Formatted time is always 12:30 because it shows the date in the CDS user's UTC+8 time zone, and changing my local browser time zone shouldn’t change this. However, the Raw date changes with the browser time zone because it is converted to my local browser time zone, and what makes it more complex is that Daylight Savings is also applied, depending on the date in the year. JavaScript dates are awkward like this. Although the date is set to the UTC date by PCF – it is provided in the local time zone.

    So why not use the formatted date?

    To work with the date value (bind it to a calendar control etc.) we need it in the user’s CDS local time zone - that shown by the formatted date. If we are just showing the date and not editing it, then the formatted string is the way to go. However, if we want to edit the date, then we need to convert it to a Date object. This could be done by parsing the Formatted Date, but that would require us to understand all the possible date formats that CDS has in the user settings. Instead we can simply apply the following logic:

    1. Convert to UTC to remove the browser timezone offset:
    const utcDate = getUtcDate(rawDate); // rawDate is the raw Date from parameters.dateAndTimeField.raw
    getUtcDate(localDate: Date) {
        return  new  Date(
            localDate.getUTCFullYear(),
            localDate.getUTCMonth(),
            localDate.getUTCDate(),
            localDate.getUTCHours(),
            localDate.getUTCMinutes(),
        );
    }
     
    2. Apply the user’s time zone offset. This requires access to the user’s time zone settings - luckily they are loaded for us in the PCF context:
    convertDate(value: Date) {
        const offsetMinutes = this.context.userSettings.getTimeZoneOffsetMinutes(value);
        const localDate = addMinutes(value, offsetMinutes);
        return getUtcDate(localDate);
    }
    addMinutes(date: Date, minutes: number): Date {
        return new Date(date.getTime() + minutes * 60000);
    }
     

    This will now give us a Date that represents the correct Datetime in the browser local time zone - and can be used as a normal date!

    Because some dates can be set as time zone independent, we can conditionally run this logic depending on the metadata provided:

    convertToLocalDate(dateProperty: ComponentFramework.PropertyTypes.DateTimeProperty) {
        if (dateProperty.attributes?.Behavior == DateBehavior.UserLocal) {
            return this.convertDate(dateProperty.raw);
        } else {
            return this.getUtcDate(dateProperty.raw);
        }
    }
     

    We still need to convert to UTC even if the date is time zone independent - this is to remove the correction for the browser timezone.
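    Putting the inbound logic together, a minimal sketch of how this could be called from updateView (assuming the property is named dateAndTimeField as above, and that the control keeps the converted value in a selectedDate field - both names are just illustrative):

    public updateView(context: ComponentFramework.Context<IInputs>): void {
        this.context = context;
        const dateProperty = context.parameters.dateAndTimeField;
        // raw can be null when the field has no value
        if (dateProperty.raw) {
            // a Date in the user's CDS time zone, safe to bind to a date picker
            this.selectedDate = this.convertToLocalDate(dateProperty);
        }
    }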

    Field controls - outbound dates

    Now we have a date time that is corrected for our local browser time zone, we can simply return the Date object from inside the getOutputs().
    So if we wanted to set 12:30 - and our browser timezone is set to GMT-3 (Greenland) - then the date will actually be: 12:30:00 GMT-0200 (West Greenland Summer Time)
    PCF ignores the timezone part of the date and then converts the date to UTC for us.

    NOTE: It does seem odd that we have to convert to local inbound - but not back to UTC outbound.

    Dataset controls - inbound dates

    There are two notable differences when binding datasets to tables in PCF compared to the inbound values in their field counterparts.

    1. Dates that are provided by a dataset control binding are similar in that they are provided in the browser timezone - however they are strings and not Date objects.
    2. There is no information on the UserLocal/Timezone independent behaviour - and so we need to know about this in advance.

    So as before, when binding to a datagrid, it’s easiest to use the formatted value:
    item.getFormattedValue("dateAndTimeField")

    If you need the Date object to edit the value - then you’ll need to convert to the local date as before - but with the added step of converting to a Date object:

    const dateValue = item.getValue("dateAndTimeField") as string; // provided as a string in the browser time zone
    const localDate = this.convertDate(new Date(dateValue));
     

    This isn’t going to be the last I write on this subject I am sure of it! Anything that involves timezones is always tricky!
    @ScottDurow

  35. One of the challenges with PCF controls is getting them to reflow so that they stretch to fill the available space. Doing this using standard HTML involves using the flexbox. The really nice aspect of the Fluent UI React library is that it comes with an abstraction of the flexbox called the ‘Stack’.

    The aim of this post is to layout a dataset PCF as follows:

    • Left Panel - A fixed width vertical stack panel that fills 100% of the available space
    • Top Bar - A fixed height top bar that can contain a command bar etc.
    • Footer - A centre aligned footer that can contain status messages etc.
    • Grid - a DetailsList with a sticky header that occupies 100% of the middle area.

    The main challenges of this exercise are:

    1. Expanding the areas to use 100% of the container space - this is done using a combination of verticalFill and height:100%
    2. Ensure that the DetailsList header row is always visible when scrolling - this is done using the onRenderDetailsHeader event of the DetailsList in combination with Sticky and ScrollablePane
    3. Ensure that the view selector and other command bar overlays appear on top of the sticky header.
      This requires a bit of a ‘hack’ in that we have to apply a z-index CSS rule to the Model Driven overlays for the ViewSelector and Command Bar flyoutRootNode. If this is not applied then flyout menus will show behind the sticky header:

    Here is the React component for the layout:

    /* eslint-disable @typescript-eslint/no-non-null-assertion */
    /* eslint-disable @typescript-eslint/explicit-function-return-type */
    import * as React from "react";
    import {
      Stack,
      ScrollablePane,
      DetailsList,
      TooltipHost,
      IRenderFunction,
      IDetailsColumnRenderTooltipProps,
      IDetailsHeaderProps,
      StickyPositionType,
      Sticky,
      ScrollbarVisibility,
    } from "office-ui-fabric-react";
    
    export class DatasetLayout extends React.Component {
      private onRenderDetailsHeader: IRenderFunction<IDetailsHeaderProps> = (props, defaultRender) => {
        if (!props) {
          return null;
        }
        const onRenderColumnHeaderTooltip: IRenderFunction<IDetailsColumnRenderTooltipProps> = tooltipHostProps => (
          <TooltipHost {...tooltipHostProps} />
        );
        return (
          <Sticky stickyPosition={StickyPositionType.Header} isScrollSynced>
            {defaultRender!({
             ...props,
              onRenderColumnHeaderTooltip,
            })}
          </Sticky>
        );
      };
      private columns = [
        {
          key: "name",
          name: "Name",
          isResizable: true,
          minWidth: 100,
          onRender: (item: string) => {
            return <span>{item}</span>;
          },
        },
      ];
      render() {
        return (
          <>
            <Stack horizontal styles={{ root: { height: "100%" } }}>
              <Stack.Item>
                {/*Left column*/}
                <Stack verticalFill>
                  <Stack.Item
                    verticalFill
                    styles={{
                      root: {
                        textAlign: "left",
                        width: "150px",
                        paddingLeft: "8px",
                        paddingRight: "8px",
                        overflowY: "auto",
                        overflowX: "hidden",
                        height: "100%",
                        background: "#DBADB1",
                      },
                    }}
                  >
                    <Stack>
                      <Stack.Item>Left Item 1</Stack.Item>
                      <Stack.Item>Left Item 2</Stack.Item>
                    </Stack>
                  </Stack.Item>
                </Stack>
              </Stack.Item>
              <Stack.Item styles={{ root: { width: "100%" } }}>
                {/*Right column*/}
                <Stack
                  grow
                  styles={{
                    root: {
                      width: "100%",
                      height: "100%",
                    },
                  }}
                >
                  <Stack.Item verticalFill>
                    <Stack
                      grow
                      styles={{
                        root: {
                          height: "100%",
                          width: "100%",
                          background: "#65A3DB",
                        },
                      }}
                    >
                      <Stack.Item>Top Bar</Stack.Item>
                      <Stack.Item
                        verticalFill
                        styles={{
                          root: {
                            height: "100%",
                            overflowY: "auto",
                            overflowX: "auto",
                          },
                        }}
                      >
                        <div style={{ position: "relative", height: "100%" }}>
                          <ScrollablePane scrollbarVisibility={ScrollbarVisibility.auto}>
                            <DetailsList
                              onRenderDetailsHeader={this.onRenderDetailsHeader}
                              compact={true}
                              items={[...Array(200)].map((_, i) => `Item ${i + 1}`)}
                              columns={this.columns}
                            ></DetailsList>
                          </ScrollablePane>
                        </div>
                      </Stack.Item>
                      <Stack.Item align="center">Footer</Stack.Item>
                    </Stack>
                  </Stack.Item>
                </Stack>
              </Stack.Item>
            </Stack>
          </>
        );
      }
    }
    

    Here is the css:

    div[id^="ViewSelector"]{
        z-index: 20;
    }
    #__flyoutRootNode .flexbox {
        z-index: 20;
    }
    

    Hope this helps!

    @ScottDurow

  36. One of the recent additions to PCF for Canvas Apps is the ability to bind dataset PCF controls to datasets in a Canvas App. A challenge that faces all PCF developers is whether their control should support both Model AND Canvas – so with this in mind, you need to be aware of the differences in the way that data is paged.

    This post demonstrates how the paging API works in Model and Canvas and highlights the differences. In my tests, I used an entity that had 59 records and spanned 3 pages of 25 records per page.

    loadNextPage/loadPreviousPage

    There are two ways of paging through your data:

    1. Incrementally load the data using loadNextPage
    2. Page the data explicitly using loadExactPage

    In Model Apps, when you call loadNextPage, the next page of data will be added on top of the existing dataset.sortedRecordIds – whereas in Canvas, you will get a reset set of records that will just show the page that you have just loaded.

    This is important if your control aims to load all records incrementally or uses some kind of infinite scrolling mechanism.
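
    Here is a minimal sketch of incremental loading (assuming a hypothetical dataset property named records in the manifest):

    public updateView(context: ComponentFramework.Context<IInputs>): void {
        const dataset = context.parameters.records;
        if (!dataset.loading && dataset.paging.hasNextPage) {
            // Model Apps: the next page is appended to dataset.sortedRecordIds
            // Canvas Apps: sortedRecordIds is replaced with just the newly loaded page
            dataset.paging.loadNextPage();
        }
    }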

    This is how nextPage/previousPage works in Canvas Apps

    This is how nextPage/previousPage works in Model Apps

    Notice how the totalRecordsLoaded increases with each page for Model, but for Canvas it shows only the number of records on that page.
    You might think that using this approach would be more efficient because it uses the fetchXml paging cookie - well from what I can see it doesn't seem to be any different to just specifying the page in the fetchXml - and has the same performance as loadExactPage...

    loadExactPage

    When you want to show a specific page – jumping over other pages without loading them – you can use ‘loadExactPage’. This method is not currently documented – but it is mentioned by the PCF team in the forums.
    This method will load the records for the specific page and so dataset.sortedRecordIds will only contain that page – this is the same on both Canvas and Model!
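
    Because loadExactPage is not yet in the published typings, you will need to cast before calling it - a rough sketch (using the same hypothetical records dataset property as above):

    const paging = context.parameters.records.paging as unknown as {
        loadExactPage(pageNumber: number): void;
    };
    paging.loadExactPage(3); // jump straight to page 3 without loading pages 1 and 2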

    Notice that if you load a specific page, hasNextPage and hasPreviousPage are updated to indicate if you can move back or forwards. This only helps when using loadExactPage in Model Apps, because when using loadNextPage in Model Apps you will never get hasPreviousPage == true, since you are loading all the records incrementally rather than a specific page.

    This is how loadExactPage works in Canvas Apps

    This is how loadExactPage works in Model Apps

    Notice total records loaded shows only the number of records in that page.

    totalResultCount

    This property should give you how many records there are in the current dataset – however, in Canvas it only gives you the number of records that have been loaded via the paging methods. If you look at the comparisons above, you’ll see that the Canvas totalResultCount goes up with each page, but in Model, it remains the total record count.
    Interestingly, this property is not actually documented – however it’s in the TypeScript definitions.
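
    A quick way to see the difference is to log both values after each paging call (again assuming a hypothetical records dataset property):

    const dataset = context.parameters.records;
    console.log(`loaded: ${dataset.sortedRecordIds.length}, totalResultCount: ${dataset.paging.totalResultCount}`);
    // Model: totalResultCount stays at the full record count (59 in my tests)
    // Canvas: totalResultCount only reflects the records loaded so far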

    The Future

    It’s not clear if we will see a completely unified experience between Canvas and Model with PCF controls – but I’ll update this post if anything changes!

  37. Those of you who know me will also know that I am a massive Fiddler fan for removing the need to deploy each time you change your JavaScript source.

    Here are some of my previous blog posts on Fiddler - http://develop1.net/public/search?q=fiddler

    The PowerApps docs now even include instructions on it https://docs.microsoft.com/en-us/powerapps/developer/model-driven-apps/streamline-javascript-development-fiddler-autoresponder

    Developing PCF Canvas Controls

    When developing PCF controls for Canvas Apps, the process is slightly different and includes an extra step.

    1. Add an autoresponder in the format:

    Resources0Controls0<Namespace>.<ControlName>.bundle.js?sv=
    E.g. Resources0Controls0Develop1.PCFTester.bundle.js?sv=

    It should look something like:

    2. Since the scripts are served from a different domain to PowerApps, configure Fiddler to add the Access-Control-Allow-Origin header.

    In fiddler, Press Ctrl-R to open the rules editor.

    Locate the OnBeforeResponse function and add:

    if (oSession.oRequest.headers.Exists("Host") && oSession.oRequest.headers["Host"].EndsWith("windows.net")) {
      if (oSession.oResponse.headers.Exists("Access-Control-Allow-Origin")){
        oSession.oResponse.headers["Access-Control-Allow-Origin"] ="*";
      }
      else{
        oSession.oResponse.headers.Add("Access-Control-Allow-Origin","*");
      }
    }

    It should look something like:

     

    When you add your PCF Component to the Canvas App, it should now be loaded from your local file system just like it does with Model Driven Apps. To refresh the component, you will need to exit the app and re-open it (rather than just refreshing the window as you would in Model Driven Apps).

    Hope this helps,

    @ScottDurow

  38. Back at the end of 2015, Power Apps wasn’t even a thing. My world revolved around Dynamics 365 and the release cadence that was bringing us updates to the platform that were either keeping up with SalesForce or providing greater customizability. Much has changed since then, not least the way that we write rich UI extensions. With this in mind, I have completely re-written my Network View solution to use TypeScript and the Power Apps Component Framework.

    Mobile App Demo

    This version has some notable improvements on the old version:

    • ✅ Shows details of links
    • ✅ Allows including inside the actual form (thanks to PCF)

    There are a few more items TODO to bring parity with the old version:

    • 🔳 Loading Activities
    • 🔳 Showing the users/connection roles for the network
    • 🔳 Support for configurable information cards

    The source can be found at https://github.com/scottdurow/NetworkViewPCF 

    I've not released a pre-compiled solution (yet) - if you would like to test it out, please get in touch!

    @ScottDurow

     

  39. When applying the 2020 release wave 1 you may see a component such as the Dynamics 365 Core Service fail to complete.
    First, you may want to check that you have correctly followed the steps on how to opt-in for 2020 wave 1.

    To determine the issue - navigate to the solution manager in PowerApps and click 'See History'

    This should then show you the failed upgrade component:

    Clicking on the row will give you the details. In my case it was because the solution was blocked due to a previous upgrade being incomplete:

    Solution manifest import: FAILURE: The solution [FieldService] was used in a LayerDesiredOrder clause,
    but it has a pending upgrade.
    Complete the upgrade and try the operation again.

    To resolve this, you will need to navigate to the solution manager and click 'Switch Classic'. Locate the referenced solution that is still pending an upgrade, select it, and then click 'Apply Solution Upgrade'.

    Wait for the upgrade to be applied, then return to the 2020 wave 1 release area in the admin portal, and click 'Retry'

    If you see a subsequent error, you can repeat these steps for the failed solution.

    Hope this helps!

  40. Technology typically leads to polarized opinions. Always has…Vinyl/CD…Betamax/VHS…HD-DVD/Blu-ray… Of course, our minds know that it depends on the detail, but our hearts have preferences based on our experience. This product over that one. This technique over this new one. You like this tool better than theirs because you know and trust it. You do this, don’t you?!

    Imagine you are implementing a new solution for a customer and you are asked to choose between a Flow or a Plugin for a new piece of functionality. If you are a pro-coder, then naturally you will find the Plugin option the most attractive because you trust it – later you might decide it’s over-kill and decide that it can be done using a Flow. If you are a functional consultant who is only too aware of the total cost of ownership of ‘code’ then you’ll try and achieve the functionality with a Flow, but then you might find it becomes too complex and needs a Plugin. You naturally start with a position that you know best. Am I right?!

    We know there are thousands of variables that affect our ultimate decision – different people will end up at different decisions and the ‘side’ you start from might affect the outcome. But one thing is for sure – building software is far from simple!

    The Microsoft Power Platform 'Code or No Code' battle has been bubbling away for at least a year now. It’s an unfortunate mix of sweeping statements about not needing code anymore resulting in passive-aggressive comments from Pro-Coders about how they got you here in the first place.

    Not everyone gets it

    Sara Lagerquist and I did a mock 'fight' at the recent Scottish Summit 2020. We demonstrated the polarised viewpoints in an attempt to show the futility of it. But not everyone gets it...

    If you’re from the older Model-Driven Apps space, then you’ll be very used to having to make choices between JavaScript or Business Rules, between Workflows or Plugins. But if you’re from the newer ‘Low Code’ Canvas App space, then it’s possible that you don’t see any of this as a problem! Why would you use code when you are told ‘Less Code – More Power’? It’s not even an option – so what’s the big deal? Why would anyone want to argue? But trust me, they do!

    Human nature

    Why is all this happening? Simple, because of human nature. It’s only natural to react to something that threatens our thoughts and ideas with a response that's at best, defensive, or at worst, passive-aggressive. It has nothing to do with technology, or code/no-code. It has everything to do with the ‘tribal’ attitudes that have started to emerge. This problem is no one's fault - but rather an unfortunate side-effect of successful online community building centered around the different parts of the Microsoft Power Platform.

    I'm guilty too!

    I am guilty of this too. I am an enthusiastic evangelist of the PowerPlatform and its no-code/low-code aspects – but still when I see the weaponizing of hashtags like #LessCodeMorePower - I get defensive. I’ve worked hard my entire professional career to get proficient at code – now someone is saying that solutions have more power with less of me? No way!

    I’m sure you can see my knee-jerk reaction is misguided. Being condescending towards code is not the intention of the hashtag – but my human psyche kicks in, telling me “I don’t like it”.

    The secret to letting go

    So here’s the secret - the #LessCodeMorePower mantra is actually nothing to do with us! That’s right – it’s not about YOU or ME. It’s about how Microsoft is positioning their product in the market. It's how they are selling more licenses. Nothing has changed – this journey has been going on for a long time – it’s just the latest leap in abstraction. Technology will always move on and change – and that’s why we love being in this industry. Right?

    Now, let’s take a step back. We all have a shared love for the Microsoft Power Platform. Building software solutions is hard. Picking the most appropriate technology is hard. The right decision today may not even be true tomorrow! 

    How do we move forwards?

    Pro-coders: When you see #LessCodeMorePower social media posts – work at avoiding being defensive – don’t respond by protecting your corner. This isn’t a criticism of you – you are just experiencing the force of the Microsoft marketing machine. Microsoft is not saying you are no longer needed or that code can’t create powerful solutions. The Microsoft Power Platform needs code as much as it needs no-code - and in fact, that is one of its strengths over our competitors!

    Low-coder/No-coders: Make sure you use #LessCodeMorePower hashtag appropriately. Be considerate of all perspectives – is it really the right use? Use it to promote specific strengths of the Power Platform but not at the expense of making pro-coders defensive. Don’t just say ‘anyone can write apps’ or ‘it’s simple to develop software’ – put these powerful statements in context! You don’t really believe in those overly simplistic ideals without adding at least some caveats! Promote the platform, but not at the expense of your fellow community members.

    The unbreakable oath

    Overall, let’s all be considerate of the whole spectrum of our software development profession. Pro-Coders, Low-Coders, and No-Coders - encouraging one another rather than creating division. Together, let’s unite and make the Power Platform shine.

    Here is the oath that Sara and I took at #SS2020 – join us!

    I do solemnly swear…on Charles Lamanna’s face…
    To love, honor & respect all those who develop solutions on the Microsoft Power Platform.
    To encourage one another through difficult projects.
    To build mutual respect between no-coders, low-coders, and pro-coders.
    Together, promoting quality through collaboration and cooperation.

    @ScottDurow #ProCodeNoCodeUnite

  41. Continuous Integration and Delivery is somewhat passé these days, but what is often missed is the need for good tests and analysis in your build pipeline. The PowerApps team has been working hard on the Solution Checker over the last year, and it's become an essential part of every PowerApps solution development process. If you have a solution that is going to be put into App Source, you'll need to make sure it passes a special set of rules specifically for App Source solutions.

    This post shows you how to add the Solution Checker to your Build pipeline.

    Step 1 - Application User

    Before you can run the solution checker PowerShell module, you'll need to create an Application User in your Azure Active Directory Tenant. There is a great set of instructions in the PowerApps Solution Checker documentation - https://docs.microsoft.com/en-gb/powershell/powerapps/get-started-powerapps-checker?view=pa-ps-latest

    Step 2 - PowerShell Script

    So that our Build Pipeline can run the Solution Checker, we add a PowerShell script to our repo. 

    Note that you'll need to:

    1. Create a secured variable in your pipeline to store the client secret so it can be passed to the script as a parameter.
    2. Update for your Tenant and Application ID
    3. Update for the location of your solution.zip that you've built in the pipeline. Mine is 
      $env:BUILD_SOURCESDIRECTORY\DeploymentPackage\DeploymentPackage\bin\Release\PkgFolder\Solution.zip

    Your Script should look something like:

    param (
        [string]$clientsecret
     )
    # Requires App User be set up https://docs.microsoft.com/en-gb/powershell/powerapps/get-started-powerapps-checker?view=pa-ps-latest
    $env:TENANTID = "65483ec4-ac1c-4cba-91ca-83d5b0ba6d88"
    $env:APPID = "2fa068dd-7b61-415b-b8b5-c4b5e3d28f61"
    
    $ErrorActionPreference = "Stop"
    install-module Microsoft.PowerApps.Checker.PowerShell -Force -Verbose -Scope CurrentUser
    
    $rulesets = Get-PowerAppsCheckerRulesets
    $rulesetToUse = $rulesets | where Name -NE 'AppSource Certification'
    
    $analyzeResult = Invoke-PowerAppsChecker -Geography UnitedStates -ClientApplicationId "$env:APPID" -TenantId "$env:TENANTID" -Ruleset $rulesetToUse `
        -FileUnderAnalysis "$env:BUILD_SOURCESDIRECTORY\DeploymentPackage\DeploymentPackage\bin\Release\PkgFolder\Solution.zip" `
        -OutputDirectory "$env:BUILD_SOURCESDIRECTORY" `
        -ClientApplicationSecret (ConvertTo-SecureString -AsPlainText -Force -String $clientsecret)
    
    # Unzip the results
    Expand-Archive -LiteralPath "$($analyzeResult.DownloadedResultFiles.Get(0))" -DestinationPath "$env:BUILD_SOURCESDIRECTORY"
    
    # Rename the extracted file so the pipeline can find it by a predictable name
    $extractedFile = $($analyzeResult.DownloadedResultFiles.Get(0))
    $extractedFile = $extractedFile -replace ".zip", ".sarif"
    Rename-Item -Path $extractedFile -NewName "PowerAppsCheckerResults.sarif"
    
    # Fail the build if there are any Critical or High issues
    If ($analyzeResult.IssueSummary.CriticalIssueCount -ne 0 -or $analyzeResult.IssueSummary.HighIssueCount -ne 0) {
        Write-Error -Message "Critical or High issue in PowerApps Checker" -ErrorAction Stop
    }
    

    You can change the ruleset and add overrides as per https://docs.microsoft.com/en-gb/powershell/module/microsoft.powerapps.checker.powershell/Invoke-PowerAppsChecker?view=pa-ps-latest

    Step 3 - Call and Collect Results in your build pipeline

    I'm assuming that you are using Azure DevOps YAML pipelines. If not, I'd recommend you do, because it makes source control and versioning of your pipelines so much easier.

    I have three tasks for the Solution Checker as follows:

    # PowerAppsChecker
    - task: PowerShell@2
      displayName: Solution Checker
      inputs:
        filePath: 'BuildTools\BuildScripts\SolutionChecker.ps1'
        arguments: '"$(ClientSecret)"'
        errorActionPreference: 'continue'
    
    - task: CopyFiles@2
      displayName: Collect - Solution Checker Results
      inputs:
        Contents: '**/PowerAppsCheckerResults.sarif'
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    
    - task: PublishBuildArtifacts@1
      displayName: Publish CodeAnalysisLogs
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)/PowerAppsCheckerResults.sarif'
        ArtifactName: 'CodeAnalysisLogs'
        publishLocation: 'Container'

    The first task runs the PowerShell script, and the second and third collect the results so that we can report on them.

    To ensure that the $(ClientSecret) parameter is provided, you need to add a pipeline variable for the same:

    Step 4 - Reporting the results

    The Solution Checker outputs the results in the 'Static Analysis Results Interchange Format' (SARIF), which is a standard format. There are various viewers you can use, but I find having the results directly in the pipeline very useful.

    You will need to install the 'Sarif Viewer Build Tab' - https://marketplace.visualstudio.com/items?itemName=sariftools.sarif-viewer-build-tab

    Once you've got this working, it'll scan your build artifacts for a sarif file and show the results!

     

    So that's it! When you run your pipeline (which I recommend you do every time a new commit is made to the source branch), the solution will be automatically run through the solution checker, and if there are any critical issues, the build will fail.

    If you do find that there are some critical issues that are false positives (which can happen), you can exclude those rules by modifying your script to something like:

    $overrides = New-PowerAppsCheckerRuleLevelOverride -Id 'il-avoid-parallel-plugin' -OverrideLevel Informational
    
    $analyzeResult = Invoke-PowerAppsChecker -RuleLevelOverrides $overrides `
    ...
    

    Hope this helps!

    @ScottDurow

  42. Happy 21st December!

    The chestnuts are roasting, and the snow is falling (somewhere I'm sure). It's that festive time of year again, and with it, a new year is beckoning. We all know that the biggest event of 2020 will be the retiring of the 'classic' user interface in Power Apps and Dynamics 365. To make sure you are ready for this, my gift is an updated version of Smart Buttons that is fully supported on the Unified Interface. It also includes a new smart button 'WebHook' that can be used to call HTTP Triggered Flows.

    What are Smart Buttons?

    Smart Buttons are a feature I introduced into the Ribbon Workbench a while ago to make it easier to add buttons to the Model Driven App Command Bar without needing to create JavaScript Web resources.

    To enable Smart Buttons in your environment, you will need to install the Smart Button Solution and then it will light-up the Smart Buttons area in the Ribbon Workbench. 

    There are 4 Smart Buttons at the moment (but you could easily create your own if you wanted!):

    • Run Workflow: Create a workflow short cut and then optionally run code when it has finished. Run Workflow can be added to Forms or Grids.
    • Run WebHook: Create a button to run a WebHook (such as an HTTP Flow). Run WebHook can be added to Forms or Grids.
    • Run Report: Create a report short-cut button on forms.
    • Quick JS: Add a quick snippet of JavaScript to run on a button without creating separate web resources. Think of this as the 'low code' way of adding Command Buttons!

    Quick JS

    Megan has used this Smart Button before and asked me if it can support the formContext way of accessing attribute values rather than the deprecated Xrm.Page. Well, the good news is that it now can!

    You could add some JavaScript to set a value on the form and then save and close it:

    context.getAttribute("dev1_submission_date").setValue(new Date());
    context.data.entity.save("saveandclose");

    In the Ribbon Workbench this is easy to do:

    Once you've published, you now have a button to run this 'low code' on the form:

    You could use this for all sorts of scenarios where you need to make a small change to the form and then save it when a user clicks a button. You could even trigger a workflow or a Flow on the change of the value!

    Run Workflow

    The Run Workflow button has had a makeover too - it now gives much better feedback when running workflows (both sync and async) and you can run some simple JavaScript if there is a problem:

    The Workflow that this is running simply updates a field on the record with the current date:

    Once you've published, this looks like:

    You can see that now the grid is automatically refreshed for you too! This button can also be added to forms or subgrids on forms.

    Run WebHook

    If you have a Flow that is initiated by an HTTP request, you can use this Smart Button to call the Flow on a list of records. Imagine you had a Flow with a 'When a HTTP request is received' trigger. You can copy the HTTP POST URL and define the request JSON schema to receive an id string value for the record it is being run on.

    As you can see, this Flow simply updates the account record and then returns OK.

    Inside the Ribbon Workbench, you can then add the WebHook smart button:

    Notice the Url is pasted in from the Flow definition. Eventually, once Environment Variables have come out of preview, I will update this to receive an environment variable schema name so that you can vary the URL with different deployments. That said, I also hope that this kind of functionality will become supported natively by the Flow integration with Model Driven Apps so that we can programmatically run a Flow from a Command Button in a fully supported way. Until then, once you've published you'll be able to run the flow on multiple records:

    Again, once the Flow has been run, the grid is refreshed. This button can also be included on Sub Grids on forms or on the form command bar itself.

    A little bit of DevOps

    When I first wrote the Smart Buttons solution, I set it up in Azure DevOps to automatically build and pack into a solution. This made it so much easier when I came to do this update. Doing DevOps right from the beginning really pays dividends later on! You can head over to GitHub to check out the code which is now written entirely in TypeScript and uses gulp and spkl to do the packing (If you are into that kind of thing!).

    Well, there you have it - hopefully, this will help you with the move to the UCI if you are already using Smart Buttons, and if you are not, then you might find a need for them in your next demo or when you need to quickly create Command Bar shortcuts. If you are upgrading from the old version, it will mostly work with an in-place update, but you will need to add the extra two parameters on the Run Workflow smart button. The easiest approach is to remove the old button and re-add it. Oh yes, and the Run Dialog smart button is no longer included because Dialogs are not part of the UCI!

    >> You can grab the updated Smart Button solution from github too <<

    Merry Christmas to one and all! ❤

    @ScottDurow

  43. Yesterday we announced our new product, SalesSpark, the Sales Engagement platform built natively upon the PowerPlatform 🚀 I've been working on this product for the last few months and have been really impressed with what the Power Apps Component Framework (PCF) can do for Model Driven Power Apps. In the past, the only way to extend Apps was to include custom HTML Web-resources. My open-source project SparkleXrm made this easier by including libraries for building grids and controls that acted like the out of the box controls. With the availability of PCF, the landscape has shifted and so will the direction of SparkleXrm.

    To build SalesSpark we have used the power of the Office UI Fabric, which is built upon React. Just like SparkleXrm, we use the MVVM pattern to create separation between UI rendering logic and the ViewModel logic.

    In this post, I wanted to share just a few features of SalesSpark that I'm really happy with! 😊

    PCF means breaking free from IFRAMEs!

    At the heart of SalesSpark are Sequences - these are a set of steps that act as your 'Virtual Assistant' when engaging with prospects. SalesSpark connects to your mailbox, and sends and replies to emails directly inside Office 365. We had to build a Sequence Designer that allows adding emails using templates. One of the user experience patterns that has always been impossible when using HTML Web-resources was the popup editor. This was because you were never allowed to interact with the DOM. Since the PCF team now support the Office UI Fabric, those constraints have gone away, allowing us to create a really cool sequence editor experience:

    PCF allows Drag and Drop!

    These days, everyone expects things to be drag and droppable! This again has always been a challenge with 'classic' HTML Web-resources. With PCF we were able to create a variety of drag and drop user experiences:

    Not only can you drag and drop the sequence steps, but you can also add attachments to emails. The attachments can be 'traditional' email attachments or cloud download attachments that allow you to monitor who has opened them from your email. Also, notice how the email can be created without saving it; the attachments are then uploaded when you are ready to send or when you save the email.

    PCF is great for Visualizations

    In addition to the user experience during data entry, PCF is great for introducing visualizations that make sense for the data you are monitoring. With SalesSpark, when you add contacts to a Sequence, you then want to monitor how things are progressing. We made the sequence editor not only allow you to build sequences but also monitor the progress - allowing you to make changes as it runs.

    PCF and the Data Grid!

    I think the most exciting part of PCF for me is that it allows extending the native Power Apps experience rather than replacing it. With HTML Web-resources, once you were there, you had to do everything. Using PCF fields on a form means that you don't have to worry about the record lifecycle or navigation. Adding a PCF control to a view means you get all the command bar, data loading and paging for 'free'.

    The SalesSpark data grid control implements lots of additional features to extend the native data grids. You get infinite scrolling and grouping, as well as a custom filtering experience.

     

    Chart Filtering

    And of course, because it's a Grid as far as Power Apps is concerned - you can use the Chart filtering - here I am using a Chart to filter the list to show contacts that have no stage set on them so that I can add them to a Sequence:

    I hope you'll agree that the PCF unlocks so much potential in Power App Model-Driven Apps that we simply couldn't access before!

    Watch this space for some more exciting things to come! 🚀
    Learn more about SalesSpark

    @ScottDurow

    P.S. If you've any questions about the PCF, just head over to the PCF forums where you'll often find me hanging out with other like-minded PCF developers like Tanguy, Andrew, and Natraj - and what's more, the Microsoft PCF product team is always available to answer those really tough questions!

  44. It is wonderful to see so many PCF controls being built by the community.  This post is a call-to-action for all PCF builders - it's time to make sure your PCF component handles read-only and field-level security! The good news is that it's really easy to do. There isn't much in the documentation about this subject, so I hope this will be of help.

    Read-only or Masked?

    In your index.ts, you first need to determine if your control should be read-only or masked. It will be read-only if the whole form is read-only or the control is marked as read-only in the form properties. It can also be read-only if Field Level Security is enabled. Masked fields are where the user doesn't have access to read the field due to the Field Security Profile settings. Typically masked fields are shown as *****.

    // If the form is disabled because it is inactive or the user doesn't have access
    // isControlDisabled is set to true
    let readOnly = this._context.mode.isControlDisabled;
    // When a field has FLS enabled, the security property on the attribute parameter is set
    let masked = false;
    if (this._context.parameters.picklistField.security) {
      readOnly = readOnly || !this._context.parameters.picklistField.security.editable;
      masked = !this._context.parameters.picklistField.security.readable;
    }

    Pass the flags to your control

    I use React for my control development and so this makes it really easy to pass the details into the component. You'll then need to ensure your control is disabled or masked when instructed to.

    ReactDOM.render(
      React.createElement(PicklistControl, {
        value: this._selectedValue,
        options: options,
        readonly: readOnly,
        masked: masked,
        onChange: this.onChange,
      }),
      this._container,
    );
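
    Inside the component it's then just a matter of honouring those flags. Here is a rough sketch of what that might look like (the PicklistControl props shown here are simplified, and the option shape is an assumption):

    import * as React from "react";

    export interface PicklistProps {
      value: number | null;
      options: { key: number; text: string }[];
      readonly: boolean;
      masked: boolean;
      onChange: (value: number | null) => void;
    }

    export const PicklistControl: React.FC<PicklistProps> = props => {
      if (props.masked) {
        // The user cannot read the field, so show the standard mask instead of the value
        return <span>*****</span>;
      }
      return (
        <select
          disabled={props.readonly}
          value={props.value ?? ""}
          onChange={e => props.onChange(e.target.value === "" ? null : Number(e.target.value))}
        >
          {props.options.map(o => (
            <option key={o.key} value={o.key}>
              {o.text}
            </option>
          ))}
        </select>
      );
    };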

     

    Testing the result!

    Here I have a simple picklist PCF control. It is associated with two Optionset fields. One normal, and one with Field Level Security:

    The 'Secured Optionset' field is masked because the associated Field Security Profile has 'No' on the 'Read' setting. This causes the readable property to be false.

    If we toggle this to 'Yes' the field will be readable, but not editable because 'Update' is set to 'No':

    If we then set Update to 'Yes' we can then edit both fields:

    Finally, let's deactivate the whole record. This will then show both fields as read-only - irrespective of the Field Security!

    You can see that the record is read-only by the banner at the top of the record:

    Call to action!

    If you have any PCF controls out there, it's time to re-visit them and check they handle read-only and Field Level Security settings.

    @ScottDurow

  45. The road from Classic Workflows to Flows has been a long one. Microsoft has been committed to bringing parity to Flow when compared to Classic Workflows. We are almost there, but this is only half the story because there is so much more you can do with Flows compared to Classic Workflows. Transaction support is one of those features that Synchronous Workflows inherently supported because they ran inside the execution pipeline, but Asynchronous Workflows left you to tidy up manually if something went wrong halfway through a run. This often led to using Actions to perform multiple operations inside a transaction, but wouldn't it be cool if we didn't need to do this? Read on!

    Note: Even though the product that was formerly known as Microsoft Flow is now called Power Automate, Flows are still called Flows!

    So what's a transaction?

    At the risk of teaching you to suck eggs, transactions, simply put, are a way of executing multiple operations where, if one fails, they all 'roll back' as if they never happened. The 'changeset' of operations is said to be 'atomic', which means that until the transaction is 'committed', no one else can see the records that are created/updated/deleted inside the transaction scope.

    Imagine a scenario, where a system needs to transfer a booking from one flight to another where both flights are in very high demand:

    1. ✅ The system cancels the customer's current booking
    2. ❌ The system books the new flight, but this fails because the flight is now full
    3. ❌ The system tries to re-book the previous canceled flight, but someone has already taken the seat
    4. 😢 The customer is left with NO flight 

    What about a different order of events where the system goes offline halfway through:

    1. ✅ The system books the new flight
    2. ❌ The system cancels the previous flight, but this fails because the system is unavailable
    3. ❌ The system tries to cancel the flight just booked in step 1 because the customer now has two flights, this fails because the system is unavailable
    4. 😱 The customer now has TWO flights!

    In both of these situations, without transaction support, we are left having to perform complex 'manual transaction compensation'.  The topic of transactions is fairly complex, and there are lots of other topics such as locking and distributed systems, but simply put, transactions make database consistency easier to manage!

    How do Flows now support CDS transactions?

    Transactions are called 'changesets' in a Flow. This is a feature that was announced as part of the Wave 2 changes - and it's just landed!

    To use changesets, you will need to be using the CDS Current Environment Connector:

    Once you have inserted the changeset, you can add the actions that will be part of the transaction. These can only be Create, Update and Delete CDS actions.

    In this case, I am going to need to query CDS to get the new flight details, and the details of the booking to cancel. To test, I'll use a flow button that can accept parameters:

    We next use the List Records CDS action to get the booking details. We use the Booking Reference to populate the query parameter:

    Top tip: You can use Jonas Rapp's awesome FetchXml Builder to build Flow queries. Sara Lagerquist has a great post on how to easily do this.

    Now that we've got the records we need, we can create the new booking and cancel the previous in a single transaction:

    If anything fails in this transaction, nothing will be applied to the CDS database. If I run the flow twice for the same booking reference, the booking is already cancelled and so both actions will fail:

    Notice how the 1st action is 'Skipped' because the 2nd action failed. This is the magic of transactions!

    My Flow Pledge

    I solemnly swear that I will never write another Classic Asynchronous Workflow!

    How about you?

    @ScottDurow

     

  46. You might have seen the important announcement from the PowerPlatform team about the new API limits that come into effect from the 1st October 2019.

    We have been somewhat spoilt in past years with very little throttling on how much we can use the API, but now in order to encourage fair usage there will be a limit on how many times you can call the PowerPlatform API in a 24hr period, depending on the license you have:

    User licenses | Number of API requests / 24 hours
    Dynamics 365 Enterprise applications | 20,000
    Dynamics 365 Professional | 10,000
    Dynamics 365 Team Member | 5,000
    PowerApps per user plan | 5,000
    Microsoft Flow per user plan | 5,000
    Office licenses (that include PowerApps/Microsoft Flow) | 2,000

    (taken from https://docs.microsoft.com/en-us/power-platform/admin/api-request-limits-allocations#what-is-a-microsoft-power-platform-request)

    There is a section in the article that calls out what a PowerPlatform Request is, and it seems to be quite clear that it's any Connector calls, Flow Step Actions, and CDS API CRUD-type calls.

    How does this affect Model-Driven Apps and WebApi calls?

    One of the many improvements of the Unified Interface over the legacy Web UI is that it is much less 'chatty' with the server - but there are still ~10 calls for a simple contact form load once metadata has been loaded and cached.

    If I filter the network requests to '/api/data/v9' when opening a Unified Client Contact form with no customisations, I get 17 requests:

    On examination of these requests, there are some requests that will not count as an API Call:

    • 304 Not modified requests - where data is cached (4)
    • Calls to 'special' actions such as  'GetClientMetadata' which will not count as an API call (2)
    • Calls to UpdateRecentItems (1)

    This leaves 10 calls to the Web API endpoint, all of which will count towards the API limit. It's worth noting that the $batch calls only count as a single API Call even though they can contain multiple requests.

    What does this mean in 'Real Terms'?

    Let's assume the following:

    • A user has a Dynamics 365 Professional license giving them 10,000 API calls/day
    • There are no Flows, CanvasApps, Workflows, Plugins that are running under their user identity
    • There are no customisations made to the standard 'Sales Hub' contact form.
    • There are no other ISV products installed

    Assuming this, the user would be able to open approximately 1,000 contact records before they hit the API limit (10,000 requests ÷ 10 calls per form load).

    This equates to opening ~2 records a minute assuming that the user is opening records constantly for 8 hours straight! 🤣

    The good news is that Users will not be blocked if they exceed the limit, the environment administrator will be notified so that they can take action and perhaps purchase an API Limit add-on (details of which are yet to be published but I'll update this post when they are).

    Custom vs First Party

    The key takeaway here is that the new limits do not differentiate between calls to the WebApi made by the out of the box code and calls made by your custom code.

    Any calls your custom JavaScript makes to Xrm.WebApi.* from inside your Model-Driven Apps will count as API calls alongside the 10 calls we see above.

    Call to action!

    What does this mean for Model Driven App developers? Well, it's fairly clear that the new API limits are not overly generous, but shouldn't pose too much of an issue for normal users as long as you ensure that you minimize the custom API calls that you make from inside your Model-Driven code. The good news is that the JavaScript Xrm.WebApi already implements etag cache support for you - you can read my blog post about how it'll help you keep your API calls down!

    Note: I will update this post if I hear of any changes that come down the line from this announcement. 

  47. I was very privileged to be asked to speak at the first D365UG meeting in Bristol. If you didn't manage to make it, Joel did a fantastic job of recording it so you can watch now!

  48. If you are coming to the PowerPlatform World Tour in London on the 28th August I'll see you there. Come and see my session on how CDS changes the way we think about building Apps!

    Today I'm really busy on an exciting new PowerPlatform project, so just a quick post!

    If you've recently updated Windows 10 to .NET Framework 4.8, you might find when running tools like spkl that you'll get the following exception:

    Unable to initialize the native configuration support external to the web worker process (HRESULT=0x80040154).
    nativerd.dll must be in %windir%\system32\inetsrv

    To resolve this you simply need to open Control Panel -> Programs -> Programs and Features -> Turn Windows features on or off

    Under .NET Framework 4.8 Advanced Services, turn on WCF Services -> HTTP Activation 

    Click OK and everything should work again!

     

  49. There are times where you'd just like to quickly know what's going on in your CDS instance in 'real time' without filling up your Plugin Trace Log.

    Here is a neat way of enabling logging using webhooks:

    1. Go to Webhook.site (or an alternative) and copy your webhook URL:


    2. Open the Plugin Registration Tool and select Register -> Register New WebHook
    3. Enter a name (it doesn't matter what it is) and paste in the Endpoint URL you copied in step 1.
    4. Register Steps on the messages you are interested in (e.g. Retrieve, RetrieveMultiple, etc.). You can even put in filtering entities and attributes if you are interested in specific cases.
    5. Use your CDS instance and watch the messages show in more or less real time!
    6. When you are finished, you can simply disable the steps or delete the webhook.

    Hope this helps! 🚀

  50. *Well ok, it's not for sale - but I've got your attention! 😂 Here's my point - it wasn't much over a year ago that it was for sale! Using the waybackmachine and whois history you can see the development of the domain, which was eventually bought by Microsoft seemingly only in the last year or so. PowerPlatformUG.com was only registered in July 2018. Before Microsoft started using this name, the Power Platform was something to do with the utility power sector! Let's consider that - the buzz that is happening around the Power Platform is barely a year old, but it's now one of the most talked about topics amongst the Business Applications community. Wow!

    At the end of March, I had the pleasure of speaking at and attending the Power Platform summit in Amsterdam. The name seems great - but in reality, it was somewhat aspirational and folks actually registered to attend individual summits. The event was effectively an amalgamation of eXtreme365, the PowerBI, Dynamics365, PowerApps & Flow User Groups. Each speaker had their own background and product/technology focus. Hang on though - isn't that what the PowerPlatform is? A collection of technologies that all come from different backgrounds? Well sure that's where it's come from - but Microsoft are betting on it becoming something that is much more than that.

    So what is the PowerPlatform anyway?

    James Phillips, VP of Business Apps at Microsoft, describes the PowerPlatform in his blog post from just a couple of months ago as a platform to "Analyze, Act, and Automate". He goes on to say that "We do this with Power BI, PowerApps, and Flow, all working together atop your data to help EVERYONE, from the CEO to the front-line workers, drive the business with data."

    I don't believe that there was ever a decision whilst Microsoft was working on Dynamics 365, Flow, PowerApps & Power BI to consciously build them in a way that could be unified, but over time it's become clear that the opportunity to combine these technologies into a democratized digital transformation strategy was huge. Suddenly, through this unifying strategy and the harvesting of code from Dynamics 365 to create the Common Data Service (CDS), Microsoft has been catapulted to the position of a leader in the low-code sector, as defined by the Forrester Wave report recently shared by Charles Lamanna. I had the honor of meeting Charles at the recent MVP Summit and I was struck by his sense of vision and ambition to truly revolutionize the area of Business Applications.

    Here is my diagram of how the PowerPlatform looks today:

    Do you remember when Dynamics CRM rebranded to Dynamics 365 Customer Engagement? We were all rather confused because the change was in name only. This time it's not just a re-brand - with the PowerPlatform the change is for real! There is both a strategy and technology shift that we've not seen in Business Apps before. It is allowing Pro-Devs, Functional Consultants & Information Workers alike to collaborate so that technology can be both governed and productive at the same time. Those two aspects have traditionally been at odds with each other.

    Let's look at the google search term stats for the last couple of years:

     

    It's clear that there is a recent increase in activity around Power Platform, but it's still tiny compared to other keywords like PowerApps. The Power Platform was launched so softly into the wild that it's become something of an enigma, with people talking about it without really knowing where it came from or what it meant. With the Common Data Model announcement at Ignite 2018 (the Open Data Initiative collaboration between Microsoft, Adobe and SAP) there was a feeling that the Power Platform was more a philosophy than a product. Maybe just a collective term for Microsoft's collection of technologies that had all been gaining traction in the market, underpinned by some strategic collaborations.

    That was then. Today, Microsoft is clearly pushing the platform more as a way of enabling digital democratization within organizations, with their strapline:

    "Empower everyone to innovate with one connected app platform"

    From a technology perspective, the Power Platform is the unification of Flow, PowerApps & PowerBI - all underpinned by the Connector ecosystem and the Common Data Service (CDS) that was 'harvested' from the Dynamics 365 for Customer Engagement XRM Platform. Furthermore, it's important to understand that PowerApps is not what it used to be in the days when it was closely coupled with SharePoint. Nor is Flow, for that matter. They are now firmly underpinned by the Common Data Service. The unification of these technologies, along with the tight integration with Office 365 and Azure, makes the PowerPlatform so much more than just the sum of its parts.

    The Power Platform means something very special in the digital transformation space - it is about the democratization of App Building within a consistent and powerful governed platform. 

    Is Dynamics 365 Dead?

    In the latter half of last year, http://admin.dynamics.com started to redirect to https://admin.powerplatform.microsoft.com/ - so does that mean that Dynamics is no longer a thing? Not at all. Dynamics 365 is now referred to as 'first party' apps, in that it is a set of apps built on the PowerPlatform by Microsoft (check out the top layer in my diagram above). Businesses are free to build their own to complement or even replicate the Dynamics 365 apps if they wish. As Microsoft invests in the Power Platform, adding more enterprise-grade features such as AI and analytics, these first party apps grow in their capability, making the value proposition even greater in the buy vs build decision.

    However, the transformation is not yet complete - take a look at how powerplatform.com still redirects to https://dynamics.microsoft.com/en-gb/microsoft-power-platform/

    ...and there's more - the strap-line is "Unlock the potential of Dynamics 365 and Office 365 faster than you ever thought possible."

    Dynamics 365 is still very much alive! 

  51. This year I'm presenting on two topics at the D365 UG European Summit. It's going to be a busy couple of weeks! Next week I'll be hanging out with my MVP friends in Seattle whilst learning about the future of the PowerPlatform and Dynamics 365 from the product team. The following week (27-29 March 2019) I'll be in Amsterdam for the summit. Here are the details of my sessions:

    Learn to Convert Your Model Driven App Customisations from JavaScript into TypeScript and Decrease the Total Cost of Ownership

    We are in the age of low-code-no-code – so where does JavaScript fit into this brave new world? Maybe you suffer from complex JavaScript that is too fragile to refactor, or maybe you have so much JavaScript you are not sure what is being used and what isn’t. Join Scott to learn how to tame your JavaScript customisations by converting them into TypeScript. You’ll see how it’s less effort than you thought and how you can unlock the benefits of unit testing and refactorability.

    I'm really excited about this session. I'll be going full geek to show you lots of sample code and where to find more.

    Power Platform Demystified (revisited)

    You’ve heard about Canvas Apps, Model Driven Apps, CDS for Apps, Power BI and Flow. You’re perhaps already using them – but how do they all fit together to form the Power Platform? Join Scott to learn about the amazing journey of how we got from a product called Microsoft CRM to the unique platform that underpins Dynamics 365. You’ll learn about how each part works together so that you can make the right decision when choosing between them, as well as the key licensing implications.

    If you are a member of the D365 UG UK you might have seen me do a similar session - but this time I'll be talking about the latest features that are fast making the PowerPlatform something very, very special indeed...

    What is the D365 UG European Summit?

    If you've not been to Summit before, it has the same vibe as a local chapter meeting but amped x1000! There is high quality content from experts and end-users, as well as the chance to network, learn and collaborate with other like-minded people!

    You can register using one of the following links:

    If you are going to be there - I'll be on the medics desk (most likely wearing a white coat) at regular points throughout the 3 days if you'd like to come and chat!

  52. Way back in 2011 I blogged about the behaviour of DateTimes in Dynamics CRM (as it was then!). I titled the post 'the last word?' but of course, it's never the last word when it comes to a technology that is always moving forward.

    This post aims to explain where we are today with Date & Times fields inside the Common Data Service for Applications (CDS) and PowerApps.

    User Local vs. Time Zone Independent

    In my previous post, I described the challenges of storing absolute dates such as dates of birth. These dates don't change depending on which timezone you are in. Since then, the PowerPlatform now supports 'Time Zone Independent' dates that will always show the date that they are entered as.

    If you choose DateTime as the field type you can then select from 3 'behaviours':

    This table summarises the differences between these 3 behaviours:

    Field type | Behaviour | Affected by User Time Zone in PowerApps? | Time Stored in CDS? | CDS WebApi Read/Write uses time zone? | Can be changed once set?
    Date | User Local | ✅* | The time element is set to 00:00 minus the user's time zone offset. | Always UTC | Can change to Date Only or Time Zone Independent
    Date | Date Only | ❌ |  |  | 
    Date | Time Zone Independent | ❌ | Always 00:00 irrespective of time zone |  | 
    Date & Time | Time Zone Independent | ❌ | Time is set to whatever is entered by the user with no adjustments. |  | 
    Date & Time | User Local | ✅* | The time element is set to time entered minus the user's time zone offset. | Always UTC | Can change to Time Zone Independent only

    *Model Driven Apps use the user's time zone settings. Canvas Apps use the local machine's time zone.

    What's the difference between Date (Date Only) and Date (Time zone Independent)?

    Given that Date fields should not show a time, why then do we have both a Date Only and a Time Zone Independent behaviour for these types of fields? It's not clear why there is a distinction, but the effect is that the web service only returns the date element for Date (Date Only) fields, whereas for Date (Time Zone Independent) fields 00:00 is always returned irrespective of the time zone.

    In a model-driven app the fields look like:

    The WebApi returns 00:00:00Z for the Time zone independent field but not the Date Only field. The formatted values are however identical!
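    For illustration, the raw values in the WebApi JSON response might look something like this (the field names here are hypothetical):

    "dev1_dateonly": "2019-01-20",
    "dev1_datetimezoneindependent": "2019-01-20T00:00:00Z"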

    I can't think of any reason why this might be useful other than if there were some client behaviour that couldn't deal with date only fields and always needed a time element.

    Date Time (User Local) Field Example:

    Here is a worked example of the standard behaviour in Date Time User Local fields:

     

    |  | Calculation | Worked Example |
    |---|---|---|
    | Time Zone Offset User 1 | a | UTC +10:00 (Brisbane) |
    | Time Zone Offset User 2 | b | UTC -10:00 (Hawaii) |
    | Time Entered by User 1 | x | 20-Jan 14:00 |
    | Stored in CDS as UTC | x - a | 20-Jan 04:00 (14:00 - 10:00 = 04:00) |
    | Shown in App to User 2 | x - a + b | 19-Jan 18:00 (14:00 - 10:00 + (-10:00) = 18:00) |

    Notice how user 2 sees the date as 19th Jan even though user 1 entered it as 20th Jan.

    Date Only (User Local) Field Example:

    For Date only User Local fields, the behaviour is the same except the time is set to 00:00 when entering the date. Here is a worked example:

     

    |  | Calculation | Worked Example |
    |---|---|---|
    | Time Zone Offset User 1 | a | UTC +10:00 (Brisbane) |
    | Time Zone Offset User 2 | b | UTC -10:00 (Hawaii) |
    | Time Entered by User 1 | x | 20-Jan-19 00:00 |
    | Stored in CDS as UTC | x - a | 19-Jan 14:00 (00:00 - 10:00 = 14:00 on the previous day) |
    | Shown in App to User 2 | x - a + b | 19-Jan 04:00 (00:00 - 10:00 + (-10:00) = 04:00) |

    Notice here that even though the field is set to Date only it is still affected by the local user's time zone and so the Date shows as the 19th for User 2.

    All other field types

    For Time zone independent and Date only fields the calculations are simple – the date time returned is the same as entered irrespective of time zone.

     

    |  | Calculation | Worked Example |
    |---|---|---|
    | Time Zone Offset User 1 | a | UTC +10:00 (Brisbane) |
    | Time Zone Offset User 2 | b | UTC -10:00 (Hawaii) |
    | Time Entered by User 1 | x | 20-Jan-19 14:00 |
    | Stored in CDS the same as entered | x | 20-Jan-19 14:00 |
    | Shown in App to User 2 | x | 20-Jan-19 14:00 |

    Model Driven Apps

    The behaviour in Model Driven Apps in the UI is simple as shown below (in the same order as the table above).

    Canvas Apps

    If you build a Canvas app that includes these fields it will look like:

    Current issues with the CDS Connector for Canvas Apps:

    1. There is an issue with the Date Only User Local behaviour where it shows the time element.
    2. The formatting of the dates will not honour the formatting in the user's CDS user settings. You will need to manually handle formatting using the Canvas Apps field formatting.
    3. DateTimeZone.Local will use the user's local machine's time zone rather than their CDS user settings time zone, so currently you'll need to manually compensate for this since it could lead to a different date/time being shown in the Model Driven App compared to the Canvas App if the two time zones are different.

    These issues will be fixed in a future release of the CDS connector.

    WebApi Date Times

    When you query, create or update date time fields using the WebApi, remember to always set the value in UTC and compensate for any time zone offsets manually since it will not use the user's time zone at all.
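    As a minimal sketch of this (the field name dev1_startdate and the account record are hypothetical), converting a local date to UTC before updating via the client-side SDK could look like:

    // Sketch only: always send UTC to the WebApi and compensate for time zones yourself.
    // "dev1_startdate" and the record id are hypothetical values for illustration.
    var localDate = new Date(2019, 0, 20, 14, 0); // 20-Jan-2019 14:00 in the machine's local time
    var data = {
        dev1_startdate: localDate.toISOString() // toISOString always outputs the UTC value (ends in 'Z')
    };
    Xrm.WebApi.updateRecord("account", "<guid>", data)
        .then(function (result) { console.log("Updated " + result.id); });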

    Changing Behaviour

    As you can see in the table above, if you have User Local fields you can choose to change them to Date Only or Time Zone Independent, which is a one-way process. This does not affect the current values in the database (which will be UTC). New values will be stored correctly, but you may find that existing values now show incorrectly because they will be the UTC value originally stored in the database. To correct this, you will need to write a conversion program using the ConvertDateAndTimeBehaviorRequest message.

    You can find a sample written in c# to change the behaviour here- https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/org-service/sample-convert-date-time-behavior

    Important: There is a cautionary note here in that you must open and re-save any workflows, business rules, calculated fields and rollup fields after changing the behaviour of the field.

    Read more

    There is good documentation on the Common Data Service DateTime fields at https://docs.microsoft.com/en-us/powerapps/maker/common-data-service/behavior-format-date-time-field.

    Information about changing date time behaviour - https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/behavior-format-date-time-attribute#convert-behavior-of-existing-date-and-time-values-in-the-database 

  53. The Xrm.WebApi client-side SDK has been around for a while now, but you may still be using a hand-built HTTP request to call the WebApi from JavaScript/TypeScript.

    ETag magic

    Normally when you query the WebApi for a specific record you'll always get a JSON response back that contains the entity field values at the time of the query.

    If your code queries the same record using the WebApi many times then this can introduce overhead that will slow down your code. To combat this, we often introduce elaborate caching schemes, but this leads to the challenge of keeping the cache current.

    The good news is that the Xrm.WebApi SDK already implements a cache for us inside the retrieveRecord call using the ETag.

    Consider a call to retrieveRecord as follows:

    Xrm.WebApi.retrieveRecord("account","<guid>","?$select=name,parentaccountid")
    .then(function(a){console.log(a);})

    The first call will retrieve the record from the server including the ETag value

    {
        "@odata.context": "https://org.crm11.dynamics.com/api/data/v9.0/$metadata#accounts(name,parentaccountid)/$entity",
        "@odata.etag": "W/\"4400496\"",
        "name": "Sample Account Mon, 31 Dec 2018 10:36:56 GMT",
        "statecode@OData.Community.Display.V1.FormattedValue": "Active",
        "statecode": 0,
        "accountid": "120703f7-e70c-e911-a8c2-0022480173bb",
        "merged@OData.Community.Display.V1.FormattedValue": "No",
        "merged": false
    }

    The @odata.etag is then used to build a cache of the response that is dependent on the fields that are retrieved.

    When you next query for the same record with the same $select attributes, the client SDK will send the value of the ETag in the request header:

    If-None-Match: W/"4400496"

    If the record has not been modified since, the server will return:

    HTTP/1.1 304 Not Modified

    This indicates that the client-side SDK can then reuse the same record that was retrieved previously.

    Since it would be quite complex to implement this feature in your hand-built HTTP requests, this is indeed a good reason to use the Xrm.WebApi SDK!
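    To see why, here is a rough sketch (simplified, not production code) of what a hand-built request would need to do to get the same behaviour:

    // A simplistic ETag cache for hand-built requests - Xrm.WebApi.retrieveRecord does all of this for you.
    var etagCache = {};

    function retrieveWithETag(url) {
        var cached = etagCache[url];
        var headers = { "Accept": "application/json", "OData-MaxVersion": "4.0", "OData-Version": "4.0" };
        if (cached) {
            headers["If-None-Match"] = cached.etag; // only send the record back if it has changed
        }
        return fetch(url, { headers: headers }).then(function (response) {
            if (response.status === 304 && cached) {
                return cached.record; // 304 Not Modified - reuse the previously retrieved record
            }
            return response.json().then(function (record) {
                etagCache[url] = { etag: record["@odata.etag"], record: record };
                return record;
            });
        });
    }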

    Happy 2019! 😊

    One of the most underused tools in Dynamics and CDS development teams, amongst the myriad of those available, is the Microsoft.Xrm.Data.PowerShell library by the ever-helpful Sean McNellis. If you have to perform repetitive tasks then there is nothing easier, but with the unfamiliar nature of PowerShell for those of us who write C# or JavaScript on a daily basis, it's often avoided.

    This post is a call to action - consider it as an option for the following reasons:

    1. You can quickly convert Excel spreadsheets into a PowerShell script to perform repetitive tasks such as adding roles to users or entities to solutions.
    2. Easily create reusable scripts that are parameterized without the complexity of a user interface.
    3. Easily automate build tasks and run them over and over again with no chance of human error.
    4. Easily create scripts to give to other people to run as their user account when you don't have access to the target environment.

    Recently I needed to add a load of entities to a solution, which can be quite cumbersome using the classic solution UI, and the PowerApps solution manager doesn't yet allow you to add entities without their sub-components - PowerShell to the rescue!

    # Shows how to add entities to a solution
    
    Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
    Install-Module Microsoft.Xrm.Data.PowerShell -Scope CurrentUser
    Import-Module Microsoft.Xrm.Data.Powershell
    $conn = Connect-CrmOnlineDiscovery -InteractiveMode
    
    Function Add-SolutionComponent
    {
        param
        (
            [string]$solutionuniquename,
            [Int32]$componenttype,
            [Guid]$componentid
        )
    
        # See https://docs.microsoft.com/en-us/previous-versions/dynamicscrm-2016/developers-guide/gg327422(v%3Dcrm.8) 
        $addrequest = new-object Microsoft.Crm.Sdk.Messages.AddSolutionComponentRequest
        $addrequest.AddRequiredComponents = 0
        $addrequest.ComponentType = $componenttype #1=Entity
        $addrequest.DoNotIncludeSubcomponents = 1
        $addrequest.ComponentId = $componentid
        $addrequest.SolutionUniqueName = $solutionuniquename
        $response= $conn.ExecuteCrmOrganizationRequest($addrequest)
    
    }
    
    Function Add-EntitiesToSolution
    {
        param
        (
            [string]$solutionuniquename,
            [string[]]$addentities
        )
    
        Write-Host "Checking that solution exists '$solutionuniquename'"
        $solution = Get-CrmRecords -conn $conn -EntityLogicalName solution -FilterAttribute uniquename -FilterOperator eq -FilterValue $solutionuniquename -Fields solutionid
        $solutionid =  $solution.CrmRecords[0].solutionid
    
        Write-Host "Querying metadata to get entity ids"
        $entities = Get-CrmEntityAllMetadata -conn $conn -EntityFilters Entity -OnlyPublished $false
    
        # Filter by the entities to add
        foreach($entity in $entities | ? {$_.LogicalName -in $addentities})
        {
            $logicalName = $entity.LogicalName
            $count = (Get-CrmRecordsCount -conn $conn -EntityLogicalName $logicalName -WarningAction SilentlyContinue)
            Write-Host "Adding $logicalName"
            Add-SolutionComponent -solutionuniquename $solutionuniquename -componenttype 1 -componentid $entity.MetadataId
        }
    }
    
    # Add lead, account and contact to the solution TestSolution
    Add-EntitiesToSolution -solutionuniquename "TestSolution" -addentities "lead","account","contact"

    I've picked this scenario because it shows some common things you'll need to use regularly:

    1. Querying for records
      Get-CrmRecords
    2. Executing SDK Messages
      ExecuteCrmOrganizationRequest
    3. Iterating and filtering collections
      foreach($entity in $entities | ? {$_.LogicalName -in $addentities})

    So there you have it. Hopefully, you'll consider using PowerShell if you've not already!

    You can find lots more samples on Sean's GitHub samples repo.

  55. Ever since Microsoft CRM moved online and Plugin sandboxing became mandatory, you'll have likely come up against the challenge of using third party assemblies.

    Sand-boxed Plugins cannot access any third-party assemblies, so if you use a NuGet package such as the Microsoft.SharePoint.Client libraries or Newtonsoft's Json.NET then you may have considered or even used ILMerge to embed a copy of the assembly into your compiled Plugin. You may even have used ILMerge to include your own 'common' libraries into your Plugins - even though you could have included the source code. To put it simply - don't do this!

    ILMerge is *not* supported!

    This is not like the managed/unmanaged solutions or JavaScript vs Business Rules debate. The simple fact is that using ILMerge with Plugins is not supported by Dynamics 365 CE/CDS for Apps.

    There is a very old blog post on msdn from back in 2010 about ILMerge that contains this statement from the Dynamics team:

    "This post has been edited to reflect that Dynamics CRM does not support ILMerge. It isn't blocked, but it isn't supported, as an option for referencing custom assemblies."

    If you do decide to use ILMerge then be warned that you are in dangerous waters! If there is a problem with your environment that is plugin-deployment related then the likely answer from Microsoft support is that you'll need to remove your use of ILMerge before your issue can be resolved.

    Don't bring me problems, bring me solutions!

    One of the most common reasons for using ILMerge that I see is when using Newtonsoft's Json.NET. There are many code snippets out there that use this library to parse JSON into an object graph. Consider the following code for de-serialising JSON from the Nest API into a C# object graph:

    var nestData = JsonConvert.DeserializeObject<NestData>(json);
    public class NestData
    {
        public Devices devices { get; set; }
        public Dictionary<string, Structure> structures { get; set; }
    }

    The good news is that since .NET 4.0 we've had pretty much the same control over deserialising JSON using the standard class libraries:

    using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
    {
        DataContractJsonSerializerSettings settings = new DataContractJsonSerializerSettings()
        {
            UseSimpleDictionaryFormat = true
        };
    
        DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(NestData), settings);
        var nestResponse = (NestData)ser.ReadObject(stream);
    }
    
    [DataContract]
    public class NestData
    {
        [DataMember]
        public Devices devices;
        [DataMember]
        public Dictionary<string, Structure> structures;
    
    }
    

    Other libraries that are not included in the .NET Framework (e.g. Microsoft.SharePoint.Client) shouldn't be used. If you can't include the source code in your Plugin, then consider using a loosely coupled Microservices approach to manage your integrations. This way you'll have fully supported lightweight plugins that can offload the heavy lifting outside of the sandbox worker processes.

    💡 Keep those Plugins small and lightweight! 💡

    Photo by Hafidh Satyanto on Unsplash

  56. As promised, I'm posting the code up for PowerFlappy! This post aims to give c# and JavaScript developers some tips on creating Canvas Apps.

    To install, simply open web.powerapps.com, select Create an App, then select Open. It works really well on your mobile. I hope to see someone playing it on the tube on Monday!

    Canvas Apps are function driven. Very much like Excel, they operate on values and tables of values using functions that accept these values as parameters.

    You can bind controls to data so that it will change as the underlying data changes. It is like the data-binding principle you get with KnockoutJS and Angular etc.

    Here are some common patterns that you'll need to use if you are a C# or JavaScript developer writing Canvas Apps.

    …semicolons and new lines

    If you are a c# or JavaScript developer, then you are used to semicolons. It's practically subconscious; Canvas Apps follows suit but with one exception – the last line of a code segment!

    This is rather troublesome when you add more lines because you'll need to remember to put a semicolon on the previous line before starting. I get around this by always adding 'true' to the end of a code segment:
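    For example (a minimal sketch - the variable names are just for illustration):

    Set(varScore, varScore + 1);
    Set(varLives, varLives - 1);
    true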

    This way, you can add new lines above 'true' and you won't have to worry about the missing semicolon on the last line.

    Oh, and you'll quickly find that you need to use SHIFT-RETURN to get a new line – just like in Excel!

    Variables

    You can set global variables in any function. They don't have to be defined beforehand - simply use:

    Set(myVariable, "FooBar")

    Collections

    It's easy to add array style data using a JSON style format:

    // Initialise a new collection
    ClearCollect(collection, {i:1, x:100,y:100},{i:2, x:200,y:100},{i:3,x:300,y:100},{i:4,x:300,y:100});
    // Add a new item to the collection
    Collect(collection, {i:5,x:100});

    Timers

    Timers are a great way of making your code run at a specific interval. In a game, this would typically be the main game loop. Since it's easy to have code spread all over your PowerApp, I find it useful to keep the majority of my code in one place.

    Set your timer as follows:
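    As a rough sketch of the setup (these are the standard Timer control properties - the values are assumed from the description below):

    Duration: 100      // length of each tick in milliseconds
    Repeat: true       // restart automatically so the game loop keeps running
    AutoStart: true    // start as soon as the screen loads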

    You can then add the code to the Timer's OnTimerStart event, which will run every 100 milliseconds. Put it in OnTimerStart rather than OnTimerEnd so that it runs immediately.

    Think in Functions

    If you had a collection of values, and you wanted to take an action on each of them based on their value, in c# you would commonly use something like:

    foreach (var item in collection) {
        item.x = item.x - 10;
        if (item.x < 0) {
            item.x = item.x + 1000;
        }
    }

    In Canvas Apps you can't do this kind of sequential logic, you have to think in functions like you would in a cell of Excel:

    UpdateIf(collection, true, {x:x-10}); // Update all items
    
    UpdateIf(collection, x<0, {x:x+1000}); // Update only those items where x<0


    This will filter the items that match the criteria and then update them accordingly. It kind of turns the foreach loop inside out!

    Indexers

    If you have a collection of items, it's quite normal to want to retrieve a value at a specific index. In C#:

    var item = collection[4];

    In Canvas Apps:

    Set(item, Last(FirstN(collection,4)))

    This clearly isn't optimal for large collections, so I hope there is a better option and will report back if I find one. Canvas Apps compile into JavaScript - and collections are stored as simple object arrays with the Last and FirstN functions just operating on those arrays. You could use the LookUp function, but this ends up taking longer since it uses a delegate function to find the value rather than a simple array index.

    Sprites

    Canvas Apps have the concept of Galleries that allow you to create a template that is repeated for each data row. You might think that you could use this to create lots of sprites to use in your game – but unfortunately, the gallery control is limited in that you cannot move the controls outside of the bounds of the screen and it's actually very slow at rendering.

    To overcome this I used a set of 'generic' images that I then bound to a sprite collection. This allows me to set the values in the collections to whatever I need and the sprites will move and change size/image as required due to the data binding.

    To create the side scroller, I simply scroll the sprites left using an UpdateIf as described above. When it comes to graphics performance, it's a good idea to keep things as simple as possible.

    Level Data

    A trick that I've been using for years is to use Excel to write my code for me. If there is some data that you can use to generate code, then why write it manually?

    For the levels, I created a simple Excel Spreadsheet that visualises the levels – and then created the collection data in the format {x:<top ground position>,y:<lower ground position>,l:<level up indicator>}

    I guess I could create a level designer in a PowerApp too!

    Performance

    One of the challenges I had was keeping the scrolling smooth for the sprites. There were two things I found key to this:

    1. Only move sprites in increments of whole pixels. If you start to set sprites using fractions of pixels the rounding effect will create jitter
    2. Keep the timer interval higher than the time it takes to process each tick event.

    Overall Maintainability

    A trade-off you get with Canvas Apps for the 'low-code' approach is that you don't get the easy-to-follow code files of traditional programming. Here are some tips to keep your App maintainable:

    1. Avoid having lots of code squirrelled away in many different events and property functions. For example, rather than having lots of timer events I have just a single game loop that does 90% of the functionality.
    2. Avoid functions in control property bindings, bind to a variable and set it to the value you need along with any calculations and conditions.
    3. Name your controls and variables consistently so that you can quickly identify what you need when using intellisense.

    Well, there you have it. Happy Power Flapping - be sure to tweet me your highest score @ScottDurow.

    Silly title I know – couldn't help myself! The significance of this topic is certainly not silly - it could give your app trouble-free scalability or … well, not.

    Delegable queries in Canvas Apps

    A delegable query is simply a query that offloads the heavy lifting to the connector source rather than downloading all the data and doing the processing locally in your Canvas App.

    A good example is when using the filter command. You can associate a gallery to the Accounts entity via the command:

    Filter(Accounts, AccountSearchText.Text in name)

    This will result in a fetchxml query condition sent to CDS of:

    <filter type="and">
        <condition attribute="name" operator="like" value="%contoso%" />
    </filter>

    You can also use:

    Filter(Accounts, StartsWith(name,AccountSearchText.Text))

    Which will give the fetchxml filter:

    <filter type="and">
        <condition attribute="name" operator="like" value="Contoso%" />
    </filter>

    All these queries are delegable and are the most optimal for large datasets.

    Consider however, if you use a non-delegable predicate such as:

    Filter(Accounts, Len(name)>5)

    This will result in the yellow-triangle-of-doom in the Canvas App designer with a tool tip saying:

    "Delegation warning. The highlighted part of this formula might not work correctly with column "name" on large data sets…"

    This is because Len is not in the list of supported delegable predicates – you can see the complete list in docs.microsoft.com (I've submitted a pull request to update this list because CDS and Dynamics 365 connectors actually have much better support than when first released)

    Optimising for large queries

    If you do need to perform one of these non-delegable predicates in your filter, you can also combine the filter with an initial query that pulls down a smaller subset using a predicate that is supported. To do this you need to nest the filters:

    Filter(
        Filter(Accounts, AccountSearchText.Text in name),
        Len(name)>5
    )

    Delegable Queries in Flow

    If you browse the Dynamics 365 templates available for Flow, you'll see a flow called 'Add notes to a Dynamics CRM contact'

    This flow basically allows you to search for a contact via a flow button and add a note to the matching contacts. The flow is simply:

    • Query Contacts
    • For each contact
      • If the first name and last name match those entered when triggering the flow
        • Create a note

    Seems straightforward until you consider delegable queries. If you have thousands of records, flow will attempt to download them all and loop through them one by one since the conditions are not delegable to the initial query.

    You'll see the effect of this if you test the flow and look at the number of records in the Apply to each. There are 29 records in this CDS instance and all are returned by the query since the query isn't delegated.

    This solution is definitely not scalable, and I'm surprised it's in the list of templates offered! To make the query delegable, the connector must be edited to include an OData query:
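    For example (the column names are purely illustrative), a Filter Query along the lines of firstname eq 'John' and lastname eq 'Smith' pushes the matching down to CDS so that only the relevant contacts are returned to the flow.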

    This query will then scale as the number of contacts grows in your database because the heavy lifting of the query is delegated to the CDS database. These kinds of performance considerations are important when building apps that will scale with the data source. PowerApps makes it super easy to build amazing user experiences, but be sure to keep an eye on the App Checker since it's full of suggestions that'll keep your app running smoothly.

    Further reading:

    In part 2 of this series, we looked at debugging our TypeScript after it has been converted from JavaScript. When deploying JavaScript to Dynamics in production, you'll want to ensure that the file is as small as possible. We can do this by 'uglifying' or 'minifying' our script using gulp. It's also desirable to be able to use multiple TypeScript source files, but compile them into a single JavaScript file.

    Multiple Source Files compiled into one

    1. Open the tsconfig.json and add the following to the compilerOptions section:
      "outFile": "out/SDK.DependentOptionSet.js"
    2. Rebuild the solution so that the new config is picked up and then make a small change to the TypeScript file - you should now see a new folder named out if you refresh the Solution Explorer.
    3. The great part about this is you can now start to split your TypeScript code into multiple source files, remembering that you'll need to use the export keyword on the classes to allow them to be used across separate source files (see the sketch below).

      TypeScript will automatically order the files as required in the output to ensure that the JavaScript can be parsed in the browser.
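      As a small illustrative sketch (assuming two source files under src and the same SDK namespace used throughout this series), the split might look like:

      // src/SDK.Util.ts
      namespace SDK {
          export class Util {
              static selectSingleNode(node: Element, elementName: string) {
                  return node.getElementsByTagName(elementName)[0];
              }
          }
      }

      // src/SDK.DependentOptionSet.ts
      namespace SDK {
          export class DependentOptionSet {
              static init(webResourceName: string) {
                  // ...can call SDK.Util.selectSingleNode because Util is exported
              }
          }
      }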

    Minifying our output

    1. Open the command line from the project using Alt-Space.
    2. Install gulp and related tasks using:
      npm install gulp --save-dev 
      npm install gulp-uglify --save-dev 
      npm install gulp-watch --save-dev
    3. Gulp watch is used to monitor the source file for changes and uglify when it changes.
    4. Create a file of type 'Gulp Configuration File' (search the file types in the add-new dialog) in the root of the project called 'gulpfile.js' and add the following code:
      var gulp = require('gulp');
      var watch = require('gulp-watch');
      var uglify = require('gulp-uglify');
      
      gulp.task('build', function () {
          gulp.src(['./out/SDK.DependentOptionSet.js'])
              .pipe(uglify())
              .pipe(gulp.dest('./WebResources/js'));
      });
    5. Open Tools->Task Runner Explorer
    6. Click 'Refresh' on the Task Runner Explorer.
      This should now show you the build task:
    7. Right click on the build task and click Run. This will create a minified version of your output Javascript in the webresources folder.

    8. In the last part of this series, we looked at debugging using Fiddler. Since we've moved our compiled JavaScript, we now need to adjust our fiddler auto-responder to point to the out folder so we can still debug.

      REGEX:(?insx).+\/sdk_\/js\/(?'fname'[^?]*.js)
      C:\Users\Administrator\source\repos\StartUsingTypeScript\StartUsingTypeScript\src\${fname}
    9. We also need to update the auto responder for the source maps since they are now listed in the map file as relative to the out folder rather than absolute paths:
      REGEX:(?insx).+\/sdk_\/src\/(?'fname'[^?]*.ts)
      C:\Users\Administrator\source\repos\StartUsingTypeScript\StartUsingTypeScript\src\${fname}
    10. We can now add a watch task to automatically build the minified version when the source file changes. Add the following to the gulpfile.js:
      gulp.task('watch', function () {
          gulp.watch('./out/*.js', ['build']);
      });
    11. We can manually run the watch to start monitoring the out file – but also configure to automatically start when the project opens.
      Right Click on the watch task -> Bindings -> Project Open

    You can download the code from this part if you want to compare your code to mine. In the next part I'll show you how to create some Unit Tests for our TypeScript code.

    Yes, it's that time of the year again when we get to see the new features planned in the next major release of Dynamics 365 CE/PowerApps. I've already said how excited I am about the changes to the release strategy announced earlier this month and now we can see what's on the roadmap.

    There are over 250 pages of content spanning all the Dynamics 365 products, but here are my top 10 features that I'll be blogging about in the coming months:

    1. Dynamics 365 AI for Sales app
      We've all been there where someone asks for 'AI' but what they really need is an if/then statement - but this feature promises to offer natural language processing Q&A and predictive lead/opportunity scoring. Can't wait to see what new sales scenarios this will support. The great thing about this technology is that it will be ready to use rather than needing expensive development work using services such as Azure Cognitive Services.
    2. Increased Unified Client coverage
      We are going to see a tonne of new features added in the Unified Client, closing the gap with the 'classic' web UI. Notably: Advanced Find, Run Workflow, Grid Filtering and Service Administration.
    3. Dependent optionsets in the Unified Interface
      Proving the point that new features will come to the UCI and not the 'classic' web UI - we now have the ability to create dependent option sets in the UCI.
    4. Custom controls in Business Process Flows
      The Custom Control Framework promises to be amazing, but this enhancement opens up many new scenarios in the Unified Client
    5. Faster Model Driven Apps!
      Performance improvements have been traditionally overlooked in new releases - but the already snappy UCI will be even faster and reliable!
    6. SharePoint Documents available on Portals
      This has been an ask for a long time and now we can surface documents stored in SharePoint through the Dynamics integration already available today.
    7. admin.powerapps.com gets solution support - no more 'classic' solution explorer!
      Whilst the default CDS Solution will still be there - we can now use solutions to manage our CDS configuration, exporting from one environment to another. This is massive because the new solution management experience includes a new WYSIWYG form designer and is so much quicker than the 'classic' solution explorer.
    8. Embedding Canvas Apps and Flows in solutions
      Application Lifecycle Management with Model Driven Apps is already mature with its use of solutions, whereas Canvas Apps and Flow have been lagging behind. Now we will see the ability to include these alongside Model Driven Apps in solutions, and export them from one environment to another. The ability to run flows and embed Canvas Apps inside a Model Driven App now means that we can start to really use these features, safe in the knowledge that they can be deployed to UAT/PROD from DEV.
    9. Unified experience for Flow, PowerApps and CDS
      These three complementary products will be unified so that they are managed in the same experience. This can't come soon enough because I've found many people confused about how they fit together.
    10. Solution Checker
      Lastly, this new feature is interesting and offers advice on use of legacy features that will be deprecated in the future.

    Go check out the October 2018 release notes now - https://aka.ms/businessappsreleasenotes 

    In the first part of this series, we looked at the simple steps to convert the SDK sample Dependent OptionSets into TypeScript, so you can see that it's not about 're-writing' your code. So far, we've seen how to get the TypeScript compiler to compile our old JavaScript. This part is going to look at deploying and debugging the TypeScript code.

    Deploying the webresources

    1. For the DependentOptionSet code to work we need to also create a config file that defines which child optionsets are filtered by which parent values, so create an XML file in the js folder called AccountOptionSetConfig.xml:
      <DependentOptionSetConfig entity="account" >
       <ParentField id="address1_shippingmethodcode"
                    label="Shipping method">
        <DependentField id="address1_freighttermscode"
                        label="Freight Terms" />
        <Option value="2"
                label="DHL">
         <ShowOption value="1"
                     label="FOB" />
         <ShowOption value="2"
                     label="No Charge" />
        </Option>
        <Option value="3"
                label="FedEx">
         <ShowOption value="1"
                     label="FOB" />
        </Option>
       </ParentField>
      </DependentOptionSetConfig>
      Your solution should look like the following:

    2. Next we need to enable deploying the webresources. In the previous part we installed spkl and added the DependentOptionSet webresources. If you've not done this, I suggest you go and review Part 1.
    3. Adding the config xml with spkl is really easy, simply add the new file to the spkl.json file:
      {
      "uniquename": "sdk_/js/AccountOptionSetConfig.xml",
      "file": "js\\AccountOptionSetConfig.xml",
      "description": ""
      }
    4. You can now deploy by running the spkl\deploy-webresources.bat at the command-line.
      Since we installed 'Open Command Line' in the previous part, we can simply select the spkl folder in Visual Studio and press Alt-Space to open the command prompt!

    Testing inside Dynamics

    Now that we've deployed the webresources, we can configure it for use on the Account form.

    1. Open the Account Form inside Dynamics Form designer. I find this easiest to do from your App in the designer:
    2. Select Form Properties and add the DependentOptionSet.js to the Form Libraries:
    3. Add a new OnLoad Event Handler that calls the SDK.DependentOptionSet.init function with the Parameters (including the quotes) "sdk_/js/AccountOptionSetConfig.xml" 
    4. Locate the Shipping Method field on the form, and add a new On Change handler calling SDK.DependentOptionSet.filterDependentField with the parameters "address1_shippingmethodcode","address1_freighttermscode"
    5. These steps are simply necessary to wire up the code to the form events. If you save the form and publish you should now see the Shipping Method drop down filter the Freight Terms based on the configuration xml.
       

    Debugging TypeScript with Fiddler

    It is inevitable that whilst you are refactoring your JavaScript into TypeScript you will need to debug your code. This will involve making changes and re-testing. Deploying the code files up to the Dynamics server with each change so that we can test will take up valuable time. Instead, whilst we develop we can use a 'trick' that uses Fiddler to redirect the requested webresources to the local version of the file instead of the version on the server. This allows us to make changes and simply refresh the browser to get the new version. I blogged about this trick back in 2014, but here are the steps to set it up:

    1. Install Fiddler from https://www.telerik.com/download/fiddler, run Fiddler, then select Tools->Options->HTTPS and select Decrypt HTTPS traffic.
    2. Click Yes when prompted to Trust the Fiddler Root certificate, and then Yes to each subsequent dialog.
    3. Click OK on the Options Dialog. This allows Fiddler to decrypt the HTTPS traffic between the browser and the server so that it can intercept it, log and respond with a different file where needed.
    4. In Fiddler select Rules->Performance->Disable Caching. This will ensure that files are not stored locally in the browser cache, but downloaded with each page refresh to pick up the latest version
    5. In Fiddler Select the AutoResponder tab and select 'Enable rules' and 'Unmatched requests passthrough'
    6. Click Add Rule and enter:
      REGEX:(?insx).+\/sdk_\/js\/(?'fname'[^?]*.js)
      C:\Users\Administrator\source\repos\StartUsingTypeScript\StartUsingTypeScript\src\${fname}

      Note: Adjust the path to your own project location

    7. Click Save
    8. You should see a grey highlighted request for the files that match – which is now redirecting to your local file.
    9. When you refresh your page, you can now make changes to your local TypeScript which will recompile to a local js file and be picked up without a re-deploy and re-publish.

    Debugging TypeScript with Source Maps

    Since we are no longer writing JavaScript directly, we need a way of stepping through our TypeScript code when debugging rather than the generated code. Luckily most debuggers have support for source maps that provide an index between the original source and the generated source. To enable this, we'll need to let the browser know where to load the TypeScript that is referenced by the .map files. The easiest way to do this is with another Fiddler AutoResponder rule:

    1. In Fiddler AutoResponders, use Add Rule
    2. Set the Rule to be:
      REGEX:(?insx).+\/sdk_\/js\/C:\/(?'fname'[^?]*)
      C:/${fname}
    3. This will now pick up the locations in the source map and redirect to your local folder.
      https://org.crm11.dynamics.com/%7b636672751920000751%7d/webresources/sdk_/js/C:/Users/Administrator/source/repos/StartUsingTypeScript/src/SDK.DependentOptionSet.ts

      is redirected to

      C:/Users/Administrator/source/repos/StartUsingTypeScript/src/SDK.DependentOptionSet.ts

    4. You can now put a debugger statement in your TypeScript and you'll see the browser step through the original TypeScript code rather than the actual JavaScript.

     

    So there you have it – you have all the same features as you did when writing JavaScript - but with all the advantages of TypeScript.

    In the next part we'll look at how we can use gulp to create a minified version of the JavaScript for deployment.

    TypeScript isn't really a new language, it's a way of writing code in the next generation of JavaScript before it's fully supported by all browsers. By the time that browser support for ES6 is there, we'll all be writing TypeScript using ES9 features that 'transpile' down to ES6 code! Of course, there are cool features of the TypeScript compiler and related tools that are nothing to do with the JavaScript language standard, but you get the idea!

    The key point is that TypeScript is a superset of JavaScript. JavaScript is TypeScript without all the new features and strict compiler checks. For this reason, TypeScript is nothing to be afraid of if you are already developing in JavaScript. You can convert your JavaScript into TypeScript with very little effort and then start to make use of the new features available in ES6 gradually over time.

    This multi-part post is going to take the DependentOptionSet.js JavaScript from the old SDK samples and walk through converting it to TypeScript in the hope that it will help you see how easy it is to start using TypeScript!

    Step 1 – Set up your TypeScript project

    I'm going to use Visual Studio 2017 because I like the fact that I can develop C# Plugins and TypeScript in the same IDE – but you could equally use VSCode.

    1. Download and install Node.js from https://nodejs.org/en/
    2. Select the 'Current' build and install.
    3. Open Visual Studio and install the 'Open Command Line' extension using Tools->Extensions and Updates…
    4. Search for 'Open Command Line' and select Download
    5. Restart Visual Studio to install the extension
    6. Select New -> Project
    7. Pick ASP.NET Empty Web Site
    8. Select Add->Add New Item…
    9. Select TypeScript config file
    10. Add the compileOnSave option:
      {
          "compilerOptions": {
              "noImplicitAny": false,
              "noEmitOnError": true,
              "removeComments": false,
              "sourceMap": true,
              "target": "es5"
          },
        "compileOnSave":  true,
        "exclude": [
          "node_modules",
          "wwwroot"
        ]
      }
    11. Add a folder WebResources with a subfolder js, and then place the SDK.DependentOptionSet.js file in that folder (you can pick up the file from the old SDK sample)
    12. I use spkl to deploy Web Resources, so use the NuGet Package Manager (Tools-> NuGet Package Manager -> Package Manager Console) to run
      Install-Package spkl
    13. Edit the spkl.json file to point to the SDK.DependentOptionSet.js file:
      {
        "webresources": [
          {
            "profile": "default,debug",
            "root": "Webresources/",
            "files": [
              {
                  "uniquename": "sdk_/js/SDK.DependentOptionSet.js",
                  "file": "js\\SDK.DependentOptionSet.js",
                  "description": ""
              }
            ]
          }
        ]
      }
    14. Select the Web Site Project in the Solution Explorer and use the Alt-Space shortcut to open the command line
    15. Enter the following command at the command prompt that pops up:
      npm init
    16. Accept all the defaults
    17. You should now see a package.json file in your project folder.
    18. To install the TypeScript definitions for working with the Dynamics client-side SDK, On the same console window enter:
      npm install @types/xrm --save-dev
    19. You will now have a node_modules folder in your project directory.

     

    Step 2 – Updates to allow JavaScript to compile as TypeScript

    Now you've got your project ready, we can start to use TypeScript.

    1. Copy the SDK.DependentOptionSet.js to a new folder named src and rename to be .ts rather than .js
    2. If you open the TypeScript file you'll start to see some errors that the TypeScript compiler has found, since it is adding some stricter checks than ES5 JavaScript:

    3. Let's sort out the namespaces – in ES5 JavaScript, there was no concept of modules or namespaces so we had to manually construct them – in this case we are creating a namespace of SDK with effectively a type called DependentOptionSet. We can convert this to the following TypeScript:
      namespace SDK {
          export class DependentOptionSet {
              static init(webResourceName) {
      Notice how the class is marked as 'export' so that it will be accessible to other TypeScript code, and the function pointer field init has been converted into a static method.
    4. We can repeat this for all the function pointer fields on the DependentOptionSet class and do the same for the Util class.

    The outline will look like:

    Step 3 – Attribute Strong Typing

    1. If you look at the methods now, you'll start to notice that there are some more errors that the TypeScript compiler needs help with. The first we'll look at is an error on getValue(). This is because the typing for the attribute collection isn't what is expected.

    2. In order that the attribute variables are typed correctly, we change:
      Xrm.Page.data.entity.attributes.get(parentField)
      to
      Xrm.Page.getAttribute<Xrm.Attributes.OptionSetAttribute>(parentField)
       
    3. The same should be repeated for the line:
      Xrm.Page.data.entity.attributes.get(parent)
    4. Repeat this for both ParentField and ChildField
    5. Next, we need to deal with TypeScript's type expectations:

    6. JavaScript allows fields to be defined on objects without any predefined typing, but TypeScript requires that we define the type at the point of assignment to the mapping variable.
      Consequently, the mapping type needs to be defined as follows:
      var mapping = {
          parent : ParentField.getAttribute("id"),
          dependent : SDK.Util.selectSingleNode(ParentField, "DependentField").getAttribute("id"),
          options : []
      };
      The same technique needs to be then repeated for the option and optionToShow variables.

    Step 4 – Class level fields

    The next issue that is highlighted by the compiler is that JavaScript allows assigning field level variables without prior definition.

    1. We must add the config field variable into the DependentOptionSet class definition:
      export class DependentOptionSet {
          static config = [];
    2. Now that there are no more compile errors, you should start to see the JavaScript generated:
     

    Step 5 – Taking it one step further

    You can now start to turn on stricter checks. The most common is adding to your tsconfig.json under compilerOptions:

    "noImplicitAny": true,
    1. You'll now start to see errors in your TypeScript where there is no type inferable from the code:
    2. In this case we need to add some additional types for the XmlHttp types that the browser uses, so edit your tsconfig.json and add the following to the compilerOptions:
      "lib": [ "dom","es5" ]
    3. You can now change the signature of completeInitialization to:
      static completeInitialization(xhr : XMLHttpRequest) {
    4. We'll also need to sort out other type inference issues such as:
    5. We can add the following type definitions
      namespace SDK {
          class Option {
              value: string;
              showOptions: string[]
          }
       
      
          class Mapping {
              parent: string;
              dependent: string;
              options: Option[]
          }
    6. Finally, the code should look like:
    namespace SDK {
        class Option {
            value: number;
            text?: string;
            showOptions?: Xrm.OptionSetValue[]
        }
     
    
        class Mapping {
            parent: string;
            dependent: string;
            options: Option[]
        }
     
    
        export class DependentOptionSet {
            static config: Mapping[] = [];
            static init(webResourceName : string) {
                //Retrieve the XML Web Resource specified by the parameter passed
                var clientURL = Xrm.Page.context.getClientUrl();
     
    
                var pathToWR = clientURL + "/WebResources/" + webResourceName;
                var xhr = new XMLHttpRequest();
                xhr.open("GET", pathToWR, true);
                xhr.setRequestHeader("Content-Type", "text/xml");
                xhr.onreadystatechange = function () { SDK.DependentOptionSet.completeInitialization(xhr); };
                xhr.send();
            }
     
    
            static completeInitialization(xhr : XMLHttpRequest) {
                if (xhr.readyState == 4 /* complete */) {
                    if (xhr.status == 200) {
                        xhr.onreadystatechange = null; //avoids memory leaks
                        var JSConfig: Mapping[] = [];
                        var ParentFields = xhr.responseXML.documentElement.getElementsByTagName("ParentField");
                        for (var i = 0; i < ParentFields.length; i++) {
                            var ParentField = ParentFields[i];
                            var mapping : Mapping = {
                                parent : ParentField.getAttribute("id"),
                                dependent : SDK.Util.selectSingleNode(ParentField, "DependentField").getAttribute("id"),
                                options : []
                            };
     
    
                            var options = SDK.Util.selectNodes(ParentField, "Option");
                            for (var a = 0; a < options.length; a++) {
                                var option : Option = {
                                    value: parseInt(options[a].getAttribute("value")),
                                    showOptions: [],
                                };
                                var optionsToShow = SDK.Util.selectNodes(options[a], "ShowOption");
                                for (var b = 0; b < optionsToShow.length; b++) {
                                    var optionToShow : Xrm.OptionSetValue = {
                                        value: parseInt(optionsToShow[b].getAttribute("value")),
                                        text: optionsToShow[b].getAttribute("label")
                                    };
                                    option.showOptions.push(optionToShow)
                                }
                                mapping.options.push(option);
                            }
                            JSConfig.push(mapping);
                        }
                        //Attach the configuration object to DependentOptionSet
                        //so it will be available for the OnChange events 
                        SDK.DependentOptionSet.config = JSConfig;
                        //Fire the onchange event for the mapped optionset fields
                        // so that the dependent fields are filtered for the current values.
                        for (var depOptionSet in SDK.DependentOptionSet.config) {
                            var parent = SDK.DependentOptionSet.config[depOptionSet].parent;
                            Xrm.Page.getAttribute(parent).fireOnChange();
                        }
                    }
                }
            }
     
    
            // This is the function set on the onchange event for 
            // parent fields
            static filterDependentField(parentField: string, childField : string) {
                for (var depOptionSet in SDK.DependentOptionSet.config) {
                    var DependentOptionSet = SDK.DependentOptionSet.config[depOptionSet];
                    /* Match the parameters to the correct dependent optionset mapping*/
                    if ((DependentOptionSet.parent == parentField) && (DependentOptionSet.dependent == childField)) {
                        /* Get references to the related fields*/
                        var ParentField = Xrm.Page.getAttribute<Xrm.Attributes.OptionSetAttribute>(parentField);
                        var ChildField = Xrm.Page.getAttribute<Xrm.Attributes.OptionSetAttribute>(childField);
                        /* Capture the current value of the child field*/
                        var CurrentChildFieldValue = ChildField.getValue();
                        /* If the parent field is null the Child field can be set to null */
                        if (ParentField.getValue() == null) {
                            ChildField.setValue(null);
                            ChildField.setSubmitMode("always");
                            ChildField.fireOnChange();
     
    
                            // Any attribute may have any number of controls
                            // So disable each instance
                            var controls = ChildField.controls.get()
     
    
                            for (var ctrl in controls) {
                                controls[ctrl].setDisabled(true);
                            }
                            return;
                        }
     
    
                        for (var os in DependentOptionSet.options) {
                            var Options = DependentOptionSet.options[os];
                            var optionsToShow = Options.showOptions;
                            /* Find the Options that corresponds to the value of the parent field. */
                            if (ParentField.getValue() == Options.value) {
                                var controls = ChildField.controls.get();
                                /*Enable the field and set the options*/
                                for (var ctrl in controls) {
                                    controls[ctrl].setDisabled(false);
                                    controls[ctrl].clearOptions();
     
    
                                    for (var option in optionsToShow) {
                                        controls[ctrl].addOption(optionsToShow[option]);
                                    }
     
    
                                }
                                /*Check whether the current value is valid*/
                                var bCurrentValueIsValid = false;
                                var ChildFieldOptions = optionsToShow;
     
    
                                for (var validOptionIndex in ChildFieldOptions) {
                                    var OptionDataValue = ChildFieldOptions[validOptionIndex].value;
     
    
                                    if (CurrentChildFieldValue == OptionDataValue) {
                                        bCurrentValueIsValid = true;
                                        break;
                                    }
                                }
                                /*
                                If the value is valid, set it.
                                If not, set the child field to null
                                */
                                if (bCurrentValueIsValid) {
                                    ChildField.setValue(CurrentChildFieldValue);
                                }
                                else {
                                    ChildField.setValue(null);
                                }
                                ChildField.setSubmitMode("always");
                                ChildField.fireOnChange();
                                break;
                            }
                        }
                    }
                }
            }
        }
     
    
        export class Util {
            //Helper methods to merge differences between browsers for this sample
            static selectSingleNode(node: Element, elementName : string) {
                if ((<any>node).selectSingleNode) {
                    return <Element>(<any>node).selectSingleNode(elementName);
                }
                else {
                    return node.getElementsByTagName(elementName)[0];
                }
            }
     
    
            static selectNodes(node: Element, elementName : string) {
                if ((<any>node).selectNodes) {
                    return <NodeListOf<Element>>(<any>node).selectNodes(elementName);
                }
                else {
                    return node.getElementsByTagName(elementName);
                }
            }
        }
     
    
    }

    The differences between the original JavaScript and the TypeScript are mostly structural with some additional strongly typing:

    The resulting differences between the original JavaScript and the final compiled TypeScript are also minimal:

    Of course, the key difference is now that we have strong type checking through type inference. Type inference is your friend – the compiler knows best!

    In the next part, I'll show you how to deploy and debug this TypeScript using source maps.

    Check out this excellent video on how to use TypeScript with Dynamics CE 

    >> Read Part 2!

  62. The new Common Data Service for Apps promises to deliver the Xrm platform that we've been after for - well since forever! Today it was announced that it's no longer in Preview and has made it to General Availability (GA)!

    Of course, with any GA announcement there is also a raft of licensing questions – the most important one for me was around how we can license the CDS for Apps and build Model-Driven Apps without licensing the Dynamics 365 1st party apps such as Sales & Service.

    With the licensing documentation released today, Plan 2 appears to be that XRM type licensing, allowing building of Model Driven Apps (Dynamics Apps rather than Canvas Apps) with features such as real-time workflows and plugins.

    The official license page shows a full breakdown of the license features, but below I've extracted the most important aspects with respect to building Model Driven Apps that we know as Dynamics 365 App Modules.

    You can see that Plan 2 allows us to build Dynamics Apps without actually having a D365 license – with the exception of this new term 'restricted entities'. Conveniently, there is a list of which entities these are and the associated Dynamics 365 license required. The main take away is that you need a Customer Service license for full access to entities such as case/kbarticle/SLA and Project Service/Field Service license for the unified scheduling entities – with Plan 2 these restricted entities are read-only. This makes sense, as these entities are required by special functionality that is specific to those 1st party Apps that can only be accessed as part of a Dynamics 365 license.

    It's worth noting that you can build Model Driven Apps with Plan 1 – but you can only use Async Workflows and Business Rules inside the CDS.

    Why not learn more here:

    Watch my video about how the Common Data Service for Apps is the new XRM Platform:

    I've recently got back from talking about the Unified Client at D365UG in Dublin. It was an awesome week with lots of exciting information about PowerApps and the Common Data Model for Applications. Whilst partaking in the local hospitality (with a pint of Guinness), I was discussing the Dynamics 365 form designer with fellow MVP Leon Tribe and how there are so many options available, with many of them being legacy and having no effect in the Unified Client. I promised to publish my 'cheat-sheet' of the options and their applicability, so here you are Leon!

    Each of the properties is listed as being applicable to the Web UI only, the UCI only, or both.

     

    Field Properties - Official Documentation on Field Properties

    Property | Availability | Notes
    Composite Name | Web Only | I'm not a fan of the composite fields - so I really don't miss them.
    Composite Address | Web Only |
    Display Label on the form | Both |
    Field is read-only | Both |
    Turn off automatic resolution in field | Web Only | UCI doesn't auto-resolve fields when tabbing away
    Disable most recently used items for this field | UCI Only | See notes below
    Visible by default | Both | Fields can be shown/hidden using business rules/JavaScript
    Available on phone | UCI Only | Web UI does not render on mobile
    Default Lookup View | Both | Web - Default is only used in the popup dialog
    Lookup View Selector Filtering | Both | Lookup view always included in filtered list in UCI
    Lookup View Columns | Both | Web UI only shows 3 columns inline; UCI shows 2 columns until the expand arrow is selected
    Display Search box in Lookup dialog | Web Only | UCI doesn't show the lookup dialog
    Inline View Search match highlighting | UCI Only | See notes below
    Only show records where... | Both |

    Tabs - Official Documentation on Tab Properties

    Property | Availability | Notes
    Show the label of this tab on the Form | Web Only | Tabs always have labels in the UCI since they are really actual tabs!
    Expand this tab by default | Web Only | Tabs do not collapse in the UCI
    Visible by Default | Both | UCI and Web both allow dynamic show/hide of tabs using JavaScript
    Tab Layout | Both | Tabs can have multiple columns in the UCI and Web UI
    Tab Column Width | Both | Tab column width can be controlled in both the UCI and Web UI
    Tab Columns | Both |

    Sections - Official Documentation on Section Properties

    Property | Availability | Notes
    Section Columns | Both | Multi-column sections will wrap in the UCI with the space available. See below.
    Available on Phone | UCI Only | Web UI is not rendered on the phone.
    Field Label Alignment | Web Only | UCI - labels are always centred
    Field Label Position | Web Only | UCI - labels will move to above the fields automatically when space is limited
    Field Label Width | Web Only | Field label width is a constant in the UCI
    Show the label of this section on the Form | Both |
    Show a line at top of the section | No effect | V9 Web and UCI sections have boxes around them

    Sub Grid - Official Documentation on Sub Grid Properties

    Property | Availability | Notes
    Display Search Box | Both |
    Display label on the Form | No effect |
    Panel header colour | Web Only |
    Show Chart Only | Web Only |
    Display Chart Selection | No effect |
    Display Index | Both |
    Automatically expand to use available space | Both | Sub-grids will expand in height until they reach the maximum records set and then start to page.

    Header/Footer

    Property | Availability | Notes
    Header Layout | Both | UCI will show a single row with overflow
    Footer Layout | Both | UCI will show on a single line with overflow

     

    Recently used records

    The recently used records list in lookups is interesting since it is only shown in the UCI. It is especially useful for lookups like 'Regarding'.

    Search Lookup Result Highlighting

    The UCI has an awesome feature of highlighting your search term in the in-line lookup control. One of the advantages of the UCI is that it works across all devices and so does not show a pop-up dialog like the Web UI does.

    Lookup Columns

    In the Web UI, you can only see up to 3 columns in the in-line lookup control – whereas the UCI shows 2 until you click the expand arrow, at which point all the view columns are shown.

    Tab & Section Columns

    The UCI allows full control over the columns in a section; however, unlike the Web UI it is responsive and will wrap the fields and sections underneath each other.

    Let's look at an example where we have a form design as shown below:

    When resizing the UCI to various form factors, the tab will dynamically re-arrange itself as follows:

    Full width- all the columns and sections are shown as they are laid out in the form designer:

    Narrower – wrapping section columns underneath each other:

    Narrowest – wrapping sections underneath each other:

    The Web UI will simply truncate the fields. This difference is one of the most important aspects of the UCI.

    Header/Footers

    The headers and footers in the UCI render very differently from the Web UI: the fields that do not fit are added to an overflow flyout, much the same as the Command Bar.

    Header overflow flyout:

    The fields that don't fit in the single header row are shown in the popup flyout:

     

    Footer Overflow flyout:

    The fields that don't fit into the footer single row are shown as a popup flyout.

    Well, that's it. Overall the UCI honours most properties in the form designer where applicable and it does a great job of being responsive enough to show on both mobile and desktop form factors.

  64. If you've used or read about the Unified Client, there are some notable omissions from the command bar. This is mostly because the Unified Client is still being developed and enhanced, and I suspect that the effort is being focused on stability and performance (both of which are very good already).

    The most notable omissions are:

    1. Advanced Find – Advanced Find has not been implemented yet in the UCI – but in the meantime, you can add the old Web UI advanced find to the UCI using my solution that I've already posted about.
    2. Run Workflow – there is no run workflow button on forms or views at the moment. In this post, I'll show you how to add buttons to run specific workflows using Ribbon Workbench Smart Buttons
    3. Run Report – as with Run Workflow, in this post I'll show you how to add a Run Report button.
    4. Run Dialog – I am not entirely sure that Dialogs will ever make their way to the Unified Client because they are deprecated.

    What are Smart Buttons?

    When you start the Ribbon Workbench, it scans for any smart button enabled solutions and displays them in your toolbox. Smart Buttons are essentially small pre-defined templates that can be added to your ribbon customisations and are defined by a smart button manifest in an installed solution. You can read more about them in my post explaining Smart Buttons in more detail.

    Installing the 'starter' Smart Button Solution

    Once you have installed the Ribbon Workbench, you can download and install my Smart Button Solution. You can obviously define your own smart buttons – but my solution provides some of the most common buttons that customisations need – namely Run Report and Run Workflow.

    1. Download the solution from https://github.com/scottdurow/RibbonWorkbench/releases
    2. Install the solution into Dynamics by importing it into the Dynamics Administration Solutions Area
    3. Create a solution that contains just the entity that you want to add smart buttons to
    4. Load the solution into the Ribbon Workbench

    Running Workflows in the UCI

    Once you have the smart buttons solution installed, you'll see them in the Ribbon Workbench toolbox:

    Adding a Run Workflow and Run Report Button

    You can simply drag the buttons from the smart button toolbox into the entity ribbons. You will be presented with a configuration box when you drop them onto the design surface:

    The newly created buttons will appear on your design surface and also appear as buttons and commands in the Solution Elements

    Ensuring that the Run Workflow and Run Report Buttons show in the Unified Client

    The key to showing the smart buttons in the Unified Client is the Enable and Display Rules. Currently, the Smart Button templates are set up for the Web UI – in the future, I will adapt these for the Unified Client, but during this transitional period it is up to you to decide if you want them to appear or not by managing which rules are included.

    To enable the 'Run Workflow' and 'Run Report' buttons you need to remove the display/enable rules from their respective commands:

    • Mscrm.HideOnModern
    • Mscrm.RunWorkflowPrimary
    • Mscrm.ReadReport

    You do this by selecting the command in the solution elements panel, and then right-clicking the rule and selecting 'Remove from Command'.

    You can then publish the solution and the buttons will appear in the UCI! You can also set some nice SVG images in the 'Modern Image' property of each button in the Ribbon Workbench by creating some SVG Webresources to use.

    Probably the biggest difference when developing on the UCI is that you need to take a mobile/tablet-centric approach to your customisations. Whilst the Run Workflow button will work on the Mobile/Tablet App – the Advanced Find and the Run Report buttons use a user interface that was never designed for this type of use and will only work when using the UCI via a web browser.

  65. If you've used the new Version 9 Unified Interface so far then you'll know that there is no advanced find button yet. I am sure that it won't be long until this feature is added in some form and indeed much of the time the Relevance Search finds what you need quickly.

    That said, I thought I'd demonstrate how to add command buttons to the top bar in the Unified Interface by using the example of adding an Advanced Find button similar to the Web Client.

    What is the Global Command Bar?

    The Unified Client has a similar looking top bar to the Web Client except it doesn't include the site map since it's moved to the left-hand navigation.

    We can now add command buttons to this top bar, to the right of the + button.

    How to add a Global Command Button

    1. Create a new solution and add the Application Ribbon using

    2. Add a new JavaScript web resource containing the following code:

    var UCIAdvancedFind = {
        // Opens the classic Web UI advanced find page in a popup window
        open: function () {
            var advFindUrl = Xrm.Utility.getGlobalContext().getClientUrl() + "/main.aspx?pagetype=advancedfind";
            Xrm.Navigation.openUrl(advFindUrl, { width: 900, height: 600 });
        },
        // Used by an EnableRule so the button is only shown on the desktop form factor
        isDesktop: function () {
            return Xrm.Utility.getGlobalContext().client.getFormFactor() == 1;
        }
    };
    

    The isDesktop function is used on an EnableRule to ensure that the Advanced Find button doesn't show on the phone/tablet client because it will not work.

    3. Add a new SVG icon to the solution to be used on the Global Command Button.

    4. Load up the new solution in the Ribbon Workbench and locate the Mscrm.GlobalTab group in the Home Command Bar:

    5. Drag a button into the Mscrm.GlobalTab isv area:

    6. Create an Enable Rule that calls some custom JavaScript:

    7. Create a command that is linked to the button that calls some JavaScript and has the Enable Rule:

    Once this is published it'll show up in the top bar:

    You can also add flyout menus to this bar. If there isn't enough room the Global Command Bar will show the overflow drop down menu like a flyout button.

    Remember – this is only available on the UCI – which is awesome by the way!

    You can download the managed solution for this button:

    UCIAdvancedFind_1_0_managed.zip (3.60 kb)

  66. I recently found an issue with Dynamics 365 Version 9 where a call to ITracingService.Trace(message) from inside a plugin caused the following exception:

    System.MissingMethodException: Method not found: '!!0[] System.Array.Empty()'
    

    Or

    System.MissingMethodException: Method not found:
    'System.String System.String.Format(System.IFormatProvider, System.String, System.Object)'.
     

    To resolve, I simply needed to replace it with TracingService.Trace(message,null);

    This issue is caused by compiling the Plugin using the 4.6.2 version of the .NET framework - so by compiling against 4.5.2 the exception no longer occurs.
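
    For context, here is a minimal sketch of the workaround inside a plugin. The class name and the traced message are purely illustrative; the only relevant part is how the tracing call is made.

    using System;
    using Microsoft.Xrm.Sdk;

    public class ExamplePlugin : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            // Get the tracing service from the service provider
            var tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

            // Compiled against .NET 4.6.x, a bare Trace(message) call can throw the
            // MissingMethodException above, because the compiler substitutes
            // Array.Empty<object>() for the omitted params argument:
            // tracingService.Trace("Something happened");

            // Workaround: pass the args array explicitly (or retarget the project to 4.5.2)
            tracingService.Trace("Something happened", null);
        }
    }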

    Hope this helps!

     

  67. Recently I've been getting asked a great deal about how to perform non-interactive authentication with the Dynamics 365 WebApi in a server to server authentication scenario. The most common scenario is that you have an external server application that needs to access the Dynamics 365 WebApi.

    The good news is that it's easy using Application Users. Here is a short video showing you how.

    https://www.youtube.com/watch?v=Td7Bk3IXJ9s

    The code in the video is as follows:

    public static async Task Auth()
    {
        string api = "https://org.crm11.dynamics.com/api/data/v9.0";
    
        AuthenticationParameters ap = AuthenticationParameters.CreateFromResourceUrlAsync(
                    new Uri(api)).Result;
    
        var creds = new ClientCredential("ApplicationID", "ClientSecret");
    
        AuthenticationContext authContext = new AuthenticationContext(ap.Authority);
        var token = authContext.AcquireTokenAsync(ap.Resource, creds).Result.AccessToken;
    
        using (HttpClient httpClient = new HttpClient())
        {
            httpClient.Timeout = new TimeSpan(0, 2, 0);
            httpClient.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", token);
    
            HttpResponseMessage response = await httpClient.GetAsync(api + "/contacts?$top=1");
        }
    }
    
    
    

     

    Hope this helps!  

  68. I recently blogged about the introduction of the script dependencies dialog in Version 9, where you can define the scripts that are needed by another script. Although it does not solve the asynchronous loading issue for forms, it makes it simpler to add scripts to a form since the dependencies will automatically be added for us.

    Up until now, there has been a common pattern when adding scripts to Ribbon Commands where the dependencies were added with a function call of 'isNaN'. It didn't have to be isNaN, but that is the most popular 'no operation' function call.

    With the introduction of the script dependencies, you only need to include the reference to ClientCommands.js and the ClientCommon.js will be loaded automatically for you first before the command is called.

    Awesome – we no longer need the isNaN approach that always felt like a 'hack'.

  69. A long time ago, Dynamics CRM introduced the concept of asynchronous loading of form web resources – this created a challenge when scripts depend on other scripts to be loaded first (e.g. inheritance or using a common type system library during script loading).

    Version 9 has introduced an interesting feature where you can define the dependencies that a specific script has on other scripts.

    Imagine you had 3 scripts

    • C.js requires B.js to load
    • B.js requires A.js to load

    You can now define these dependencies in the web resources dialog:

    I was hoping that by defining this dependency graph, the runtime would load them in the correct order like a module loader would – but having run some tests, the execution order still depends on the download speed and size of the scripts.

    Script load execution order C - B - A

    Script load execution order A - B - C

    Conclusion

    The Web resource dependency feature is awesome when you have many resources that are required during form events at runtime (e.g. onload, onchange etc.). You can simply add the single script to the form and the other dependencies will be loaded for you.

    At this time, it's not a solution for where you need those dependencies during script load execution.

  70. You've seen the executionContext in the event registration dialog and you might even have used it on occasion. Well, with the release of Dynamics 365 Customer Engagement Version 9, it has been elevated to be the replacement for Xrm.Page.

    The document describing the replacement for Xrm.Page details it as ExecutionContext.getFormContext – due to the capitalisation of ExecutionContext, it implies that this is a global object, when in fact it must be passed as a parameter to the event handler by checking the 'Pass execution context as first parameter' checkbox.

    Oddly - It's still unchecked by default given its importance!

    So why the change?

    Imagine that we want to create a client side event on the Contact entity that picks up the parent account's telephone number and populates the contact's telephone when 'Use Account Phone' is set to Yes. We add the event code to both the form field on change and the editable grid on change for the 'Use Account Phone' field.

    If we were to use the 'old' Xrm.Page.getAttribute method – it would work on the form, but it wouldn't work within the grid on change event handler.

    This is where the executionContext shines – it can provide a consistent way of getting to the current entity context irrespective of where the event is being fired from (form or grid).

    Show me the code!

    The following event handler is written using typescript – but it's essentially the same in JavaScript without the type declarations.

    The important bit is that the executionContext is defined as an argument to the event handler and attribute values are retrieved from the context returned by its getFormContext() method.

    static onUseCompanyPhoneOnChanged(executionContext: Xrm.Page.EventContext) {
        var formContext = executionContext.getFormContext();
        const company = formContext.data.entity.attributes.get<Xrm.Page.LookupAttribute>("parentcustomerid");
        const usePhone =
        formContext.data.entity.attributes.get<Xrm.Page.BooleanAttribute>("dev1_useaccounttelephone");
        const parentcustomeridValue = company.getValue();
    
        // If usePhone then set the phone from the parent customer
        if (usePhone.getValue() &&
            parentcustomeridValue != null &&
            parentcustomeridValue[0].entityType === "account") {
            const accountid = parentcustomeridValue[0].id;
            Xrm.WebApi.retrieveRecord("account", accountid, "?$select=telephone1")
                .then(result => {
                    formContext.data.entity.attributes.get("telephone1").setValue(result["telephone1"]);
                });
        }
    }
    

    Some Additional Notes:

    1. All the attributes that are used in the event must be in the subgrid row (parent customer attribute in this case).
    2. You can access the parent form container attributes using parent.Xrm.Page.getAttribute("name").getValue()

    Xrm.Page still works in Version 9 but it's a good idea to start thinking about giving executionContext the attention it deserves!

    Hope this helps!

  71. One of the strangest parts of the Dynamics CRM WebApi is the pluralisation of the entity names.

    In the old OData endpoint, the entity set name was <EntityLogicalName>Set – however in the OData 4.0 endpoint, the Logical Name is pluralised by using a simplistic set of rules which often results in the incorrect plural name being picked.

    This introduced a conundrum – Performance vs. correctness. Do we query the metadata for the Entity Set name at runtime – or use a duplicate set of over simplified rules in our JavaScript?

    The New Version 9 Client Side API

    The good news is that with version 9, the Xrm Api now supports:

    
    Xrm.Utility.getEntitySetName("contact")
    
    

    This will return "contacts" and so we can safely use this without worrying if the plural name is correct or not or indeed if it changes in the future.

    UPDATE: As it turns out - this method isn't actually documented and so we have to use the getEntityMetadata function to be fully supported - see https://docs.microsoft.com/en-us/dynamics365/customer-engagement/developer/clientapi/reference/xrm-utility/getentitymetadata

    You can use it as follows:

    Xrm.Utility.getEntityMetadata("lead", ["EntitySetName"])
        .then(function (m) {
            console.log(m.EntitySetName);
        });

     

    Hope this helps!

     

     

  72. It's a long time since I've used the old SharePoint list component and for the most part, I've not missed it. Server to Server integration is slick and just works.

    That said, the one thing that I do miss is support for folders - but whilst testing the new 9.0 Enterprise Edition I've noticed that folder support has been added in this latest release!

    I was so excited I just had to share a little video of what it looks like

    Folders are back

    Maybe in the release after this, we'll get support for content types and metadata properties!

     

  73. I’ve published version 1.0.9 of spkl to NuGet - this adds the following new features:

    1. Global optionset enum generation for early bound classes.
    2. Solution Packager support

    Global Optionset enum generation

    This was a tricky one due to the CrmSvcUtil not making it easy to prevent multiple enums being output where a global optionset is used, but you can now add the following to your spkl.json early bound section to generate global optionset enums.

    {
      "earlyboundtypes": [
        {
         ...
          "generateOptionsetEnums": true,
         ...
        }
      ]
    }
    

    In a future update, I’ll add the ability to filter out the enums to only those used.

    Solution Packager Support

    The solution packager allows you to manage your Dynamics metadata inside a Visual Studio project by extracting the solution into separate xml files. When you need to combine multiple updates from code commits, you can then use the packager to re-combine and import into Dynamics. To configure the solution packager task you can add the following to your spkl.json

    {
      /*
      The solutions section defines a solution that can be extracted to individual xml files to make
      versioning of Dynamics metadata (entities, attributes etc) easier
      */
      "solutions": [
        {
          "profile": "default,debug",
          /*
          The unique name of the solution to extract, unpack, pack and import
          */
          "solution_uniquename": "spkltestsolution",
          /*
          The relative folder path to store the extracted solution metadata xml files
          */
          "packagepath": "package",
          /*
          Set to 'true' to increment the minor version number before importing from the xml files
          */
          "increment_on_import": false
        }
      ]
    }

    There are two .bat files provided that will call:

    spkl unpack

    This will extract the solution specified in the spkl.json into the packagepath as multiple xml files

    spkl import

    This will re-pack the xml files and import into Dynamics - optionally increasing the version number of the solution to account for the new build.

  74. As you probably know by now, when you create Business Process Flows in 8.2+ you'll get a new custom entity that is used to store running instances (if not then read my post on the new Business Process Flow entities).

    When your orgs are upgraded to 8.2 from a previous version then the business process flow entities will be created automatically for you during the upgrade. They are named according to the format:

    new_BPF_<ProcessId>

    Notice that the prefix is new_. This bothered me when I first saw it because if you create a Business Process Flow as part of a solution then the format will be:

    <SolutionPrefix>_BPF_<ProcessId>

    Here lies the problem. If you import a pre-8.2 solution into an 8.2 org, then the Business Process Flows will be prefixed with the solution prefix – but if the solution is in-place upgraded then they will be prefixed with new.

    Why is this a problem?

    Once you've upgraded the pre-8.2 org to 8.2, the Business Process Flow entities will be named new_ and included in the solution. When you then import an update into the target org – the names will conflict with each other and you'll get the error:

    "This process cannot be imported because it cannot be updated or does not have a unique name."

    Source 8.1 Org (solution with myprefix_) | Empty 8.2 target Org
    Export the solution | 
     | Import – BPF entity created: myprefix_BPF_xxx
    Upgraded to 8.2 – BPF entity created: new_BPF_xxx | 
    Export the solution again | 
     | Import – "This process cannot be imported because it cannot be updated or does not have a unique name." (new_BPF_xxx conflicts with myprefix_BPF_xxx)

     

    How to solve

    Unfortunately, there isn't an easy way out of this situation. There are two choices:

    1. If you have data in the target org that you want to keep – you'll need to recreate the BPFs in the source org so that they have the myprefix_ - you can do this by following the steps here - https://support.microsoft.com/en-us/help/4020021/after-updating-to-dynamics-365-mismatched-business-process-flow-entity
    2. If you are not worried about data in the target org you can delete those BPFs and re-import the solution exported from the upgraded 8.2 source org.

    The good news is that this will only happen to those of you who have source and target orgs upgraded at different times – if you upgrade your DEV/UAT/PROD at the same time you'll get BPF entities all prefixed with new_

    @ScottDurow

  75. This is the third video in a series showing you how to quickly setup VSTS Continuous Integration with spkl.

    Watch on YouTube

    1. Learn more about the spkl task runner

    2. Learn how to deploy plugins with the spkl task runner

    3. Learn how to deploy webresources with the spkl task runner

  76. This is the second video in a series showing you how to get up and running with spkl with no fuss!

    Watch on YouTube

    1. Learn more about the spkl task runner

    2. Learn how to deploy plugins with the spkl task runner

  77. Following on from my last blog post on the spkl Task Runner, this is the first video in a series showing you how to get up and running with spkl with no fuss!

     

  78. Why?

    I've used the Dynamics Developer Toolkit since it was first released by MCS for CRM4! I love the functionality it brings; however, the latest version is still in beta, it isn't supported on VS2017 and there isn't a date when it's likely to be either (yes, you can hack it to make it work but that's not the point 🙂).

    Rather than using an add-in Visual Studio project type, I've been attracted by the VS Code style simple project approach and so I decided to create a 'no-frills' alternative that uses a simple json config file (and that can be used in VS2017).

    What?

    1. Deploy Plugins & Workflow Activities - Uses reflection to read plugin registration information directly from the assembly. This has the advantage that the plugin configuration is in the same file as the code. You can use the 'instrument' task to pull down the plugin configuration from Dynamics and add the metadata to your classes if you already have an existing project.
    2. Deploy Web Resources – deploy webresources from file locations defined in the spkl.json configuration. You can use the 'get-webresources' task to create the spkl.json if you already have webresources deployed.
    3. Generate Early Bound Types – Uses the spkl.json to define the entities to generate each time the task is run to make the process repeatable.
    4. Profile management – An optional profile can be supplied to select a different set of configuration from spkl.json. E.g. debug and release build profiles.

    How?

    Let's assume you have a project in the following structure:

    Solution
        |-Webresources
        |    |-html
        |    |    |-HtmlPage.htm
        |    |-js
        |    |    |-Somefile.js
        |-Plugins
        |    |-MyPlugin.cs
        |-Workflows
        |    |-MyWorkflowActivity.cs
    

    On both the Plugin and Workflow projects, run the following from the NuGet Console:

    Install-Package spkl

    This will add spkl to the packages folder along with CrmPluginRegistrationAttribute.cs, which is used to mark up your classes so that spkl can deploy them. Some simple batch files are also included that you can use to get started.

    If you already have plugins deployed, you can run the following command line in the context of the Plugins folder:

    spkl instrument

    This will prompt you for a Dynamics Connection, and then search for any deployed plugins and their matching .cs file. If the MyPlugin.cs plugin is already deployed it might end up with the following Attribute metadata:

    [CrmPluginRegistration("Create","account",
        StageEnum.PreValidation,ExecutionModeEnum.Synchronous,
        "name,address1_line1", "Create Step",1,IsolationModeEnum.Sandbox,
        Description ="Description",
        UnSecureConfiguration = "Some config")]

    A spkl.json file will be created in the project directory similar to:

    {
      "plugins": [
        {
          "solution": "Test",
          "assemblypath": "bin\\Debug"
        }
      ]
    }
    

    If you now build your plugins, you can then run the following to deploy

    spkl plugins

    You can run instrument for the workflow project using the same technique which will result in code similar to the following being added to your workflow activity classes:

    [CrmPluginRegistration(
            "WorkflowActivity", "FriendlyName","Description",
            "Group Name",IsolationModeEnum.Sandbox)]
    

    …and then run the following to deploy:

    spkl workflow          

    To get any currently deployed webresources matched to your project files you can run the following from the Webresource project folder:

    spkl get-webresources /s:new           

        Where new is the solution prefix you've used

    This will create a spkl.json similar to the following:

    {
      "webresources": [
        {
          "root": "",
          "files": [
            {
              "uniquename": "new_/js/somefile.js",
              "file": "js\\somefile.js",
              "description": ""
            },
            {
              "uniquename": "new_/html/HtmlPage.htm",
              "file": "html\\HtmlPage.htm",
              "description": ""
            }
          ]
        }
      ]
    }
    

    You can then deploy using:

    spkl webresources

    Profiles

    For Debug/Release builds you can define multiple profiles that can be triggered using the /p:<profilename> parameter.

    {
      "plugins": [
        {
          "profile": "default,debug",
          "assemblypath": "bin\\Debug"
        },
        {
          "profile": "release",
          "solution": "Test",
          "assemblypath": " bin\\Release"
        }
      ]
    
    }
    

    The default profile will be used if no /p: parameter is supplied. You can specify a profile using:

    spkl plugins /p:release        

    Referencing a specific assembly rather than searching the folder

    If you have multiple plugins in a single deployment folder and you just want to deploy one, you can explicitly provide the path rather than using the folder search. E.g.

    {
      "plugins": [
        {
          "assemblypath": "bin\\Debug\MyPlugin.dll"
    

    Adding to a solution

    If you'd like to automatically add the items deployed to a solution after deployment you can use:

    {
      "webresources": [
        {
          "root": "",
          "solution": "Test",

    Combining spkl.json

    Perhaps you want to have a single spkl.json rather than multiple ones per project. You can simply add them all together:

    {
      "webresources": […],
      "plugins": […]
    }
    

    Multiple project deployments

    Since the spkl.json configuration files are searched from the current folder, you can deploy multiple plugins/webresources using a single spkl call from a root folder.

    I'll be updating the github documentation page as things move forwards.

  79. You probably already know that I'm a big fan of the Data Export Service. The single fact of having a 'near real time' replica of your data in a SQL Azure Database to query in any way you want is simply amazing.

    Today I came across an interesting limitation with Calculated Fields. Although Calculated Fields are created in the Dynamics database as SQL Server computed columns, they are output in the Replica Database as standard fields.

    This has a rather inconvenient side-effect when you have calculated fields that are linked to either date/time or a related record. Since the Azure Replica sync is event based, when a related record is updated there is no corresponding event on the referencing record that contains the calculated field, so it does not get updated. Likewise, if a calculated field changes depending on the date/time, there is no event that triggers the Azure replica to be updated. This means that although calculated fields may be correct at the time the record was created, subsequent updates can make the field become stale and inaccurate.

    Lesson learned - you cannot guarantee the accuracy of calculated fields in the Azure Replica if they contain:

    1. The Now() function
    2. A related record field (e.g. accountid.name)

    Interestingly, calculated fields that use data on the same record do get updated, so the event integration must do a compare of any calculated fields to see if they have changed.

    @ScottDurow

  80. The new business process flow designer in Dynamics 365 is lovely! However, I'm not going to talk about that since it's rightly had lots of love by others already.

    For me the biggest change in Dynamics 365 is the fact that running Business Process Flows (BPFs) are now stored as entity records. Instance details are no longer held as fields on the associated record. I first visited this topic back in the CRM2013 days with the introduction of Business Process Flows, where I described how to programmatically change the process.

    Previously, when a BPF was started, all of the state about its position was stored in fields on the record it was run on:

    • Process Id: The ID of the BPF running
    • Stage Id: The ID of the BPF step that was active
    • Traversed Path: A comma separated string listing the GUIDs of current path of steps taken through the BPF. This is to support BPFs with branching logic.

    With the new Dynamics 365 BPFs, each process that is activated automatically has an entity created that looks just like any other custom entity. The information about the processes running on any record is now stored as instances of this entity, with an N:1 relationship to the parent record and any subsequent related entities. This BPF entity has similar attributes to those that were stored on the parent entity, but with the following additions (a query sketch follows the list below):

    • Active Stage Id: The ID of the BPF step that is active – replaces the Stage Id attribute.
    • Active Stage Started On: The Date Time that the current step was started on – this allows calculation of the amount of time it has been active for
    • State & Status: Each BPF instance has its own state that allows finishing and abandoning before other BPFs are run.

       

    In addition to making migration of data with running BPFs a little easier - this approach has the following advantages:

    1. You can control access to BPFs using standard entity role privileges
    2. You can have multiple BPFs running on the same record
    3. You can see how long the current stage has been active for
    4. You can Abandon/Finish a BPF

    BPF Privileges

    Prior to Dynamics 365, you would have controlled which roles could access your BPF using the Business Process Flow Role Check list. In Dynamics 365, when you click the 'Enable Security Roles' button on your BPF you are presented with a list of Roles that you can open up and define access for in the 'Business Process Flow' tab:

    Multiple BPFs on the same record

    Switching BPFs no longer overwrites the previous active step – meaning that you can 'switch' back to a previously started BPF and it will carry on from the same place. This means that BPFs can run in parallel on the same record.

    • If a user does not have access to the running BPF they will see the next running BPF in the list (that they have access to).
    • If the user has no access to any BPF that is active – then no BPF is shown at all.
    • If user has read only access to the BPF that is running, then they can see it, but not change the active step.
    • When a new record is created, the first BPF that the user has create privileges on is automatically started.

    When you use the Switch Process dialog, you can now see if the Business Process Flow is already running, who started it and when it was run.

    NOTE: Because the roles reference the BPF entities – you must also include the system generated BPF entities in any solution you intend to export and import into another system.

    Active Step timer

    Now that we have the ability to store additional data on the running BPF instance, we have the time that the current step was started on. This also means that when switching between processes, we can see the time spent in each step in parallel running BPFs.

    Abandon/Finish

    Since each BPF has its own state fields, a business process can be marked as Finished – or Abandoned at which point it becomes greyed out and read only.

    When you 'Abandon' or 'Finish' a BPF it is moved into the 'Archived' section of the 'Switch Process' dialog.

    NOTE: You might think that this means that you could then run the BPF a second time, but in fact there can only be a single instance per BPF – and you must 'Reactivate' it to use it again.

    • Reactivating an Abandoned BPF will start at the previously active step
    • Reactivating a Finished BPF will start it from the beginning again.

    Example

    Imagine your business has a sales process that requires approval by a Sales Manager. At a specific step in that sales process you could run a workflow to start a parallel BPF that only the Sales Managers have access to. When they view the record, making the Approval BPF higher in the ordered list of BPFs will mean that they will see the Approval BPF instead of the main Sales Process. They can then advance the steps to 'Approved' and mark it as Finished. This could then in turn start another Workflow that updates a field on the Opportunity. Using this technique in combination with Field Level Security gives a rather neat solution for custom approval processes.

    When I first saw this change I admit I was rather nervous because it was such a big system change. I've now done a number of upgrades to Dynamics 365 and the issues I found have all been resolved. I'm really starting to like the new possibilities that Parallel BPFs brings to Dynamics 365.

    @ScottDurow

  81. There is one certainty in the world and that is that things don't stay the same! In the Dynamics 365 world, this is no exception, with new features and SDK features being released with a pleasing regularity. Writing 'revisited' posts has become somewhat of a regular thing these days.

    In my previous post on this subject back in 2013 we looked at how you could use a connection dialog or connection strings to get a service reference from the Microsoft.Xrm.Client library and how it can be used in a thread safe way.

    Microsoft.Xrm.Tooling

    For a while now there has been a replacement for the Microsoft.Xrm.Client library – the Microsoft.Xrm.Tooling library. It can be installed from NuGet using:

    Install-Package Microsoft.CrmSdk.XrmTooling.CoreAssembly
    

    When you use the CrmServerLoginControl, the user interface should look very familiar because it's the same one that is used in all the SDK tools, such as the Plugin Registration Tool.

    The sample in the SDK shows how to use this WPF control.

    The WPF control works slightly differently to the Xrm.Client ShowDialog() method – since it gives you much more flexibility over how the dialog should behave and allows embedding inside your WPF application rather than always having a popup dialog.

    Connection Strings

    Like the dialog, the Xrm.Tooling also has a new version of the connection string management – the new CrmServiceClient accepts a connection string in the constructor. You can see examples of these connection strings in the SDK.

    CrmServiceClient crmSvc = new CrmServiceClient(ConfigurationManager.ConnectionStrings["Xrm"].ConnectionString);
    

    For Dynamics 365 online, the connection would be:

    <connectionStrings>
        <add name="Xrm" connectionString="AuthType=Office365;Username=jsmith@contoso.onmicrosoft.com; Password=passcode;Url=https://contoso.crm.dynamics.com" />
    </connectionStrings>
    

    Thread Safety

    The key to understanding performance and thread safety of calling the Organization Service is the difference between the client proxy and the WCF channel. As described by the 'Improve service channel allocation performance' topic from the best practice entry in the SDK, the channel should be reused because creating it involves time consuming metadata download and user authentication.

    The old Microsoft.Xrm.Client was thread safe and would automatically reuse the WCF channel that was already authenticated. The Xrm.Tooling CrmServiceClient is no exception. You can create a new instance of CrmServiceClient and existing service channels will be reused if one is available on that thread. Any calls to the same service channel will be locked to prevent threading issues.

    To demonstrate this, I first used the following code that ensures that a single CrmServiceClient is created per thread.

    Parallel.For(1, numberOfRequests,
        new ParallelOptions() { MaxDegreeOfParallelism = maxDop },
        () =>
        {
            // This is run for each thread
            var client = new CrmServiceClient(username,
                   CrmServiceClient.MakeSecureString(password),
                   "EMEA",
                   orgname,
                   useUniqueInstance: false,
                   useSsl: false,
                   isOffice365: true);
    
            return client;
        },
        (index, loopState, client) =>
        {
            // Make a large request that takes a bit of time
            QueryExpression accounts = new QueryExpression("account")
            {
                ColumnSet = new ColumnSet(true)
            };
            client.RetrieveMultiple(accounts);
            return client;
        },
        (client) =>
        {
        });
    

    With a Degree of Parallelism of 4 (the number of threads that can be executing in parallel) and a request count of 200, there will be a single CrmServiceClient created for each thread and the fiddler trace looks like this:

    Now, to prove that the CrmServiceClient handles thread concurrency automatically, I moved the instantiation into the loop so that every request would create a new client:

    Parallel.For(1, numberOfRequests,
        new ParallelOptions() { MaxDegreeOfParallelism = maxDop },
        (index) =>
        {
            // This is run for every request
            var client = new CrmServiceClient(username,
                   CrmServiceClient.MakeSecureString(password),
                   "EMEA",
                   orgname,
                   useUniqueInstance: false,
                   useSsl: false,
                   isOffice365: true);
            // Make a large request that takes a bit of time
            QueryExpression accounts = new QueryExpression("account")
            {
                ColumnSet = new ColumnSet(true)
            };
            client.RetrieveMultiple(accounts);
        });
    

    Running this still shows a very similar trace in fiddler:

    This proves that the CrmServiceClient is caching the service channel and returning a pre-authenticated version per thread.

    In contrast to this, if we set the useUniqueInstance property to true on the CrmServiceClient constructor, we get the following trace in fiddler:

    So now each request is re-running the channel authentication for each query – far from optimal!

    The nice thing about the Xrm.Tooling library is that it is used exclusively throughout the SDK – where the old Xrm.Client was a satellite library that came from the legacy ADX portal libraries.

    Thanks to my friend and fellow MVP Guido Preite for nudging me to write this post!

    @ScottDurow

  82. Dynamics 365 has brought with it a new and amazing feature called the 'Relationship Assistant'. It is part of a preview feature (unsupported and US only) called 'Relationship Insights' which promises to bring some amazing productivity tools to the Dynamics 365 platform.

    Relationship Assistant shows actionable cards in both the web client and mobile client using a mix of both plain old filter conditions and machine learning.

    Machine Learning Cards

    One of the most exciting parts of the Relationship Assistant is the use of machine learning to examine the contents of your emails and predict what you need to do next:

    Customer Question Card

    Issue Detected Card

    'Plain old query' Cards

    Whilst the machine learning aspects may be out of our reach to us mere mortals at this time, the cards that are based on simpler filter conditions such as 'Due Today' and 'Meeting Today' are items that can be easily shown in a dashboard without this preview feature. Here are some examples of information that can be gained from simple date queries:

    Due Today Card

    Meeting Today Card

    Missed Close Date Card

    (Images taken from the Relationship Assistant Card reference - https://www.microsoft.com/en-us/dynamics/crm-customer-center/preview-feature-action-cards-reference.aspx)

    Create your own 'Relationship Assistant' Dashboard

    The main challenge with producing the information shown above is the date aspect of the query. We can easily show a single set of records that use the 'Next X days' type of operator, but you could not easily use today's date in a dashboard chart – at least not until CRM2015 introduced calculated fields. Now it is rather easy to produce a dashboard similar to the following:

The key feature of dashboards is that they can be tailored to show your own data, which can be drilled into to show the underlying records. This is comparable to the 'actionable' aspect of the Relationship Assistant, where you could drill into the tasks due today and open them to work upon.

Notice the field 'Due' that can have the value 'Completed', 'Due Next 7 Days', 'Due Today', 'Overdue', or 'Scheduled'. This field isn't stored as a persistent field in the database, but instead it is a calculated field so there are no nightly jobs or workflows required to update a field based on the current date.

Adding a 'Due Status' field to an Activity Entity

  1. Create a solution with the Activity Entity that you want to add the 'Due Status' field to
  2. Create a new field called 'Due Diff' – this will give us a field that shows the number of days before/after the activity due date.
  3. Click 'Edit' and type the expression
    DiffInDays(scheduledstart, Now())
    Note: This assumes that this is an Appointment and you want to use the scheduledstart date to control the due date.
  4. Add a new global Option Set that holds the possible values for the Due status
  5. Create a new Calculated Option Set field called 'Due' on the Activity record. Use the Existing Option Set created above.
  6. Click 'Edit' on the Calculated Field type and add the following logic (an illustrative sketch is shown after this list):
  7. Create a chart something like:
  8. Publish and add the charts to a dashboard!
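
The exact conditions from the original calculated field screenshot are not reproduced here, so the following is only an illustrative sketch of the kind of branching you would configure in the calculated field editor, written as C# purely for clarity. The thresholds and the sign convention of the 'Due Diff' value are assumptions.

    // Illustration only - the real logic is configured in the calculated field editor.
    // 'daysUntilDue' stands in for the 'Due Diff' value (assumed positive while the due
    // date is still in the future) and 'isCompleted' for the activity state.
    static string GetDueStatus(int daysUntilDue, bool isCompleted)
    {
        if (isCompleted) return "Completed";
        if (daysUntilDue < 0) return "Overdue";
        if (daysUntilDue == 0) return "Due Today";
        if (daysUntilDue <= 7) return "Due Next 7 Days";
        return "Scheduled";
    }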

Of course other more complex options exist but with all the excitement and awesomeness of Machine Learning it is important to remember that we can achieve great things with just the right kind of queries, charts and dashboards!

Hope this helps!

  • Happy 2017! This new year promises to be really exciting in the world of Dynamics 365. The spring release is going to be a big one for developers and I'm really looking forward to it.

    In the meantime, I've released a new beta version of the Ribbon Workbench that includes the following features:

    1. Copy and Paste of Commands, Buttons & Enable/Disable Rules
    2. Free text JSON Clipboard – allowing you to see what is on the clipboard and potentially copy/paste between instances and even make text changes before pasting.
    3. Full support for Smart Buttons

    You can grab the latest beta of the Ribbon Workbench that support smart buttons from https://www.develop1.net/public/rwb/ribbonworkbench.aspx#DOWNLOAD

    Before I go into more detail about the JSON Clipboard, I wanted to tell you about Smart Buttons because I'm really excited about the possibilities that they may bring.

    So what are Smart Buttons?

    Smart Buttons are a concept I've been toying with since back in the Silverlight days of the Ribbon Workbench. They basically are buttons in the toolbox that add a predefined template of Ribbon Customisations that can be parameterised and contain references to prebuilt JavaScript web resources.

    When you add a standard button using the Ribbon Workbench you get to set the label, image and then reference a command that can in turn reference some JavaScript that you create separately.

    With a smart button, the JavaScript is already built for you and already installed in your Dynamics 365 organisation. Any solution that contains smart button pre-built JavaScript must also contain a Smart Button manifest file that tells the Ribbon Workbench what's available and defines the templates it exposes.

    Typical uses of Smart Buttons are:

    Developers

    1. Install a Smart Button solution in Development Org (e.g. the Develop1 Smart Button Solution described later)
    2. Install Ribbon Workbench in Development Org
    3. Add Smart Button to Development Org using Ribbon Workbench
    4. Deploy Smart Button solution to Pre-Production/Production – The Ribbon Workbench would not need to be installed
    5. Deploy solution from Development (containing smart button configuration)

    Lead Developers

    1. Create a standard way of performing certain actions from Ribbon Buttons based on common library code they have created.
    2. Create a smart button manifest webresource and include that in their solution as well.
    3. Instruct Developers to use Smart buttons in the Ribbon Workbench rather than adding ribbon customisations manually.

    Third Party ISV's

    1. Create an ISV solution that allows functionality to be invoked in custom entities (I will be publishing a smart button manifest for my network view solution so you can add visualise buttons to any entity).
    2. Rather than publishing instructions on how to add buttons manually, create a smart button manifest that is picked up by the Ribbon Workbench
    3. When installed on the ISV customer's organisation, the Ribbon Workbench scans the organisation for any smart button manifest files and picks up the ISV's custom buttons that are available for use.

    I have built a simple Smart Button solution as a preview of this feature that provides the following:

    • Run Report – Allows you to select a report and create a short cut on any record form.
    • Run Workflow – Allows you to select a workflow and create a short cut to run it on a record form or sub-grid. You can also specify a call-back function to call when it is completed so that you can refresh the form or perform some other action. This is one of the most requested items on the Ribbon Workbench's uservoice site.
    • Run Dialog – This is similar to the Run Workflow button but starts a Dialog instead. Useful for creating your own custom 'Close Opportunity' forms.    
    • Quick JS – Sometimes you want to run some simple JavaScript (such as setting a drop down value and saving the form) when you click a command bar button. Quick JS allows you to simply create a button and directly specify the JavaScript without creating a separate Webresource to hold the code.


    You can download this preview Smart Button solution and give it a try from github: https://github.com/scottdurow/RibbonWorkbench/releases

    Creating your own Smart Buttons!

    In the next post I will describe the copy and paste JSON clipboard and how to use this to create a smart button manifest for your own solutions.

     

  • If you've moved to Dynamics CRM/365 Online then the likelihood is that you've come up against the limitation of not being able to query the SQL database directly to perform more complex reporting or for custom integrations. Many on-premises deployments rely on querying the backend databases, and in the past this has been a blocker to moving to the cloud – or at least it has meant a complex and costly integration to copy the data from Dynamics 365 to an on-prem SQL database.

    The introduction of the Data Export Service is a real game changer, with the possibility to replicate your data from Dynamics CRM/365 Online to an Azure SQL database in your own Azure subscription. Once you have your data in a SQL database you can then use Power BI, integrate with other systems and create a data warehouse. I've found that the speed of the replication is impressive, being minutes/seconds and not hours.

    There are a number of prerequisites to enabling this which you can read about on MSDN: https://technet.microsoft.com/en-us/library/mt744592.aspx

    • Azure Active Directory linked to Office 365
    • Azure SQL Database and user with correct permissions
    • Azure KeyVault created (using PowerShell script provided)
    • Dynamics CRM Online 8.1 or later
    • Data Export Service solution installed from App Source
    • Change tracking enabled for custom entities you want to sync
    • You must be a System Administrator to create the export profiles

    The PowerShell script requires that you install the Azure cmdlets – see https://docs.microsoft.com/en-gb/powershell/azureps-cmdlets-docs/

    Here is a video that demonstrates this new service and how to set it up

  • What is great to see with Dynamics 365 is the concept of the Application User. This means that external systems can be connected using a 'Service Account' style user rather than the previous technique of providing usernames and passwords.

    One of the knock-on effects of this is the new 'Application User' form on the User record. I've had some questions from people who see this form by default. You simply need to change the form back to the standard 'User' form:

    If you try to save the user record on this form you will receive the error 'You must provide a value for Application ID' since this is a mandatory field.

    If you want to learn more about the Multi-tenant server to server application user integration scenarios that are used by App Source, then check out the following article:

    https://msdn.microsoft.com/en-us/library/mt790170.aspx

  • This is a question on the lips of many after the recent Dynamics 365 announcements.

    Watch my short video to see how Dynamics CRM sits alongside the Common Data Model: https://youtu.be/fYIPXx9zjj8

  • Those that regularly read my blog and follow my work with Sparkle XRM will know I'm a massive fan of using Fiddler to debug JavaScript. One of the most productive 'superpowers' that Fiddler gives us is the ability to change JavaScript on disk and not have to upload/publish – we can simply refresh the form and the new script will be used.

    The Interactive Service Hub (ISH) was first introduced in CRM2016 and has been improved with more support for customisations in CRM2016 Update 1.

    I see the purpose of the ISH at this stage is not to replace the main User Interface but rather as a testing ground for the principle of bringing the MoCA mobile/tablet native client platform to the web client. I think of it similar to the introduction of the Polaris UI back in CRM2011 – there are many similarities in that they both only support a limited set of entities and have limited customisations features. The main difference is that the ISH is being incrementally improved with each release, where the Polaris UI was more of a throw away proof of concept. At this stage the ISH is only supporting 'case' oriented operations but I'm sure it'll eventually graduate to support all Sales, Service and Marketing features.

    So why the new approach to the UI?

    Surely it would be better to improve the existing UI incrementally rather than replace it?

    One of the key drivers for the Dynamics CRM Team over the last few releases has been 'configure once deploy everywhere'. This allows us to configure business rules that can be run on all devices/platforms reliably without having to perform separate testing and perhaps re-write to target different clients. The maintenance of having multiple user interface platforms is considerable so it's a natural step to try and achieve some degree of convergence between the mobile/tablet/web/outlook interfaces.

    A little background on how the ISH loads metadata

    I think we are all fairly comfortable with the normal Web 2.0 paradigm of loading resources: with each operation the client requests an HTML page and then the browser requests all the additional resources (JavaScript, CSS etc.) that are referenced by that page. JavaScript can then make additional XHR/Ajax requests to the server to display further dynamic content. The CRM2016 UI is very similar on this front, as can be seen below. I documented the CRM2013 script loading sequence, which hasn't significantly changed even in CRM2016.

    Page Load Sequence Diagram

    Each time you open the web client, homepage.aspx or Main.aspx has to request the metadata for the specific resource (view or form) and then combine it with the requested data. Although there is browser and server-side caching in place, this is still costly in terms of the requests and the rendering overhead in the browser. The 'turbo forms' update in CRM2015 Update 1 has really helped with the speed of this since it minimises the resources that are requested with each navigation; however, fundamentally it is still limited by the page-per-browser-request architecture.
    

    ISH works very differently…

    The ISH is more what we would call a 'single page application'. The sequence is very different in that there is an initial download of metadata and then subsequently all user interactions only request the actual data using the Organization.svc and OrganizationData.svc.

    New Page Load Sequence Diagram

    This single-page approach has the advantage that it makes navigation super slick, but with the rather annoying drawback that there is an initial wait each time the ISH is opened while the metadata changes are checked. The first time you open the ISH all the metadata is downloaded, but from then on only the differences since the last open are downloaded. If there haven't been any changes then it's super quick because all the metadata is stored in the browser's Indexed Database (https://www.w3.org/TR/IndexedDB/), but if you've done a publish then the next open can take a while. Furthermore, the new metadata won't be downloaded until you close and re-open the ISH - this is different to the Web 2.0 UI and can lead to the client working with stale metadata for a time. The Indexed Database is one of the significant differences between an HTML5 single-page app and a more traditional Web 2.0 architecture.
    

    Note: For now the ISH mostly uses the SOAP/Xml based Organization.svc rather than the new JSON based Web API.

    The speed of the metadata sync can be helped further by using the 'Prepare Client Customizations' button on the solution since this will pre-prepare the download package rather than waiting for the first person to open the ISH to detect the changes in the metadata. The difference between the MoCA client and the ISH is that the MoCA asks if the user wants to download the updates – presumably because you may be on a low bandwidth connection.

    So where does that leave us with respect to JavaScript debugging?

    If you've been keeping up so far (you have, right?) then you'll realise that the metadata (this includes JavaScript) is all stored in the browser's Indexed DB rather than relying on the browser cache. As a result, we can't simply prevent the files from being cached and download the latest version with each page load as we used to do with Fiddler. We're back to the uncomfortable debug cycle of having to make a change to a JavaScript web resource, upload it to CRM, publish, close and re-start the ISH - urgh!

    To preserve our collective sanity, I've created a little debug utility solution that you can use to clear the cache of specific web resources so that you can quickly make changes to JavaScript on your local disk and then reload it in the ISH without doing a full publish cycle. Here is how:

    1. Install the latest build of SparkleXRM
    2. Install the Interactive Service Hub Debug Helper Solution
    3. Setup Fiddler's Auto Responder to point to your local webresource file as per my instructions.
    4. Start the ISH to load your JavaScript
    5. Make a local change to your JavaScript
    6. Open the ISH Debug Utility Solution configuration page, enter the name of your script, then click 'Refresh JavaScript Webresource'. Note: you can enter only part of the web resource name and it will use a regular expression to match.
    7. Use Ctrl-F5 on your ISH Page and when re-loaded the Web Resource will use the new version since the debug utility has forced a new download and updated the Indexed DB storage.

    Sweet – but what about the MoCA client?

    Obviously this technique is not going to work for the mobile client running on an iPad, iPhone etc. The good news is that you can run the MoCA client in the Chrome browser in the same way you can run the ISH – just navigate to:

    <crmserver>/nga/main.htm?org=<orgname>&server=<crmserver>

    Note: You must be pre-authenticated for this to work.

    OnPrem

    http://dev03/nga/main.htm?org=Contoso&server=http://dev03/Contoso

    OnPrem IFD

    https://myorg.contoso.com/nga/main.htm?org=myorg&server=https://myorg.contoso.com

    Online

    https://myorg.crm4.dynamics.com/nga/main.htm?org=myorg&server=https://myorg.crm4.dynamics.com

    Since the ISH and the MoCA client are built using the same platform, you can now use the ISH Debug Helper from the same browser session to perform the same script refresh! This is actually an excellent way of testing out your scripts on the MoCA client! For more information, check out the comments in this tip of the day.

    Looking forwards to the future

    I'd really encourage you to check out the ISH and use the new CRM Suggestions site to record anything you find that you would like to see in subsequent releases. Whilst I suspect that the existing 'refreshed' UI will be available for some releases to come, it is likely at some point to become the new 'legacy' UI, with on-going investment being made in the ISH-style UI.

    In part 2 we'll look at some limitations of the ISH and how to get around them.

    Any comments, just tweet me! @ScottDurow

  • If you have tried to install Project Service recently you might have found that it's disappeared from the 'preferred solution' list in the Office 365 admin portal. So where has it gone?!

    Although the CRM Online Help hasn't yet been updated to reflect the fact, it has now moved to the recently released and very exciting AppSource!

    On the 11th of July Michael Kushinsky of Microsoft helpfully posted in the dynamics community the new instructions on how to install and upgrade from a trial installation – and there will be an official blog post about it soon.

    I thought I would quickly show you how easy it is to use App Source to install PSA:

    1. Select Settings -> Dynamics Marketplace

    2. Search for 'Project Service' and click 'Try'

    3. Accept the T&Cs (you always read them in full right!)

    4. Wait for the solution to install and you're off!

    I can't wait to see AppSource grow and mature!

  • Since the release of the Ribbon Workbench 2016 I am in the process of updating the documentation to reflect the new user interface.

    This video series on mastering the Ribbon Workbench 2016 will take you through from installing to performing advanced customisations.

    Part 1 - Downloading & Installing

    Part 2 - User Interface Overview

    Part 3 - Hide Actions

    Part 4 - Moving Managed Buttons to a Flyout Menu

    Part 5 - Hiding buttons conditional to the form context

    More to follow!...

  • I'm pleased to announce that in addition to the managed solution that you can install inside Dynamics CRM, the Ribbon Workbench 2016 is also available in the XrmToolbox (if you hadn't already noticed!).

    When you open the XrmToolbox you will see that there is the Ribbon Workbench available for download in the plugin store.

    Keep checking out the store because tools are being added regularly by some great plugin authors.

    Thank you to all those who are helping to beta-test, I have been really encouraged by your comments and suggestions. The re-write of the Ribbon Workbench (to remove its dependency on Silverlight) and the XrmToolbox version has been on my 'to-do' list for much longer than I would have liked and so I'm particularly pleased with this release.

    The new Ribbon Workbench 2016 solution installs alongside the older version – so if you have an upgraded org you might get them all sitting there on your command bar. 

    I'm keeping them there on one of my organisations for posterity, but you can safely uninstall older versions without losing any of your customisations.

    Here is a version compatibility matrix for users of the older versions:

    | Name                      | Ribbon Workbench | Ribbon Workbench 2013 | Ribbon Workbench 2016 |
    | Latest Version            | 1.0.1.9          | 2.0.1.3               | 3.0.16                |
    | Requires Silverlight?     | Yes              | Yes                   | No                    |
    | XrmToolbox Version?       | –                | –                     | Yes                   |
    | CRM 2011 (Inc. UR12+)     | Supported        | –                     | –                     |
    | CRM 2013 (Inc. SP1+)      | –                | Supported             | –                     |
    | CRM 2015 (Inc. Update 1+) | –                | Supported             | Supported             |
    | CRM 2016 (Inc. Update 1+) | –                | Supported             | Supported             |

  • Ever since Microsoft's announcement in 2005 of the 'Dynamics' brand, the strategy (code-named 'Project Green') to homogenise the range of ERP products into a single technology has all but been forgotten. This has largely been due to Microsoft's investments in 'mash-up' technologies such as Power BI and integration platforms such as Power Apps and Microsoft Flow. The concept of 'metamodeling' your Enterprise to make integration and consolidated reporting easier is clearly not a new one, but with these technologies and the prevalence of open APIs it has become a reality that's available to practically any organisation. The investment in the 2016 release of Dynamics NAV to integrate more natively with Dynamics CRM Online was a step in the right direction to bring the products closer together, and with the recent Dynamics 365 announcement it seems like there is a concrete commitment to use Power BI, Power Apps and Microsoft Flow to make the whole greater than the sum of the parts.

    The idea of having a single system that drives every process within a business may seem like nirvana, but in reality it would become so complex, hard to maintain and inflexible to business changes (such as mergers and acquisitions) that it would quickly prove as undesirable as having no system at all. In a previous life as a Microsoft BizTalk consultant, Service Orientated Architecture was the perceived antidote to the monolithic systems of the 1990s. Unfortunately, because the systems being integrated naturally assumed that they were the only application you would ever need and had closed APIs, integration was not only difficult and expensive but also prone to failure.

    I really welcome Microsoft's Dynamics 365 strategy and hope it will make the Business Solutions sit comfortably within the promised 'common data model' so that we can also bring other non-Dynamics parts of the business into the mix. I suspect that the individual product brands will remain intact as products in their own right (On-Premises deployments are not covered by Dynamics 365), and so much of the investment will be as applicable to third-party integration as it will be to the Dynamics-branded applications. I really hope that we are starting to see the 'platform first' approach that the Azure team have always taken under Scott Guthrie's leadership now being applied to the Dynamics Business Solutions.

    Why not check out the technologies that are making Dynamics 365 possible:

     

  • CRMUG EC Logo

    If you are attending the CRMUG European Congress 2016 in Stuttgart, come and join me in the following sessions:

    Calculate This! (Administrative/Technical Track)

    Anxious? Indecisive? Do you find it hard to decide which of the many calculated field options to use when customising Dynamics CRM?

    Lie back and relax as I perform psychoanalysis on the various possibilities with an overview of writing plugins, workflows, portable business logic and using Calculated and Rollup fields.

    "xRM Factor" Panel (Technical Track)

    Let the CRM extensibility games begin. This fun-filled topic embraces a panel of Dynamics CRM users sharing their favourite xRM tool and why they think so. The audience will vote on the top two "xRM Factor" winners who will share their xRM tool or shortcut in a deeper education session to follow shortly.

    Data Migration Tools & Best Practices (Upgrading Track)

    From 3rd party tools to best practices of data migration, join me as we examine planning, upgrading, and overall migration best practices to ensure data quality is at the forefront for your Dynamics CRM deployment.

     

    I'm looking forwards to spending an action packed 2 days with like-minded Dynamics CRM folk!

    @ScottDurow

  • A couple of weeks ago I had the privilege of a most enjoyable hour on CRM Audio chatting with George, Joel and Shawn about the Ribbon Workbench and SparkleXRM. You'll have heard me mention that I'd be posting details on how to get involved with the beta version of the new Ribbon Workbench 2016 that's written using HTML and JavaScript rather than Silverlight – so here it is!

    I've had a fruitful relationship with Silverlight over the years and it has been the enabler in many successful rich-client Dynamics CRM customisations, but things have moved on! In July 2015 the time had come to say goodbye, in part because there is no Silverlight in Windows 10's Edge browser. My main blocker for writing pure HTML and JavaScript web resources in the past had always been a lack of productivity tooling, but that has moved on as well, not least because of SparkleXRM, my framework for building rich user interface Dynamics CRM web resources. The Ribbon Workbench 2016 is written using SparkleXRM (although it comes pre-packaged in the solution) and, if I'm honest, I think one of my drivers originally for working so hard on that project was the inevitability of having to re-write the Ribbon Workbench in HTML one day. Without the framework it would have been a bridge too far, but as it happened I was pleasantly surprised by how smoothly the conversion went and I am really pleased with how it's turned out. Here are some highlights…

    Drag and Drop Flyout Editing

    Delete Undo

    Drag and Drop Command Editing

    Can you help with Beta Testing?

    You can download the beta version by signing up to beta test today!

    Please report issues and bugs via UserVoice! Thank you!

  • In previous articles in this series we've talked about the differences between Server Side Sync and the old List Component. Since I published the first articles, a new MSDN article on the topic has been posted which I thought would be good to signpost folks to => Important considerations for server-based SharePoint integration.

    One of the topics that has come up recently for people using Server Side Sync to SharePoint is the 5000 item limit of Document Libraries which has led to a bit of panic amongst some so I thought I'd dispel the rumours!    

    Here are the facts about the Throttling Limitation

    1) You can see how many items you have in a Document Library by opening the site in SharePoint and selecting 'Site Content' from the left hand navigation menu. The number of items will show below the Document Library name – this includes documents and folders.

    2) If you have more than 5000 items you can still use Server Side Integration with SharePoint provided that you only use the default sort order of the document view in CRM.

    The default sort is by Location Ascending. You can change this to sort Descending but if you change the sort to any other column you will receive the error "Throttling limit is exceeded by the operation"

    3) If you have a record with only 2 documents in the associated document locations folder you will still not be able to sort by any other column other than Location if the root Document Library has more than 5000 items overall.

    4) If the user clicks on 'Open SharePoint' then they will be able to do all the sorting they need, since the limitation is not experienced by the native SharePoint interface – only the CRM document lists.

    I find that this sort limitation is not an issue because I encourage users to use SharePoint freely due to its rich interface. Don't try to hide SharePoint from them; it's important for users to understand the way in which documents are stored and the additional features that SharePoint has to offer. The documents view in CRM is simply a quick reference for documents associated with the CRM record.

    Hope this helps!

  • In the constant struggle to improve data quality it is common to avoid using free-text fields in favour of select fields. This approach has the advantage of ensuring that data is entered consistently such that it can easily be searched and reported upon.

    There are a number of choices of approaches to implementing select fields in Dynamics CRM. This post aims to provide all the information you need to make an informed choice of which to use on a field by field basis. The options are:

    • Option-Set Field - stored in the database as an integer value which is rendered in the user interface as a dropdown list. The labels can be translated into multiple languages depending on the user's language selection.
    • Lookup Field - a search field that is stored in the database as a GUID ID reference to another entity record. The Lookup field can only search a single entity type.
    • Auto Complete Field - a 'free text' field that has an auto complete registered using JavaScript to ensure that text is entered in a consistent form. The term 'autocomplete' might be a bit misleading since the field is not automatically completed but instead you must select the suggested item from the list. This is a new feature in CRM 2016 that you can read more about in the preview SDK.
    The following table provides an overview of the aspects that this post discusses for each option:
    • Number of items in list – The larger the list and the likelihood that it will grow, the more this becomes important.
    • Filtering based on user's business unit - This is especially important where you have different values that apply to different parts of the business and so the list of options must be trimmed to suit.
    • Adding new Items - Ease of adding values frequently by business users.
    • Removing values – Ease of removing values without affecting existing records that are linked to those values.
    • Multi-Language – Having options translated to the user's selected language
    • Dependent/Filtered options - This is important where you have one select field that is used to filter another, such as country/city pairs.
    • Additional information stored against each option - This is important if you have information that you need to store about the selected item such as the ISO code of a country.
    • Mapping between entities - Is the option on multiple entity types? Often the same list of items is added as a field in multiple places and multiple entities. This can be important when using workflows/attribute maps to copy values between different entity types.
    • Number of select fields - The more select fields you have across all your entities, the more important this becomes.
    • Filters, Reports and Advanced Find - When creating advanced finds and views, a user should be able to select from a list of values rather than type free text.
    • Configure once, deploy everywhere – One key design goal should be that once a field is configured, it should be easily used across the web, outlook, tablet and phone clients.

    Option-Set Fields

    Option-Sets are the default starting point for select fields.

    Number of items in list (Option-sets)

    No more than ~100 for performance reasons. All items are downloaded into the user interface which will cause performance problems for large lists – especially where there are lots of option-sets on the same form.

    Filtering based on user's business unit (Option-sets)

    Requires JavaScript to filter items dynamically based on the user's role/business unit.
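
    As an illustration, here is a minimal plain JavaScript sketch of this approach wired to the form OnLoad event; the dev1_category field, the role GUID and the option values are all hypothetical and not from the original post:

    function filterCategoryByRole() {
        // Hypothetical GUID of the security role whose members may see the extra options
        var SALES_ROLE_ID = "00000000-0000-0000-0000-000000000000";
        var userRoles = Xrm.Page.context.getUserRoles();
        var isSalesUser = false;
        for (var i = 0; i < userRoles.length; i++) {
            if (userRoles[i].toLowerCase() === SALES_ROLE_ID) {
                isSalesUser = true;
            }
        }
        if (!isSalesUser) {
            // Remove the option values that only the sales team should see
            var control = Xrm.Page.getControl("dev1_category");
            control.removeOption(100000002); // hypothetical option values
            control.removeOption(100000003);
        }
    }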

    Ease of adding values frequently by business users (Option-sets)

    Option-Sets require a metadata configuration change and a re-publish that would usually be done out of hours by an administrator. It is best practice to do this on a development environment and then import a solution into production. Adding new values to the list isn't something that can be done by business users.

    Removing values over time (Option-sets)

    Removing items causes data loss in old records. Items can be removed using JavaScript to preserve old records, but Advanced Find will still show the values.

    Multi-Language Options (Option-sets)

    Each option-set item can be translated into multiple languages.

    If you need to have the select field multi-language then an option-set is probably your best choice unless it is going to be a long list, in which case you'll need to make a compromise.

    Dependent/Filtered options (Option-sets)

    Requires JavaScript to filter options.
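
    As a rough sketch (with hypothetical dev1_country and dev1_city option-set fields and made-up option values), the child list can be rebuilt in the parent field's OnChange event:

    // Map each parent option value to the child option values it allows (all values hypothetical)
    var CITY_OPTIONS_BY_COUNTRY = {
        100000000: [100000000, 100000001],
        100000001: [100000002, 100000003]
    };

    function onCountryChange() {
        var country = Xrm.Page.getAttribute("dev1_country").getValue();
        var cityAttribute = Xrm.Page.getAttribute("dev1_city");
        var cityControl = Xrm.Page.getControl("dev1_city");
        var allOptions = cityAttribute.getOptions();
        var allowed = CITY_OPTIONS_BY_COUNTRY[country] || [];
        // Rebuild the child option-set so that it only contains options valid for the selected parent
        cityControl.clearOptions();
        for (var i = 0; i < allOptions.length; i++) {
            if (allowed.indexOf(parseInt(allOptions[i].value, 10)) !== -1) {
                cityControl.addOption(allOptions[i]);
            }
        }
    }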

    Additional information stored against each option (Option-sets)

    It is not possible to store additional data other than the label and integer value of the option-set. You would need to store it somewhere else in a lookup-table format.

    Mapping between entities (Option-sets)

    Use a global option-set that can be defined once and used by multiple option-set fields.

    Number of select fields (Option-sets)

    You can have as many select fields as your entity forms will allow. The more fields you have the slower the form will load and save. 

    Search/Filtering (Option-sets)

    Option-sets are always presented as a drop-down in Advanced Find and view filters.

    Configure once, deploy everywhere (Option-sets)

    Works across all clients including phone and tablet native apps.

    Option-sets are the most 'native' choice for select fields and will work in all deployment types without much consideration.

     

    Lookup Fields with Custom Entity

    Lookup fields allow selecting a single reference to a custom entity using a free text search.

    Number of items in list (Lookup)

    Unlimited list items subject to database size. Since all list items are not downloaded to the user interface (unlike option-sets) the user can search quickly for the item they need.

    Filtering based on user's business unit (Lookup)

    Security Roles can be used in combination with a user-owned lookup entity so that lookup records are visible to a subset of users.

    Ease of adding values frequently by business users (Lookup)

    New records can easily be added by users using the 'Add New' link. Control over who can add new items can be done using Security Roles.

    If you need business users to add values regularly then a Lookup field is a good choice. The Configuration Migration tool in the SDK can be used to easily move values between environments.

     

    Removing values over time (Lookup)

    Old items can be easily deactivated and will no longer show in lookup fields (including in advanced finds) however existing records will retain their selected value (unlike when option-set items are removed).

    If you need to make changes constantly to the list and remove items without affecting previous records then a lookup field is most likely your best choice.

     

    Multi-Language Options (Lookup)

    Not possible without complex server-side plugin code to dynamically return the name in the current user's language.

    Dependent/Filtered options (Lookup)

    Lookup filtering options can be added in the form field properties, or via JavaScript for more complex scenarios (see the sketch after the links below).

    Lookups are the easiest and quickest way to set up dependent lists without writing code. This filtering will also work on tablet/mobile clients without further consideration.

  • Learn about filtering lookups using the form field properties
  • Learn about dynamic JavaScript filtering of lookups
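
    For the JavaScript route, here is a minimal sketch assuming a hypothetical dev1_cityid lookup that should be filtered by a hypothetical dev1_countryid lookup on the same form (the field names are illustrative only):

    function onFormLoad() {
        Xrm.Page.getControl("dev1_cityid").addPreSearch(addCityFilter);
    }

    function addCityFilter() {
        var country = Xrm.Page.getAttribute("dev1_countryid").getValue();
        if (country != null) {
            // Only return cities that are linked to the currently selected country
            var filter = "<filter type='and'>" +
                         "<condition attribute='dev1_countryid' operator='eq' value='" + country[0].id + "' />" +
                         "</filter>";
            Xrm.Page.getControl("dev1_cityid").addCustomFilter(filter, "dev1_city");
        }
    }
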
    Additional information stored against each option (Lookup)

    Additional information can be stored as attributes on the lookup entity records. Lookup views can show up to 2 additional attributes within the inline lookup control.

    If you are integrating with another system that requires a special foreign key to be provided, lookup entities are good way of storing this key.

     

    Mapping between entities (Lookup)

    Lookups can easily be mapped between records using attribute maps/workflows or calculated fields.

    Number of select fields (Lookup)

    CRM Online is limited to 300 custom entities.

    This is an important consideration and it's unlikely to be a good idea to use lookup entities for all of your select fields.
    If you are using CRM Online you'll likely always have to use a combination of lookups and option-sets due to the limit of 300 custom entities. Don't take the decision to make all of your select fields lookups.

     

    Search/Filtering (Lookup)

    Lookups are presented as search fields in Advanced Find and Filters.

    Configure once, deploy everywhere (Lookup)

    Works across all clients including phone and tablet native apps. If working offline, however, not all lookup values may be available.

    Text Field Auto Completes (CRM 2016)

    Autocompletes are free-text fields with an OnKeyPress event added to show an autocomplete flyout. The great thing about autocompletes is that they can show icons and additional action links. See below for an example of how to use autocompletes in JavaScript.

    Number of items in list (Autocomplete)

    An autocomplete field will show as many items as you return at a time, but you'll want to apply a limit for performance reasons.

    If you need the select field to be more like a combo-box where users can type their own values or select from predefined items then autocomplete text fields are a good choice.

     

    Filtering based on user's business unit (Autocomplete)

    You can add any search filtering you need using JavaScript.

     

    Ease of adding values frequently by business users (Autocomplete)

    If the autocomplete is using a lookup entity to store the data displayed then the same considerations would apply as for Lookup Entities. If the values are hard coded into the JavaScript then this would be more akin to the Option-Set solution import.

    Removing values over time (Autocomplete)

    Since the actual field is stored as a text field there is no issue with removing values. Existing data will still be preserved.

    Multi-Language Options (Autocomplete)

    You can detect the user interface language and return a language-specific field to the user via JavaScript; however, the value will be stored in the textbox and other users will not see it in their language (unlike an option-set). One solution to this would be to use the autocomplete for data entry and then use a web resource to present the field value in the local user's language.
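
    As a rough sketch of the language-detection part (the dev1_country attribute names are hypothetical):

    // Pick which attribute of the (hypothetical) dev1_country entity to show as the
    // suggestion text, based on the user's interface language
    var lcid = Xrm.Page.context.getUserLcid();
    var nameAttribute = (lcid === 1031) ? "dev1_name_de" : "dev1_name"; // 1031 = German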

    Dependent/Filtered options (Autocomplete)

    You can apply whatever filtering you need using JavaScript.

    Additional information stored against each option (Autocomplete)

    If you use the autocomplete to search a custom entity you can store additional data as additional attributes. The autocomplete flyout can display multiple values for each result row.

    Autocomplete fields have the advantage that they can show an icon that is specific to the record (e.g. The flag of the country). If you need this feature, then Auto completes are a good choice.

     

    Search/Filtering (Autocomplete)

    If you use a free text autocomplete it's advisable to additionally populate a backing lookup field to facilitate searching/filtering. This would also allow you to ensure that 'unresolved' values cannot be saved by using an OnSave event to check that the text field matches a hidden lookup field that is populated in the OnChange event.
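
    A minimal sketch of that OnSave guard, assuming a hypothetical dev1_countryautocomplete text field and a hidden dev1_countryid backing lookup populated by the OnChange handler (remember to tick 'Pass execution context as first parameter' when registering the handler):

    function onSave(executionContext) {
        var text = Xrm.Page.getAttribute("dev1_countryautocomplete").getValue();
        var lookup = Xrm.Page.getAttribute("dev1_countryid").getValue();
        var resolved = (lookup != null) && (lookup[0].name === text);
        if (text != null && !resolved) {
            // The typed text has not been resolved to a record, so block the save
            Xrm.Page.ui.setFormNotification("Please select a country from the suggestions.", "ERROR", "country_unresolved");
            executionContext.getEventArgs().preventDefault();
        }
        else {
            Xrm.Page.ui.clearFormNotification("country_unresolved");
        }
    }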

    Configure once, deploy everywhere (Autocomplete)

    Autocomplete does not work on phone/tablet native apps yet.

    Show me the Code!

    I have added support for the Auto Complete SDK extensions in CRM2016 to SparkleXRM. To show a country autocomplete lookup, you'd add onload code similar to:

    public static void OnLoad()
    {
        Control control = Page.GetControl("dev1_countryautocomplete");
        control.AddOnKeyPress(OnCountrySearch);
    }
    
    public static void OnCountrySearch(ExecutionContext context)
    {
        string searchTerm = Page.GetControl("dev1_countryautocomplete").GetValue<string>();
        string fetchXml = String.Format(@"<fetch version='1.0' output-format='xml-platform' mapping='logical' distinct='false'>
                                  <entity name='dev1_country'>
                                    <attribute name='dev1_countryid' />
                                    <attribute name='createdon' />
                                    <attribute name='dev1_name' />
                                    <attribute name='dev1_longname' />
                                    <attribute name='dev1_iso' />
                                    <attribute name='entityimage_url' />
                                    <order attribute='dev1_name' descending='false' />
                                    <filter type='and'>
                                      <condition attribute='dev1_name' operator='like' value='{0}%' />
                                    </filter>
                                  </entity>
                                </fetch>", searchTerm);
    
        OrganizationServiceProxy.BeginRetrieveMultiple(fetchXml, delegate(object state)
        {
            try
            {
                // We use an async call so that the user interface isn't blocked whilst we are searching for results
                EntityCollection countries = OrganizationServiceProxy.EndRetrieveMultiple(state, typeof(Entity));
                AutocompleteResultSet results = new AutocompleteResultSet();
    
                // The Autocomplete can have an action button in the footer of the results flyout
                AutocompleteAction addNewAction = new AutocompleteAction();
                addNewAction.Id = "add_new";
                addNewAction.Icon = @"/_imgs/add_10.png";
                addNewAction.Label = "New";
                addNewAction.Action = delegate()
                {
                    OpenEntityFormOptions windowOptions = new OpenEntityFormOptions();
                    windowOptions.OpenInNewWindow = true;
                    Utility.OpenEntityForm2("dev1_country", null,null, windowOptions);
                };
                results.Commands = addNewAction;
                results.Results = new List<AutocompleteResult>();
    
                // Add the results to the autocomplete parameters object
                foreach (Entity country in countries.Entities)
                {
                    AutocompleteResult result = new AutocompleteResult();
                    result.Id = country.Id;
                    result.Icon = country.GetAttributeValueString("entityimage_url");
                    result.Fields = new string[] { country.GetAttributeValueString("dev1_name"),                           
                        country.GetAttributeValueString("dev1_iso"),
                         country.GetAttributeValueString("dev1_longname")
                    };
                    ArrayEx.Add(results.Results, result);                    
                }
                if (results.Results.Count > 0)
                {
                    // Only show the autocomplete if there are results
                    context.GetEventSource().ShowAutoComplete(results);
                }
                else
                {
                    // There are no results so hide the autocomplete
                    context.GetEventSource().HideAutoComplete();
                }
            }
            catch(Exception ex)
            {
                Utility.AlertDialog("Could not load countries: " + ex.Message, null);
            }   
        });
    }
    

    This would result in the following JavaScript:

    ClientHooks.Autocomplete = function ClientHooks_Autocomplete() {
    }
    ClientHooks.Autocomplete.onLoad = function ClientHooks_Autocomplete$onLoad() {
        var control = Xrm.Page.getControl('dev1_countryautocomplete');
        control.addOnKeyPress(ClientHooks.Autocomplete.onCountrySearch);
    }
    ClientHooks.Autocomplete.onCountrySearch = function ClientHooks_Autocomplete$onCountrySearch(context) {
        var searchTerm = Xrm.Page.getControl('dev1_countryautocomplete').getValue();
        var fetchXml = String.format("<fetch version='1.0' output-format='xml-platform' mapping='logical' distinct='false'>\r\n                                      <entity name='dev1_country'>\r\n                                        <attribute name='dev1_countryid' />\r\n                                        <attribute name='createdon' />\r\n                                        <attribute name='dev1_name' />\r\n                                        <attribute name='dev1_longname' />\r\n                                        <attribute name='dev1_iso' />\r\n                                        <attribute name='entityimage_url' />\r\n                                        <order attribute='dev1_name' descending='false' />\r\n                                        <filter type='and'>\r\n                                          <condition attribute='dev1_name' operator='like' value='{0}%' />\r\n                                        </filter>\r\n                                      </entity>\r\n                                    </fetch>", searchTerm);
        Xrm.Sdk.OrganizationServiceProxy.beginRetrieveMultiple(fetchXml, function(state) {
            try {
                var countries = Xrm.Sdk.OrganizationServiceProxy.endRetrieveMultiple(state, Xrm.Sdk.Entity);
                var results = {};
                var addNewAction = {};
                addNewAction.id = 'add_new';
                addNewAction.icon = '/_imgs/add_10.png';
                addNewAction.label = 'New';
                addNewAction.action = function() {
                    var windowOptions = {};
                    windowOptions.openInNewWindow = true;
                    Xrm.Utility.openEntityForm('dev1_country', null, null, windowOptions);
                };
                results.commands = addNewAction;
                results.results = [];
                var $enum1 = ss.IEnumerator.getEnumerator(countries.get_entities());
                while ($enum1.moveNext()) {
                    var country = $enum1.current;
                    var result = {};
                    result.id = country.id;
                    result.icon = country.getAttributeValueString('entityimage_url');
                    result.fields = [ country.getAttributeValueString('dev1_name'), country.getAttributeValueString('dev1_iso'), country.getAttributeValueString('dev1_longname') ];
                    Xrm.ArrayEx.add(results.results, result);
                }
                if (results.results.length > 0) {
                    context.getEventSource().showAutoComplete(results);
                }
                else {
                    context.getEventSource().hideAutoComplete();
                }
            }
            catch (ex) {
                Xrm.Utility.alertDialog('Could not load countries: ' + ex.message, null);
            }
        });
    }
    
    ClientHooks.Autocomplete.registerClass('ClientHooks.Autocomplete');
    

     

     

     

  • Many of you will have seen (and possibly used) my previous version of NetworkView. Thank you for all the feedback you've given. The top 5 points of feedback were:

    1. Can I show connections on the graph?
    2. Can I show custom entities on the graph?
    3. I have very large graphs which makes it hard to understand what is going on.
    4. I have records owned by Teams but I cannot see that on my graph.
    5. How do I know which node is the one that I've opened the graph from?

    With version 1.6 I've addressed these points as follows:

    Showing Connections

    You can now show connections between entities! The connection roles that are discovered (e.g. Partner, Influencer etc.) are listed so that you can then highlight all the connections of a particular role on the graph by selecting the role. Loading connections may be turned on or off on a per-entity basis in the configuration.

    Connections shown on graph

    Activity Overflow

    Since most of the time folks are only interested in the links between records via activities rather than the actual activities themselves, the graph now loads the first 10 records and then shows an overflow node that you can double-click to load the next 5 activities. Under the covers, all the activities that match the FetchXml filter are loaded and the links worked out, but the links are shown to and from the 'overflow' node rather than the individual activities until you expand the node.

    Expanding Activities

    Iteration Limit

    Network graphs can often grow to a large size if you have many connections, so the graph will now pause loading after a set number of iterations and ask if you want to load more. The most frequent cause of graph growth is where there are connections that are also users – for this reason, the graph will suppress connections that have the same email domain as any of your users.

    Loading Paused

    Team Ownership

    In addition to showing users who own records and participate in activities, the graph now shows Teams in the same list and allows highlighting records that are associated with that team.

    Team Ownership on Graph

    Root Node identification

    You can now see which record you started the graph from highlighted in green.

    Cycle Mode

    When you first load the Network View, the users and connections are cycled through so that the various cohorts on your graph are highlighted for your viewing pleasure! This can be turned off by clicking the cycle button, and the configuration controls whether it is on by default.

    Configurable

    The graph is now configured using a customisable web-resource named:

    dev1_/js/NetworkViewConfig_1033.js

    To add your custom configuration you simply need to edit this web resource and publish the changes. You can use the following options (a sketch of the overall shape follows the list):

    iterationCountPerLoad

    The number of iterations of the load mechanism before prompting the 'Load More' option.

    Default: 10

    trace

    Output trace to the F12 console whilst loading. This is good for working out why your graph looks the way it does.

    Default: false

    demoModeInitialState

    Turn on the demo cycle when the graph first loads.

    Default: false

    connectionFetchXml

    The query to fetch connections. The placeholder {0} must be inserted where the ID filter should be added.

    acitvityFetchXml

    The query to fetch activities. The placeholder {0} must be inserted where the ID filter should be added.

    entities

    Array of Entities to load (including custom entities)

    Each Entity configuration has the following fields:

    displayName

    The name of the entity used in messages.

    logicalName

    The logical name of the entity to load

    nameAttribute

    The logical name of the attribute that holds the display name of the record.

    idAttribute

    The logical name of the attribute that holds the unique ID of the record

    parentAttributeId

    The logical name of the parent Record ID when hierarchical loading is enabled.

    loadActivities

    True when the graph should show activities for this entity

    loadConnections

    True when the graph should show connections for this entity (connections must be supported by this entity)

    hierarchical

    True when the parent/child relationships should be traversed using the hierarchical operators.

    fetchXml

    The query to get the records. The placeholder {0} must be inserted where the filter conditions should be added.

    joins

    Holds an array of joins to other entities.
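
    Purely as an illustration of how these settings hang together (the exact shape and variable name are defined by the shipped web resource, so treat this as a sketch rather than a reference):

    var networkViewConfig = {
        iterationCountPerLoad: 10,
        trace: false,
        demoModeInitialState: false,
        connectionFetchXml: "<fetch>...{0}...</fetch>", // {0} marks where the ID filter is inserted
        acitvityFetchXml: "<fetch>...{0}...</fetch>",
        entities: [
            {
                displayName: "Account",
                logicalName: "account",
                nameAttribute: "name",
                idAttribute: "accountid",
                parentAttributeId: "parentaccountid",
                loadActivities: true,
                loadConnections: true,
                hierarchical: true,
                fetchXml: "<fetch>...{0}...</fetch>", // {0} marks where the filter conditions are inserted
                joins: []
            }
        ]
    };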

     

    Installation

    To install or upgrade from the previous version:

    1. Install SparkleXRM v7.2.8 or later
    2. Install the NetworkView managed solution

    The usual disclaimer & license applies.

    Special thanks goes to @dynamiccrmcat for help with testing this version.

    @ScottDurow

  • Having just got back from holiday in the lovely country of Hungary, I've now got round to adding the links to the Refreshed Connection UI solution downloads.

    To download the files please refer to the original post!

  • Following on from my recent video series called 'Building a Refreshed Connections UI using SparkleXRM', this post shows you how to use the final solution to add a bit of sparkle to the otherwise slightly neglected connections feature in Dynamics CRM.

    Connections first saw an appearance in CRM2011, replacing the customer relationships entity. The big deal about connections was that they could connect any entity to any other entity using the same lookup field and could be viewed from either side of the relationship. This made them very powerful for creating complex relationships between accounts, contacts, cases and opportunities such that the relationships would be equally discoverable from both sides.

    • Two sided - If you add a link between John Smith and Contoso you can see that connection on the Contoso record as well as the John Smith record. Behind the scenes this is achieved by two connections being created with the duplicate having the Connect To and Connect From lookups reversed. The connection that you created is marked as the 'Master' and when opening the non-master connection you are actually shown the Master record.
    • Ubiquitous – A common ask from users is to see all relationships in one place. Since connections support multiple types of records you can have a single list that shows connections between many types of record. The Connection Role is used to determine the type of relationship and the Connection Role Category can be used to group the types of roles together (e.g. Business, Sales Team, Stakeholders etc.)
    • Customisable Intersect Entity – There are existing attributes such as Effective Start and Effective End that are useful when modelling relationships that change over time, but you can also add your own fields. The downside of this, of course, is that those fields will be visible for all connections irrespective of the entity types that are being connected unless you do some hiding/showing dynamically.

    I've always loved the connection entity but it hasn't received the 'Refreshed UI' treatment in recent releases which is why I created the Refreshed Connection UI Solution.

    Current Experience (CRM2015)

    This is what the current experience is on the Opportunity Form:

    So far, so good, but if you add a custom connections sub-grid you get the following. Perhaps the biggest drawback is that the Add New (+) button cannot be used on the form connections sub-grid.

    If you then use the associated sub grid you get the following. Adding connections uses the old style Ribbon Form.

    New Experience

    Using the new SparkleXRM Refreshed Connections UI this is the experience you'll get:

    Installing:

    The code is all in github if you want to take look - otherwise you can install the managed solution. 

    Configuring:

    1. You can add the component to any form that supports connections by using 'Add Web Resource'

    2. Select the Web Resource named con_/html/Connections.htm
    3. Within the Web Resource properties dialog, on the Formatting tab, set the height in Rows, un-check 'Display Border' and change the Scrolling to 'Never'
    4. Again, within the Web Resource properties dialog on the General tab you can pass the following properties:
      1. entities – (Optional) A comma-separated list of entity logical names that you want to support in the lookup. By default the entities will be account,contact,opportunity,systemuser
      2. pageSize – (Optional) The number of rows to display per page. By default a size of 10 will be used.
      3. view – (Optional) The name of the view to use (e.g. All Stakeholders). By default the Associated view will be used

    The usual disclaimer & license applies.

  • With the introduction of 'Turbo Forms' in CRM2015 Update 1 I thought I'd give you a heads up on what you'll need to address in your JavaScript to support this new form rendering engine. The Dynamics CRM Team Blog has a very good article on the changes but there have been some misunderstandings of the statement 'we have parallelized as many operations as possible'. In actual fact the script loading of custom web resource scripts has not really changed since CRM2013 - It remains the same as I describe in my CRM2013 Script Loading Deep Dive. The operations that are parallelized with turbo form rendering are the internal form load operations rather than custom ones. Custom JavaScript loading has always been parallelized since CRM2013.

    Turbo Form Page Differences

    Before Turbo Forms both global scripts and your custom web resources would have been loaded into a content IFRAME within the main window each time you navigate between records.

    The biggest change with Turbo Forms is that the content IFRAME is kept in memory and re-used for performance reasons. This parent content IFRAME is even kept in memory when navigating between different entity types. Any custom JavaScript is then loaded into a child IFRAME (via the ClientApiWrapper.aspx page) so that when you navigate between records the scripts are re-loaded.

    SparkleXRM solutions already have a load order mechanism that ensure that your custom scripts are loaded in order that they are needed.

    Impact on unsupported code

    If your JavaScript is using only supported SDK calls then there will be no impact from this new loading mechanism. Sometimes, however, it is necessary to use unsupported functions or to reference properties of the content IFRAME (such as the query string). Since your custom JavaScript is now running in the context of the ClientApiWrapper (rather than the content IFRAME), any reference to window methods such as window.openStdWin or window.location.href will fail. In order to access these objects you will need to reference parent.window.
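
    For example (a sketch only; openStdWin is an unsupported internal function, so the usual caveats apply):

    // Previously a web resource script could reference the content IFRAME directly:
    //   var url = window.location.href;
    // Under Turbo Forms the script runs inside ClientApiWrapper.aspx, so the
    // content IFRAME is reached via the parent window instead:
    var url = parent.window.location.href;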

    The Ribbon Workbench 'how-to' article that shows starting a dialog from a ribbon button does in fact use openStdWin to ensure consistency with the out-of-the-box dialog experience. I have updated the code to use the parent window when required.

    Footnotes

    There are a couple of other notable aspects of Turbo Forms that I thought I'd point out:

    IFRAMES that are collapsed by default are not sized correctly.

    If you have an IFRAME or HTML web resource inside a tab that is collapsed by default, you will find that it is not sized correctly when the tab is expanded. This will be fixed in an upcoming update, but until then you will need to show the tab by default and collapse it when the form is loaded.

    entityType vs typename

    Turbo Forms have dropped the typename attribute of Lookup values.

    In the past, the following code would return the same:

    Xrm.Page.getAttribute("parentcustomerid").getValue()[0].typename
    Xrm.Page.getAttribute("parentcustomerid").getValue()[0].entityType

    With Turbo Forms only the documented entityType is available. The typename attribute was left over from the CRM4 days and just had not been removed until now!

    @ScottDurow

     

  • Metadata Server Logo

    Speed up your HTML web resources by caching metadata such as localised field labels and options sets.

    If you've developed an HTML web resource for Dynamics CRM that includes field labels, option sets or any other element that is stored in the Dynamics CRM metadata, then you'll know about the delay each time your UI is rendered whilst that information is downloaded from the server. You could of course hard-code this information in your JavaScript, but you'd suffer higher maintenance costs, especially when supporting multiple languages.

    The SparkleXRM Metadata Server allows dynamic expressions to be included in JavaScript Web Resources that are evaluated on the server and then cached on the client side.

    /*metadata
    var fieldLabel = <@contact.fullname.DisplayName@>;
    metadata*/

    This will output and cache the following on the client (for an LCID of 1033):

    var fieldLabel = "Full Name";

    Learn more about the SparkleXRM Metadata Server!
