Monday, March 20, 2023

Separating Plugin Logic: A Guide to Testing Dataverse Plugins with IOC

I’m not a pure TDD developer.  I frequently take my best guess at a Dataverse plugin, then apply TDD until everything works.  This can lead to situations where my “rough draft” plugin is complete, but when I go to write my first test, I realize that I have to test a lot, and that’s going to be very painful.  The solution is to restructure your plugin code so you can test each piece of logic independently of the others.  I ran into having to do this recently and decided that a guide to what I do could be helpful to others.  So, if you ever find yourself in this situation and need a little help, this is the guide for you!

Background

The business requirement in my example is to create a “Total Fees” record per year for contacts, which contains the sum of fees from a grandchild record, where the year is determined by the connecting child record.  This results in a Contact → Child (year) → Grandchild (fees) data model.


The plugin would trigger a recalc of fees for a contact, if:

  1. A grandchild was added
  2. A grandchild was removed
  3. A grandchild’s fees were updated
  4. A child was added
  5. A child was removed
  6. A child’s year was updated

And this is still a simplistic view, since there are plenty of situations where changes shouldn’t trigger a recalc (like the fees being updated from null to 0, or a fee getting added when there is no child id, etc.).  For now, let’s abstract all that away as /* logic */, which gives us these methods in the plugin, with the “OnX” methods being called automatically from Execute by the plugin base class depending on the context, and each “OnX” method calling the RecalcTotalsForContact method (a purely illustrative sketch of one such guard follows the code below):

private void OnGrandchildChange(ExtendedPluginContext context) { /* logic */ }

private void OnGrandchildCreate(ExtendedPluginContext context) { /* logic */ }

private void OnChildChange(ExtendedPluginContext context) { /* logic */ }

private void OnChildCreate(ExtendedPluginContext context) { /* logic */ }

private void RecalcTotalsForContact(IExtendedPluginContext context, Guid contactId, int year)
{
    context.Trace("Triggering Recalc for Contact {0}, and Year {1}.", contactId, year);

    var yearStart = new DateTime(year, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);
    var nextYearStart = yearStart.AddYears(1);
    var qe = QueryExpressionFactory.Create<Acme_Grandchild>(v => new { v.Acme_Fees });
    qe.AddLink<Acme_Child>(Acme_Grandchild.Fields.Acme_ChildId, Acme_Child.Fields.Id)
        .WhereEqual(
            Acme_Child.Fields.Acme_ContactId, contactId,
            new ConditionExpression(Acme_Child.Fields.Acme_Year, ConditionOperator.GreaterEqual, yearStart),
            new ConditionExpression(Acme_Child.Fields.Acme_Year, ConditionOperator.LessThan, nextYearStart));

    var totalFees = context.SystemOrganizationService.GetAllEntities(qe).Sum(v => v.Acme_Fees.GetValueOrDefault());
    var upsert = new Acme_ContactTotal
    {
        Acme_ContactId = new EntityReference(Contact.EntityLogicalName, contactId),
        Acme_Name = year + " Net Fees",
        Acme_Total = new Money(totalFees),
        Acme_Year = year.ToString()
    };
    upsert.KeyAttributes.Add(Acme_ContactTotal.Fields.Acme_ContactId, contactId);
    upsert.KeyAttributes.Add(Acme_ContactTotal.Fields.Acme_Year, year.ToString());

    context.SystemOrganizationService.Upsert(upsert);
}
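As a purely illustrative sketch of one of those elided “OnX” guards (this is not the actual logic; the GetTarget/GetPreEntity helpers on the context and the exact guard conditions are assumptions based on the base class and data model described above), the grandchild change handler might look something like this:

private void OnGrandchildChange(ExtendedPluginContext context)
{
    // Hypothetical helpers on the plugin context; your base class may name these differently.
    var target = context.GetTarget<Acme_Grandchild>();
    var preImage = context.GetPreEntity<Acme_Grandchild>();

    // Treat null and 0 fees as equivalent, so a null -> 0 update doesn't trigger a recalc.
    if (target.Acme_Fees.GetValueOrDefault() == preImage.Acme_Fees.GetValueOrDefault())
    {
        return;
    }

    // No connecting child means there is no contact or year to recalc.
    var childRef = target.Acme_ChildId ?? preImage.Acme_ChildId;
    if (childRef == null)
    {
        return;
    }

    // Look up the child to resolve the contact and year, then trigger the recalc.
    var child = context.SystemOrganizationService.Retrieve(
            Acme_Child.EntityLogicalName,
            childRef.Id,
            new ColumnSet(Acme_Child.Fields.Acme_ContactId, Acme_Child.Fields.Acme_Year))
        .ToEntity<Acme_Child>();
    RecalcTotalsForContact(context, child.Acme_ContactId.Id, child.Acme_Year.Year);
}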

Separating The Logic

When testing, we want to be able to test the “OnX” methods separately from the actual calculation logic in RecalcTotalsForContact.  In order to do that, we need to be able to inject the calculation logic into the plugin, allowing it to run with a mock object that can verify RecalcTotalsForContact was called correctly when testing, and with the actual logic when running on the Dataverse server.

There are 100 different ways to inject the logic into the plugin, but one of the simplest is to encapsulate the RecalcTotalsForContact logic behind an interface and inject it into the IServiceProvider that is already in the plugin infrastructure.  Using this approach, the first step is to encapsulate the logic in an IContactTotalCalculator interface (some purists will never put the interface and the implementation in the same file, but if you’re only ever going to have one implementation, IMHO it makes finding the implementation much simpler for it to be in the same file):

public interface IContactTotalCalculator
{
    void RecalcTotalsForContact(IExtendedPluginContext context, Guid contactId, int year);
}

public class ContactTotalCalculator : IContactTotalCalculator
{
    public void RecalcTotalsForContact(IExtendedPluginContext context, Guid contactId, int year)
    {
        context.Trace("Triggering Recalc for Contact {0}, and Year {1}.", contactId, year);

        var yearStart = new DateTime(year, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);
        var nextYearStart = yearStart.AddYears(1);
        var qe = QueryExpressionFactory.Create<Acme_Grandchild>(v => new { v.Acme_Fees });
        qe.AddLink<Acme_Child>(Acme_Grandchild.Fields.Acme_ChildId, Acme_Child.Fields.Id)
            .WhereEqual(
                Acme_Child.Fields.Acme_ContactId, contactId,
                new ConditionExpression(Acme_Child.Fields.Acme_Year, ConditionOperator.GreaterEqual, yearStart),
                new ConditionExpression(Acme_Child.Fields.Acme_Year, ConditionOperator.LessThan, nextYearStart));

        var totalFees = context.SystemOrganizationService.GetAllEntities(qe).Sum(v => v.Acme_Fees.GetValueOrDefault());
        var upsert = new Acme_ContactTotal
        {
            Acme_ContactId = new EntityReference(Contact.EntityLogicalName, contactId),
            Acme_Name = year + " Net Fees",
            Acme_Total = new Money(totalFees),
            Acme_Year = year.ToString()
        };
        upsert.KeyAttributes.Add(Acme_ContactTotal.Fields.Acme_ContactId, contactId);
        upsert.KeyAttributes.Add(Acme_ContactTotal.Fields.Acme_Year, year.ToString());

        context.SystemOrganizationService.Upsert(upsert);
    }
}

Then update the plugin to get the IContactTotalCalculator from the ServiceProvider, defaulting to the ContactTotalCalculator implementation if no implementation exists (which it won’t on the Dataverse server):

private void RecalcTotalsForContact(IExtendedPluginContext context, Guid contactId, int year)
{
    var calculator = context.ServiceProvider.Get<IContactTotalCalculator>() ?? new ContactTotalCalculator();
    calculator.RecalcTotalsForContact(context, contactId, year);
}

With this simple change, the ContactTotalCalculator is now completely separate from the plugin and can be tested separately with ease!  The plugin triggering logic can now also be tested independently of the actual recalculation logic, but there are a few more steps required.  Here is a test helper method for the grandchild logic that can be called multiple times with different pre-images and targets, along with the children that are expected to be triggered for recalculation:

private static void TestRecalcTriggered(
    IOrganizationService service,
    ITestLogger logger,
    MessageType message,
    Acme_Grandchild preImage,
    Acme_Grandchild target,
    string failMessage,
    params Acme_Child[] triggeredChildren)
{
    // CREATE LOGIC CONTACT TOTAL CALCULATOR MOCK THAT ACTUALLY DOES NOTHING
    var mockCalculator = new Moq.Mock<IContactTotalCalculator>();
    var plugin = new SumContactFeesPlugin();
    var context = new PluginExecutionContextBuilder()
        .WithFirstRegisteredEvent(plugin, p => p.EntityLogicalName == Acme_Grandchild.EntityLogicalName
                                               && p.Message == message)
        .WithTarget(target);
    if (preImage != null)
    {
        context.WithPreImage(preImage);
    }

    var serviceProvider = new ServiceProviderBuilder(service, context.Build(), logger)
        .WithService(mockCalculator.Object).Build(); // INJECT MOCK INTO SERVICE PROVIDER

    //
    // Act
    //
    plugin.Execute(serviceProvider);

    //
    // Assert
    //
    foreach (var triggeredChild in triggeredChildren)
    {
        mockCalculator.Verify(m =>
                m.RecalcTotalsForContact(It.IsAny<IExtendedPluginContext>(), triggeredChild.Acme_ContactId.Id, triggeredChild.Acme_Year.Year),
            failMessage);
    }

    // VERIFY MOCK CALLED THE EXPECTED # OF TIMES
    try
    {
        mockCalculator.VerifyNoOtherCalls();
    }
    catch
    {
        Assert.Fail(failMessage);
    }
}

Please note that I’m using Moq for my mocking framework and XrmUnitTest for my ServiceProviderBuilder.  You can use any mocking framework/Dataverse testing framework that you’d like; they’ll all provide the same logic with similar effort.  The key concept is to inject the mock implementation into the IServiceProvider provided to the IPlugin Execute method, and then verify that it has been called the correct number of times with the correct arguments.
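To make the helper concrete, here is a hedged sketch of how a test might call it.  The entity values are hypothetical, service and logger come from whatever test framework setup you are using, and it assumes the child record has already been created through service so the plugin can resolve the contact and year:

// Arrange a child for 2023 whose grandchild's fees change from 0 to 100 (illustrative values).
var childYear = new DateTime(2023, 1, 1, 0, 0, 0, DateTimeKind.Utc);
var child = new Acme_Child
{
    Id = Guid.NewGuid(),
    Acme_ContactId = new EntityReference(Contact.EntityLogicalName, Guid.NewGuid()),
    Acme_Year = childYear
};
var grandchildId = Guid.NewGuid();

// The fee change should trigger exactly one recalc, for the child's contact and year.
TestRecalcTriggered(
    service,
    logger,
    MessageType.Update,
    new Acme_Grandchild { Id = grandchildId, Acme_ChildId = child.ToEntityReference(), Acme_Fees = 0m },
    new Acme_Grandchild { Id = grandchildId, Acme_ChildId = child.ToEntityReference(), Acme_Fees = 100m },
    "Updating a grandchild's fees should recalc the total for the child's contact and year.",
    child);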

Thursday, January 5, 2023

How to Filter Dates in Canvas Apps Using Greater Than/Less Than Operators

Defining the Problem

Recently I was attempting to filter an on-premises SQL table by a DateTime field using a “greater than” operator and display the results in a Data Table control.  When I applied the “greater than” condition to my filter, it would return 0 results.  The crazy thing was I wasn’t seeing any errors.  So I turned on the Monitor tool and took a look at the response of the getRows request:

{
  "duration": 1130.2,
  "size": 494,
  "status": 400,
  "headers": {
    "Cache-Control": "no-cache,no-store",
    "Content-Length": 494,
    "Content-Type": "application/json",
    "Date": "Thu, 05 Jan 2023 13:36:12 GMT",
    "expires": -1,
    "pragma": "no-cache",
    "strict-transport-security": "max-age=31536000; includeSubDomains",
    "timing-allow-origin": "*",
    "x-content-type-options": "nosniff",
    "x-frame-options": "DENY",
    "x-ms-apihub-cached-response": true,
    "x-ms-apihub-obo": false,
    "x-ms-connection-gateway-object-id": "c29ec50d-0050-4470-ac93-339c4b208626",
    "x-ms-request-id": "e127bd54-0038-4c46-9a31-ce94547c226c",
    "x-ms-user-agent": "PowerApps/3.22122.15 (Web AuthoringTool; AppName=f3d6b68b-f463-43a2-bb2b-b1ea9bd1a03b)",
    "x-ms-client-request-id": "e127bd54-0038-4c46-9a31-ce94547c226c"
  },
  "body": {
    "status": 400,
    "message": "We cannot apply operator < to types DateTimeZone and DateTime.\r\n     inner exception: We cannot apply operator < to types DateTimeZone and DateTime.\r\nclientRequestId: e127bd54-0038-4c46-9a31-ce94547c226c",
    "error": {
      "message": "We cannot apply operator < to types DateTimeZone and DateTime.\r\n     inner exception: We cannot apply operator < to types DateTimeZone and DateTime."
    },
    "source": "sql-eus.azconn-eus-002.p.azurewebsites.net"
  },
  "responseType": "text"
}

Ah, Power Apps surfaces no error in the app even though the request returned a 400 status, but the body contains the actual error: “We cannot apply operator < to types DateTimeZone and DateTime.”  Apparently my DateTime column in SQL does not play well with Power Apps’ date/time values.  After some googling, I found some community posts describing the same behavior.


The Solution

One of those community posts suggested trying the DateTimeOffset column type in SQL, and after another round of googling I found a very similar issue described by Tim Leung.  Unfortunately, no one documented how to make the change, so here I am, documenting it for you, dear reader, as well as for future me!  (Please be warned, I’m still not sure how DateTimeOffset plays with other tools/systems, so test first!)

  1. Update the DateTime column in SQL Server:

    ALTER TABLE dbo.<YourTableName>
    ALTER COLUMN <YourDateColumn> datetimeoffset(0) NOT NULL;

    UPDATE dbo.<YourTableName>
    SET <YourDateColumn> = CONVERT(datetime, <YourDateColumn>) AT TIME ZONE <YourTimeZone>;

    /*
    I don't believe there is a Daylight Saving Time option for time zones, but I just happened to be in EST, not EDT, so my last line looked like this:

        SET <YourDateColumn> = CONVERT(datetime, <YourDateColumn>) AT TIME ZONE 'Eastern Standard Time';

    Use SELECT * FROM sys.time_zone_info to find your time zone.
    */

  2. Refresh the data source in the app: in Canvas Apps Studio, click the data source’s options menu and select Refresh.

  3. Reload the app.  I had problems with the Data Table control I was using not applying the timezone offset correctly, and reloading the app seemed to fix this issue.

  4. Voilà!


It’s not hard, but it definitely is a headache that I hope Microsoft will solve.



Friday, July 1, 2022

Enabling or Disabling All Plugin Steps In Dataverse

The Cause

Recently a bug (working as designed?) in PowerPlatform.BuildTools version 0.0.81 caused all my plugin steps to become disabled.  After looking at the Azure DevOps Pipeline output, I found this lovely difference between versions .77 and .81:

0.0.77

Import-Solution: MySolution_managed.zip, HoldingSolution: True, OverwriteUnmanagedCustomizations: True, PublishWorkflows: True, SkipProductUpdateDependencies: False, AsyncOperation: True, MaxAsyncWaitTime: 01:00:00, ConvertToManaged: False, full path: D:\a\1\a\MySolution_managed.zip

0.0.81

Calling pac cli inputs: solution import --path D:\\a\\1\\a\\MySolution_managed.zip --async true --import-as-holding true --force-overwrite true --publish-changes true --skip-dependency-check false --convert-to-managed false --max-async-wait-time 60 --activate-plugins false' ]

When this solution imported, it deactivated all of the plugin steps in my solution (which had over 100).  Manually re-enabling them would have been ugly.  Luckily, there is a workaround…


The Fix

  1. If you haven’t already, install the XrmToolBox and set it up to connect to your environment.
  2. Install SQL 4 CDS:
    1. Click Tool Library.
    2. Make sure the “Not installed” check box is checked in the display-tools filters, then install the tool.
  3. Open the SQL 4 CDS tool, connecting to your environment.
  4. Execute the following statement to find the id of the plugin assembly that you want to enable all plugin steps for:

    SELECT pluginassemblyid, name FROM pluginassembly ORDER BY name

  5. Find and copy the plugin assembly id you want to enable (I’ve left the values needed to disable plugins in place but commented out, in case that is required in the future as well, dear reader), and paste it into the following query:

    UPDATE sdkmessageprocessingstep
    SET statecode = 0, statuscode = 1  -- Enable
    -- SET statecode = 1, statuscode = 2 -- Disable
    WHERE sdkmessageprocessingstepid IN (
        SELECT sdkmessageprocessingstepid
        FROM sdkmessageprocessingstep
        WHERE plugintypeid IN (
            SELECT plugintypeid
            FROM plugintype
            WHERE pluginassemblyid = '95858c14-e3c9-4ef9-b0ef-0a2c255ea6df'
        )
        AND statecode = 1
    )

  6. Execute the query, get a coffee/tea, and let it update all of your steps for you!



Wednesday, April 27, 2022

Using AutoFixture To Create Early Bound Entities

AutoFixture is an open-source library used in testing to create objects without having to explicitly set all the values.  I recently attempted to use it in a unit test to create an instance of an early bound entity and assumed it would be extremely trivial, but boy was I wrong.  But now, at least, you have the “joy” of reading this blog post about it.




The Problem(s)

This is what attempting to use AutoFixture straight out of the box to create an entity looks like:

[TestMethod]
public void EarlyBoundAutoFixture_Should_Generate()
{   
    var fixture = new Fixture();
    // Fails here:
    // AutoFixture.ObjectCreationExceptionWithPath: AutoFixture was unable to create an instance from System.Runtime.Serialization.ExtensionDataObject,
    // most likely because it has no public constructor, is an abstract or non-public type.
    var contact = fixture.Create<Contact>();
    Assert.IsNotNull(contact.FirstName);
}

The error basically says AutoFixture can’t create the ExtensionDataObject since it does not expose a public constructor.  OK, makes sense.  The simplest thing to do is to make a fluent Build call and skip the property, but this doesn’t work because other types, like Money, have the ExtensionData property too, and it will fail for those properties as well; manually skipping the ExtensionData property on every object would make AutoFixture virtually worthless.  The solution is to create an ISpecimenBuilder that tells AutoFixture how to create an ExtensionDataObject (in actuality, don’t create one, just set it to null), which looks like this:

[TestMethod]
public void EarlyBoundAutoFixture_Should_Generate()
{
    var fixture = new Fixture();
    fixture.Customizations.Add(new SkipExtensionData());
   
    // New error
    // AutoFixture.ObjectCreationExceptionWithPath: AutoFixture was unable to create an instance of type AutoFixture.Kernel.FiniteSequenceRequest
    // because the traversed object graph contains a circular reference. Information about the circular path follows below. This is the correct
    // behavior when a Fixture is equipped with a ThrowingRecursionBehavior, which is the default. This ensures that you are being made aware of
    // circular references in your code. Your first reaction should be to redesign your API in order to get rid of all circular references.
    // However, if this is not possible (most likely because parts or all of the API is delivered by a third party), you can replace this default
    // behavior with a different behavior: on the Fixture instance, remove the ThrowingRecursionBehavior from Fixture.Behaviors, and instead add
    // an instance of OmitOnRecursionBehavior:
    //
    //   fixture.Behaviors.OfType<ThrowingRecursionBehavior>().ToList()
    //       .ForEach(b => fixture.Behaviors.Remove(b));
    //   fixture.Behaviors.Add(new OmitOnRecursionBehavior());
    var contact = fixture.Create<Contact>();
    Assert.IsNotNull(contact.FirstName);
}


public class SkipExtensionData : ISpecimenBuilder
{
    public object Create(object request, ISpecimenContext context)
    {
        var pi = request as PropertyInfo;
        if (pi == null)
        {
            return new NoSpecimen();
        }

        if (typeof(ExtensionDataObject).IsAssignableFrom(pi.PropertyType))
        {
            return null;
        }

        return new NoSpecimen();
    }
}

But once again, a new error is generated, this time a circular reference error.  Extra points to the team at AutoFixture for putting the solution to the issue right in the error message.  But after adding it, more issues still pop up.

[TestMethod]
public void EarlyBoundAutoFixture_Should_Generate()
{
    var fixture = new Fixture();
    fixture.Customizations.Add(new SkipExtensionData());
    fixture.Behaviors.OfType<ThrowingRecursionBehavior>().ToList()
        .ForEach(b => fixture.Behaviors.Remove(b));
    fixture.Behaviors.Add(new OmitOnRecursionBehavior());

    // Yet another error:
    // System.InvalidOperationException: Sequence contains no elements
    // Stack Trace:
    //   Enumerable.First[TSource](IEnumerable`1 source)
    //   Entity.SetRelatedEntities[TEntity](String relationshipSchemaName, Nullable`1 primaryEntityRole, IEnumerable`1 entities)
    //   Contact.set_ReferencedContact_Customer_Contacts(IEnumerable`1 value) line 6219
    var contact = fixture.Create<Contact>();

    Assert.IsNotNull(contact.FirstName);
}

This is a fun error where setting a related entity collection to an empty collection gives you a “Sequence contains no elements” error (which I could possibly handle in the Early Bound Generator, I guess), but it calls out something that, in my opinion, shouldn’t be getting populated at all: child collections of entities.  Only the actual attribute properties of the entity, not the LINQ relationship properties, need to be populated, so we can actually remove the recursion behavior check and resolve this final issue by tweaking the ISpecimenBuilder to skip these types of properties, which brings us to the first solution that doesn’t throw an exception:

[TestMethod]
public void EarlyBoundAutoFixture_Should_Generate()
{
    var fixture = new Fixture();
    fixture.Customizations.Add(new SkipEntityProperties());

    var contact = fixture.Create<Contact>();

    // Fails!  FirstName is Null
    Assert.IsNotNull(contact.FirstName);
}

public class SkipEntityProperties: ISpecimenBuilder
{
    public object Create(object request, ISpecimenContext context)
    {
        var pi = request as PropertyInfo;
        if (pi == null)
        {
            return new NoSpecimen();
        }

        if (typeof(ExtensionDataObject).IsAssignableFrom(pi.PropertyType))
        {
            return null;
        }

        if (pi.DeclaringType == typeof(Entity))
        {
            return null;
        }

        // Property is for an Entity Class, and the Property has a generic type parameter that is an entity, or is an entity
        if (typeof(Entity).IsAssignableFrom(pi.DeclaringType)
            &&
            (pi.PropertyType.IsGenericType && pi.PropertyType.GenericTypeArguments.Any(t => typeof(Entity).IsAssignableFrom(t))
             || typeof(Entity).IsAssignableFrom(pi.PropertyType)
             )
           )
        {
            return null;
        }

        return new NoSpecimen();
    }
}

It was at this point that I couldn’t understand what was going on.  Why aren’t these values getting populated?  Two hours of debugging later, I finally realized that AutoFixture was setting the AttributeCollection of the Entity to null, effectively removing all the other values that AutoFixture had just set.  Some more internet research later, I discovered that there was an OmitSpecimen value that would leave the property untouched!  Armed with this knowledge, the final solution presented itself!

The Solution

This final bit of code will correctly populate the attributes of the early bound entity:

[TestMethod]
public void EarlyBoundAutoFixture_Should_Generate()
{
    var fixture = new Fixture();
    fixture.Customizations.Add(new SkipEntityProperties());

    var contact = fixture.Create<Contact>();

    Assert.IsNotNull(contact.FirstName);
}

public class SkipEntityProperties: ISpecimenBuilder
{
    public object Create(object request, ISpecimenContext context)
    {
        var pi = request as PropertyInfo;
        if (pi == null)
        {
            return new NoSpecimen();
        }

        if (typeof(ExtensionDataObject).IsAssignableFrom(pi.PropertyType))
        {
            return new OmitSpecimen();
        }

        if (pi.DeclaringType == typeof(Entity))
        {
            return new OmitSpecimen();
        }

        // Property is for an Entity Class, and the Property has a generic type parameter that is an entity, or is an entity
        if (typeof(Entity).IsAssignableFrom(pi.DeclaringType)
            &&
            (pi.PropertyType.IsGenericType && pi.PropertyType.GenericTypeArguments.Any(t => typeof(Entity).IsAssignableFrom(t))
             || typeof(Entity).IsAssignableFrom(pi.PropertyType)
             || typeof(AttributeCollection).IsAssignableFrom(pi.PropertyType)
             )
           )
        {
            return new OmitSpecimen();
        }

        return new NoSpecimen();
    }
}

Here is an example screenshot from the code above:

[Screenshot: the generated Contact instance in the debugger, with its attributes populated]

Notice how everything except the AccountId (since it’s read-only) has been automatically populated with a default value?  It’s a beautiful thing!

If you found this helpful, please share it!

Friday, September 17, 2021

Long Functions Are Always A Code Smell

This article is in response to fellow MVP Alex Shelga’s recent article, “Long functions in dataverse plugins – is it still ‘code smell’?”.  I’ll start with the fact that there is plenty of room for personal preference, and there is no magic equation that can be applied to code to ultimately define it as good or bad.  Alex shared his opinion, and here I’ll share mine.  I’ll tell you right now, they will differ (which shouldn’t be a surprise if you’ve read the title).  It is my hope that no one feels that I’m “attacking” Alex (especially Alex), but that everyone can see this as what it is intended to be: a healthy juxtaposition of ideas.

Alex’s Argument

Before I go into my reasons for why long functions are always a code smell, I’ll list the two reasons Alex sees plugins as different and summarize his arguments for why that matters:

  1. Plugins are inherently stateless
  2. Often developed to provide a piece of very specific business logic

This, he says, “seems to render object-oriented approach somewhat useless in the plugins (other than, maybe, for ‘structuring’)”.  He then dives into this further and seems to imply that OO code is slower and more complicated, and is primarily used to allow for reusability; if it’s not making the code more reusable, there is no reason to utilize it.  His final point is that it doesn’t matter to the performance of the system or to unit testing if the code is longer, and in his personal preference, he finds a longer function more readable: “I’d often prefer longer code in such cases since I don’t have to jump back and forth when reading it / debugging it”.  (If this is you, memorize the Navigate Forward and Navigate Backward commands in your IDE (View.NavigateBackward, Ctrl+-, and View.NavigateForward, Ctrl+Shift+-, in Visual Studio; Alt+Left Arrow and Alt+Right Arrow in VSCode), then spend the next 10 minutes diving into functions to see what they are doing and backing out of them using the navigation shortcut keys.  It could change your life.  Scout’s honor.)

My Argument

There are no facts that he presents that are wrong: plugin logic is inherently stateless and doesn’t lend itself to loads of reusability.  I also can’t argue whether his personal preference for the readability of long functions is right or wrong.  But what I can do is argue why I see shorter functions as more readable, and give other reasons why shorter functions are better for the health and maintainability of a plugin project.

Why (I find) shorter functions are more readable

If you were to pick up a 300-page book with the title “Execute” that you’ve never read before, with no cover art or introduction, no chapters or table of contents, and no synopsis on the back page, but were given 60 seconds to examine it and tell someone what it was about, you’d be pretty hard-pressed to give an accurate definition.  But, if the book had a table of contents with these chapter names:

  1. Start at the Beginning
  2. Create a Vision
  3. Share the Vision
  4. Create the Company
  5. Invest in Others
  6. Invite Others to Invest
  7. Grow/Multiply

You could guess fairly confidently it’s a book about starting and growing a business.  If you were only interested in the details of how to get additional investors in a business, you might start at chapter 6.  If, however, the chapter names were as follows:

  1. Prewar
  2. Early Victories
  3. Atrocities Beyond Belief
  4. Final Battles
  5. Capture
  6. The Trial
  7. The Verdict
  8. Final Words

You could guess that the book is about a soldier/general that committed war crimes and was executed.  If you were only interested in learning if the individual had any remorse for their acts, you might start reading at chapter 8.  So not only do these chapter titles allow you to get a very quick understanding of what the book is about, they also allow you to skip large sections of the book when attempting to find a very narrow topic.  The same is true for code and long functions.  If a function is longer than your screen is tall, the first time you look at it, you will have no idea what it does beyond the reach of your screen without scrolling and reading.  You’d have to read the entire function to determine what it does.  This means that if you’re looking for a bug, you’ll need to read and understand half (on average) of the lines in the function before you can find where the bug is.  But, if the function is 15 lines long with 8 well-named function calls, you’d have a much better guess at what the entire function does and where the bug lies.  For example, given this Execute function:

public void Execute(ExtendedPluginContext context)
{
    var data = GetData(context);
    UpdateAttributes(context, data);
    CreateChildRecords(context, data);
    UpdateTarget(context, data);
}

Now these are probably some pretty poor function names, but you can immediately see that the plugin is getting data, updating some attributes, creating child records and then updating the target.  But just a small improvement in the naming would give even more details:

public void Execute2(ExtendedPluginContext context)
{
    var account = GetAccount(context);
    SetMaxCreditLimit(context, account);
    CreateAccountLogEntries(context, account);
    UpdateTargetStatus(context, account);
}

Now it’s easy to see that there is a call to get the account, which is then used to set the max credit limit, create some log entries, and update the status of the target.  If there is a bug with the status getting updated incorrectly, or the max credit limit not being set, or the log entries not having enough details, it is easy to see which function needs to be looked at first and which functions can be ignored.  Small functions (when done well) are more efficient for understanding.

Another positive of smaller functions is the error log in the trace.  If my Execute function is 300 lines long and it throws a null ref, I’ve got to look at 300 lines of code to guess where the null ref could have occurred.  But since the function name is included in the stack trace for plugins (even when the line number isn’t), if the 300 lines were split into 10 functions of 30 lines, I’d know which function was causing the error and would only have a tenth of the code to analyze for the null ref.  That’s huge!

My final note comes into play with nesting “ifs”.  Many times I will walk into a project with 300-line Execute functions nested 10-12 levels deep with “if” statements.  This especially causes issues when it comes to trying to line up curly braces, or when an “else” statement occurs and the matching “if” is not on the screen:

                if (bar)
                {
                    if (baz)
                    {
                        Go();
                    }
                }
                else
                {
                    Fight();
                }
            }
            else
            {
                // Wait, what is this else-ing?
                Win();
            }
        }
    }
}

Although there is nothing that says a longer function has to nest “ifs”, if your function is only 10 lines long, it limits the maximum possible number of nested “ifs”.
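As a hedged illustration of that limit (the off-screen outer condition from the snippet above is assumed to be some foo here, and the method name is made up), the same logic collapses into a short function with guard clauses, keeping every “if” and its “else” on the screen together:

private void HandleBattle(bool foo, bool bar, bool baz)
{
    // Guard clauses replace the nesting: each early return handles one "else" branch.
    if (!foo)
    {
        Win();
        return;
    }

    if (!bar)
    {
        Fight();
        return;
    }

    if (baz)
    {
        Go();
    }
}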

When Shorter Functions Help With Testing

Alex mentioned that testing frameworks like FakeXrmEasy (and I’ll throw my XrmUnitTest framework in here as well) don’t care about the length of an Execute function.  It’s a black box.  While this is true, as a test creator, the more complex the logic, the more helpful it is to test it in parts rather than as a whole.  For example, in my Execute2 function above, if there are 3 different branches of logic in GetAccount, 2 in SetMaxCreditLimit, 4 in CreateAccountLogEntries, and 1 in UpdateTargetStatus, this results in 3 × 2 × 4 × 1 = 24 different potential dependent paths to test.  Contrast this with testing the parts separately, and only having 3 + 2 + 4 + 1 = 10 different tests, each with only the setup required for its specific function.  This is much more maintainable.  Personally, I believe this can be taken to the extreme as well; trying to test 100 functions to perfection is usually not the ideal time investment.  So I may have a couple of tests of the Execute function start to finish, and cherry-pick some of the more complicated functions to test, rather than trying to test everything.

In Conclusion

Take time to analyze other people’s opinions and determine if you agree or disagree, to the point where you are prepared to argue why.  We are all learning and growing in our craft as developers, which requires us to continue to allow new ideas to challenge our existing conventions.  Share it, blog about it, and grow, remembering to always “Raise La Bar”.

Wednesday, January 13, 2021

How To Create Daily Bulk Delete Jobs in Dataverse/CDS/Power Apps/CRM As A Different User

UPDATE!

The first statement is a lie!  The UI for Dataverse/CDS/Power Apps/CRM bulk delete jobs does allow for creating a recurring daily Bulk Delete Job (thanks Oliver Flint).  Even though it looks like a dropdown, you can type whatever number you’d like.  As such, this post is still helpful if you want to create a duplicate bulk delete job in multiple environments, or if you want to create it with someone else, like an Application User, as the owner.

Original Post:

The UI for Dataverse/CDS/Power Apps/CRM bulk delete jobs does not allow for creating a recurring daily Bulk Delete Job.  The smallest value to choose from is weekly, which means if you want to run something daily, you’d have to create 7 jobs, one for each day of the week.  Ew!  But this can be set programmatically via the SDK, and here is how (please note, this is just code; it can be compiled and run anywhere.  When you run it from the XrmToolBox though, you can either log in with an Application User, or impersonate one if you have impersonation rights, which sets the owner of the bulk delete record and helps prevent issues when a user who owns all of the Bulk Delete Jobs leaves):

  1. Open the XrmToolBox, and connect to the environment (Bonus, connect with an application user to create the Bulk Delete Job as an application user, so that it isn’t owned by a person that leaves the company or has permissions removed.)
  2. Install the Code Now XrmToolBox Plugin if not already installed, and open it.
  3. If the logged-in user in the XTB is the desired user, great; if you’d like the bulk delete request to be owned by a different user, push the Impersonate button at the top of the XTB and select the appropriate user.  (Testing has shown that impersonating the SYSTEM user will not work to set the owner as SYSTEM; an Application User will be required.)
  4. Copy and paste the following code into the window:
public static void CodeNow()
{
    var bulkDeleteRequest = new Microsoft.Crm.Sdk.Messages.BulkDeleteRequest
    {
        JobName = "Daily 3am Delete Job",
        QuerySet = new [] {
            new QueryExpression {
                ColumnSet = new ColumnSet("acme_tableid", "acme_tablename", "createdon"),
                EntityName = "acme_table",
                Criteria = {
                    Filters = {
                        new FilterExpression {
                            FilterOperator = LogicalOperator.And,
                            Conditions = {
                                new ConditionExpression("acme_delete_me", ConditionOperator.Equal, true)
                            }
                        }
                    }
                }
            }
        },
        StartDateTime = new DateTime(2021, 1, 8, 8, 0, 0, DateTimeKind.Utc),
        RecurrencePattern = "FREQ=DAILY;INTERVAL=1;",
        ToRecipients = new Guid[] { },
        CCRecipients = new Guid[] { },
        SendEmailNotification = false
    };

    Service.Execute(bulkDeleteRequest);
}

 

Update the following values

  1. Update the JobName to what ever your preference is.
  2. QuerySet is a collection of QueryExpressions.  Add at least one.  (I don’t know what happens if you add two; my guess is that all records returned from all Query Expressions will get deleted.)  My guess is also that the ColumnSet is expected to always have just the Primary Id of the table, the Primary Name Column, and the “Created On” column.
  3. Update the StartDateTime to a future date to start.  Its format is new DateTime(YYYY, MM, DD, ...).  Please note, the time is UTC, so the current value is Jan. 8, 2021 at 3am EST (not EDT).
  4. Update RecurrencePattern to your liking.  FREQ=DAILY;INTERVAL=1; means it will be run every day.  (I’m not sure what other FREQ values could be, but you could use the FetchXmlBuilder to query for other existing values.)
  5. Update ToRecipients and CCRecipients to what I believe are the SystemUser Ids of the users that would be notified when the job runs.  (I’ve never used it, so don’t quote me on this one.)
  6. Update SendEmailNotification to true to send out emails to the To and CC Recipients if desired.

Run the Code in Code Now… err… now!

Verify that the record was created correctly by navigating to the Bulk Delete Jobs:

  1. Open Advanced Settings by clicking the gearbox icon in the top right-hand corner
  2. Navigate to the "Data Management" area
  3. Click "Bulk Record Deletion"
  4. Select Recurring Bulk Delete System Jobs
  5. Verify that the job is created with the correct Owner and Next Run values, and that its status reason is "Waiting".
  6. You can verify the history of this job by using the Completed Bulk Deletion System Jobs view once the next run time has passed.