Bulk Delete Records in Dataverse: The Right Way

Five ways to bulk delete records in Dataverse, when to use each one, and why your Power Automate loop is the wrong answer for anything over a few hundred rows.

· 6 min read

At some point in every Dataverse project, somebody needs to delete a lot of records. Test data from development. Orphaned activities from a retired workflow. A million rows of imported junk from a migration that went sideways.

The question isn’t whether you’ll need to do it. It’s how you do it without bringing the environment to its knees.

There are five real options. Each has a purpose. Most teams pick the wrong one.

Option 1: Built-In Bulk Delete Jobs

This is the native approach that Microsoft gives you, and it’s shockingly underused considering it’s free, requires no code, and runs in the background.

How to create one

  1. Go to the Power Platform admin center → select your environment → Settings → Data management → Bulk deletion.
  2. Click New to start the wizard.
  3. Build your criteria using the query builder. This works like Advanced Find — you define filters on the entity, and every record matching those filters gets deleted.
  4. Give the job a name. Set the schedule — you can run it once or on a recurring basis (useful for cleanup jobs that run weekly).
  5. Choose whether to send an email notification when the job finishes.
  6. Submit and let it run.

The job shows up under System Jobs where you can monitor progress, see how many records were deleted, and check for failures.

When to use it: housekeeping tasks, scheduled cleanup of stale records, deleting records by a simple filter condition. It handles millions of records without you writing a single line of code.

Limitations: the query builder is basic. If your deletion criteria require complex joins or logic that Advanced Find can’t express, you need a different approach. It’s also not fast — it processes records in batches internally, and for very large volumes it can take hours.
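
The same job can also be submitted from code via the SDK's BulkDeleteRequest message, which is handy when the job needs to be created as part of a deployment. A minimal sketch, assuming a cleanup rule on inactive contacts (the entity and filter here are illustrative, not a recommendation):

```csharp
using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Sketch: submit a one-off bulk deletion job for stale inactive contacts.
// Adjust the entity and conditions to match your own cleanup rule.
public static void SubmitBulkDeleteJob(IOrganizationService service)
{
    var query = new QueryExpression("contact")
    {
        Criteria = new FilterExpression
        {
            Conditions =
            {
                new ConditionExpression("statecode", ConditionOperator.Equal, 1),
                new ConditionExpression("modifiedon", ConditionOperator.OlderThanXMonths, 12)
            }
        }
    };

    var request = new BulkDeleteRequest
    {
        JobName = "Purge stale inactive contacts",
        QuerySet = new[] { query },
        StartDateTime = DateTime.UtcNow,      // run immediately
        RecurrencePattern = string.Empty,     // empty string = run once
        SendEmailNotification = false,
        ToRecipients = Array.Empty<Guid>(),
        CCRecipients = Array.Empty<Guid>()
    };

    // Returns a BulkDeleteResponse whose JobId you can use to monitor progress.
    service.Execute(request);
}
```

The job this creates shows up under System Jobs exactly like one created through the wizard.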

Option 2: ExecuteMultiple with the SDK

When you need programmatic control — conditional logic, logging, error handling per record — the SDK is the right tool. The pattern is ExecuteMultiple: you batch up to 1,000 delete requests into a single API call.

// Assumes: using Microsoft.Xrm.Sdk; using Microsoft.Xrm.Sdk.Messages;
// using Microsoft.Xrm.Sdk.Query; using System.Linq;
public static void BulkDeleteWithExecuteMultiple(
    IOrganizationService service,
    string fetchXml)
{
    // The FetchXML should cap its page size (e.g. <fetch count='500'>) so each
    // retrieved page fits within ExecuteMultiple's 1,000-request ceiling.
    var query = new FetchExpression(fetchXml);
    EntityCollection results;
    int totalDeleted = 0;

    do
    {
        results = service.RetrieveMultiple(query);

        if (results.Entities.Count == 0)
            break;

        var multipleRequest = new ExecuteMultipleRequest
        {
            Requests = new OrganizationRequestCollection(),
            Settings = new ExecuteMultipleSettings
            {
                ContinueOnError = true,
                ReturnResponses = false // saves bandwidth
            }
        };

        foreach (var entity in results.Entities)
        {
            multipleRequest.Requests.Add(new DeleteRequest
            {
                Target = entity.ToEntityReference()
            });
        }

        var response = (ExecuteMultipleResponse)service.Execute(multipleRequest);

        // Check for individual failures
        if (response.IsFaulted)
        {
            foreach (var item in response.Responses)
            {
                if (item.Fault != null)
                {
                    Console.WriteLine(
                        $"Failed to delete {results.Entities[item.RequestIndex].Id}: " +
                        $"{item.Fault.Message}");
                }
            }
        }

        totalDeleted += results.Entities.Count - 
            response.Responses.Count(r => r.Fault != null);

        Console.WriteLine($"Deleted so far: {totalDeleted}");

    } while (results.MoreRecords);

    Console.WriteLine($"Done. Total deleted: {totalDeleted}");
}

A few things to note:

  • Batch size matters. The maximum for ExecuteMultiple is 1,000 requests per call. But in practice, batches of 200–500 tend to perform better because you stay under the 2-minute execution timeout more reliably.
  • Set ReturnResponses to false if you don’t need per-record success confirmation. This cuts the response payload significantly.
  • Set ContinueOnError to true so one bad record doesn’t kill the entire batch.
  • Page through results. Don’t retrieve all records up front. Fetch a page, delete it, fetch the next page.
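
The simplest way to honor both the paging and batch-size points is to cap the page size in the FetchXML itself, so each retrieved page maps directly onto one batch. A sketch (entity and filter illustrative):

```xml
<fetch count='500'>
  <entity name='contact'>
    <attribute name='contactid' />
    <filter>
      <condition attribute='statecode' operator='eq' value='1' />
    </filter>
  </entity>
</fetch>
```

Retrieving only the primary key column also keeps the retrieval payload small, since a delete needs nothing but the record reference.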

Performance: on a typical environment, ExecuteMultiple with batches of 500 deletes somewhere around 5,000–15,000 records per minute, depending on plugins, cascade rules, and entity complexity.

Option 3: DeleteMultiple (Newer, Faster)

Microsoft introduced DeleteMultiple as part of the elastic tables story, and it’s now available for standard tables too. Instead of wrapping 500 individual DeleteRequest objects in an ExecuteMultiple, you send a single DeleteMultiple message carrying a collection of references to the records to delete.

The difference is that Dataverse can optimize the operation internally — it doesn’t have to process each delete as a separate pipeline execution.

// entitiesToDelete: the records retrieved earlier (e.g. results.Entities from
// a RetrieveMultiple call); Select() requires using System.Linq.
var request = new OrganizationRequest("DeleteMultiple")
{
    ["Targets"] = new EntityReferenceCollection(
        entitiesToDelete.Select(e => e.ToEntityReference()).ToList()
    )
};
service.Execute(request);

Performance: early benchmarks show DeleteMultiple running 2–5x faster than the same operation through ExecuteMultiple. The gap widens as record counts grow. If you’re on a recent SDK version and your environment supports it, prefer this over the ExecuteMultiple wrapper pattern.

Caveat: as of mid-2025, DeleteMultiple support across all standard entities is still rolling out. Test it on your specific entity before building your whole pipeline around it.
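
For large volumes, the snippet above combines naturally with the same paging pattern used in the ExecuteMultiple example: fetch a page, issue one DeleteMultiple for it, repeat. A hedged sketch (the 500-record page size is an assumption carried over from earlier, not a documented DeleteMultiple limit):

```csharp
using System.Linq;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Sketch: page through a query and issue one DeleteMultiple per page.
public static void DeleteMultipleInPages(
    IOrganizationService service, string fetchXml)
{
    var query = new FetchExpression(fetchXml); // should include count='500'
    EntityCollection page;
    do
    {
        page = service.RetrieveMultiple(query);
        if (page.Entities.Count == 0)
            break;

        service.Execute(new OrganizationRequest("DeleteMultiple")
        {
            ["Targets"] = new EntityReferenceCollection(
                page.Entities.Select(e => e.ToEntityReference()).ToList())
        });
    } while (page.MoreRecords);
}
```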

Option 4: XrmToolBox — Bulk Data Updater / Bulk Data Deleter

For one-off operations where you don’t want to write code, XrmToolBox with the Bulk Data Updater or SQL 4 CDS plugin is the practical choice.

  1. Connect to your environment.
  2. Use FetchXML Builder to construct a precise query for the records you want gone.
  3. Open Bulk Data Updater → select all records → delete.

Or with SQL 4 CDS, you can just write:

DELETE FROM contact WHERE statecode = 1 AND modifiedon < '2024-01-01'

It translates the SQL into Dataverse API calls under the hood.

When to use it: ad-hoc cleanup during development or UAT. Fast to set up, no deployment needed, gives you a visual preview of what’s about to be deleted.

When NOT to use it: production automation. XrmToolBox is an interactive desktop tool. It’s not something you schedule or embed in a release pipeline.

Option 5: Power Automate (And Why It’s Usually Wrong)

Let me be direct: using Power Automate to loop through and delete thousands of records is an anti-pattern.

I see this constantly. Someone builds a flow with “List rows” → “Apply to each” → “Delete a row.” It works in testing with 50 records. Then they point it at a table with 80,000 records and wonder why it’s been running for six hours, hit throttling limits, burned through their API request quota, and still isn’t done.

Here’s why it fails at scale:

  • Throttling. Dataverse connector actions are subject to Power Platform request limits. Depending on your license, that’s somewhere between 6,000 and 40,000 API requests per 24 hours per user. Each delete is one request. Do the math.
  • Speed. A Power Automate flow processes “Apply to each” iterations sequentially by default. Even with concurrency turned up to 50, you’re looking at maybe 10–20 deletions per second at best. That’s 36,000–72,000 per hour. Sounds OK until you compare it to ExecuteMultiple doing the same in minutes.
  • Cost. Every action execution counts against your plan. A flow that deletes 100,000 records runs 200,000+ actions (list + delete for each). On a per-flow license that’s fine. On a per-user license you might blow through your daily allocation in one run.
  • No error batching. If one delete fails mid-loop, your flow either stops or silently skips it depending on how you configured error handling. There’s no built-in batch retry.

The one exception: if you need to delete a small number of records (under 500) as part of a business process that’s already in Power Automate, it’s fine. Don’t rewrite your flow to call the SDK for 200 records. But if someone asks you to “build a flow that cleans up old records nightly” and the volume is in the thousands, push back. Use a bulk delete job or a scheduled console app with the SDK instead.

What to Do Before Any Bulk Delete

Regardless of which method you use, do these things first. Skipping them is how you turn a 30-minute operation into a 3-day disaster.

1. Disable Plugins and Flows

Every record deletion fires the execution pipeline. If you have plugins registered on the Delete message for that entity, they’ll fire for every single record. Same for Power Automate flows triggered by “When a row is deleted.”

For bulk operations, this means:

  • A plugin that takes 200ms per execution adds 55 hours to a 1-million-record delete.
  • A flow triggered per delete might generate a million flow runs, each consuming API requests and compute.

Disable them. Deactivate the flows. Unregister the plugin steps (or set them to disabled in the Plugin Registration Tool). Run your bulk delete. Re-enable everything after.

If the plugins do something essential (like cascade cleanup), you may need to run compensating logic after the bulk delete. That’s still faster than letting them fire a million times.
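
Toggling plugin steps can also be scripted, which helps when the cleanup itself is automated. A sketch that flips a step's state by updating the sdkmessageprocessingstep row; the step ID is assumed to have been looked up beforehand (e.g. by querying sdkmessageprocessingstep by name):

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Sketch: disable a plugin step before a bulk delete, re-enable it after.
// stepId is assumed to be retrieved in advance from sdkmessageprocessingstep.
public static void SetPluginStepState(
    IOrganizationService service, Guid stepId, bool enable)
{
    var step = new Entity("sdkmessageprocessingstep", stepId)
    {
        // statecode 0 = Enabled, 1 = Disabled; statuscode 1 = Enabled, 2 = Disabled
        ["statecode"] = new OptionSetValue(enable ? 0 : 1),
        ["statuscode"] = new OptionSetValue(enable ? 1 : 2)
    };
    service.Update(step);
}
```

Wrap the bulk delete in a disable/re-enable pair, and make sure the re-enable runs even if the delete throws.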

2. Understand Cascade Delete Relationships

Before deleting parent records, check the cascade behavior on every relationship.

| Cascade Behavior | What Happens on Delete |
| --- | --- |
| Cascade All | Child records are deleted too |
| Remove Link | Child records remain, lookup field is cleared |
| Restrict | Delete is blocked if children exist |
| No Cascade | Nothing happens to children |
If cascade is set to “Cascade All” and your parent Account has 50 child Contacts, deleting one Account actually deletes 51 records. Multiply that by 10,000 accounts and your “10,000 record delete” is actually 510,000 operations. Plan accordingly.

If you don’t want to cascade, delete children first, then parents. Or temporarily change the relationship behavior — but be careful with that in production.
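
You can check the configured delete behavior from code before committing to anything. A sketch using the SDK's RetrieveRelationshipRequest; the relationship name shown is the standard Account-to-Contact relationship, so substitute the schema name of the one you care about:

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

// Sketch: print the cascade-delete setting for a 1:N relationship.
public static void PrintDeleteCascade(IOrganizationService service)
{
    var response = (RetrieveRelationshipResponse)service.Execute(
        new RetrieveRelationshipRequest { Name = "contact_customer_accounts" });

    if (response.RelationshipMetadata is OneToManyRelationshipMetadata oneToMany)
    {
        // CascadeConfiguration.Delete is one of: Cascade, RemoveLink, Restrict, ...
        Console.WriteLine(
            $"Delete cascade for {oneToMany.SchemaName}: " +
            $"{oneToMany.CascadeConfiguration.Delete}");
    }
}
```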

3. Take a Backup

This should be obvious, but: take an on-demand backup of the environment before running a bulk delete. Dataverse has no recycle bin for bulk operations. Once the records are gone, they’re gone. The admin center lets you create a manual backup in about 30 seconds. Do it.

4. Run in Off-Hours

Bulk deletes consume server resources. Locking, transaction logging, index updates — they all compete with your users’ normal operations. Schedule bulk deletes for nights or weekends when nobody is in the system.

Choosing the Right Approach

| Scenario | Best Option |
| --- | --- |
| Scheduled cleanup of old records by simple filter | Bulk Delete job |
| One-time cleanup during development | XrmToolBox |
| Programmatic deletion with complex logic | SDK — ExecuteMultiple or DeleteMultiple |
| Small deletion (< 500 records) inside a business flow | Power Automate (it’s fine) |
| Millions of records, fastest possible | SDK — DeleteMultiple with parallelism |
| Nightly purge as part of a release pipeline | Scheduled console app with SDK |

The right tool depends on the volume, the frequency, and whether you need it automated. But for anything over a few hundred records, the answer is almost never Power Automate.

Wrapping Up

Bulk deletion in Dataverse isn’t complicated. The platform gives you good options for every scenario — from the zero-code bulk delete job to the high-performance DeleteMultiple API.

The mistakes I see are always the same: people reach for Power Automate because it’s familiar, they forget about cascade relationships, and they leave plugins active during the operation. Avoid those three things and your bulk deletes will run clean and fast.
