
Connecting D365 to Everything Else

A practitioner's guide to every integration option Dataverse offers — Webhooks, Service Bus, Virtual Tables, Dual-Write, Power Automate, Custom APIs, Web API, and the .NET SDK. When to use each, when to avoid them, and how to pick the right one.


Every D365 project eventually becomes an integration project. Dataverse is never the only system. There’s always an ERP on the other side, a data warehouse, a third-party SaaS product, a legacy system with a SOAP endpoint, or a mobile app that needs data.

After 14+ years and north of 50 implementations, I’ve used every integration pattern Dataverse offers. Some are great. Some are oversold. Some are actively dangerous if you pick them for the wrong scenario.

This is the guide I wish I had when I started. For each pattern, I’ll tell you what it does, when to use it, when to run away from it, and what the failure modes look like.

The eight patterns

Here’s the full list:

  1. Webhooks — real-time HTTP push from Dataverse
  2. Azure Service Bus — reliable messaging with queues/topics
  3. Virtual Tables — expose external data as Dataverse tables without copying
  4. Dual-Write — bidirectional sync between D365 CE and Finance & Operations
  5. Power Automate — low-code integration middleware
  6. Custom APIs — expose Dataverse logic as callable endpoints
  7. Web API (OData) — direct HTTP calls into Dataverse
  8. Dataverse SDK for .NET — strongly typed .NET client

Let’s go through each one.


1. Webhooks

Webhooks are the simplest real-time push mechanism. You register a URL, pick a message (Create, Update, Delete), and Dataverse sends an HTTP POST to your endpoint every time that event fires. The payload is JSON containing the entity record and execution context.

When to use it

  • You need real-time notification of data changes
  • The target system has an HTTP endpoint that can receive POST requests
  • You want something simpler than Service Bus
  • Your integration is fire-and-forget (or you handle retries yourself)

When NOT to use it

  • You need guaranteed delivery. Webhooks have a retry policy (3 retries within about 1 minute), but if your endpoint is down for an extended period, messages are lost
  • Your target system can’t handle the volume. A bulk import of 10,000 records will fire 10,000 webhooks
  • You need to process messages in order
  • The endpoint is slow. Synchronous webhooks block the Dataverse pipeline. Async webhooks don’t block, but you lose the ability to cancel the operation

Performance characteristics

Synchronous webhooks add their execution time directly to the user’s save operation. If your endpoint takes 2 seconds to respond, the user waits 2 extra seconds. The hard timeout is 60 seconds for sync webhooks. Async webhooks run in the background with the async service, which has its own queue and processing delays.

Reliability

This is where webhooks fall short. The retry window is short. There’s no dead-letter queue. There’s no built-in monitoring dashboard. If you need reliable delivery, use Service Bus instead. I’ve seen organizations build elaborate retry infrastructure around webhooks to compensate — at that point, you’ve reinvented Service Bus badly.

Registration example (via the Plugin Registration Tool):

Step: Create of account
Event Pipeline: PostOperation
Execution Mode: Asynchronous
Type: Webhook
Endpoint URL: https://myapi.azurewebsites.net/api/dataverse/account-created
Authentication: WebhookKey (passed as x-ms-dynamics-key header)
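On the receiving side, the endpoint should validate the key and acknowledge quickly. Here’s a minimal sketch of the handler logic (the secret value is hypothetical; Dataverse posts a RemoteExecutionContext JSON body and sends the registered key in the x-ms-dynamics-key header):

```python
import json

# Shared secret configured at webhook registration (hypothetical value).
WEBHOOK_KEY = "my-shared-secret"

def handle_webhook(headers: dict, body: bytes) -> tuple[int, str]:
    """Validate the WebhookKey header and pull the basics out of the
    RemoteExecutionContext JSON that Dataverse posts."""
    # Dataverse sends the registered key in the x-ms-dynamics-key header.
    if headers.get("x-ms-dynamics-key") != WEBHOOK_KEY:
        return 401, "bad key"
    ctx = json.loads(body)
    # MessageName / PrimaryEntityName are standard RemoteExecutionContext fields.
    message = ctx.get("MessageName")       # e.g. "Create"
    entity = ctx.get("PrimaryEntityName")  # e.g. "account"
    # Do the real work asynchronously; return 200 fast so you don't hold
    # up the Dataverse pipeline (sync) or burn async-service retries.
    return 200, f"{message}:{entity}"
```

Returning quickly matters more than it looks: a slow 200 is indistinguishable from a hang as far as the retry policy is concerned.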

My take

Webhooks are great for low-stakes, real-time notifications where losing the occasional message isn’t catastrophic. Updating a search index. Sending a Slack notification. Triggering a cache refresh. For anything where message loss means data inconsistency, skip to Service Bus.


2. Azure Service Bus Integration

This is the enterprise-grade version of webhooks. Instead of posting to an HTTP endpoint, Dataverse drops messages onto an Azure Service Bus queue or topic. The message sits there until a consumer picks it up — even if that consumer is offline for hours.

When to use it

  • You need guaranteed delivery of every message
  • The consuming system might be offline or slow
  • You need to decouple the producer (Dataverse) from the consumer
  • You want multiple consumers processing the same events (use topics with subscriptions)
  • You need dead-letter handling for failed messages
  • You’re building an event-driven architecture

When NOT to use it

  • Simple point-to-point integrations where Power Automate would work fine
  • You don’t have Azure in your stack (the overhead of setting up Service Bus just for one integration is rarely worth it)
  • You need request-response patterns (Service Bus is one-way)

Performance characteristics

Messages are posted asynchronously by the Dataverse async service. There’s inherent latency — typically a few seconds, sometimes more under load. This is not a sub-second system. The async service processes messages in batches and can fall behind during bulk operations.

If you register the Service Bus step as synchronous, the message is posted during the pipeline execution, and the calling user waits. This is unusual but supported.

Reliability

This is the strong suit. Messages persist in the queue until consumed or expired (configurable TTL). Dead-letter queues catch poison messages. You can set up duplicate detection. You get Azure Monitor metrics, alerts, and diagnostic logs. In 50+ projects, I’ve never lost a message through Service Bus when it was configured correctly.

The setup:

  1. Create a Service Bus namespace in Azure
  2. Create a queue or topic
  3. Create a Shared Access Policy with Send rights
  4. Register the Service Bus endpoint in the Plugin Registration Tool using the SAS connection string
  5. Register a step on the message/entity/event you want

My take

If you’re already in Azure, this is the default choice for reliable event-driven integrations. The operational overhead is low, the cost is negligible (Basic tier handles most D365 workloads), and the reliability is proven. The only downside is that you need a consumer — something has to read from the queue. That’s usually an Azure Function or a Logic App.
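Whatever hosts that consumer, the processing loop needs the same ingredients: duplicate handling (Service Bus is at-least-once), a delivery-count cutoff, and a dead-letter path. A sketch of the pattern in plain Python — the real thing would use the azure-servicebus client inside an Azure Function, but the logic is the same:

```python
# Consumer-side pattern: dedup + poison-message handling. Illustrative
# only; message shape here is a plain dict, not an SDK object.
MAX_DELIVERIES = 5

def process_batch(messages, handler, seen_ids, dead_letter):
    """messages: list of dicts with 'id', 'delivery_count', 'body'."""
    for msg in messages:
        if msg["id"] in seen_ids:
            continue  # duplicate delivery (at-least-once semantics) — skip
        if msg["delivery_count"] > MAX_DELIVERIES:
            dead_letter.append(msg)  # poison message: park it for review
            continue
        handler(msg["body"])
        seen_ids.add(msg["id"])  # mark processed only after success
```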


3. Virtual Tables (Virtual Entities)

Virtual Tables let you expose external data as if it were a native Dataverse table. Users can see rows in views, open forms, even use the data in Advanced Find. But the data isn’t stored in Dataverse — every read goes to the external source in real time.

When to use it

  • You need to display external data in model-driven apps without copying it
  • The external system is the system of record and you don’t want duplicate data
  • You need read-only access to external data (write support exists but is limited and painful)
  • The external data volume is too large to copy into Dataverse

When NOT to use it

  • You need to write back to the external system from Dataverse forms (technically possible, but the UX is terrible and error handling is almost nonexistent)
  • The external API is slow. Every view load, every form open, every Advanced Find query hits the external API. If that API takes 3 seconds to respond, every user interaction takes 3+ seconds
  • You need to use the data in business rules, workflows, or plugins that expect local data
  • You need offline access (mobile/offline-first scenarios)
  • You need to join virtual table data with native table data in complex queries — this doesn’t work the way you’d expect

Performance characteristics

Every. Single. Read. Goes. To. The. External. API.

There is no caching by default. Open a view with 50 rows? That’s a query to the external system. Open a form? Another query. Navigate back to the view? Another query. The performance ceiling is entirely determined by your external API’s response time and throughput.

You can implement caching in your virtual table data provider, but then you’re managing cache invalidation, which is one of the two hard problems in computer science.
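If you do go down that road, the provider-side cache is usually a simple TTL map. A sketch of the idea (illustrative only — a real data provider runs as a .NET plugin inside Dataverse):

```python
import time

class TtlCache:
    """Minimal TTL cache of the kind you'd bolt onto a virtual table
    data provider to avoid hitting the external API on every read."""
    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock   # injectable for testing
        self._store = {}     # key -> (expires_at, value)

    def get_or_fetch(self, key, fetch):
        now = self.clock()
        hit = self._store.get(key)
        if hit and hit[0] > now:
            return hit[1]                    # fresh — skip the external API
        value = fetch(key)                   # stale or missing — call the API
        self._store[key] = (now + self.ttl, value)
        return value
```

The TTL is your staleness window: every second you cache is a second during which users can see data the external system has already changed.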

Reliability

If the external API is down, the virtual table is down. Users see errors. There’s no graceful degradation. No offline fallback. No “last known good” data. This makes virtual tables inappropriate for mission-critical workflows where the external system has less than 99.9% uptime.

My take

Virtual Tables are oversold. Microsoft demos them as this magical “single pane of glass” — just connect everything and it all works! In practice, the performance is usually poor, the write support is half-baked, and users hate waiting 3-5 seconds for a view to load when every other view loads instantly.

Where they shine: read-only reference data from a fast, reliable API. Product catalogs. Exchange rates. Employee directories backed by Microsoft Graph. Anything where the data is too large or too volatile to copy, and read-only is fine.

Where they fail: trying to make Dataverse a unified UI for slow enterprise systems. I’ve seen multiple projects abandon virtual tables after users revolted against the performance.


4. Dual-Write

Dual-Write is Microsoft’s solution for keeping data synchronized between Dynamics 365 CE (the CRM side) and Dynamics 365 Finance & Operations (the ERP side). It’s a near-real-time, bidirectional sync that runs on infrastructure managed by Microsoft.

When to use it

  • You have both D365 CE and D365 F&O and need shared entities (accounts/customers, contacts, products) to stay in sync
  • You want Microsoft-supported, out-of-the-box table maps
  • You’re on a greenfield project where you can design around Dual-Write’s constraints from the start

When NOT to use it

  • One-directional sync (Dual-Write is bidirectional by design, which adds complexity you don’t need for one-way flows)
  • You need to sync custom entities with complex transformation logic
  • Your data volumes are very high for initial sync (the initial sync can be painfully slow for millions of records)
  • You’re connecting to anything other than D365 F&O — Dual-Write is specifically for CE-to-F&O

Performance characteristics

Near-real-time, not truly real-time. Changes propagate in seconds under normal load. During peak periods or large bulk operations, the lag can grow. The initial sync for large tables (100K+ records) can take hours and sometimes fails partway through, requiring restarts.

There’s a concept of “live sync” (ongoing changes) vs “initial sync” (backfill). Live sync is generally reliable. Initial sync is where most of the pain lives.

Reliability

Dual-Write has improved significantly since its early days, but it still has rough edges. Conflict resolution for bidirectional updates is simplistic (last write wins by default). Error handling requires monitoring the Dual-Write admin page in the Power Platform admin center. Failed records pile up in an error log that someone needs to review.

The dependency on Microsoft’s infrastructure means you’re at the mercy of service health. When Dual-Write has issues, your options are limited to opening a support ticket and waiting.

My take

If you’re in a CE + F&O environment, Dual-Write is the path of least resistance for standard entity sync. The out-of-the-box maps for accounts, contacts, and products save a lot of custom development. But don’t try to force it into scenarios it wasn’t designed for. Complex transformations, conditional sync logic, or non-Microsoft targets — use something else.

Also: test initial sync with production-scale data volumes early in the project. I’ve seen go-lives delayed because initial sync took 12 hours instead of the 30 minutes it took in dev with 500 records.


5. Power Automate as Integration Middleware

Power Automate is the Swiss Army knife of D365 integration. It has 500+ connectors, a visual designer, built-in error handling, and it’s included in most D365 licenses. For a huge percentage of integrations, it’s the right answer.

When to use it

  • 80% of integrations. Seriously. If the requirement is “when X happens in Dataverse, do Y in another system” or “pull data from Z into Dataverse on a schedule,” Power Automate handles it
  • You want maintainability — your team can see what the flow does without reading code
  • The integration is relatively simple: trigger, transform, send
  • You need it built fast. A Power Automate flow takes hours to build. A custom Azure Function integration takes days
  • You want built-in retry policies, error notifications, and run history

When NOT to use it

  • High-volume, high-throughput scenarios. Power Automate has request limits tied to your license. The Performance plan helps, but at 10,000+ actions per flow run or 100,000+ runs per day, you’re fighting the platform
  • Sub-second latency requirements. Power Automate flows typically take 2-10 seconds to start executing after the trigger fires
  • Complex data transformations. The expression language is functional but limited. If you’re writing nested if() expressions 4 levels deep inside a json() inside a replace(), you’ve left the sweet spot
  • Binary data processing. Large file handling in Power Automate is awkward and memory-constrained
  • Long-running processes. Flow runs have a 30-day timeout, but individual actions time out much sooner, and keeping a flow running for hours is fragile
  • When you need transactional consistency with the triggering operation (use a plugin for that)

Performance characteristics

The “Automated” trigger for Dataverse fires within seconds of the event, but not instantly. Scheduled triggers run at their configured interval (minimum 1 minute for most plans). There’s cold-start latency for the first action in a flow. Each action adds a few hundred milliseconds of overhead.

The big gotcha: API request limits. Every connector action counts against your daily limit. A flow that makes 10 API calls per run, triggered 1,000 times per day, burns 10,000 requests. The default limit for most D365 licenses is 40,000 requests per 24 hours per user (or per flow for service principal runs). Hit the limit and your flows get throttled — they still run, but slower, with 429 retry delays.
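The arithmetic is worth making explicit at design time. A throwaway check like this (the 40,000 figure is the article’s example limit, not a statement of any particular license’s terms) catches budget problems before go-live:

```python
# Back-of-envelope API budget check for a Power Automate flow.
def daily_requests(actions_per_run: int, runs_per_day: int) -> int:
    # Every connector action counts against the daily limit.
    return actions_per_run * runs_per_day

def within_budget(actions_per_run: int, runs_per_day: int,
                  daily_limit: int = 40_000) -> bool:
    return daily_requests(actions_per_run, runs_per_day) <= daily_limit
```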

Reliability

Power Automate’s reliability is surprisingly good for most use cases. Built-in retry (up to 4 times by default, configurable). Run history for debugging. Configure error handling with runAfter on failed branches. Send failure notifications.

The failure modes I see most often:

  • Throttling during bulk operations
  • Connector token expiration (especially for on-premises data gateway connections)
  • Flows silently disabled after repeated failures
  • Concurrent modification issues when multiple flow runs touch the same record

Power Automate vs Azure Logic Apps

This comes up constantly. Here’s the honest comparison:

| Factor | Power Automate | Logic Apps |
|---|---|---|
| Designer | Simpler, newer (Cloud flows) | More mature, supports code view |
| Pricing | Per-user or per-flow license | Pay-per-execution (Consumption) or fixed (Standard) |
| Connectors | Same connector ecosystem | Same connector ecosystem |
| Hosting | Microsoft-managed | Azure-managed (Standard can run in your own App Service) |
| Enterprise features | Limited. No VNET integration, limited custom connectors | VNET integration, ISE (deprecated but still in use), private endpoints |
| Governance | Power Platform admin center, DLP policies | Azure Portal, Azure Policy, RBAC |
| When to pick | Internal business integrations, team-maintained flows | High-security, high-throughput, or infrastructure-as-code requirements |

My rule of thumb: start with Power Automate. Move to Logic Apps when you hit a wall — usually around network isolation (VNET), throughput, or when your DevOps team needs ARM/Bicep deployment of integration logic.

My take

Power Automate is genuinely good middleware for 80% of D365 integrations. It’s fast to build, easy to monitor, and the connector ecosystem is massive. The problem is the other 20%. When you need high throughput, complex transformations, or precise error handling, Power Automate becomes a liability. I’ve seen teams build 200-action flows with nested loops and conditional branches that nobody can debug. At that point, write code.

The 80/20 rule: if your flow has more than 30 actions, or you’re spending more time fighting the expression language than writing the actual logic, stop. Write an Azure Function. You’ll ship faster and sleep better.


6. Custom APIs

Custom APIs are user-defined messages in Dataverse that you can call like any built-in action. You define the request parameters and response properties, write a plugin to handle the logic, and then call it via the Web API, from Power Automate, or from other plugins.

When to use it

  • You need to expose Dataverse-side business logic as a callable endpoint
  • You want to create reusable operations that can be called from multiple clients (Web API, Power Automate, other plugins, client-side JavaScript)
  • You’re replacing Custom Actions (the older mechanism) — Custom APIs are the modern replacement
  • You need to enforce business logic on the server regardless of what client calls it

When NOT to use it

  • Simple CRUD operations. The Web API already handles create/read/update/delete. Don’t wrap basic CRUD in a Custom API just for the sake of abstraction
  • As a general-purpose API gateway. Custom APIs run inside Dataverse, with Dataverse’s execution constraints (2-minute timeout, limited CPU/memory). For heavy processing, call an external service
  • When Power Automate can orchestrate the same logic without code

Performance characteristics

Custom APIs execute as plugins — same pipeline, same constraints. Synchronous execution, 2-minute timeout, 256MB memory limit (approximate, not officially documented at a fixed number). They’re fast for data operations against Dataverse but become bottlenecks if they call external services synchronously.

Reliability

Same as plugins. They run inside the Dataverse transaction if registered in the pipeline. They participate in the plugin execution pipeline’s error handling. If the plugin throws, the caller gets an error response.

Example

Define a Custom API called myorg_CalculateDiscount:

Request parameters:

  • AccountId (EntityReference)
  • OrderTotal (Decimal)

Response properties:

  • DiscountPercent (Decimal)
  • DiscountReason (String)

Call it from the Web API:

POST /api/data/v9.2/myorg_CalculateDiscount
Content-Type: application/json

{
  "AccountId": {
    "@odata.type": "Microsoft.Dynamics.CRM.account",
    "accountid": "00000000-0000-0000-0000-000000000001"
  },
  "OrderTotal": 15000.00
}

Response:

{
  "DiscountPercent": 12.5,
  "DiscountReason": "Gold tier customer, order over $10K"
}
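From external code, calling that Custom API is just an authenticated POST. A hypothetical Python helper that builds the request above (token acquisition via OAuth client credentials is assumed to happen elsewhere):

```python
import json

def build_discount_request(org_url: str, account_id: str,
                           order_total: float, token: str):
    """Build the URL, headers, and JSON body for the myorg_CalculateDiscount
    Custom API call shown above. Sending it is left to your HTTP client."""
    url = f"{org_url}/api/data/v9.2/myorg_CalculateDiscount"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0",
    }
    body = json.dumps({
        "AccountId": {
            "@odata.type": "Microsoft.Dynamics.CRM.account",
            "accountid": account_id,
        },
        "OrderTotal": order_total,
    })
    return url, headers, body
```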

My take

Custom APIs are underused. Most teams I work with either put all their logic in Power Automate (too fragile for complex rules) or in client-side JavaScript (not enforceable). Custom APIs give you a clean contract, server-side enforcement, and reusability across every client. If you’re building anything with real business logic beyond basic CRUD, define Custom APIs for your key operations.


7. Web API (OData) — Direct Calls

The Dataverse Web API is an OData v4 REST endpoint. Any system that can make HTTP requests can talk to Dataverse. This is the most universal integration pattern — it works from any language, any platform, any cloud.

When to use it

  • External systems need to read or write Dataverse data
  • You’re building a custom UI or mobile app that talks to Dataverse
  • You need batch operations (the $batch endpoint supports change sets)
  • Your integration partner’s system can make REST calls but doesn’t have a Power Automate connector

When NOT to use it

  • From within Dataverse plugins (use the Organization Service / SDK instead — it’s faster and participates in the transaction)
  • When a pre-built connector exists in Power Automate and meets your needs (why write HTTP calls by hand?)
  • For real-time event-driven patterns (you’d be polling; use webhooks or Service Bus instead)

Performance characteristics

The Web API is fast. Single record operations complete in 50-200ms from the same Azure region. Batch operations can include up to 1,000 requests per batch. The $batch endpoint with change sets gives you transactional semantics — all-or-nothing for a set of operations.
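For reference, a change set inside $batch is a nested multipart/mixed payload, roughly like this (boundary names are arbitrary; the exact header rules follow the OData v4 batch format):

```http
POST /api/data/v9.2/$batch HTTP/1.1
Content-Type: multipart/mixed; boundary=batch_boundary

--batch_boundary
Content-Type: multipart/mixed; boundary=changeset_boundary

--changeset_boundary
Content-Type: application/http
Content-Transfer-Encoding: binary
Content-ID: 1

POST /api/data/v9.2/accounts HTTP/1.1
Content-Type: application/json

{ "name": "Contoso Ltd" }

--changeset_boundary--

--batch_boundary--
```

Everything inside the change set succeeds or fails as one unit; requests outside a change set run independently.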

Throttling is the main concern. Dataverse enforces API protection limits:

  • Per-user: 6,000 requests in a 5-minute sliding window (roughly 20 requests/second sustained)
  • Concurrent: 52 concurrent requests per user
  • Execution time: a cumulative execution-time cap (on the order of 20 minutes of server time per user within the same 5-minute window)

When you hit the limit, you get HTTP 429 (Too Many Requests) with a Retry-After header. Respect it. Don’t retry immediately — back off for the indicated duration.

Retry strategy

1. Send request
2. If 429: wait for Retry-After duration, then retry
3. If 5xx: exponential backoff (1s, 2s, 4s, 8s) up to 3 retries
4. If 412 (Precondition Failed): re-read the record, re-apply changes, retry
5. If 401: refresh OAuth token, retry once

Always implement this. Every. Single. Client. I’ve lost count of the production incidents caused by integration code that didn’t handle 429s.
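The ladder above, minus the app-specific 412/401 steps, can be sketched as a small loop. `send` performs one HTTP attempt and returns the status plus any Retry-After value; `sleep` is injectable so the logic is testable without real waiting:

```python
import time

def send_with_retries(send, sleep=time.sleep, max_retries=3):
    """Retry a Dataverse Web API call on 429 and 5xx responses.
    send() -> (status_code, retry_after_seconds_or_None)."""
    backoff = 1.0
    status = None
    for _ in range(max_retries + 1):
        status, retry_after = send()
        if status == 429:
            sleep(retry_after or backoff)  # honor Retry-After if present
        elif 500 <= status < 600:
            sleep(backoff)                 # exponential backoff: 1s, 2s, 4s
            backoff *= 2
        else:
            return status                  # success, or non-retryable (e.g. 400)
        # 412: re-read and re-apply; 401: refresh token — both are
        # app-specific and omitted from this sketch.
    return status
```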

My take

The Web API is the bread and butter of Dataverse integration. It’s well-documented, well-supported, and predictable. The only mistake people make is treating it like an unlimited resource. Respect the throttling limits. Implement retries. Use batch operations for bulk scenarios. Use Prefer: odata.maxpagesize to control result sizes. You’ll be fine.


8. Dataverse SDK for .NET

The SDK is a set of NuGet packages (Microsoft.PowerPlatform.Dataverse.Client for the modern client, or the older Microsoft.CrmSdk.CoreAssemblies) that give you strongly typed .NET access to Dataverse. It’s what plugins use internally, and it’s what you’d use from a .NET console app, Azure Function, or ASP.NET service.

When to use it

  • You’re building a .NET application that integrates with Dataverse
  • You want strongly typed entities (early-bound classes generated from your Dataverse schema)
  • You’re writing plugins (you’re already using it)
  • You need features not easily accessible via the Web API (some metadata operations, solution manipulation)
  • You want connection pooling and automatic token management

When NOT to use it

  • Your integration is in Python, Java, Node.js, or any non-.NET language (use the Web API)
  • Simple integrations where Power Automate would suffice
  • Client-side code (JavaScript in model-driven apps should use Xrm.WebApi, not the .NET SDK)

Performance characteristics

Under the hood, the modern ServiceClient uses the Web API, so performance is comparable. The advantage is connection management — the SDK handles token refresh, connection pooling, and retry logic for you. It also supports ExecuteMultipleRequest for batching up to 1,000 operations in a single round trip.

For plugins specifically, the IOrganizationService provided by the plugin context is optimized for in-process calls and doesn’t incur HTTP overhead.

Reliability

The SDK handles transient failures better than raw HTTP calls because it has built-in retry logic. ServiceClient retries on 429s and transient 5xx errors automatically (configurable).

Modern setup

using Microsoft.PowerPlatform.Dataverse.Client;

var connectionString = 
    "AuthType=ClientSecret;" +
    "Url=https://myorg.crm.dynamics.com;" +
    "ClientId=00000000-0000-0000-0000-000000000000;" +
    "ClientSecret=your-secret";

using var client = new ServiceClient(connectionString);

// Create a record
var account = new Entity("account");
account["name"] = "Contoso Ltd";
Guid id = client.Create(account);

// Execute a Custom API
var request = new OrganizationRequest("myorg_CalculateDiscount");
request["AccountId"] = new EntityReference("account", id);
request["OrderTotal"] = 15000.00m;
var response = client.Execute(request);
decimal discount = (decimal)response["DiscountPercent"];

My take

If you’re in .NET, use the SDK. The ServiceClient is mature, handles retry logic correctly, and saves you from writing HTTP boilerplate. Early-bound entities catch schema errors at compile time instead of at runtime. The tooling (pac modelbuilder build) generates the entity classes from your environment.

The one trap: don’t use the SDK from client-side Blazor or other browser-hosted .NET code. The SDK is for server-side use. In the browser, use the Web API through Xrm.WebApi or direct fetch calls.


Decision Diagram: Which Pattern to Pick

Here’s how I walk through the decision for a new integration requirement.

flowchart TD
  A[New integration requirement] --> B{Direction?}
  B -->|Dataverse → External| C{Need guaranteed delivery?}
  B -->|External → Dataverse| D{Caller platform?}
  B -->|Bidirectional CE ↔ F&O| E[Dual-Write]
  B -->|Display external data in D365| F{External API fast & reliable?}
  C -->|Yes| G[Azure Service Bus]
  C -->|No| H{Latency requirement?}
  H -->|Real-time, sub-second| I[Webhook - sync]
  H -->|Near-real-time, seconds OK| J{Complex logic needed?}
  J -->|Yes, > 30 actions| K[Webhook or Service Bus → Azure Function]
  J -->|No, simple flow| L[Power Automate - Dataverse trigger]
  D -->|.NET| M[Dataverse SDK]
  D -->|Any language / REST| N[Web API - OData]
  D -->|Low-code / citizen dev| O[Power Automate]
  F -->|Yes, < 500ms response| P[Virtual Table - read-only]
  F -->|No or unreliable| Q[Sync data into Dataverse on schedule]
  Q --> O
  Q --> N
  style G fill:#2d6a4f,color:#fff
  style L fill:#2d6a4f,color:#fff
  style E fill:#e76f51,color:#fff
  style P fill:#e9c46a,color:#000

Read it top to bottom. The first branch is direction — where does data originate, and where does it need to go? That alone eliminates half the options.


Patterns I See Overengineered

Let me be direct about what I see teams overcomplicate.

1. Custom middleware for simple event forwarding

I’ve reviewed architectures where a team built an Azure Function, API Management layer, Service Bus queue, another Azure Function, and a database — just to forward Dataverse events to a third-party API. Power Automate with the Dataverse trigger and an HTTP action would have taken 20 minutes to build and been easier to monitor.

Unless you need more than 40,000 API calls per day, or sub-second latency, or complex transformation logic, Power Automate is the right starting point.

2. Virtual Tables for everything

The “single pane of glass” pitch is seductive. But every time I’ve seen a project try to surface a slow ERP system through virtual tables, the result is the same: users complain about performance, someone builds a cache layer, and eventually the team gives up and syncs the data into Dataverse on a schedule. Save yourself six months and start with the sync.

3. Dual-Write for non-standard scenarios

Dual-Write works well for its designed purpose (CE ↔ F&O standard entities). Trying to extend it to custom entities with complex transformation logic is a project risk. Use Data Integration (the older mechanism), Azure Data Factory, or Power Automate scheduled sync instead.


API Limits and Throttling: The Numbers That Matter

Every integration pattern is subject to Dataverse’s API protection limits. Here are the numbers that actually matter in production:

| Limit | Value | What happens |
|---|---|---|
| Per-user API requests | 6,000 per 5-min window | HTTP 429, Retry-After header |
| Concurrent requests per user | 52 | HTTP 429 |
| Per-org API requests | 6,000 × licensed users (minimum) | HTTP 429 |
| ExecuteMultiple batch size | 1,000 operations max | Error if exceeded |
| Plugin execution time | 2 minutes | Operation cancelled |
| Power Automate daily actions | License-dependent (typically 10K–500K) | Flows throttled or suspended |
| Service Bus message size | 256 KB (Basic/Standard) | Message rejected |

The per-user limit is the one that bites most often. A Power Automate flow running under a service account counts all its API calls against that one account’s limit. Run three high-volume flows under the same service account and they compete for the same 6,000-per-5-minutes budget.

Solution: use separate app registrations (service principals) for different integration workloads. Each service principal gets its own API budget.


Retry Strategies by Pattern

Every integration needs a retry strategy. Here’s what I use for each pattern:

Power Automate: Built-in. Configure retry policy on each action (fixed interval or exponential backoff). Default is 4 retries. For Dataverse actions, the connector handles 429 retries automatically.

Web API / SDK calls from external code: Implement exponential backoff with jitter. Start at 1 second, double each time, add random jitter of 0-500ms, cap at 30 seconds. Respect Retry-After headers — they override your backoff.

wait_time = min(base * 2^attempt + random(0, 500ms), 30s)
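That formula as code, with the values from the paragraph above (1-second base, 0–500ms jitter, 30-second cap):

```python
import random

def backoff_with_jitter(attempt: int, base: float = 1.0,
                        cap: float = 30.0) -> float:
    """wait_time = min(base * 2^attempt + jitter, cap),
    with jitter drawn uniformly from [0, 0.5) seconds."""
    return min(base * (2 ** attempt) + random.uniform(0, 0.5), cap)
```

The jitter is the part people skip and shouldn’t: without it, every throttled client retries at the same instant and re-creates the spike that got them throttled.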

Webhooks: Dataverse retries 3 times over ~1 minute. After that, the message is gone. If you need more, add your own retry queue on the receiving end.

Service Bus consumers: Use the built-in Service Bus retry. Dead-letter messages after N failed attempts. Build a dead-letter processor that alerts your team and allows manual resubmission.

Virtual Tables: No retry for reads — the user sees an error. Implement circuit breaker logic in your data provider to fail fast rather than hanging when the external API is down.
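A minimal circuit breaker of the kind that last point suggests (thresholds and cooldown are illustrative; in a real data provider this would wrap the external API call):

```python
import time

class CircuitBreaker:
    """After `threshold` consecutive failures, fail fast for `cooldown`
    seconds instead of calling the external API again."""
    def __init__(self, threshold=3, cooldown=30.0, clock=time.monotonic):
        self.threshold = threshold
        self.cooldown = cooldown
        self.clock = clock      # injectable for testing
        self.failures = 0
        self.open_until = 0.0

    def call(self, fn):
        if self.clock() < self.open_until:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                # Trip the breaker: stop hammering a dead API.
                self.open_until = self.clock() + self.cooldown
                self.failures = 0
            raise
        self.failures = 0       # success resets the failure count
        return result
```

Failing fast turns a 3-second hang per request into an immediate, explainable error while the external system recovers.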


What I’d Actually Recommend

After all those patterns, here’s what I tell teams on day one of an integration project.

Start with Power Automate. For most D365 integrations — syncing records between systems, triggering external processes on data changes, pulling data on a schedule — Power Automate is fast to build, easy to monitor, and good enough. Your team doesn’t need to maintain custom code. The connector ecosystem covers most SaaS products. Build it in a day, ship it, move on.

Graduate to Azure Service Bus + Azure Functions when: you need guaranteed delivery, high throughput (thousands of events per minute), complex transformation logic, or integration patterns that Power Automate can’t express in under 30 actions.

Use the Web API or SDK when: external systems need to push data into Dataverse. There’s no shortcut here — they need to make API calls.

Use Custom APIs when: you have server-side business logic that multiple clients need to call. Don’t put that logic in Power Automate and don’t duplicate it across client scripts.

Use Virtual Tables sparingly. Read-only reference data from fast APIs. That’s the sweet spot. Anything else is a science project.

Use Dual-Write when: you have CE + F&O and you’re syncing standard entities. Don’t fight it for custom scenarios.

The integration pattern isn’t the hard part. The hard part is error handling, monitoring, and knowing what to do when something fails at 2 AM. Pick the simplest pattern that meets your requirements, invest your time in observability and alerting, and you’ll have integrations that run for years without drama.
