Dataverse or SharePoint Connector? Why It Matters More Than You Think
Trigger filters, FetchXML vs OData, pagination, throttling, and lookup handling — a practical comparison of the two connectors developers mix up most.
If you build flows against both Dataverse and SharePoint, you’ve already run into this: an approach that works perfectly with one connector does something completely different — or just doesn’t exist — in the other. The connectors look similar on the surface, but under the hood they have different trigger behavior, different query capabilities, different throttling rules, and different ways of handling column types.
I answer questions about these differences constantly. This article is the reference I wish I had when I started building flows against both data sources in the same week.
Trigger Differences
Both connectors have triggers that fire when data changes. The names even sound alike. But the behavior is not the same.
Dataverse gives you “When a row is added, modified, or deleted” — a single trigger that covers all three operations. You pick which combination you want. It’s backed by the Dataverse event pipeline, meaning it fires reliably on any data change regardless of what caused it (a user, a plugin, another flow, an API call).
SharePoint splits this into separate triggers: “When an item is created” and “When an item or a file is modified.” These triggers work by polling the list on an interval — every few minutes, depending on your plan and recurrence settings. That polling model has a subtle consequence: if an item is created and then updated several times within a single polling interval, those changes collapse into a single trigger run, and your flow sees only the item’s latest state, not each intermediate change.
Trigger Filtering
This is where the gap gets wide.
The Dataverse trigger lets you set Filter Rows (an OData filter applied server-side) and Select Columns (so the trigger only fires when specific columns change). This is a big deal. You can set a trigger to fire only when the statuscode column changes on rows where ownerid equals a specific team. The filter happens before the flow even starts, so you’re not burning flow runs on irrelevant changes.
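As a concrete sketch of that scenario — the column names and GUID are placeholders for your own schema — the trigger settings would look something like:

```text
Filter rows:    _ownerid_value eq 00000000-0000-0000-0000-000000000001
Select columns: statuscode
```

Filter rows takes standard OData syntax (Dataverse GUID literals go unquoted), while Select columns is a comma-separated list of logical column names.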
SharePoint triggers have no equivalent. There’s no server-side row filter on the trigger itself. The “When an item or a file is modified” trigger does let you scope to a specific folder in a document library, and you can filter by view — but that’s it. If you need to react only to certain field changes or certain record conditions, you have to let the trigger fire on everything and either set a trigger condition in the trigger’s settings or add a condition action at the top of your flow to exit early. Either way the connector keeps polling and still can’t tell you which column changed — and the condition-action approach also costs you flow runs, API calls, and run-history noise to wade through.
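Of the two workarounds, trigger conditions are the cheaper one: an expression in the trigger’s settings that, when it evaluates false, suppresses the run entirely — no run-history entry, no wasted actions. A sketch, assuming a choice column named Status (both names are placeholders):

```text
@equals(triggerOutputs()?['body/Status/Value'], 'Approved')
```

The connector still polls on its interval; the condition only decides whether a run is started.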
Querying Data: FetchXML vs OData
When you need to pull records inside a flow, the two connectors take very different approaches.
Dataverse “List Rows” supports both OData query syntax and FetchXML. OData filters work fine for straightforward queries, but FetchXML gives you aggregations, linked entity joins, outer joins, and complex condition groups that OData flat-out cannot express. If you’ve worked with Dataverse for any length of time, you probably have a FetchXML builder open in another tab right now.
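For illustration, here is the kind of query FetchXML handles and a flat OData filter can’t: total estimated value of opportunities, grouped by owner, restricted to accounts in a given industry. The entity and attribute names follow the standard Dynamics 365 sales schema — adjust them for your own tables:

```xml
<fetch aggregate="true">
  <entity name="opportunity">
    <!-- aggregation and group-by in a single query -->
    <attribute name="estimatedvalue" alias="total" aggregate="sum" />
    <attribute name="ownerid" alias="owner" groupby="true" />
    <!-- join to the related account and filter on it -->
    <link-entity name="account" from="accountid" to="customerid" link-type="inner">
      <filter>
        <condition attribute="industrycode" operator="eq" value="1" />
      </filter>
    </link-entity>
  </entity>
</fetch>
```

Paste this into the Fetch Xml Query parameter of List Rows and leave the OData parameters empty.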
SharePoint “Get Items” supports OData filter queries and CAML queries (via the “Send an HTTP request to SharePoint” action). The built-in OData support is limited — no $expand, restricted function support, and certain column types (like multi-select choice) don’t filter correctly. CAML is more powerful but requires constructing raw XML and using the HTTP action, which makes flows harder to read and maintain.
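For reference, a minimal CAML call — the list title and field name are placeholders. In “Send an HTTP request to SharePoint”, set the method to POST, the Uri to `_api/web/lists/getbytitle('Projects')/GetItems`, and the body to something like:

```json
{
  "query": {
    "__metadata": { "type": "SP.CamlQuery" },
    "ViewXml": "<View><Query><Where><Eq><FieldRef Name='Status'/><Value Type='Choice'>Active</Value></Eq></Where></Query><RowLimit>100</RowLimit></View>"
  }
}
```

Because of the `__metadata` wrapper, this request needs the verbose OData headers (`Accept` and `Content-Type` both set to `application/json;odata=verbose`).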
Pagination
Dataverse’s List Rows action returns up to 5,000 rows per page by default and supports automatic pagination via the action settings toggle. You can set a Row Count and it will page through until it hits that number. When you enable pagination, the action handles the @odata.nextLink behind the scenes.
SharePoint’s Get Items action returns 100 items per request by default, configurable up to 5,000 via the Top Count parameter. Going beyond a single page requires enabling the “Pagination” toggle in the action settings, and it works — but it can be slow on large lists, because each page is a separate API call to SharePoint Online.
Throttling and API Limits
Both connectors are subject to Power Platform request limits, but the back-end throttling is different.
| Limit | Dataverse Connector | SharePoint Connector |
|---|---|---|
| Connector type | Premium | Standard |
| API request limits | Governed by Power Platform request entitlements (per user/per flow) | Governed by Power Platform request limits + SharePoint service limits |
| Service throttling | Dataverse returns HTTP 429 with a Retry-After header; the connector retries automatically | SharePoint returns HTTP 429 or 503; retry behavior is less predictable |
| Concurrent connections | Up to 52 concurrent requests per user to the org | SharePoint throttles per tenant; heavy usage by any app affects your flow |
| Batch operations | Supported via Dataverse “Perform a changeset request” | Not natively supported; bulk operations require looping |
The practical impact: Dataverse handles high-throughput scenarios better because it was designed for transactional workloads. SharePoint throttling is shared across the entire tenant, so a busy Power App or a Teams site with heavy traffic can throttle your flow even if the flow itself is modest.
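When the built-in retry behavior isn’t enough, Power Automate lets you set a retry policy per action, either in the action’s settings pane or in peek-code view. A sketch with example values — tune the count and intervals to your workload:

```json
"retryPolicy": {
  "type": "exponential",
  "count": 5,
  "interval": "PT30S",
  "maximumInterval": "PT10M"
}
```

The intervals are ISO 8601 durations; `fixed` and `none` are the other supported policy types.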
Column Type Handling
This is the part that catches developers off guard the most.
Lookups
In Dataverse, a lookup column stores a GUID reference to another table. When you read a row in Power Automate, you get the raw GUID (like _ownerid_value) and the formatted display name in the dynamic content. Setting a lookup requires the entity set name and the record GUID. The syntax in an update action looks like:
Account (Accounts): `/accounts(00000000-0000-0000-0000-000000000001)`
In SharePoint, a lookup column stores an integer ID referencing a row in another list. When you read an item, you get both the LookupId (integer) and LookupValue (display text). Setting a lookup requires only the integer ID. Much simpler in theory — but if the lookup list has thousands of items, performance degrades, and the list view lookup threshold (by default, 12 lookup-type columns per view) will start throwing query throttling errors.
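One convenience of the integer model: it’s also what you filter on. In a Get Items Filter Query, a lookup column can be queried through its hidden `Id` companion field — the column name and value here are placeholders:

```text
Filter Query: ProjectId eq 42
```

That is, a lookup column named Project exposes a `ProjectId` field holding the referenced item’s integer ID.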
Choice Columns
Dataverse returns choice values as integers with a separate label property. You need to map the integer to the label yourself unless you use the formatted value. Multi-select choices come back as comma-separated integers.
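If you want the label without maintaining your own mapping, Dataverse attaches a formatted-value annotation to every choice column in the response, reachable by expression. A sketch — the loop name and column are placeholders:

```text
items('Apply_to_each')?['statuscode@OData.Community.Display.V1.FormattedValue']
```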
SharePoint returns choice values as their text labels directly. Multi-select choices come back as an array of strings. Easier to work with in flows, but you lose the ability to reliably compare against a stable identifier (someone renames a choice label, and your flow condition breaks).
Date/Time
Dataverse always stores dates in UTC. The connector returns ISO 8601 strings. Time zone conversion is your responsibility.
SharePoint stores dates in UTC but the connector often returns them in the site’s regional settings time zone, depending on the action and column configuration. This inconsistency has caused more bugs in my flows than I’d like to admit.
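A safer habit than trusting either connector is to normalize explicitly. Power Automate’s convertFromUtc expression handles the Dataverse side — the column name and time zone are placeholders (time zones use Windows IDs):

```text
convertFromUtc(triggerOutputs()?['body/createdon'], 'W. Europe Standard Time', 'yyyy-MM-dd HH:mm')
```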
The Full Comparison Table
| Feature | Dataverse Connector | SharePoint Connector |
|---|---|---|
| License | Premium | Standard |
| Trigger type | Event-based (real-time) | Polling-based (interval) |
| Trigger filtering | OData filter rows + select columns | Folder/view only |
| Query language | OData + FetchXML | OData (limited) + CAML via HTTP |
| Max rows per page | 5,000 | 5,000 (default 100) |
| Batch operations | Native changeset support | No native batch |
| Lookup handling | GUID-based, requires entity path | Integer ID, simpler syntax |
| Choice values | Integer + label | String label only |
| Date/time | Always UTC | Depends on site settings |
| Offline / mobile | Works with model-driven app offline mode | Limited offline support |
| File storage | File/image columns (limited size) | Full document library support |
| Throttle scope | Per-org | Per-tenant (shared) |
| Delegation from canvas apps | Strong delegation support | Limited delegation |
When to Use Which
This isn’t always a choice. Sometimes the data already lives in SharePoint and moving it isn’t realistic. But when you do have a choice, here’s how I think about it.
Use Dataverse when:
- You need reliable, event-driven triggers that don’t fire on irrelevant changes
- Your flow processes large datasets or needs server-side aggregation
- You need relational integrity (proper lookups with referential constraints, not just a column pointing at another list)
- You’re already using model-driven apps, Dynamics 365, or any Power Platform app that stores data in Dataverse
- You need row-level security applied automatically to your flow’s data operations
Use SharePoint when:
- You need document storage and versioning (SharePoint document libraries are still better for this than Dataverse file columns)
- The users already manage their data in SharePoint lists and you’re automating around their existing process
- You need a standard connector to avoid premium licensing costs — this is a legitimate constraint in many organizations
- Your data volumes are modest (under a few thousand items) and the query limitations won’t bite you
Avoid the hybrid approach unless you have to. Flows that read from SharePoint and write to Dataverse (or vice versa) accumulate the limitations of both connectors. Every lookup mapping becomes a translation layer. Every date comparison needs explicit timezone handling. It works, but it’s more maintenance than people expect.
The Licensing Elephant
The Dataverse connector is premium. That single word drives more architecture decisions than any technical consideration in this article. Teams that would benefit from Dataverse triggers and FetchXML queries end up building convoluted SharePoint flows with dozens of condition branches and Apply to Each loops — not because SharePoint is the right choice, but because the license budget doesn’t include premium connectors.
If your flow is running into the limits described in this article — polling triggers firing too often, no trigger-level filtering, pagination struggling with large lists, lookup thresholds — calculate what those workarounds cost in development time and maintenance. Then compare it to a Power Automate Premium license. The math works out more often than people assume.
Related articles
Connecting D365 to Everything Else
A practitioner's guide to every integration option Dataverse offers — Webhooks, Service Bus, Virtual Tables, Dual-Write, Power Automate, Custom APIs, Web API, and the .NET SDK. When to use each, when to avoid them, and how to pick the right one.
Power Platform Licensing: What I Wish Someone Told Me on Day One
Licensing isn't a procurement problem — it's an architecture decision. Here's the mental model every solution architect needs before starting a Power Platform project, with real scenarios and a pre-project checklist.
ALM for Power Platform: Pipelines, Branches, and Avoiding the Export Trap
Application Lifecycle Management on Power Platform has come a long way. Here's how to set up a real CI/CD pipeline using Azure DevOps and the Power Platform Build Tools, and where the sharp edges still are.