
Understanding API Timeouts When Retrieving Large Datasets

Summary

When an integration or external system retrieves large datasets from the API (for example, from /entity/Default/23.200.001/Customer) without using pagination or batch processing, the request may exceed the server's processing or timeout limit.

Why This Occurs

  • A single API call is made to retrieve the full dataset.

  • No pagination, filtering, or batching is implemented by the integration.

  • The server terminates the request when the processing time or payload size exceeds configured limits.

  • This design prevents excessively long-running or resource-intensive requests.

Recommended Integration Approach

1. Implement Pagination or Batching

Use built-in pagination support by including the $top and $skip parameters in your API calls. Example:

CODE
GET /entity/Default/23.200.001/Customer?$top=100&$skip=0
GET /entity/Default/23.200.001/Customer?$top=100&$skip=100

Continue looping until no further records are returned.
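
A minimal pagination loop along these lines is sketched below in Python using the requests library; the base URL is a placeholder, authentication is omitted, and the page size is an example value to tune for your integration.

CODE
import requests

BASE_URL = "https://example.acumatica.com/entity/Default/23.200.001/Customer"  # placeholder instance URL
PAGE_SIZE = 100

def fetch_all_customers(session):
    """Retrieve the Customer list one page at a time using $top and $skip."""
    records = []
    skip = 0
    while True:
        response = session.get(
            BASE_URL,
            params={"$top": PAGE_SIZE, "$skip": skip},
            timeout=60,  # per-page client-side timeout, not for the whole dataset
        )
        response.raise_for_status()
        page = response.json()
        if not page:  # an empty page means no further records
            break
        records.extend(page)
        skip += PAGE_SIZE
    return records

# Usage: fetch_all_customers(requests.Session())  -- authentication omitted for brevity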

2. Apply Filters to Reduce Data Volume

Use $filter to request only the records that have changed since the last sync or that match specific criteria. This minimises payload size and improves performance.
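
For example (the LastModifiedDateTime field and the date value shown are illustrative; substitute whichever field and criteria your entity and sync logic use, and check the exact filter syntax supported by your endpoint version):

CODE
GET /entity/Default/23.200.001/Customer?$filter=LastModifiedDateTime gt datetimeoffset'2024-01-01T00:00:00Z'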

3. Use Retry Logic

Add retry handling with exponential backoff (see the sketch after this list) to handle:

  • Intermittent failures

  • Throttling

  • Timeout responses
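
A simple retry wrapper with exponential backoff might look like the following Python sketch; the set of retryable status codes, the per-request timeout, and the backoff schedule are assumptions to adjust for your environment.

CODE
import time
import requests

RETRYABLE_STATUS = {429, 500, 502, 503, 504}  # throttling and transient server errors

def get_with_retry(session, url, params=None, max_attempts=5, base_delay=1.0):
    """GET with exponential backoff on timeouts, throttling, and transient failures."""
    for attempt in range(1, max_attempts + 1):
        try:
            response = session.get(url, params=params, timeout=60)
            if response.status_code not in RETRYABLE_STATUS:
                response.raise_for_status()  # surface non-retryable errors immediately
                return response
        except requests.exceptions.Timeout:
            pass  # request timed out; retry below
        except requests.exceptions.ConnectionError:
            pass  # transient network failure; retry below
        if attempt < max_attempts:
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...
    raise RuntimeError(f"Request to {url} did not succeed after {max_attempts} attempts")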

4. Optimise Data Selection

Use $select to retrieve only required fields rather than full records.
Example:

CODE
GET /entity/Default/23.200.001/Customer?$select=CustomerID,CustomerName
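
These query parameters can be combined in a single request, for example together with pagination:

CODE
GET /entity/Default/23.200.001/Customer?$select=CustomerID,CustomerName&$top=100&$skip=0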

5. Use Background Jobs or Scheduled Syncs

For large dataset synchronisation, schedule background processes that fetch data incrementally instead of attempting real-time retrieval of the entire dataset.
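
As a rough illustration, a scheduled job could persist a cursor from the previous run and combine it with the $filter and pagination techniques above. In the Python sketch below, the cursor file, the LastModifiedDateTime field, and the process_records helper are hypothetical placeholders.

CODE
import json
import requests
from datetime import datetime, timezone
from pathlib import Path

BASE_URL = "https://example.acumatica.com/entity/Default/23.200.001/Customer"  # placeholder instance URL
CURSOR_FILE = Path("customer_sync_cursor.json")  # hypothetical local store for the sync cursor
PAGE_SIZE = 100

def process_records(records):
    """Placeholder for the integration's own handling of each page."""
    print(f"Received {len(records)} records")

def incremental_sync(session):
    """Fetch only records changed since the previous run, one page at a time."""
    last_sync = "1900-01-01T00:00:00Z"
    if CURSOR_FILE.exists():
        last_sync = json.loads(CURSOR_FILE.read_text())["last_sync"]
    run_started = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

    skip = 0
    while True:
        response = session.get(
            BASE_URL,
            params={
                "$filter": f"LastModifiedDateTime gt datetimeoffset'{last_sync}'",
                "$top": PAGE_SIZE,
                "$skip": skip,
            },
            timeout=60,
        )
        response.raise_for_status()
        page = response.json()
        if not page:
            break
        process_records(page)
        skip += PAGE_SIZE

    CURSOR_FILE.write_text(json.dumps({"last_sync": run_started}))

# Usage: incremental_sync(requests.Session())  -- run from a scheduler such as cron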
