API FAQ
What are the API rate limits?
The API allows up to 10 requests per second, with a burst capacity of 20 requests in 5 seconds. Exceeding this limit will return a 429 Too Many Requests response.
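As a rough illustration, the sketch below paces requests on the client side so they stay under the 10-requests-per-second limit. The endpoint URL and token are placeholders, not actual DATABASICS values.

```python
import time
import requests

HEADERS = {"Authorization": "Bearer <your-token>"}   # placeholder credentials

MIN_INTERVAL = 1.0 / 10   # no more than 10 requests per second

def fetch_all(urls):
    """Issue GET requests sequentially, pacing them to stay under 10 requests/second."""
    responses = []
    last_call = 0.0
    for url in urls:
        wait = MIN_INTERVAL - (time.monotonic() - last_call)
        if wait > 0:
            time.sleep(wait)
        last_call = time.monotonic()
        responses.append(requests.get(url, headers=HEADERS))
    return responses
```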
Do you support upsert?
Yes, the DATABASICS API supports upsert operations for certain endpoints. This functionality allows you to either update an existing record or insert a new one, depending on whether the specified record already exists.
Does the API support Partial Updates?
Yes, the DATABASICS API supports partial updates. You only need to send the specific fields or columns you wish to update as part of your API call. Any fields not included in the request will remain unchanged, ensuring efficient and targeted updates without the need to resend the entire record.
For example:
- To update only an employee's email address, you can send a request with just the employee ID and the updated email field.
- This minimizes data transfer and speeds up the update process.
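A minimal sketch of such a partial update using Python's requests library. The endpoint URL, headers, and field names (employeeId, email) are illustrative placeholders; consult the API documentation for the actual names.

```python
import requests

# Hypothetical endpoint and field names for illustration only.
UPDATE_URL = "https://api.example.com/employee/update"
HEADERS = {"Authorization": "Bearer <your-token>"}

# Only the identifier and the field being changed are sent;
# all other fields on the record remain untouched.
payload = {
    "employeeId": "E12345",
    "email": "new.address@example.com",
}

response = requests.post(UPDATE_URL, headers=HEADERS, json=payload)
response.raise_for_status()
print(response.json())
```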
Do you support API Bulk Upload?
DATABASICS APIs support bulk uploads for updating or inserting multiple records in a single API call. This requires a different approach from standard API calls: instead of passing parameters as outlined in the API documentation, developers must pass the data in the request body, formatted as form-data in text format. Detailed examples of this process are available in our API Examples.
To ensure a smooth and efficient experience, we recommend adhering to the following best practices:
Recommendations for Bulk Insert/Update:
- Maximum Payload Size:
  - Up to 5MB per request.
  - For batch operations (insert/update), the maximum size increases to 10MB.
- Maximum Records per Batch:
  - Limit each request to 100 records.
  - If working with larger datasets, adjust batch sizes to avoid exceeding the payload size limit.
- Request Timeouts:
  - Ensure that requests complete within 30 seconds to prevent server timeouts.
- Retry Mechanism:
  - Implement error handling for timeout errors.
  - If a request fails, retry with smaller batch sizes as necessary.
By following these recommendations, developers can minimize the risk of errors and optimize the performance of bulk operations.
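As a hedged illustration of the form-data approach described above, the sketch below submits a batch of records in the request body as form-data in text format. The endpoint URL and the form field name ("data") are placeholders rather than documented values; refer to the API Examples for the exact request shape.

```python
import json
import requests

# Hypothetical endpoint and form-field name -- see the DATABASICS API Examples
# for the exact values expected by each bulk endpoint.
BULK_URL = "https://api.example.com/employee/bulk"
HEADERS = {"Authorization": "Bearer <your-token>"}

records = [
    {"employeeId": "E10001", "email": "a@example.com"},
    {"employeeId": "E10002", "email": "b@example.com"},
    # ... up to 100 records per request, keeping the payload under the size limits
]

# Send the records in the request body as form-data in text format,
# rather than as individual request parameters.
form_data = {"data": (None, json.dumps(records), "text/plain")}

response = requests.post(BULK_URL, headers=HEADERS, files=form_data, timeout=30)
response.raise_for_status()
```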
How does the DATABASICS API handle or use chunking?
Currently, the DATABASICS API does not support automated chunking for bulk uploads. Developers are responsible for dividing large datasets into smaller subsets manually to optimize processing and ensure compliance with API constraints. Below are recommendations to effectively handle bulk data uploads while adhering to system limitations:
Key Constraints to Consider:
- API Rate Limits: Ensure your requests comply with the limit of 10 requests per second (burst capacity of 20 requests over 5 seconds). Exceeding these limits will result in a 429 Too Many Requests error.
- Payload Size Limits: Each API request should not exceed a payload size of 5MB to avoid payload-related errors.
- Request Size Recommendation: Limit each request to 100 records per API call to reduce the chance of errors and improve efficiency.
Recommendations for Chunking:
- Manual Chunking: Divide your data into manageable chunks of up to 100 records per request. For example, if you need to process 5,000 records:
  - Create 50 chunks of 100 records each.
  - Submit these chunks sequentially or in parallel, ensuring adherence to rate limits and payload size constraints.
- Error Isolation and Retry Mechanism:
  - Isolate Errors: Chunking allows you to identify and isolate errors at the subset level, making it easier to debug and correct issues without affecting the entire dataset.
  - Retry Efficiently: Implement a retry mechanism to resubmit only the failed chunks, minimizing resource usage and processing time.
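The sketch below shows one way to perform manual chunking with chunk-level error isolation in Python. The endpoint, credentials, and request shape are placeholders (see the bulk upload guidance above for the documented form-data format).

```python
import requests

BULK_URL = "https://api.example.com/bulk"            # placeholder endpoint
HEADERS = {"Authorization": "Bearer <your-token>"}   # placeholder credentials

def chunk_records(records, chunk_size=100):
    """Split a list of records into chunks of at most `chunk_size` items."""
    return [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]

def upload_chunk(chunk):
    """Submit one chunk as a single bulk API call (placeholder request shape)."""
    response = requests.post(BULK_URL, headers=HEADERS, json=chunk, timeout=30)
    response.raise_for_status()

def upload_in_chunks(all_records):
    """Upload a large dataset as sequential 100-record chunks, isolating failures."""
    failed = []
    for chunk in chunk_records(all_records, chunk_size=100):
        try:
            upload_chunk(chunk)
        except requests.RequestException as exc:
            # Errors stay isolated to the offending chunk; retry only these later.
            failed.append((chunk, exc))
    return failed
```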
While DATABASICS does not support automated chunking at this time, you can achieve efficient bulk uploads and updates by following the guidance above. For scenarios where large-scale data management is required, consider leveraging the bulk upsert functionality outlined earlier in this document. Future updates to the platform may include support for automated chunking to streamline these processes further.
Does the system allow Parallel Processing?
Yes, the DATABASICS API supports parallel processing, allowing multiple API calls to be made concurrently.
Key Considerations for Parallel Processing:
Rate Limits: Ensure that parallel requests adhere to the API's rate limits, which allow up to 10 requests per second with a burst capacity of 20 requests in 5 seconds. Exceeding these limits will result in a 429 Too Many Requests response.
Optimizing Throughput: Divide large datasets into manageable chunks (e.g., up to 100 records per chunk, in line with the batch size recommendation above) and process them in parallel threads or processes.
For example, if you have 5,000 records, you can split them into 50 chunks of 100 records and process them concurrently across multiple threads.
Error Handling: Implement robust error handling for parallel processing to manage rate-limit responses or network issues. For instance:
- Use retry mechanisms with exponential backoff for failed requests.
- Log errors at the chunk level for easier troubleshooting.
System Resources: Ensure that the client-side infrastructure is capable of managing parallel threads without overloading the network or system resources.
Testing Parallel Processing: We recommend testing parallel requests in a staging or sandbox environment to validate performance and adherence to rate limits before deploying in production.
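A rough sketch of parallel processing with a small thread pool follows. The endpoint, credentials, and request shape are placeholders, and the worker count is an assumption chosen to stay within the rate limits.

```python
import time
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed

BULK_URL = "https://api.example.com/bulk"            # placeholder endpoint
HEADERS = {"Authorization": "Bearer <your-token>"}   # placeholder credentials

def submit_chunk(chunk):
    """Submit one chunk; retry once after a short pause if rate limited."""
    response = requests.post(BULK_URL, headers=HEADERS, json=chunk, timeout=30)
    if response.status_code == 429:
        time.sleep(2)
        response = requests.post(BULK_URL, headers=HEADERS, json=chunk, timeout=30)
    response.raise_for_status()

def process_in_parallel(chunks, max_workers=5):
    """Process chunks concurrently; keep max_workers modest to respect rate limits."""
    failed = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(submit_chunk, c): c for c in chunks}
        for future in as_completed(futures):
            try:
                future.result()
            except Exception as exc:
                failed.append((futures[future], exc))   # log/retry at the chunk level
    return failed
```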
How do I handle errors or failed API requests?
- The API provides detailed error codes and messages to help identify and resolve issues.
- For rate-limit errors (429 Too Many Requests), implement exponential backoff and retry strategies.
- Log errors for further analysis and troubleshooting.
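As an illustration of the backoff strategy, the helper below retries a request when the API responds with 429 Too Many Requests, doubling the wait between attempts. It is a generic sketch rather than a documented DATABASICS client.

```python
import time
import requests

def request_with_backoff(method, url, max_retries=5, **kwargs):
    """Retry a request with exponential backoff when the API returns 429."""
    delay = 1.0
    for attempt in range(max_retries):
        response = requests.request(method, url, **kwargs)
        if response.status_code != 429:
            return response
        # Too many requests: wait, then retry with a doubled delay.
        time.sleep(delay)
        delay *= 2
    return response
```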
Are there data dependencies in the DATABASICS API?
Yes, the DATABASICS API enforces data dependencies to maintain data integrity. For example, an employee must belong to an operating unit and department. These related entities (operating units or departments) must already exist in the system before you can insert or update an employee record referencing them. If you attempt to insert or update a record (e.g., an employee) that references a non-existent entity (e.g., a department), the API will return an error indicating the missing or invalid dependency.
What is the best practice for managing data dependencies?
To manage data dependencies effectively, follow these steps:
- Validate Dependencies Before Making API Calls: Before inserting or updating records, ensure that all dependent entities (e.g., operating units, departments) already exist in the system.
- Establish a Data Hierarchy for API Calls: Insert or update dependent entities first (e.g., operating units and departments). After dependencies are created, insert or update the main entity (e.g., employees).
  Example Workflow:
  - Step 1: Create or validate the department list via the department API.
  - Step 2: Insert employees referencing the validated department IDs.
- Use Pre-Upload Checks: Perform a pre-upload validation to cross-check dependencies (e.g., ensuring department IDs exist for all employees in your dataset).
- Implement Error Handling and Retries: If the API returns an error due to a missing dependency, handle it by:
  - Logging the error for troubleshooting.
  - Creating the missing dependency and retrying the original request.
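A hedged sketch of this dependency-first workflow is shown below. The /departments and /employees endpoints, field names, and response shapes are hypothetical placeholders used only to illustrate the ordering.

```python
import requests

BASE_URL = "https://api.example.com"                 # placeholder base URL
HEADERS = {"Authorization": "Bearer <your-token>"}   # placeholder credentials

def existing_department_ids():
    """Fetch the current department list (hypothetical endpoint and response shape)."""
    response = requests.get(f"{BASE_URL}/departments", headers=HEADERS)
    response.raise_for_status()
    return {dept["departmentId"] for dept in response.json()}

def upload_employees(employees):
    """Insert employees only after confirming every referenced department exists."""
    valid_ids = existing_department_ids()
    missing = {e["departmentId"] for e in employees} - valid_ids
    if missing:
        # Create the missing departments first (Step 1), then retry the employees.
        raise ValueError(f"Create these departments before uploading employees: {missing}")
    response = requests.post(f"{BASE_URL}/employees", headers=HEADERS, json=employees)
    response.raise_for_status()
```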
How do I identify attachments that are part of an itemization?
When retrieving attachment data using the getXpAttachmentList API, the response includes various details that indicate how each attachment is associated with an expense. To determine if an attachment belongs to an itemized expense, follow these guidelines:
Understanding eLineNo and ccTransNo
eLineNo (Expense Line Number): This field represents the specific expense line that the attachment is linked to.
ccTransNo (Credit Card Transaction Number): This field is used to associate attachments with itemized expenses.
Identifying Report-Level Attachments vs. Itemized Attachments
- If eLineNo is greater than 0: the attachment is linked to a specific expense line in the report.
- If eLineNo is 0: the attachment is not linked to a specific expense line and requires further examination.
Checking ccTransNo when eLineNo == 0
- If ccTransNo == 0: the attachment was uploaded at the report level and does not belong to an itemized charge.
- If ccTransNo != 0: the attachment belongs to an itemized expense and should be linked to all expense lines that share the same ccTransNo. (Note that a positive ccTransNo is linked to an actual credit card charge, while a negative ccTransNo means the attachment is linked to an out-of-pocket itemized expense.)
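The helper below sketches this decision logic for a single record returned by getXpAttachmentList, assuming the response exposes the eLineNo and ccTransNo fields under those names.

```python
def classify_attachment(attachment):
    """Classify one getXpAttachmentList record by its eLineNo / ccTransNo fields."""
    e_line_no = attachment.get("eLineNo", 0)
    cc_trans_no = attachment.get("ccTransNo", 0)

    if e_line_no > 0:
        return "expense-line attachment"           # tied to a specific expense line
    if cc_trans_no == 0:
        return "report-level attachment"           # uploaded at the report level
    if cc_trans_no > 0:
        return "itemized credit card attachment"   # link to all lines sharing ccTransNo
    return "itemized out-of-pocket attachment"     # negative ccTransNo
```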
For an example, check out our API Examples.
We're here to help!
If you get stuck, send us an email, use our online support form, or submit a request through our JIRA Customer Service.