Amazon S3
Amazon ETL connector for data replication
Last updated
Access to Amazon Web Services
Access to Daton to configure the destination
Permissions to manage S3 buckets in AWS
Once the prerequisites are met, follow the steps below to configure S3 as a destination in Daton:
Use the search bar in the AWS console to navigate to the S3 service.
Create a new storage bucket or use an existing one, depending on your requirements.
Skip to step 3 if you already have a bucket.
Enter the bucket name and select the appropriate options to define the bucket parameters according to your organization's preferences.
Once the bucket is created, grant Daton permission to create and manage files in this bucket.
Edit the bucket policy and add the required JSON to the policy after changing the bucket name to reflect the bucket where you want data loaded.
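As a rough sketch, the bucket policy will look something like the following. The principal ARN shown here is a placeholder, and the exact actions Daton requires should be taken from the policy JSON provided in the Daton console; only the `Resource` entries (your bucket name) are yours to change.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DatonS3Access",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/daton" },
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ]
    }
  ]
}
```

Note that `s3:ListBucket` applies to the bucket ARN itself, while the object-level actions apply to the `/*` resource.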
Completing the steps above is a prerequisite before you start configuring S3 as a destination in Daton.
1. Enter an integration name to identify the warehouse in Daton.
Next, enter the S3 parameters that determine how the data will be populated in your S3 buckets:
2. Select the region in which you created the bucket.
3. Enter the name of the bucket that will host the files replicated by Daton.
4. Enter the Object Key Prefix.
Currently, we support CSV, Parquet, and JSON formats for S3 buckets. If you need support for any other format, write to us at support@sarasanalytics.com.
5. Optionally, you can add additional elements to the object key prefix by selecting the optional key elements field.
6. The time zone used is UTC.
7. The key will have the following format if all the optional elements are selected: [object_key_prefix]/[integration_name]/[tablename]/[year]/[month]/[day]/[hour]/[timestamp.filetype]. If only some of the elements are selected, the optional elements appear in the key in descending order of time period (year before month, month before day, and so on).
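The key-construction rule above can be sketched in Python. This is a hypothetical illustration of how such keys could be assembled, not Daton's actual implementation; the function name and parameters are invented for the example, but the element order and UTC timestamp follow the format described above.

```python
from datetime import datetime, timezone

# Optional elements, in descending order of time period, as described above.
TIME_ELEMENTS = ["year", "month", "day", "hour"]

def build_object_key(prefix, integration, table, selected, now=None, filetype="csv"):
    """Assemble an S3 object key from the prefix, integration name, table name,
    the set of selected optional time elements, and a UTC timestamp."""
    now = now or datetime.now(timezone.utc)  # the time zone used is UTC
    values = {
        "year": f"{now.year:04d}",
        "month": f"{now.month:02d}",
        "day": f"{now.day:02d}",
        "hour": f"{now.hour:02d}",
    }
    parts = [prefix, integration, table]
    # Keep only the selected elements, preserving the descending-period order.
    parts += [values[e] for e in TIME_ELEMENTS if e in selected]
    parts.append(f"{int(now.timestamp())}.{filetype}")
    return "/".join(parts)
```

For example, selecting only year and month yields a key like `raw/my_shop/orders/2024/05/1715938200.csv`; unselected elements are simply omitted while the remaining ones keep their order.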