Amazon Redshift
Amazon Redshift ETL connector for data replication
Instructions for granting Amazon Redshift access and setting up Amazon Redshift as a destination in Daton.
Setting up access to your Amazon Redshift data warehouse takes just a couple of simple steps. If you don't have an existing AWS account, you can sign up for the free tier here.
Instructions to grant Redshift cluster access and (optionally) create a user and schema for setting up Redshift as a destination in Daton:
Access your Amazon Redshift Console.
Click Clusters from the top left sidebar.
Click on the specific Cluster.
In the Configuration tab of the details page, note down the Redshift Hostname and Port from the Endpoint. These will be used on the Daton integration page.
In the Configuration tab of the details page, under the VPC security groups, click the name of the security group.
On the details page, click the Inbound tab, and click Edit.
Click Add Rule and whitelist the IP address below.
35.238.200.205/32
Click Actions, select Modify publicly accessible setting, and set the cluster to be publicly accessible.
To enable Daton to write to a specific schema, create a user for the relevant Amazon Redshift database:
Access your Amazon Redshift Console.
Click the Query editor and run the commands below to grant privileges to Daton. Change the user, password, and database name as needed.
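The exact statements depend on your setup. As a minimal sketch, assuming placeholder names daton_user, daton_schema, and daton_db (all hypothetical; substitute your own values), the commands might look like this:

```sql
-- Placeholder names: replace daton_user, the password, daton_schema, and daton_db
-- with the values you want Daton to use.
CREATE USER daton_user PASSWORD 'StrongPassword1';

-- Create a dedicated schema for Daton to load data into.
CREATE SCHEMA IF NOT EXISTS daton_schema;

-- Allow the user to create and use objects in that schema.
GRANT ALL ON SCHEMA daton_schema TO daton_user;

-- Optionally allow the user to create new schemas in the database.
GRANT CREATE ON DATABASE daton_db TO daton_user;
```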
That's it, you're done configuring access. Now connect Amazon Redshift to Daton.
Click Clusters from the left menu and click on the cluster name to view the cluster details.
Scroll down to the JDBC URL details and note them down. This URL is required, along with the database username and password, when integrating Redshift with Daton.
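For reference, a Redshift JDBC URL generally follows the pattern `jdbc:redshift://<cluster-endpoint>:<port>/<database-name>`, for example `jdbc:redshift://examplecluster.abc123xyz789.us-east-1.redshift.amazonaws.com:5439/dev` (the cluster name, region, port, and database shown here are placeholders; use the values from your own cluster).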
Copy the SSH public key below and use it when creating the SSH rule for the security group.
Login to Daton web-app.
Click Data Sources and select Choose Warehouse.
Click Amazon Redshift and enter an integration name.
Enter the JDBC URL, database username, and password from the above steps.
Click Next. Once the connection succeeds, select the schema from the options.
Click Finish to complete the setup.
Daton is built to handle multiple data warehouses. Since not all data warehouses are built the same, the choice of data warehouse may affect how Daton loads data. This section lists some key capabilities and limitations of Redshift that affect how data looks in your data warehouse.
The Redshift data warehouse does not support nested tables. Hence, Daton creates separate 'child' tables for each nested object present in the source schema (or parent table).
For example, if data is present in Listrecommendations.Fulfillmentrecommendations.Member.Itemdimensions.Dimensiondescription, then the following tables would be created:
Listrecommendations (Parent Table)
Listrecommendations_Fulfillmentrecommendations (Level 1 Child)
Listrecommendations_Fulfillmentrecommendations_Member (Level 2 Child)
Listrecommendations_Fulfillmentrecommendations_Member_Itemdimensions (Level 3 Child)
Any nested data present in Itemdimensions would be stored as text, and any excess data that does not fit into that field would be truncated.
Data between parent and child tables can be mapped or combined using the daton_parent_batch_id and daton_batch_runtime meta fields. The same applies to other levels of child tables.
For example, if the API response for ListOrderItems contains nested ItemPrice data, then the following tables would be created by Daton on Redshift:
Table 1: ListOrderItem (Parent Table)
Table 2: ListOrderItem_ItemPrice (Level 1 Child Table)
To fetch the item price for a given AmazonOrderId, the daton_batch_id and daton_batch_runtime fields must be used to map records between the two tables, as sketched below.
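For illustration, a query along these lines could combine the two tables. This is a sketch only: it assumes the child table carries a daton_parent_batch_id matching the parent's daton_batch_id, and that the ItemPrice child exposes Amount and CurrencyCode columns; verify the actual table and column names Daton creates in your warehouse.

```sql
-- Hypothetical example: table and column names follow the ListOrderItem example above;
-- adjust them to match the schema Daton actually creates in your warehouse.
SELECT o.AmazonOrderId,
       p.CurrencyCode,
       p.Amount
FROM ListOrderItem AS o
JOIN ListOrderItem_ItemPrice AS p
  ON p.daton_parent_batch_id = o.daton_batch_id
 AND p.daton_batch_runtime = o.daton_batch_runtime
WHERE o.AmazonOrderId = '123-4567890-1234567';  -- placeholder order id
```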