Amazon Redshift
Amazon Redshift ETL connector for data replication
Instructions for granting Amazon Redshift access and setting up Amazon Redshift as a destination in Daton.
Setting up access to your Amazon Redshift data warehouse takes just a couple of simple steps. If you don't have an existing AWS account, you can sign up for a free tier here.
Instructions to grant Redshift cluster access and (optionally) create a user and schema for setting up Redshift as a destination in Daton.
Access your Amazon Redshift Console.
Click Clusters in the top-left sidebar.
Click the specific cluster.
In the Configuration tab of the details page, note down the Redshift hostname and port from the Endpoint. These will be used on the Daton integration page.
In the Configuration tab of the details page, under VPC security groups, click the name of the security group.
On the details page, click the Inbound tab, then click Edit.
Click Add Rule and whitelist the IP address below.
35.238.200.205/32
Click Actions, choose Modify publicly accessible setting, and set the cluster to be publicly accessible.
To enable Daton to write to a specific schema, create a user for the relevant Amazon Redshift schema:
Access your Amazon Redshift Console.
Open the Query editor and run the commands below to grant privileges to Daton. Change the user, password, and database name as needed.
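The statements below are a minimal sketch of this setup; daton_user, the password, daton_db, and the daton schema are placeholder names to replace with your own values (the password must meet Redshift's password requirements).

```sql
-- Placeholder names: replace daton_user, the password, daton_db, and daton with your own values.
CREATE USER daton_user PASSWORD 'StrongPassword1';

-- Allow the user to connect to the database and create objects in it.
GRANT CREATE ON DATABASE daton_db TO daton_user;

-- Create a dedicated schema for Daton and grant the user full access to it.
CREATE SCHEMA IF NOT EXISTS daton;
GRANT ALL ON SCHEMA daton TO daton_user;
GRANT ALL ON ALL TABLES IN SCHEMA daton TO daton_user;
```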
That's it, you're done configuring access — time to connect to Amazon Redshift.
Click Clusters from the left menu and click the cluster name to view the cluster details.
Scroll down to the JDBC URL details and note them. This URL takes the form jdbc:redshift://<endpoint>:<port>/<database> and is required along with the database username and password when integrating Redshift on Daton.
Copy the SSH public key below and use it while creating an SSH rule for the security group.
Log in to the Daton web app.
Click Data Sources and select Choose Warehouse.
Click Amazon Redshift and enter an integration name.
Enter the JDBC URL, database username, and password from the steps above.
Click Next; once the connection succeeds, select the schema from the options.
Click Finish to complete the setup.
Daton has been built to handle multiple data warehouses. Since not all data warehouses are built the same, the choice of data warehouse may impact how Daton loads data. This section covers some key capabilities and limitations of Redshift that affect how data looks in your data warehouse.
| Feature | Supported | Comments |
| --- | --- | --- |
| Nested Tables | No | |
| Primary Keys | Yes | |
| Updates | No | Redshift does not support updating rows as part of the insert mechanism. Instead, Daton appends data to the end of the table with every replication run (see the query sketch after this table). |
| Full table replication | Yes | |
| Column based | No | |
| Inserts only | Yes | |
| SSH Connections | | |
| SSL Connections | | |
| Adding columns to an existing integration | Yes | When new columns are added to an existing table in Amazon Redshift, the user is given two options: depending on the choice, the new column(s) are added and the entire table is reloaded, or there is no reload. Read the edit integration functionality to understand the impact of making changes. |
| Removing columns from an existing integration | Yes | Users can use the edit integration functionality. |
| Adding tables to an existing integration | No | When tables are deselected from an integration, loads into those tables stop. Daton does not delete the tables from the data warehouse. This is useful when usage of a source has been discontinued but there is a desire to keep the captured data. |
| Table Partitioning | Yes | |
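Because each replication run appends rows rather than updating them in place, a common pattern is to keep only the most recently loaded version of each record using daton_batch_runtime. Here is a minimal sketch, using the ListOrderItem table from the example further below and assuming AmazonOrderId and OrderItemId together identify a record:

```sql
-- Keep only the latest loaded row per order item.
SELECT *
FROM (
    SELECT t.*,
           ROW_NUMBER() OVER (
               PARTITION BY AmazonOrderId, OrderItemId
               ORDER BY daton_batch_runtime DESC
           ) AS rn
    FROM ListOrderItem t
) latest
WHERE rn = 1;
```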
The Redshift data warehouse does not support nested tables. Hence, Daton creates a separate 'child' table for each piece of nested data present in the source schema (or parent table).
For example, if data is present in Listrecommendations.Fulfillmentrecommendations.Member.Itemdimensions.Dimensiondescription, then the following tables would be created:
Listrecommendations (Parent Table)
Listrecommendations_Fulfillmentrecommendations (Level 1 Child)
Listrecommendations_Fulfillmentrecommendations_Member (Level 2 Child)
Listrecommendations_Fulfillmentrecommendations_Member_Itemdimensions (Level 3 Child)
Any nested data present in Itemdimensions would be stored as text, and any excess data that does not fit into that field would be truncated.
Data between parent and child tables can be mapped or combined using the daton_parent_batch_id and daton_batch_runtime meta fields. The same applies to other levels of child tables.
For example, if the API response for ListOrderItems contains data in the following structure:
| Column Name | Datatype |
| --- | --- |
| AmazonOrderId | STRING |
| ASIN | STRING |
| SellerSKU | STRING |
| OrderItemId | STRING |
| ItemPrice | RECORD |
| ItemPrice.CurrencyCode | STRING |
| ItemPrice.Amount | STRING |
then the following tables would be created by Daton on Redshift:
Table 1:
ListOrderItem (Parent Table)
| Column Name | Datatype |
| --- | --- |
| AmazonOrderId | STRING |
| ASIN | STRING |
| SellerSKU | STRING |
| OrderItemId | STRING |
| daton_user_id | NUMERIC |
| daton_batch_runtime | NUMERIC |
| daton_batch_id | NUMERIC |
Table 2:
ListOrderItem_ItemPrice (Level 1 Child Table)
| Column Name | Datatype |
| --- | --- |
| CurrencyCode | STRING |
| Amount | STRING |
| daton_user_id | NUMERIC |
| daton_batch_runtime | NUMERIC |
| daton_batch_id | NUMERIC |
| daton_parent_batch_id | NUMERIC |
To fetch the ItemPrice for an AmazonOrderId, the daton_batch_id and daton_batch_runtime fields must be used to map records between the parent and child tables.
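As a minimal sketch of that mapping (assuming the child table's daton_parent_batch_id corresponds to the parent table's daton_batch_id within the same daton_batch_runtime; verify the exact join keys against your loaded data):

```sql
-- Fetch ItemPrice values together with the parent order item they belong to.
SELECT p.AmazonOrderId,
       p.OrderItemId,
       c.CurrencyCode,
       c.Amount
FROM ListOrderItem AS p
JOIN ListOrderItem_ItemPrice AS c
  ON  c.daton_parent_batch_id = p.daton_batch_id
  AND c.daton_batch_runtime   = p.daton_batch_runtime;
```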