Amazon Redshift
Amazon Redshift ETL connector for data replication
Instructions for granting Amazon Redshift access and setting up Amazon Redshift as a destination in Daton.
Setting up access to your Amazon Redshift data warehouse takes only a couple of simple steps. If you don't have an existing AWS account, you can sign up for a free tier here.
Instructions to grant Redshift cluster access and (optionally) create a user and schema for setting up Redshift as a destination in Daton:
Access your Amazon Redshift Console.
Click Clusters from the top left sidebar.
Click on the specific Cluster.
In the Configuration tab of the details page, note down the Redshift Hostname and Port from the Endpoint. These will be used on the Daton integration page.
In the Configuration tab of the details page, under the VPC security groups, click the name of the security group.
On the details page, click the Inbound tab, and click Edit.
Click Add Rule and whitelist the IP address below.
35.238.200.205/32
Click Actions, select Modify publicly accessible setting, and set the cluster to be publicly accessible.
To enable Daton to write to a specific schema, create a user for the relevant Amazon Redshift database:
Access your Amazon Redshift Console.
Click the Query editor and run the below commands to grant privileges to Daton. Change the user, password, and database name as needed.
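The exact commands may vary with your setup; as a minimal sketch, assuming a user named daton_user and a schema named daton_schema (both illustrative, replace with your own values), the statements might look like:

```sql
-- Create a dedicated user for Daton (replace the name and password)
CREATE USER daton_user PASSWORD 'StrongPassword1';

-- Optionally, create a dedicated schema for Daton to load data into
CREATE SCHEMA IF NOT EXISTS daton_schema;

-- Grant the Daton user the privileges it needs on that schema
GRANT ALL ON SCHEMA daton_schema TO daton_user;
GRANT ALL ON ALL TABLES IN SCHEMA daton_schema TO daton_user;
```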
That's it, you're done configuring access — time to connect to Amazon Redshift.
Click Clusters from the left menu and click on the cluster name to view the cluster details.
Scroll down to see the JDBC URL details and note them. This URL is required along with the database username and password while integrating Redshift on Daton (see the note on the URL format below).
Copy the below SSH public key and use it while creating the SSH rule for the Security Group.
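For reference, a Redshift JDBC URL typically follows the pattern jdbc:redshift://<cluster-endpoint>:<port>/<database-name>; for example, jdbc:redshift://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev, where the cluster endpoint shown is purely illustrative.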
Log in to the Daton web app.
Click Data Sources and select Choose Warehouse.
Click Amazon Redshift and enter an integration name.
Enter the JDBC URL, database username, and password from the above steps.
Click Next. If the connection is successful, select the schema from the options.
Click Finish to complete the setup.
Daton has been built to handle multiple data warehouses. Since all data warehouses are not built the same, the choice of a data warehouse may impact how Daton loads data. This section lists some key capabilities and limitations of Redshift that impact how data looks in your data warehouse.
Feature | Supported | Comments |
---|---|---|
Nested Fields | Yes | |
Primary Keys | Yes | |
Updates | No | Redshift does not support updating existing rows as part of Daton's insert mechanism. Instead, Daton appends data to the end of the table with every replication run (see the de-duplication sketch after this table). |
Full table replication | Yes | |
Column based | No | |
Inserts only | Yes | |
SSH Connections | Yes | |
Adding columns to existing integration | Yes | When new columns are added to an existing table in Amazon Redshift, the user is given two options. Depending on their choice, the new column(s) are added and the entire table is re-loaded, or there is no re-load. Read the edit integration functionality to understand the impact of making changes. |
Removing columns from existing integration | Yes | Users can use the edit integration functionality. |
Removing tables from existing integration | Yes | When tables are deselected from an integration, the loads into those tables stop. Daton does not delete the tables from the data warehouse. This is useful for cases where usage of a source has been discontinued, but there is a desire to retain the data already captured. |
Table Partitioning | Yes | |
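Because loads are append-only, a Redshift table can hold multiple versions of the same record across replication runs. A minimal de-duplication sketch, assuming an illustrative table named orders with primary key AmazonOrderId and the daton_batch_runtime meta field described later in this section:

```sql
-- Keep only the most recently replicated version of each record.
-- "orders" and "AmazonOrderId" are illustrative names; use your own table and key.
SELECT *
FROM (
    SELECT
        *,
        ROW_NUMBER() OVER (
            PARTITION BY AmazonOrderId
            ORDER BY daton_batch_runtime DESC
        ) AS row_num
    FROM orders
) deduped
WHERE row_num = 1;
```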
The Redshift data warehouse does not support nested data. Hence, Daton creates separate 'child' tables for each nested field present in the source schema (or parent table).
For example, if data is present in Listrecommendations.Fulfillmentrecommendations.Member.Itemdimensions.Dimensiondescription, then the following tables would be created:
Listrecommendations (Parent Table)
Listrecommendations_Fulfillmentrecommendations (Level 1 Child)
Listrecommendations_Fulfillmentrecommendations_Member (Level 2 Child)
Listrecommendations_Fulfillmentrecommendations_Member_Itemdimensions (Level 3 Child)
Any further nested data present in Itemdimensions (such as Dimensiondescription) would be stored as text, and any excess data that does not fit into that field would be truncated.
Data between parent and child tables can be mapped or combined using the daton_parent_batch_id and daton_batch_runtime meta fields. The same applies to other levels of child tables.
For example, if the API response for ListOrderItems contains data in the following structure,
Column Name | Datatype |
---|---|
AmazonOrderId | STRING |
ASIN | STRING |
SellerSKU | STRING |
OrderItemId | STRING |
ItemPrice | RECORD |
ItemPrice.CurrencyCode | STRING |
ItemPrice.Amount | STRING |
then the following tables would be created by Daton on Redshift:
Table 1:
ListOrderItem (Parent Table)
Column Name | Datatype |
---|---|
AmazonOrderId | STRING |
ASIN | STRING |
SellerSKU | STRING |
OrderItemId | STRING |
daton_user_id | NUMERIC |
daton_batch_runtime | NUMERIC |
daton_batch_id | NUMERIC |
Table 2:
ListOrderItem_ItemPrice (Level 1 Child Table)
Column Name | Datatype |
---|---|
CurrencyCode | STRING |
Amount | STRING |
daton_user_id | NUMERIC |
daton_batch_runtime | NUMERIC |
daton_batch_id | NUMERIC |
daton_parent_batch_id | NUMERIC |
In order to fetch the ItemPrice for an AmazonOrderId, the daton_batch_id, daton_parent_batch_id, and daton_batch_runtime meta fields must be used to map records between the parent and child tables.
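A minimal sketch of such a mapping query, assuming the child table's daton_parent_batch_id refers to the parent table's daton_batch_id (an assumption based on the field names; verify against your own data):

```sql
-- Fetch the item price for each order item by joining the parent and child tables.
-- Assumes the child's daton_parent_batch_id matches the parent's daton_batch_id.
SELECT
    p.AmazonOrderId,
    p.OrderItemId,
    c.CurrencyCode,
    c.Amount
FROM ListOrderItem AS p
JOIN ListOrderItem_ItemPrice AS c
    ON  c.daton_parent_batch_id = p.daton_batch_id
    AND c.daton_batch_runtime = p.daton_batch_runtime;
```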