Bulk upload a table to Redshift.

Parameters:

See the dedicated page for more information.
redShiftBulkUpload automates a fast, reliable bulk load of a local dataset into Amazon Redshift.
Under the hood, it uploads your file to S3, runs COPY into a temporary staging table, and only then commits the result to the destination table. This design keeps loads repeatable and safe: you never write directly to the destination table until the staging step succeeds. The action works well for both one-off transfers and production pipelines.
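The staged-load sequence described above can be sketched as the SQL a loader of this kind might issue. This is a minimal illustration, not the action's actual internals: the delete-then-insert upsert on the primary columns, the CSV format, and the credential placeholders are all assumptions.

```python
def staged_load_sql(staging, dest, key_cols, s3_uri, region):
    """Build the statement sequence for a safe staged load (sketch)."""
    match = " AND ".join(f"{dest}.{c} = {staging}.{c}" for c in key_cols)
    return [
        # 1. Parallel COPY from S3 into the staging table (key placeholders left as-is).
        f"COPY {staging} FROM '{s3_uri}' "
        "CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>' "
        f"REGION '{region}' FORMAT AS CSV;",
        # 2. Atomically replace matching rows in the destination.
        "BEGIN;",
        f"DELETE FROM {dest} USING {staging} WHERE {match};",
        f"INSERT INTO {dest} SELECT * FROM {staging};",
        "COMMIT;",
        # 3. Drop the staging table only after the commit succeeds.
        f"DROP TABLE {staging};",
    ]

stmts = staged_load_sql("stagetable1234567", "public.event", ["id"],
                        "s3://my-bucket-name/stage/", "eu-west-3")
```

Because the destination is only touched inside the transaction, a failed COPY leaves the destination table untouched.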
Prerequisites

- An S3 bucket accessible from your Redshift cluster.
- AWS credentials (Access Key ID & Secret) with permissions to:
  - s3:PutObject, s3:GetObject, s3:AbortMultipartUpload on the bucket/prefix you'll use
  - s3:ListBucket on the bucket
- A reachable Redshift cluster and a valid ODBC connection string.
- The destination schema/table must exist (the action will load into the table you specify).
Tip: use a unique staging table name per run (the node can auto-generate one).
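For reference, the UI's default JS expression ("stagetable"+Math.floor(Math.random()*9999999+1)) can be mirrored in Python if you generate the name yourself — a sketch, not part of the action:

```python
import random

def staging_table_name():
    """Per-run staging name, equivalent to the UI's default JS expression."""
    return f"stagetable{random.randint(1, 9999999)}"

name = staging_table_name()  # e.g. "stagetable4821337"
```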
Quick start: set the destination table (public.event), the S3 bucket (my-bucket-name), and the bucket region (eu-west-3), then run the pipeline. You'll see a temporary table like stagetable1234567 created, loaded, and the data committed to public.event.
| Section / Field | What it is | Notes / Examples |
|---|---|---|
| Local .gel file to upload to RedShift | The input file to load. | Choose a source: assets, temporary data, recorded data, or a JavaScript expression. For dynamic paths you can use JS (e.g., "${vars.loadPath}/daily_export.csv"). |
| Name of Primary Columns in uploaded table (comma separated) | One or more key columns. | Example: id or tenant_id,order_id. Used during the finalization step (upsert/replace semantics depend on your environment). |
| Name of Final Destination Table in RedShift | The table that will hold the final data. | Example: public.event. Must exist and be compatible with the file schema. |
| Name of Temporary Staging Table in RedShift | The transient table used by the loader. | Default shown in the UI is a safe generator: "stagetable" + Math.floor(Math.random() * 9999999 + 1). You may also provide a fixed name (e.g., stg_event). |
| S3 Bucket name | Target bucket for the intermediate upload. | Example: my-bucket-name. |
| S3 Bucket Region | Region of that bucket. | Select from the dropdown (e.g., eu-west-3, us-east-1, …). |
| S3 Access Key ID | AWS access key (for the upload). | Use secret storage; avoid hardcoding. |
| S3 Secret Access Key | AWS secret key. | Use secret storage; avoid hardcoding. |
| The Redshift ODBC connection URL | Standard Redshift ODBC DSN/connection string. | Example pattern: Driver={Amazon Redshift (x64)};Server=<endpoint>;Port=5439;Database=<db>;UID=<user>;PWD=<pwd>; |
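A small helper can assemble the ODBC string from the parts shown in the example pattern above, so the password never appears inline. This is a sketch: the driver name is the x64 example from the table, and the endpoint shown is made up; adapt it to however your platform exposes secrets.

```python
def redshift_odbc_url(server, database, user, pwd, port=5439):
    """Assemble a Redshift ODBC connection string from its parts."""
    return (
        "Driver={Amazon Redshift (x64)};"
        f"Server={server};Port={port};Database={database};"
        f"UID={user};PWD={pwd};"
    )

url = redshift_odbc_url("example-cluster.abc123.eu-west-3.redshift.amazonaws.com",
                        "analytics", "loader", "s3cret")
```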
Security tips
- Store AWS keys and ODBC credentials in your platform’s secrets/variables, not inline.
- If your environment supports assume-role or IAM profiles (no static keys), prefer those.
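One concrete way to follow the first tip is to resolve the keys from the environment (or your platform's secret store) at run time rather than embedding them. A sketch — the variable names follow the usual AWS conventions, and aws_credentials is a hypothetical helper:

```python
import os

def aws_credentials():
    """Read the S3 key pair from the environment, failing fast when unset."""
    try:
        return (os.environ["AWS_ACCESS_KEY_ID"],
                os.environ["AWS_SECRET_ACCESS_KEY"])
    except KeyError as missing:
        raise RuntimeError(f"credential not configured: {missing}") from None
```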
Notes
- Files are uploaded to s3://<bucket>/<auto-generated prefix>/… before the COPY step.
- COPY from S3 → staging leverages Redshift's parallel readers.
- Compressed files (e.g., file.csv.gz) are handled transparently by Redshift's COPY.
- Date-partitioned prefixes (e.g., redshift/stage/event/y=2025/m=09/d=18/…) keep the bucket tidy.
- The upload step itself needs s3:PutObject and s3:ListBucket.
- On a failed load, check stl_load_errors for row-level failures.
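When a load does fail, the row-level detail lives in stl_load_errors. A helper to build the diagnostic query — a sketch; stl_load_errors and its columns are standard Redshift system-table fields, but the helper itself is hypothetical:

```python
def load_errors_sql(limit=10):
    """Most recent COPY failures, newest first, from stl_load_errors."""
    return (
        "SELECT starttime, filename, line_number, colname, err_reason "
        "FROM stl_load_errors "
        f"ORDER BY starttime DESC LIMIT {limit};"
    )

query = load_errors_sql()
```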