Upload a table into an Aleria database.

Upload/insert rows from the pipeline into an Aleria (green UI) platform database.
This writer is the Aleria-only twin of our PostgreSQL “upsert/insert” actions: the server and port are managed by Aleria, so you only provide the Database, User, and Password (or paste an Aleria DB link).
The purpose of this box is to write data into Aleria’s DB, so that the data becomes immediately available to downstream Aleria apps and AI features. It is not a reader.
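The exact link format is defined by Aleria; purely as an illustration, and assuming a URL-like `aleria://<workspace>/<database>` form such as the one in the walkthrough at the end of this page, a link could be split into the pieces the writer asks for roughly like this (the function name is hypothetical):

```python
# Sketch only: the real link format is defined by Aleria. This assumes a
# URL-like "aleria://<workspace>/<database>" string, as in the walkthrough below.
from urllib.parse import urlparse

def parse_aleria_link(link: str) -> dict:
    """Split an Aleria DB link into the fields the writer asks for."""
    parsed = urlparse(link)
    if parsed.scheme != "aleria":
        raise ValueError(f"Not an Aleria DB link: {link}")
    return {
        "workspace": parsed.netloc,           # host/port stay managed by Aleria
        "database": parsed.path.lstrip("/"),  # the database identifier
    }

print(parse_aleria_link("aleria://my-workspace/my-db"))
# {'workspace': 'my-workspace', 'database': 'my-db'}
```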
1. Database name – paste the Aleria DB name or a full Aleria DB link/URL.
2. Login/user and Password – supply your Aleria DB credentials.
3. Click Inspect database (🔍) to verify the connection and list the available tables.
4. Table Name – enter the target table (it must already exist).
5. Operation – keep insert (CSV Copy) for the fastest load.
6. (Optional) Turn Write all columns on if you want to write every incoming column; otherwise make sure only the target table's columns are present in the incoming data.
7. Run the pipeline.

Column mapping: the action writes by column name, so make sure incoming column names match the destination table's column names (case-sensitivity rules follow the DB settings).
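Because the write is name-based, a useful pre-flight check is to compare the pipeline's column names against the destination table before running. The sketch below is illustrative only (the box performs this matching itself); it assumes the Aleria DB accepts PostgreSQL connections, and `conn` is an open psycopg2 connection:

```python
# Illustrative pre-flight check for name-based column mapping.
# Assumes a PostgreSQL-compatible connection; table/column names are examples.

def destination_columns(conn, table: str) -> set:
    """Read the destination table's column names from information_schema."""
    schema, _, name = table.rpartition(".")
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT column_name
            FROM information_schema.columns
            WHERE table_schema = %s AND table_name = %s
            """,
            (schema or "public", name),
        )
        return {row[0] for row in cur.fetchall()}

def check_mapping(conn, table: str, incoming_columns) -> None:
    """Fail early if the pipeline emits columns the target table does not have."""
    missing = set(incoming_columns) - destination_columns(conn, table)
    if missing:
        raise ValueError(f"Columns not present on {table}: {sorted(missing)}")
```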
Connection parameters:

| Field | What it is |
|---|---|
| Database name (you may paste aleria link here) | The Aleria database identifier, or a full Aleria DB connection link. Host/port are embedded/managed by Aleria. |
| Login/user | Aleria database user. |
| Password | Password for the user. |
| Inspect database | Tests connectivity and lists schemas/tables to help you verify your target. |
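Roughly speaking, Inspect database does what the following sketch does: open a connection and list the tables you could write to. This assumes a PostgreSQL-compatible connection; the host value is a placeholder because host and port are managed by Aleria:

```python
# Rough equivalent of "Inspect database": connect, then list writable tables.
# Connection details are placeholders; Aleria supplies host and port itself.
import psycopg2

conn = psycopg2.connect(
    dbname="my-db",
    user="pipeline_writer",
    password="********",
    host="managed-by-aleria.example",  # placeholder
)
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT table_schema, table_name
        FROM information_schema.tables
        WHERE table_type = 'BASE TABLE'
          AND table_schema NOT IN ('pg_catalog', 'information_schema')
        ORDER BY 1, 2
        """
    )
    for schema, table in cur.fetchall():
        print(f"{schema}.{table}")
conn.close()
```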
Write parameters:

| Field | What it is |
|---|---|
| Table Name | Destination table to receive the rows (must exist). |
| Operation | insert (CSV Copy) — bulk insert using high-speed CSV copy. (Other modes may appear in future revisions.) |
| Write all columns | If on, every incoming column is written (must exist on target). If off, provide only the subset present on the target. |
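The Operation row above describes insert (CSV Copy) as a high-speed CSV copy; on a PostgreSQL-compatible connection that typically maps to `COPY ... FROM STDIN`. A minimal sketch, with a hypothetical helper name and example table/columns:

```python
# Minimal sketch of a CSV-copy bulk insert; helper name, table, and columns
# are illustrative, and a PostgreSQL-compatible connection is assumed.
import csv
import io

def csv_copy_insert(conn, table: str, columns, rows) -> None:
    """Stream rows into the target table with COPY ... FROM STDIN (CSV)."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)     # serialize the batch as CSV in memory
    buf.seek(0)
    copy_sql = f"COPY {table} ({', '.join(columns)}) FROM STDIN WITH (FORMAT csv)"
    with conn.cursor() as cur:
        cur.copy_expert(copy_sql, buf)  # psycopg2's fast COPY path
    conn.commit()
```

Compared with row-by-row INSERTs, COPY moves a whole batch in one round trip, which is why it is listed as the fastest option.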
Advanced parameters:

| Field | What it is / Guidance |
|---|---|
| BULK operations: number of rows in one go | Chunk size for batching rows before sending. Start with 5k–50k; increase for large, stable loads. |
| Number of “Insert”s per transaction (0 = auto-commit) | Group multiple batches into a single transaction. 0 lets the action auto-commit. For large loads, try 0 or 1–5 depending on rollback needs. |
| Allow “strange” characters in Column’s Names | Permits non-alphanumeric characters (only enable if your schema truly needs it). |
| Error Management — Status Column Name | Name of a column to append/return that carries write status (e.g., Postgre_Status_1). |
| Error Management — In case of Error | Behavior on errors. only abort on critical error is safest for long loads. |
| Write all rows | If on, processes every row, even if some error statuses are reported. Turn off if you prefer a strict fail-fast behavior. |
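To make the batching knobs concrete, here is a rough sketch of how chunk size, the inserts-per-transaction setting, and a status column could fit together. This is not the action's actual implementation; every name is illustrative:

```python
# Illustrative only: chunking ("BULK operations"), transaction grouping
# (0 = auto-commit per chunk), and a per-row status list for the status column.

def write_in_chunks(conn, write_chunk, rows, chunk_size=10_000, inserts_per_tx=0):
    """Send rows in chunks; write_chunk(conn, chunk) must not commit itself."""
    statuses = []
    pending_chunks = 0
    for start in range(0, len(rows), chunk_size):
        chunk = rows[start:start + chunk_size]
        try:
            write_chunk(conn, chunk)
            statuses.extend(["OK"] * len(chunk))
        except Exception as exc:            # sketch: report the chunk and keep going
            conn.rollback()
            statuses.extend([f"ERROR: {exc}"] * len(chunk))
            continue
        pending_chunks += 1
        if inserts_per_tx == 0 or pending_chunks >= inserts_per_tx:
            conn.commit()                   # auto-commit, or commit every N chunks
            pending_chunks = 0
    conn.commit()                           # flush anything still pending
    return statuses                         # feeds the status column on the output
```

Larger chunks mean fewer round trips but more memory per batch; grouping several chunks per transaction means a failure rolls back the whole group, which is the trade-off behind the 0 or 1–5 guidance above.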
- Security tip: prefer dedicated service accounts with least-privilege rights (INSERT permission on the target table).
- Schema alignment: ensure names and types match; cast in the pipeline if needed (see the sketch after this list).
- Primary keys & constraints: for pure inserts, make sure the incoming data contains no duplicates, or remove unique constraints on the target.
- Performance: use insert (CSV Copy) for the fastest path.
- Error visibility: set a Status Column Name to trace per-row outcomes in downstream steps.
- Strange column names: avoid turning this on unless legacy schemas force it; it can reduce portability.
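As a concrete, made-up example of "cast in the pipeline if needed": convert values to the target column types before they reach the write step (all column names and types below are invented):

```python
# Hypothetical casting step: align value types with the target schema before
# writing. Column names and types are invented for the example.
from datetime import date

def cast_row(row: dict) -> dict:
    return {
        "customer_id": int(row["customer_id"]),
        "signup_date": date.fromisoformat(row["signup_date"]),
        "lifetime_value": float(row["lifetime_value"]),
    }

raw = [{"customer_id": "42", "signup_date": "2024-05-01", "lifetime_value": "199.90"}]
print([cast_row(r) for r in raw])
```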
Goal: Load curated customer rows into public.customers_hist.
Database: aleria://my-workspace/my-db
User: pipeline_writer / Password: ••••••
Table Name: public.customers_hist
Operation: insert (CSV Copy)
Advanced: Status Column Name set to Aleria_Status; other advanced settings left at defaults.
Run the pipeline and verify Aleria_Status = OK on the output.
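If you want to check the outcome programmatically rather than by eye, a small sketch, assuming the output rows carry the Aleria_Status column used in this walkthrough:

```python
# Sketch: flag any row whose status column is not OK after the run.
# Assumes the writer appended a column named "Aleria_Status" to the output.

def failed_rows(output_rows):
    return [r for r in output_rows if r.get("Aleria_Status") != "OK"]

sample = [
    {"customer_id": 42, "Aleria_Status": "OK"},
    {"customer_id": 43, "Aleria_Status": "ERROR: duplicate key"},
]
print(failed_rows(sample))   # -> the second row only
```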
