Upload JSON documents to Elasticsearch.

Parameters:
See the dedicated page for more information.

Upload rows from your pipeline to Elasticsearch using the Bulk API.
Endpoint: https://<cluster-host>:443/<index>/_bulk
Content type: application/x-ndjson
Documents are sent in batches; Elasticsearch generates the _id unless your platform adds one.

Cluster URL (Elastic Cloud example) looks like: https://<random>.us-central1.gcp.cloud.es.io:443

Authentication (one of the two methods):
Basic auth: username elastic plus your password.
API key auth (recommended for production).
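Either method ends up as an Authorization header on the Bulk request. A minimal sketch of what each header looks like (the function names here are illustrative, not part of the action's configuration):

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    # Basic auth: base64("user:password") after the "Basic" scheme.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def api_key_auth_header(api_key: str) -> dict:
    # API key auth: the base64-encoded "id:api_key" value (as shown when
    # you create the key in Kibana / Elastic Cloud) after the "ApiKey" scheme.
    return {"Authorization": f"ApiKey {api_key}"}
```

With API keys you avoid shipping the elastic superuser password into the pipeline, which is why it is the recommended option for production.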

The action expects a column (for example, body) whose cells contain one JSON object per row (not a file path):

```json
{"customer_id":"C001","name":"Alice","email":"alice@example.com","created_at":"2025-08-20T10:00:00Z"}
{"customer_id":"C002","name":"Bob","email":"bob@example.com","created_at":"2025-08-21T09:30:00Z"}
```

Why: the Bulk API consumes NDJSON, and the action builds that payload for you from the column values. If you pass a path string instead of a JSON object, Elasticsearch returns not_x_content_exception.
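The payload the action assembles can be sketched like this: each row becomes an action line plus a document line, and the body must end with a newline. This is an illustrative reimplementation, not the action's internal code:

```python
import json

def build_bulk_body(rows, index=None):
    """Turn a list of JSON objects (one per pipeline row) into an NDJSON
    _bulk payload: an action line, then the document line, for every row,
    with the trailing newline the Bulk API requires."""
    lines = []
    for row in rows:
        # Without an explicit _id, Elasticsearch auto-generates one.
        action = {"index": {"_index": index}} if index else {"index": {}}
        lines.append(json.dumps(action))
        lines.append(json.dumps(row))
    return "\n".join(lines) + "\n"
```

Passing a file path string through this would serialize the path as a JSON string, which is exactly the shape that triggers not_x_content_exception on the server.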
Quick start:

1. Prepare data: a body column containing one JSON object per row.
2. Configure ElasticSearchUpload:
   - Index: search-66dm
   - Data column: body
   - URL: https://<your-cluster>.cloud.es.io:443
   - Username: elastic
   - Batch size: 1000 (tune later)

Success signal in the log: "successful": <n>, "failed": 0, and each item shows "result":"created".

Check result: query the index (for example with a _search request) and confirm the documents are returned.
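The success counts in the log mirror the per-item results of the Bulk response. A small sketch of how such a response can be summarized (the function name is illustrative):

```python
def summarize_bulk_response(resp: dict) -> dict:
    """Count succeeded vs failed items in a _bulk response, mirroring the
    'successful'/'failed' figures reported in the action's log."""
    successful = failed = 0
    for item in resp.get("items", []):
        # Each item is keyed by its op type ("index", "create", ...).
        (_, result), = item.items()
        if "error" in result:
            failed += 1
        else:
            successful += 1
    return {"successful": successful, "failed": failed}
```

Note that a Bulk request can return HTTP 200 while individual items failed, so checking the per-item results is what actually tells you the upload succeeded.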
Refresh after indexing: add refresh=wait_for in idOptional (query string) to make docs immediately searchable.
Ingest pipeline: add pipeline=my_ingest in idOptional.
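Both options are plain Bulk API query-string parameters appended to the endpoint URL. A sketch of the resulting URL, under the assumption that the option values are passed through verbatim (the helper name is illustrative):

```python
from urllib.parse import urlencode

def bulk_url(cluster_url: str, index: str, refresh=None, pipeline=None) -> str:
    # "refresh=wait_for" blocks until the documents are searchable;
    # "pipeline=<name>" routes every document through that ingest pipeline.
    params = {}
    if refresh:
        params["refresh"] = refresh
    if pipeline:
        params["pipeline"] = pipeline
    url = f"{cluster_url.rstrip('/')}/{index}/_bulk"
    return f"{url}?{urlencode(params)}" if params else url
```

Keep in mind that refresh=wait_for trades indexing throughput for immediate visibility, so it is best reserved for small loads or tests.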
Rate limiting / throughput:
IDs & duplicates: a "create" fails with a 409 conflict when the _id already exists (this action logs the 409s as failures); to overwrite an existing document, use index/update, or delete/reindex.

not_x_content_exception: Compressor detection…
Cause: idData points to a file path string or non-JSON text.
Fix: idData must be a column whose values are JSON objects (see Inputs).
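The create-vs-index distinction behind the 409s above shows up in the Bulk action line. A sketch of the two shapes (illustrative helper, not part of the action's API):

```python
import json

def action_line(op: str, index: str, doc_id=None) -> str:
    """Build a _bulk action line. op="create" fails with 409 when the _id
    already exists; op="index" overwrites the existing document instead."""
    meta = {"_index": index}
    if doc_id is not None:
        meta["_id"] = doc_id  # omit to let Elasticsearch auto-generate one
    return json.dumps({op: meta})
```

Choosing "index" makes re-runs idempotent (same _id overwrites the same document), while "create" makes duplicates visible as 409 failures.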
Log line: Authorization required, but no authorization protocol specified
This banner comes from the runner; if your request shows result:"created" and failed:0, you’re fine.
401/403 Unauthorized
Check secEnabled, username/password, and that url is the cluster root (not the Kibana URL).
Index not found
Create the index first in Index Management, or allow auto-create in your cluster settings.
Partial success (some failed)
Switch idErrorManagement to "continue and report" to let the pipeline progress, then inspect the per-item errors in the log.
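The per-item errors you inspect come from the "items" array of the Bulk response. A sketch of pulling out just the failed rows (the function name and output shape are illustrative):

```python
def failed_items(resp: dict):
    """Extract the failed entries from a _bulk response so you can see
    which rows were rejected and why."""
    errors = []
    for pos, item in enumerate(resp.get("items", [])):
        (_, result), = item.items()
        if "error" in result:
            errors.append({
                "position": pos,                      # row offset in the batch
                "status": result.get("status"),       # e.g. 400, 409
                "type": result["error"].get("type"),  # e.g. mapper_parsing_exception
                "reason": result["error"].get("reason"),
            })
    return errors
```

The "position" lets you map a failure back to the row in the batch that produced it, which is usually the fastest way to find the offending source data.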
Mappings: define field types (especially date fields like created_at) before big loads.