Apply a Python model created with ETL.

Parameters:

See the dedicated page for more information.
Apply a Python predictive model (.pyModel) to the incoming table (first pin).
Current limitation: models must be Multi-Class Logistic Regression models exported in the .pyModel format.
I/O
The input is the table to score (first pin); the output is the same table with one appended column holding the predicted class (named by the parameter column name containing the predictions).

This action loads a serialized Python model (extension .pyModel) and applies it to each row of the input table. The model expects exactly the same feature set and ordering as used during training. Typical uses include scoring batches of events, users, or transactions and adding the predicted class to each row for downstream filtering, KPI computation, or export.
What the component does
The action loads the .pyModel file once per partition (or once for the whole table if unpartitioned), scores every row, and writes the result to the column name containing the predictions.

Model compatibility

Only models exported as Multi-Class Logistic Regression .pyModel files are currently supported. If you trained a different type of model, re-train/export it in the supported format before use.
Train & Export the model
Use your training pipeline (e.g., the companion training action of this family) to produce a .pyModel file.
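For illustration only, a hypothetical export along these lines, assuming .pyModel is produced by pickling a scikit-learn multi-class logistic regression (an assumption; in practice, use the companion training action, which handles the export for you):

```python
# Hypothetical sketch of producing a .pyModel artifact by pickling a
# scikit-learn multi-class logistic regression. The real .pyModel layout
# may differ; prefer the companion training action.
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)                    # 3-class toy problem
model = LogisticRegression(max_iter=1000).fit(X, y)  # multi-class logistic regression

with open("my_model.pyModel", "wb") as f:
    pickle.dump(model, f)
```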
Place the model where the action can see it

Copy the file into your project's assets/ folder (e.g., assets/models/my_model.pyModel).

Wire your scoring flow

Add the action, set python model to apply → assets → pick your .pyModel, and fill in the column name containing the predictions (e.g., prediction).

Run & Validate

Run the flow and check that the prediction column is populated as expected.
Feature alignment matters
Make sure the incoming table contains all features the model expects, with the same names, order, and types used at training time.
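A pre-flight check along these lines can catch misaligned features before scoring; `expected_features` is a hypothetical list of column names saved at training time, not an actual parameter of the action:

```python
# Illustrative pre-flight check: verify the incoming table carries every
# feature the model was trained on, then reorder columns to the training
# layout (dropping any extras).
import pandas as pd

def check_features(table: pd.DataFrame, expected_features: list[str]) -> pd.DataFrame:
    missing = [c for c in expected_features if c not in table.columns]
    if missing:
        raise ValueError(f"Missing features: {missing}")
    # Same names, same order as at training time.
    return table[expected_features]
```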
Name your output clearly
Pick a specific value for column name containing the predictions, such as predicted_class, to avoid accidentally overwriting an existing column.
Process at scale
For large datasets, enable Input table partitioning to reduce memory footprint and allow parallel execution (depending on your runtime).
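The idea behind partitioned scoring can be sketched as chunked processing; `chunk_size` and the `score` callback below are illustrative, not actual parameters of the action:

```python
# Sketch of partition-style scoring: only one chunk of the table is held
# in (working) memory at a time, and chunks could be scored in parallel.
import pandas as pd

def score_in_chunks(table: pd.DataFrame, score, chunk_size: int = 100_000) -> pd.DataFrame:
    parts = []
    for start in range(0, len(table), chunk_size):
        chunk = table.iloc[start:start + chunk_size]
        parts.append(score(chunk))       # score() appends the prediction column
    return pd.concat(parts, ignore_index=True)
```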
Version your models
Keep models under assets/models/ with semantic versions (churn_v1.pyModel, churn_v2.pyModel) to reproduce results easily.
