deploy
- datarobotx.deploy(model, *args, target_type=None, target=None, classes=None, name=None, description=None, hooks=None, extra_requirements=None, environment_id=None, runtime_parameters=None, **kwargs)
Deploy a model to MLOps.
If the model object is not a native DataRobot model, an appropriate supporting custom environment and custom inference model will be built and configured within DataRobot as part of the deployment.
- Parameters:
model (Any) – The model object to deploy. See the drx documentation for additional information on supported model types.
*args (Any, optional) – Additional model objects; required for certain model types, e.g. a Hugging Face tokenizer passed alongside the pre-trained model
target_type (str, optional) – Target type for the custom deployment. If omitted, the target type is inferred automatically from the provided model artifacts and arguments. If provided, must be one of ‘Binary’, ‘Multiclass’, ‘Regression’, ‘Anomaly’, ‘Unstructured’, ‘TextGeneration’
target (str, optional) – Name of the target variable; required for supervised model types
classes (list of str, optional) – Names of the target variable classes; required for supervised classification problems; for binary classification, the first item should be the positive class
name (str, optional) – Name of the MLOps deployment
description (str, optional) – Short description for the MLOps deployment
extra_requirements (list of str, optional) – For custom model deployments: additional Python (PyPI) package names to include in the custom environment. Default behavior is to include the standard dependencies for the model type; see the additional package requirements example below
hooks (dict of callable, optional) – For custom model deployments: additional hooks to include with the deployment; see the DataRobot User Models documentation for details on supported hooks. Make sure any import statements each hook depends on have executed prior to calling deploy() or are within the hook itself; add optional dependencies with the extra_requirements argument.
environment_id (str, optional) – Custom environment id to use for this deployment. If provided, the existing environment is reused instead of automatically detecting requirements and creating a new one; the latest environment version associated with the id is used.
runtime_parameters (list of str, optional) – List of runtime parameters to inject from the DR credential store into the deployment environment. Parameter values can be retrieved inside custom hooks using the datarobot_drum package; see the unstructured deployment example below. Duplicate parameters will be ignored.
**kwargs (Any) – Additional keyword arguments that may be model specific
- Returns:
deployment – Resulting MLOps deployment; returned immediately and automatically updated asynchronously as the deployment process proceeds
- Return type:
Deployment
Examples
scikit-learn pipeline
>>> import sklearn.pipeline
>>> from datarobotx.models.deploy import deploy
>>>
>>> pipe : sklearn.pipeline.Pipeline  # assumes pipe has been defined & fit elsewhere
>>> deployment_1 = deploy(pipe,
...                       target='my_target',
...                       classes=['my_pos_class', 'my_neg_class'])
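scikit-learn pipeline with additional package requirements
A minimal sketch, assuming the fitted pipe from the previous example contains a transformer from the category_encoders package, which would not be picked up by the standard scikit-learn dependencies; the package name is illustrative only.
>>> from datarobotx.models.deploy import deploy
>>>
>>> deployment_extra = deploy(pipe,  # pipe as fit in the previous example
...                           target='my_target',
...                           classes=['my_pos_class', 'my_neg_class'],
...                           extra_requirements=['category_encoders'])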
scikit-learn pipeline with custom preprocessing hook
>>> import io
>>> import pandas as pd
>>> from datarobotx.models.deploy import deploy
>>>
>>> df : pd.DataFrame  # assumes training data was previously read elsewhere
>>> my_types = df.dtypes
>>> def force_schema(input_binary_data, *args, **kwargs):
...     buffer = io.BytesIO(input_binary_data)
...     return pd.read_csv(buffer, dtype=dict(my_types))
>>>
>>> deployment_2 = deploy(pipe,
...                       target='my_target',
...                       classes=['my_pos_class', 'my_neg_class'],
...                       hooks={'read_input_data': force_schema})
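unstructured deployment with custom hook and runtime parameters
A hedged sketch of an unstructured deployment driven by a score_unstructured hook: the 'OPENAI_API_KEY' runtime parameter name, the generate_answer helper, and passing None in place of a model object are illustrative assumptions rather than patterns confirmed by this reference.
>>> from datarobotx.models.deploy import deploy
>>>
>>> def respond(model, data, query, **kwargs):
...     # DRUM score_unstructured hook; retrieves the injected credential at request time
...     from datarobot_drum import RuntimeParameters
...     secret = RuntimeParameters.get('OPENAI_API_KEY')  # hypothetical runtime parameter name
...     prompt = data.decode('utf-8') if isinstance(data, bytes) else data
...     return generate_answer(prompt, secret)  # generate_answer is a placeholder for your own logic
>>>
>>> deployment_3 = deploy(None,  # assumes no model artifact is needed when the hook does the scoring
...                       target_type='Unstructured',
...                       hooks={'score_unstructured': respond},
...                       runtime_parameters=['OPENAI_API_KEY'])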