abacusai.prediction_client

Module Contents

Classes

PredictionClient

Abacus.AI Prediction API Client. Does not utilize authentication and only contains public prediction methods

class abacusai.prediction_client.PredictionClient(client_options=None)

Bases: abacusai.client.BaseApiClient

Abacus.AI Prediction API Client. Does not utilize authentication and only contains public prediction methods

Parameters:

client_options (ClientOptions) – Optional API client configurations
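A minimal instantiation sketch (placeholder values; no API key is configured because each prediction call authenticates with its own deployment token):

from abacusai.prediction_client import PredictionClient

# Prediction-only client; each call below authenticates with a deployment token.
client = PredictionClient()

The usage sketches in the method entries below assume this client instance and use placeholder deployment tokens, deployment IDs, column names, and values.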

predict_raw(deployment_token, deployment_id, **kwargs)

Raw interface for returning predictions from Plug and Play deployments.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • **kwargs (dict) – Arbitrary key/value pairs may be passed in and are sent as part of the request body.
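A hedged usage sketch; the extra keyword arguments shown are hypothetical and depend entirely on what the deployed Plug and Play model expects:

# Placeholder token/ID; the extra keyword arguments are sent in the request body.
result = client.predict_raw(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',
    deployment_id='YOUR_DEPLOYMENT_ID',
    text='sample input',   # hypothetical model input
    top_k=5,               # hypothetical model input
)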

lookup_features(deployment_token, deployment_id, query_data={}, limit_results=None, result_columns=None)

Returns the feature group deployed in the feature store project.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – A dictionary whose key is the name of the column (e.g. a column named ‘user_id’ in your dataset) mapped to the column mapping USER_ID that uniquely identifies the entity the lookup is performed against, and whose value is the unique value of that entity.

  • limit_results (int) – If present, will limit the number of results to the value provided.

  • result_columns (list) – If present, will limit the columns present in each result to the columns specified in this list

Return type:

Dict
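A minimal sketch, assuming a feature store deployment whose lookup key column ‘user_id’ is mapped to USER_ID (placeholder names):

# Look up the stored feature row for one entity, limiting the returned columns.
row = client.lookup_features(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',
    deployment_id='YOUR_DEPLOYMENT_ID',
    query_data={'user_id': 'U12345'},
    limit_results=10,
    result_columns=['age', 'country'],  # hypothetical column names
)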

predict(deployment_token, deployment_id, query_data={})

Returns a prediction for Predictive Modeling

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – A dictionary whose key is the name of the column (e.g. a column named ‘user_id’ in your dataset) mapped to the column mapping USER_ID that uniquely identifies the entity the prediction is performed against, and whose value is the unique value of that entity.

Return type:

Dict
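A minimal sketch, assuming the dataset column ‘user_id’ is mapped to USER_ID (placeholder values):

# Single prediction keyed by the entity's identifier column.
prediction = client.predict(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',
    deployment_id='YOUR_DEPLOYMENT_ID',
    query_data={'user_id': 'U12345'},
)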

predict_multiple(deployment_token, deployment_id, query_data={})

Returns a list of predictions for Predictive Modeling

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (list) – A list of dictionaries, one per requested prediction. In each dictionary, the key is the name of the column (e.g. a column named ‘user_id’ in your dataset) mapped to the column mapping USER_ID that uniquely identifies the entity the prediction is performed against, and the value is the unique value of that entity.

Return type:

Dict
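A minimal sketch, one query dictionary per requested prediction (placeholder values):

# Batch of predictions; each dictionary identifies one entity.
predictions = client.predict_multiple(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',
    deployment_id='YOUR_DEPLOYMENT_ID',
    query_data=[{'user_id': 'U12345'}, {'user_id': 'U67890'}],
)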

predict_from_datasets(deployment_token, deployment_id, query_data={})

Returns a list of predictions for Predictive Modeling

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – This will be a dictionary where ‘Key’ will be the source dataset name and ‘Value’ will be a list of records corresponding to the dataset rows

Return type:

Dict
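A minimal sketch, assuming a hypothetical source dataset named ‘transactions’; the row records are placeholders:

# Keys are source dataset names, values are lists of row records.
result = client.predict_from_datasets(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',
    deployment_id='YOUR_DEPLOYMENT_ID',
    query_data={
        'transactions': [
            {'transaction_id': 'T1', 'amount': 19.99},
            {'transaction_id': 'T2', 'amount': 250.00},
        ],
    },
)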

predict_lead(deployment_token, deployment_id, query_data, explain_predictions=False, explainer_type=None)

Returns the probability that a user is a lead based on their interactions with the service/product and their own attributes (e.g. income, assets, credit score). Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘user_id’ mapped to mapping ‘LEAD_ID’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – This will be a dictionary containing user attributes and/or user’s interaction data with the product/service (e.g. number of click, items in cart, etc.).

  • explain_predictions (bool) – Will explain predictions for lead

  • explainer_type (str) – Type of explainer to use for explanations

Return type:

Dict
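A minimal sketch; the attribute and interaction fields are hypothetical examples of query_data content:

# Lead-scoring query with user attributes and interaction counts.
lead = client.predict_lead(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',
    deployment_id='YOUR_DEPLOYMENT_ID',
    query_data={'user_id': 'U12345', 'clicks': 14, 'items_in_cart': 2},
    explain_predictions=True,
)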

predict_churn(deployment_token, deployment_id, query_data)

Returns the probability that a user will churn based on their interactions with the item/product/service. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘churn_result’ mapped to mapping ‘CHURNED_YN’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – A dictionary whose key is the name of the column (e.g. a column named ‘user_id’ in your dataset) mapped to the column mapping USER_ID that uniquely identifies the entity the prediction is performed against, and whose value is the unique value of that entity.

Return type:

Dict
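A minimal sketch, assuming the column ‘user_id’ is mapped to USER_ID (placeholder values):

# Churn probability for a single user.
churn = client.predict_churn(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',
    deployment_id='YOUR_DEPLOYMENT_ID',
    query_data={'user_id': 'U12345'},
)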

predict_takeover(deployment_token, deployment_id, query_data)

Returns a probability for each class label associated with the types of fraud, or a ‘yes’ or ‘no’ label indicating the possibility of fraud. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘account_name’ mapped to mapping ‘ACCOUNT_ID’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – This will be a dictionary containing account activity characteristics (e.g. login id, login duration, login type, ip address, etc.).

Return type:

Dict

predict_fraud(deployment_token, deployment_id, query_data)

Returns the probability that a transaction performed under a specific account is fraudulent. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘account_number’ mapped to the mapping ‘ACCOUNT_ID’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – This will be a dictionary containing transaction attributes (e.g. credit card type, transaction location, transaction amount, etc.).

Return type:

Dict

predict_class(deployment_token, deployment_id, query_data={}, threshold=None, threshold_class=None, thresholds=None, explain_predictions=False, fixed_features=None, nested=None, explainer_type=None)

Returns a classification prediction

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – A dictionary whose key is the name of the column (e.g. a column named ‘user_id’ in your dataset) mapped to the column mapping USER_ID that uniquely identifies the entity the prediction is performed against, and whose value is the unique value of that entity.

  • threshold (float) – A threshold applied to the probability of the most popular class label.

  • threshold_class (str) – The label to which the threshold is applied (binary labels only).

  • thresholds (list) – Maps labels to thresholds (multi-label classification only). Defaults to the F1-optimal threshold if computed for the given class, otherwise 0.5.

  • explain_predictions (bool) – If true, returns the SHAP explanations for all input features.

  • fixed_features (list) – Set of input features to treat as constant for explanations.

  • nested (str) – If specified, generates a prediction delta for each index of the specified nested feature.

  • explainer_type (str) –

Return type:

Dict
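A minimal sketch showing the thresholding and explanation flags (placeholder values):

# Classification with a custom decision threshold and SHAP explanations.
result = client.predict_class(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',
    deployment_id='YOUR_DEPLOYMENT_ID',
    query_data={'user_id': 'U12345'},
    threshold=0.7,
    explain_predictions=True,
)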

predict_target(deployment_token, deployment_id, query_data={}, explain_predictions=False, fixed_features=None, nested=None, explainer_type=None)

Returns a prediction from a classification or regression model. Optionally, includes explanations.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – A dictionary whose key is the name of the column (e.g. a column named ‘user_id’ in your dataset) mapped to the column mapping USER_ID that uniquely identifies the entity the prediction is performed against, and whose value is the unique value of that entity.

  • explain_predictions (bool) – If true, returns the SHAP explanations for all input features.

  • fixed_features (list) – Set of input features to treat as constant for explanations.

  • nested (str) – If specified, generates a prediction delta for each index of the specified nested feature.

  • explainer_type (str) –

Return type:

Dict

get_anomalies(deployment_token, deployment_id, threshold=None, histogram=False)

Returns a list of anomalies from the training dataset

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • threshold (float) – The threshold score of what is an anomaly. Valid values are between 0.8 and 0.99.

  • histogram (bool) – If True, will return a histogram of the distribution of all points

Return type:

io.BytesIO

is_anomaly(deployment_token, deployment_id, query_data=None)

Returns a list of anomaly attributes based on login information for a specified account. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘account_name’ mapped to mapping ‘ACCOUNT_ID’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – The input data for the prediction.

Return type:

Dict

get_forecast(deployment_token, deployment_id, query_data, future_data=None, num_predictions=None, prediction_start=None, explain_predictions=False, explainer_type=None)

Returns a list of forecasts for a given entity under the specified project deployment. Note that the inputs to the deployed model will be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘holiday_yn’ mapped to mapping ‘FUTURE’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – A dictionary whose key is the name of the column (e.g. a column named ‘store_id’ in your dataset) mapped to the column mapping ITEM_ID that uniquely identifies the entity the forecast is performed against, and whose value is the unique value of that entity.

  • future_data (dict) – This will be a dictionary of values known ahead of time that are relevant for forecasting (e.g. State Holidays, National Holidays, etc.). The key and the value both will be of type ‘String’. For example future data entered for a Store may be {“Holiday”:”No”, “Promo”:”Yes”}.

  • num_predictions (int) – The number of timestamps to predict in the future.

  • prediction_start (str) – The start date for predictions (e.g. “2015-08-01T00:00:00” for midnight of 2015-08-01).

  • explain_predictions (bool) – Will explain predictions for forecasting

  • explainer_type (str) – Type of explainer to use for explanations

Return type:

Dict
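A minimal sketch, assuming the column ‘store_id’ is mapped to ITEM_ID; the future_data keys mirror the docstring example:

# 14-step forecast starting at midnight of 2015-08-01, with known future covariates.
forecast = client.get_forecast(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',
    deployment_id='YOUR_DEPLOYMENT_ID',
    query_data={'store_id': 'S001'},
    future_data={'Holiday': 'No', 'Promo': 'Yes'},
    num_predictions=14,
    prediction_start='2015-08-01T00:00:00',
)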

get_k_nearest(deployment_token, deployment_id, vector, k=None, distance=None, include_score=False)

Returns the k nearest neighbors for the provided embedding vector.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • vector (list) – Input vector to perform the k nearest neighbors with.

  • k (int) – Overrideable number of items to return

  • distance (str) – Specify the distance function to use when finding nearest neighbors

  • include_score (bool) – If True, will return the score alongside the resulting embedding value

Return type:

Dict
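A minimal sketch; the embedding vector is a placeholder and ‘euclidean’ is the distance name shown in the get_multiple_k_nearest entry below:

# Nearest-neighbor lookup for a raw embedding vector.
neighbors = client.get_k_nearest(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',
    deployment_id='YOUR_DEPLOYMENT_ID',
    vector=[0.12, -0.03, 0.88, 0.41],
    k=20,
    distance='euclidean',
    include_score=True,
)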

get_multiple_k_nearest(deployment_token, deployment_id, queries)

Returns the k nearest neighbors for the queries provided

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • queries (list) – List of Mappings of format {“catalogId”: “cat0”, “vectors”: […], “k”: 20, “distance”: “euclidean”}. See getKNearest for additional information about the supported parameters

get_labels(deployment_token, deployment_id, query_data, threshold=None)

Returns a list of scored labels extracted from the input text.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – Dictionary where key is “Content” and value is the text from which entities are to be extracted.

  • threshold (None) – Deprecated

Return type:

Dict
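A minimal sketch; per the docstring, the text to extract entities from is passed under the “Content” key:

# Entity extraction over a block of text.
labels = client.get_labels(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',
    deployment_id='YOUR_DEPLOYMENT_ID',
    query_data={'Content': 'Acme Corp. hired Jane Doe in March 2020.'},
)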

get_recommendations(deployment_token, deployment_id, query_data, num_items=50, page=1, exclude_item_ids=[], score_field='', scaling_factors=[], restrict_items=[], exclude_items=[], explore_fraction=0.0)

Returns a list of recommendations for a given user under the specified project deployment. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘time’ mapped to mapping ‘TIMESTAMP’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – A dictionary whose key is the name of the column (e.g. a column named ‘user_name’ in your dataset) mapped to the column mapping USER_ID that uniquely identifies the user recommendations are made for, and whose value is the unique value of that user. For example, if the column ‘user_name’ is mapped to the column mapping ‘USER_ID’, the query must use that exact column name (user_name) as the key and the name of the user (e.g. John Doe) as the value.

  • num_items (int) – The number of items to recommend on one page. By default, it is set to 50 items per page.

  • page (int) – The page number to be displayed. For example, let’s say that the num_items is set to 10 with the total recommendations list size of 50 recommended items, then an input value of 2 in the ‘page’ variable will display a list of items that rank from 11th to 20th.

  • exclude_item_ids (list) – [DEPRECATED]

  • score_field (str) – If set, the relative item scores are returned in a separate field named after the value passed for this argument.

  • scaling_factors (list) – Allows you to bias the model towards certain items. The input is a list of dictionaries, each of the format {“column”: “col0”, “values”: [“value0”, “value1”], “factor”: 1.1}: the key “column” takes the name of the column, “col0”; the key “values” takes the list of items towards which the recommendations should be biased; and the key “factor” takes the factor by which those items’ scores are adjusted. For example, if scaling_factors is [{“column”: “VehicleType”, “values”: [“SUV”, “Sedan”], “factor”: 1.4}], then after the model produces item probabilities, the probability of every SUV and Sedan in the list is multiplied by 1.4 before sorting. This is particularly useful when a type of item is less popular but you want to promote it, or when an item always comes up and you want to demote it.

  • restrict_items (list) – Allows you to restrict the recommendations to certain items. The input is a list of dictionaries, each of the format {“column”: “col0”, “values”: [“value0”, “value1”, “value3”, …]}: the key “column” takes the name of the column, “col0”, and the key “values” takes the list of items the recommendations are restricted to. For example, if restrict_items is [{“column”: “VehicleType”, “values”: [“SUV”, “Sedan”]}], the recommendations are restricted to SUVs and Sedans. This type of restriction is particularly useful when you know a list of items is relevant in a particular scenario and you want to recommend only from that list.

  • exclude_items (list) – Allows you to exclude certain items from the list of recommendations. The input is a list of dictionaries, each of the format {“column”: “col0”, “values”: [“value0”, “value1”, …]}: the key “column” takes the name of the column, “col0”, and the key “values” takes the list of items to exclude. For example, if exclude_items is [{“column”: “VehicleType”, “values”: [“SUV”, “Sedan”]}], the resulting recommendation list excludes all SUVs and Sedans. This is particularly useful when you know a list of items is of no use in a particular scenario and you don’t want to show them.

  • explore_fraction (float) – The fraction of recommendations that should be new items.

Return type:

Dict
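A minimal sketch combining paging, score biasing, and exclusions; the column and item values are placeholders:

# Page 2 of 10 recommendations, boosting SUVs/Sedans and excluding Trucks.
recs = client.get_recommendations(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',
    deployment_id='YOUR_DEPLOYMENT_ID',
    query_data={'user_name': 'John Doe'},
    num_items=10,
    page=2,
    scaling_factors=[{'column': 'VehicleType', 'values': ['SUV', 'Sedan'], 'factor': 1.4}],
    exclude_items=[{'column': 'VehicleType', 'values': ['Truck']}],
)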

get_personalized_ranking(deployment_token, deployment_id, query_data, preserve_ranks=[], preserve_unknown_items=False, scaling_factors=[])

Returns a list of items with personalized promotions on them for a given user under the specified project deployment. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘item_code’ mapped to mapping ‘ITEM_ID’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – A dictionary with two key-value pairs. In the first pair, the key is the column name (e.g. a column named ‘user_id’ in your dataset) mapped to the column mapping USER_ID that uniquely identifies the user the prediction is made for, and the value is that user’s identifier. In the second pair, the key is the column name (e.g. movie_name) mapped to ITEM_ID (the unique item identifier), and the value is a list of identifiers for the items to rank.

  • preserve_ranks (list) – List of dictionaries of the format {“column”: “col0”, “values”: [“value0”, “value1”]}, where the ranks of the items in query_data are preserved for all items in “col0” with the values “value0” and “value1”. This option is useful when the desired items are already being recommended in the desired order and their ranks need to stay unchanged during recommendation generation.

  • preserve_unknown_items (bool) – If true, items that are unknown to the model will not be reranked, and their original positions in the query will be preserved.

  • scaling_factors (list) – Allows you to bias the model towards certain items. The input is a list of dictionaries, each of the format {“column”: “col0”, “values”: [“value0”, “value1”], “factor”: 1.1}: the key “column” takes the name of the column, “col0”; the key “values” takes the list of items towards which the ranking should be biased; and the key “factor” takes the factor by which those items’ scores are adjusted. For example, if scaling_factors is [{“column”: “VehicleType”, “values”: [“SUV”, “Sedan”], “factor”: 1.4}], then after the model produces item probabilities, the probability of every SUV and Sedan in the list is multiplied by 1.4 before sorting. This is particularly useful when a type of item is less popular but you want to promote it, or when an item always comes up and you want to demote it.

Return type:

Dict
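A minimal sketch of the two-pair query_data structure (placeholder column names and identifiers):

# Rank a candidate list of items for one user, keeping unknown items in place.
ranking = client.get_personalized_ranking(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',
    deployment_id='YOUR_DEPLOYMENT_ID',
    query_data={
        'user_id': 'U12345',                              # column mapped to USER_ID
        'movie_name': ['Movie A', 'Movie B', 'Movie C'],  # column mapped to ITEM_ID
    },
    preserve_unknown_items=True,
)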

get_ranked_items(deployment_token, deployment_id, query_data, preserve_ranks=[], preserve_unknown_items=False, scaling_factors=[])

Returns a list of re-ranked items for a selected user when a list of items is required to be reranked according to the user’s preferences. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘item_code’ mapped to mapping ‘ITEM_ID’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – A dictionary with two key-value pairs. In the first pair, the key is the column name (e.g. a column named ‘user_id’ in your dataset) mapped to the column mapping USER_ID that uniquely identifies the user the prediction is made for, and the value is that user’s identifier. In the second pair, the key is the column name (e.g. movie_name) mapped to ITEM_ID (the unique item identifier), and the value is a list of identifiers for the items to rerank.

  • preserve_ranks (list) – List of dictionaries of the format {“column”: “col0”, “values”: [“value0”, “value1”]}, where the ranks of the items in query_data are preserved for all items in “col0” with the values “value0” and “value1”. This option is useful when the desired items are already being recommended in the desired order and their ranks need to stay unchanged during recommendation generation.

  • preserve_unknown_items (bool) – If true, items that are unknown to the model will not be reranked, and their original positions in the query will be preserved.

  • scaling_factors (list) – Allows you to bias the model towards certain items. The input is a list of dictionaries, each of the format {“column”: “col0”, “values”: [“value0”, “value1”], “factor”: 1.1}: the key “column” takes the name of the column, “col0”; the key “values” takes the list of items towards which the ranking should be biased; and the key “factor” takes the factor by which those items’ scores are adjusted. For example, if scaling_factors is [{“column”: “VehicleType”, “values”: [“SUV”, “Sedan”], “factor”: 1.4}], then after the model produces item probabilities, the probability of every SUV and Sedan in the list is multiplied by 1.4 before sorting. This is particularly useful when a type of item is less popular but you want to promote it, or when an item always comes up and you want to demote it.

Return type:

Dict

get_related_items(deployment_token, deployment_id, query_data, num_items=50, page=1, scaling_factors=[], restrict_items=[], exclude_items=[])

Returns a list of related items for a given item under the specified project deployment. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘item_code’ mapped to mapping ‘ITEM_ID’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – A dictionary whose key is the name of the column (e.g. a column named ‘user_name’ in your dataset) mapped to the column mapping USER_ID that uniquely identifies the user related items are determined for, and whose value is the unique value of that user. For example, if the column ‘user_name’ is mapped to the column mapping ‘USER_ID’, the query must use that exact column name (user_name) as the key and the name of the user (e.g. John Doe) as the value.

  • num_items (int) – The number of items to recommend on one page. By default, it is set to 50 items per page.

  • page (int) – The page number to be displayed. For example, let’s say that the num_items is set to 10 with the total recommendations list size of 50 recommended items, then an input value of 2 in the ‘page’ variable will display a list of items that rank from 11th to 20th.

  • scaling_factors (list) – Allows you to bias the model towards certain items. The input is a list of dictionaries, each of the format {“column”: “col0”, “values”: [“value0”, “value1”], “factor”: 1.1}: the key “column” takes the name of the column, “col0”; the key “values” takes the list of items towards which the recommendations should be biased; and the key “factor” takes the factor by which those items’ scores are adjusted. For example, if scaling_factors is [{“column”: “VehicleType”, “values”: [“SUV”, “Sedan”], “factor”: 1.4}], then after the model produces item probabilities, the probability of every SUV and Sedan in the list is multiplied by 1.4 before sorting. This is particularly useful when a type of item is less popular but you want to promote it, or when an item always comes up and you want to demote it.

  • restrict_items (list) – Allows you to restrict the recommendations to certain items. The input is a list of dictionaries, each of the format {“column”: “col0”, “values”: [“value0”, “value1”, “value3”, …]}: the key “column” takes the name of the column, “col0”, and the key “values” takes the list of items the recommendations are restricted to. For example, if restrict_items is [{“column”: “VehicleType”, “values”: [“SUV”, “Sedan”]}], the recommendations are restricted to SUVs and Sedans. This type of restriction is particularly useful when you know a list of items is relevant in a particular scenario and you want to recommend only from that list.

  • exclude_items (list) – Allows you to exclude certain items from the list of recommendations. The input is a list of dictionaries, each of the format {“column”: “col0”, “values”: [“value0”, “value1”, …]}: the key “column” takes the name of the column, “col0”, and the key “values” takes the list of items to exclude. For example, if exclude_items is [{“column”: “VehicleType”, “values”: [“SUV”, “Sedan”]}], the resulting recommendation list excludes all SUVs and Sedans. This is particularly useful when you know a list of items is of no use in a particular scenario and you don’t want to show them.

Return type:

Dict

get_feature_group_rows(deployment_token, deployment_id, query_data)
Parameters:
  • deployment_token (str) –

  • deployment_id (str) –

  • query_data (dict) –

get_search_results(deployment_token, deployment_id, query_data)

TODO

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – Dictionary where key is “Content” and value is the text from which entities are to be extracted.

Return type:

Dict

get_sentiment(deployment_token, deployment_id, document)

TODO

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • document (str) – # TODO

Return type:

Dict

get_entailment(deployment_token, deployment_id, document)

TODO

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • document (str) – # TODO

Return type:

Dict

get_classification(deployment_token, deployment_id, document)

TODO

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • document (str) – # TODO

Return type:

Dict

get_summary(deployment_token, deployment_id, query_data)

Returns a JSON object containing the predicted summary for the given document. Note that the inputs to this method, wherever applicable, will be the column names in your dataset mapped to the column mappings in our system (e.g. column ‘text’ mapped to mapping ‘DOCUMENT’ in our system).

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – Raw data dictionary containing the required document data; it must have a key ‘document’ whose value is the DOCUMENT-type text.

Return type:

Dict
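A minimal sketch; per the docstring, the input text goes under the ‘document’ key of query_data:

# Summarize a single document.
summary = client.get_summary(
    deployment_token='YOUR_DEPLOYMENT_TOKEN',
    deployment_id='YOUR_DEPLOYMENT_ID',
    query_data={'document': 'Long article text to be summarized ...'},
)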

predict_language(deployment_token, deployment_id, query_data)

TODO

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (str) – # TODO

Return type:

Dict

get_assignments(deployment_token, deployment_id, query_data, forced_assignments=None)

Get all positive assignments that match a query.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – specifies the set of assignments being requested.

  • forced_assignments (dict) – set of assignments to force and resolve before returning query results.

Return type:

Dict

check_constraints(deployment_token, deployment_id, query_data)

Check for any constraints violated by the overrides.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • query_data (dict) – assignment overrides to the solution.

Return type:

Dict

predict_with_binary_data(deployment_token, deployment_id, blob, blob_key_name='blob')

Make predictions for a given blob, e.g. image, audio

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • blob (io.TextIOBase) – The multipart/form-data of the data

  • blob_key_name (str) – the key to access this blob data in the model query data

Return type:

Dict
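A minimal sketch; the file name and blob_key_name are hypothetical, and the blob is opened here as a binary file object:

# Send an image file as the prediction input blob.
with open('photo.jpg', 'rb') as blob:
    result = client.predict_with_binary_data(
        deployment_token='YOUR_DEPLOYMENT_TOKEN',
        deployment_id='YOUR_DEPLOYMENT_ID',
        blob=blob,
        blob_key_name='image',  # hypothetical key the deployed model expects
    )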

describe_image(deployment_token, deployment_id, image, categories, top_n=None)

Describe the similarity between image and categories.

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • image (io.TextIOBase) – The image to describe

  • categories (list) – A list of candidate categories to compare with the image

  • top_n (int) – Return the N most similar categories

Return type:

Dict
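A minimal sketch; the file name and candidate categories are placeholders:

# Score an image against a short list of candidate categories.
with open('photo.jpg', 'rb') as image:
    similarity = client.describe_image(
        deployment_token='YOUR_DEPLOYMENT_TOKEN',
        deployment_id='YOUR_DEPLOYMENT_ID',
        image=image,
        categories=['cat', 'dog', 'bird'],
        top_n=2,
    )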

transcribe_audio(deployment_token, deployment_id, audio)

Transcribe the audio

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • audio (io.TextIOBase) – The audio to transcribe

Return type:

Dict

classify_image(deployment_token, deployment_id, image)

Classify the image

Parameters:
  • deployment_token (str) – The deployment token to authenticate access to created deployments. This token is only authorized to predict on deployments in this project, so it is safe to embed this model inside of an application or website.

  • deployment_id (str) – The unique identifier to a deployment created under the project.

  • image (io.TextIOBase) – The image to classify

Return type:

Dict