Title: | 'Prevision.io' R SDK |
---|---|
Description: | For working with the 'Prevision.io' AI model management platform's API <https://prevision.io/>. |
Authors: | Florian Laroumagne [aut, cre], Prevision.io Inc [cph] |
Maintainer: | Florian Laroumagne <[email protected]> |
License: | MIT + file LICENSE |
Version: | 11.7.0 |
Built: | 2025-02-15 04:40:29 UTC |
Source: | https://github.com/cran/previsionio |
Create a new connector of a supported type (among: "SQL", "FTP", "SFTP", "S3", "GCP"). If check_if_exist is enabled, the function will check if a connector with the same name already exists. If yes, it will return a message and the information of the existing connector instead of creating a new one.
create_connector( project_id, type, name, host, port, username, password, google_credentials = NULL, check_if_exist = FALSE )
project_id |
id of the project, can be obtained with get_projects(). |
type |
connector type. |
name |
connector name. |
host |
connector host. |
port |
connector port. |
username |
connector username. |
password |
connector password. |
google_credentials |
google credentials JSON (for GCP only). |
check_if_exist |
boolean (FALSE by default). If TRUE, checks whether a connector with the same name already exists. |
list - parsed content of the connector.
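For instance, creating an S3 connector might look like the following sketch ("proj_id" and all credentials are placeholder values, not real ones):

```r
# Assumes an initialised Prevision.io connection; proj_id comes from get_projects().
connector <- create_connector(
  project_id = proj_id,
  type       = "S3",
  name       = "my-s3-connector",      # hypothetical name
  host       = "s3.amazonaws.com",
  port       = 443,
  username   = "ACCESS_KEY_ID",        # placeholder credentials
  password   = "SECRET_ACCESS_KEY",
  check_if_exist = TRUE                # reuse an existing connector with this name
)
```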
Create a new contact point of a supported type (among: "email", "slack").
create_contact_point( project_id, type, name, addresses = NULL, webhook_url = NULL )
project_id |
id of the project, can be obtained with get_projects(). |
type |
contact point type among "email" or "slack". |
name |
contact point name. |
addresses |
contact point addresses. |
webhook_url |
contact point webhook_url. |
list - parsed content of the contact point.
Create a dataframe from a dataset_id.
create_dataframe_from_dataset(dataset_id)
dataset_id |
dataset id. |
data.frame - a R dataframe matching the dataset.
Create a dataset embedding from a dataset_id.
create_dataset_embedding(dataset_id)
dataset_id |
dataset id. |
integer - 200 on success.
Upload dataset from data frame.
create_dataset_from_dataframe(project_id, dataset_name, dataframe, zip = FALSE)
project_id |
id of the project, can be obtained with get_projects(). |
dataset_name |
given name of the dataset on the platform. |
dataframe |
data.frame to upload. |
zip |
is the temp file zipped before sending it to Prevision.io (default = FALSE). |
list - parsed content of the dataset.
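As a quick sketch, the built-in iris data frame could be uploaded like this ("proj_id" is a placeholder project id):

```r
dataset <- create_dataset_from_dataframe(
  project_id   = proj_id,
  dataset_name = "iris",
  dataframe    = iris,
  zip          = TRUE   # compress the temporary file before sending
)
```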
Create a dataset from an existing datasource.
create_dataset_from_datasource(project_id, dataset_name, datasource_id)
project_id |
id of the project, can be obtained with get_projects(). |
dataset_name |
given name of the dataset on the platform. |
datasource_id |
datasource id. |
list - parsed content of the dataset.
Upload dataset from file name.
create_dataset_from_file( project_id, dataset_name, file, separator = ",", decimal = "." )
project_id |
id of the project, can be obtained with get_projects(). |
dataset_name |
given name of the dataset on the platform. |
file |
path to the dataset. |
separator |
column separator in the file (default: ","). |
decimal |
decimal separator in the file (default: "."). |
list - parsed content of the dataset.
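For example, a semicolon-separated file using comma decimals (a common European CSV layout) might be uploaded as follows; the path and names are hypothetical:

```r
dataset <- create_dataset_from_file(
  project_id   = proj_id,
  dataset_name = "sales-2023",
  file         = "data/sales_2023.csv",
  separator    = ";",
  decimal      = ","
)
```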
Create a new datasource. If check_if_exist is enabled, the function will check if a datasource with the same name already exists. If yes, it will return a message and the information of the existing datasource instead of creating a new one.
create_datasource( project_id, connector_id, name, path = "", database = "", table = "", bucket = "", request = "", check_if_exist = FALSE )
project_id |
id of the project, can be obtained with get_projects(). |
connector_id |
connector_id linked to the datasource. |
name |
datasource name. |
path |
datasource path (for SFTP & FTP connector). |
database |
datasource database (for SQL connector). |
table |
datasource table (for SQL connector). |
bucket |
datasource bucket (for S3 connector). |
request |
datasource request (for SQL connector). |
check_if_exist |
boolean (FALSE by default). If TRUE, checks whether a datasource with the same name already exists. |
list - parsed content of the datasource.
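A minimal sketch of a SQL datasource pointing at a whole table, with the connector id looked up from a hypothetical name:

```r
conn_id <- get_connector_id_from_name(proj_id, "my-sql-connector")
ds <- create_datasource(
  project_id     = proj_id,
  connector_id   = conn_id,
  name           = "customers-table",
  database       = "crm",        # hypothetical database and table
  table          = "customers",
  check_if_exist = TRUE
)
```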
Create a new API key for a deployed model.
create_deployment_api_key(deployment_id)
deployment_id |
id of the deployment to create an API key on, can be obtained with get_deployments(). |
list - API key information.
Create a new deployment for a model.
create_deployment_model( project_id, name, experiment_id, main_model_experiment_version_id, challenger_model_experiment_version_id = NULL, access_type = c("fine_grained"), type_violation_policy = c("best_effort"), description = NULL, main_model_id, challenger_model_id = NULL )
project_id |
id of the project, can be obtained with get_projects(). |
name |
name of the deployment. |
experiment_id |
id of the experiment to deploy, can be obtained with get_experiment_id_from_name(). |
main_model_experiment_version_id |
id of the experiment_version to deploy, can be obtained with get_experiment_version_id(). |
challenger_model_experiment_version_id |
id of the challenger experiment_version to deploy, can be obtained with get_experiment_version_id(). |
access_type |
type of access of the deployment among "fine_grained" (project defined, default), "private" (instance) or "public" (everyone). |
type_violation_policy |
handling of type violation when making predictions among "best_effort" (default) or "strict" (stops the prediction if there is a type violation). |
description |
description of the deployment. |
main_model_id |
id of the model to deploy. |
challenger_model_id |
id of the challenger model to deploy. |
list - parsed content of the deployment.
Create predictions on a deployed model using a dataset.
create_deployment_predictions(deployment_id, dataset_id)
deployment_id |
id of the deployment, can be obtained with get_deployments(). |
dataset_id |
id of the dataset to predict, can be obtained with get_dataset_id_from_name(). |
integer - 200 on success.
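Chaining the lookup helpers, a batch prediction on a deployed model might be triggered like this (the deployment and dataset names are hypothetical):

```r
depl_id <- get_deployment_id_from_name(proj_id, "churn-api", type = "model")
create_deployment_predictions(
  deployment_id = depl_id,
  dataset_id    = get_dataset_id_from_name(proj_id, "new-customers")
)
```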
Create a new experiment. If check_if_exist is enabled, the function will check if an experiment with the same name already exists. If yes, it will return a message and the information of the existing experiment instead of creating a new one.
create_experiment( project_id, name, provider, data_type, training_type, check_if_exist = FALSE )
project_id |
id of the project in which we create the experiment. |
name |
name of the experiment. |
provider |
provider of the experiment ("prevision-auto-ml" or "external"). |
data_type |
type of data ("tabular", "images" or "timeseries"). |
training_type |
type of the training you want to achieve ("regression", "classification", "multiclassification", "clustering", "object-detection" or "text-similarity"). |
check_if_exist |
boolean (FALSE by default). If TRUE, checks whether an experiment with the same name already exists. |
list - experiment information.
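For instance, a tabular classification experiment could be registered as follows (the name is a placeholder):

```r
experiment <- create_experiment(
  project_id     = proj_id,
  name           = "churn-model",
  provider       = "prevision-auto-ml",
  data_type      = "tabular",
  training_type  = "classification",
  check_if_exist = TRUE
)
```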
Create a new version of an existing experiment.
create_experiment_version( experiment_id, dataset_id = NULL, target_column = NULL, holdout_dataset_id = NULL, id_column = NULL, drop_list = NULL, profile = NULL, experiment_description = NULL, metric = NULL, fold_column = NULL, normal_models = NULL, lite_models = NULL, simple_models = NULL, with_blend = NULL, weight_column = NULL, features_engineering_selected_list = NULL, features_selection_count = NULL, features_selection_time = NULL, folder_dataset_id = NULL, filename_column = NULL, ymin = NULL, ymax = NULL, xmin = NULL, xmax = NULL, time_column = NULL, start_dw = NULL, end_dw = NULL, start_fw = NULL, end_fw = NULL, group_list = NULL, apriori_list = NULL, content_column = NULL, queries_dataset_id = NULL, queries_dataset_content_column = NULL, queries_dataset_id_column = NULL, queries_dataset_matching_id_description_column = NULL, top_k = NULL, lang = NULL, models_params = NULL, name = NULL, onnx_file = NULL, yaml_file = NULL )
experiment_id |
id of the experiment that will host the new version. |
dataset_id |
id of the dataset used for the training phase. |
target_column |
name of the TARGET column. |
holdout_dataset_id |
id of the holdout dataset. |
id_column |
name of the id column. |
drop_list |
list of names of features to drop. |
profile |
chosen profile among "quick", "normal" or "advanced". |
experiment_description |
experiment description. |
metric |
name of the metric to optimise. |
fold_column |
name of the fold column. |
normal_models |
list of (normal) models to select with full FE & hyperparameters search (among "LR", "RF", "ET", "XGB", "LGB", "NN", "CB"). |
lite_models |
list of (lite) models to select with lite FE & default hyperparameters (among "LR", "RF", "ET", "XGB", "LGB", "NN", "CB", "NBC"). |
simple_models |
list of simple models to select (among "LR", "DT"). |
with_blend |
boolean, whether blended models are allowed in the modeling. |
weight_column |
name of the weight column. |
features_engineering_selected_list |
list of feature engineering to select (among "Counter", "Date", "freq", "text_tfidf", "text_word2vec", "text_embedding", "tenc", "poly", "pca", "kmean"). |
features_selection_count |
number of features to keep after the feature selection process. |
features_selection_time |
time budget in minutes of the feature selection process. |
folder_dataset_id |
id of the dataset folder (images). |
filename_column |
name of the file name path (images). |
ymin |
name of the column matching the lower y value of the image (object detection). |
ymax |
name of the column matching the higher y value of the image (object detection). |
xmin |
name of the column matching the lower x value of the image (object detection). |
xmax |
name of the column matching the higher x value of the image (object detection). |
time_column |
name of column containing the timestamp (time series). |
start_dw |
value of the start of the derivative window (time series), should be a strictly negative integer. |
end_dw |
value of the end of the derivative window (time series), should be a negative integer greater than start_dw. |
start_fw |
value of the start of the forecast window (time series), should be a strictly positive integer. |
end_fw |
value of the end of the forecast window (time series), should be a strictly positive integer greater than start_fw. |
group_list |
list of names of features that describe groups (time series). |
apriori_list |
list of names of a priori features (time series). |
content_column |
content column name (text-similarity). |
queries_dataset_id |
id of the dataset containing queries (text-similarity). |
queries_dataset_content_column |
name of the column containing queries in the query dataset (text-similarity). |
queries_dataset_id_column |
name of the ID column in the query dataset (text-similarity). |
queries_dataset_matching_id_description_column |
name of the column matching id in the description dataset (text-similarity). |
top_k |
top k individual to find (text-similarity). |
lang |
lang of the text (text-similarity). |
models_params |
parameters of the model (text-similarity). |
name |
name of the external model (external model). |
onnx_file |
path to the onnx file (external model). |
yaml_file |
path to the yaml file (external model). |
list - experiment information.
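Most arguments only apply to a given data or training type. A minimal tabular sketch, assuming hypothetical experiment and dataset names and column names:

```r
exp_id  <- get_experiment_id_from_name(proj_id, "churn-model")
version <- create_experiment_version(
  experiment_id = exp_id,
  dataset_id    = get_dataset_id_from_name(proj_id, "churn-train"),
  target_column = "churned",          # hypothetical TARGET column
  metric        = "auc",
  profile       = "quick",
  lite_models   = list("XGB", "LGB")  # lite FE, default hyperparameters
)
```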
Export data using an existing exporter and the resource to export.
create_export(exporter_id, type, dataset_id = NULL, prediction_id = NULL)
exporter_id |
id of the exporter, can be obtained with get_exporters(). |
type |
type of data to export among "dataset", "validation-prediction" or "deployment-prediction". |
dataset_id |
id of the dataset to export (only for type == "dataset"). |
prediction_id |
id of the prediction to export (only for type == "validation-prediction" or type == "deployment-prediction"). |
list - parsed content of the export.
Create a new exporter.
create_exporter( project_id, connector_id, name, description = "", filepath = "", file_write_mode = "timestamp", database = "", table = "", database_write_mode = "append", bucket = "" )
project_id |
id of the project, can be obtained with get_projects(). |
connector_id |
connector_id linked to the exporter. |
name |
exporter name. |
description |
description of the exporter. |
filepath |
exporter path (for SFTP & FTP connector). |
file_write_mode |
writing mode when exporting a file (for SFTP & FTP connector, among "timestamp", "safe" or "replace"). |
database |
exporter database (for SQL connector). |
table |
exporter table (for SQL connector). |
database_write_mode |
writing mode when exporting data within a database (for SQL connector, among "append" or "replace"). |
bucket |
exporter bucket (for S3 connector). |
list - parsed content of the exporter.
Upload folder from a local file.
create_folder(project_id, folder_name, file)
project_id |
id of the project, can be obtained with get_projects(). |
folder_name |
given name of the folder on the platform. |
file |
path to the folder. |
list - parsed content of the folder.
Trigger an existing pipeline run.
create_pipeline_trigger(pipeline_id)
pipeline_id |
id of the pipeline run to trigger, can be obtained with get_pipelines(). |
integer - 200 on success.
Create a prediction on a specified experiment_version.
create_prediction( experiment_version_id, dataset_id = NULL, folder_dataset_id = NULL, confidence = FALSE, best_single = FALSE, model_id = NULL, queries_dataset_id = NULL, queries_dataset_content_column = NULL, queries_dataset_id_column = NULL, queries_dataset_matching_id_description_column = NULL, top_k = NULL )
experiment_version_id |
id of the experiment_version, can be obtained with get_experiment_version_id(). |
dataset_id |
id of the dataset to start the prediction on, can be obtained with get_datasets(). |
folder_dataset_id |
id of the folder dataset to start the prediction on, can be obtained with get_folders(). Only useful for image use cases. |
confidence |
boolean. If enabled, a confidence interval will be added to the predictions. |
best_single |
boolean. If enabled, the best single (non-blended) model will be used for predictions; otherwise the best model will be used, unless model_id is provided. |
model_id |
id of the model to start the prediction on. If provided, it overrides the best_single parameter. |
queries_dataset_id |
id of the dataset containing queries (text-similarity). |
queries_dataset_content_column |
name of the content column in the queries dataset (text-similarity). |
queries_dataset_id_column |
name of the id column in the queries dataset (text-similarity). |
queries_dataset_matching_id_description_column |
name of the column matching the id (text-similarity). |
top_k |
number of class to retrieve (text-similarity). |
list - parsed prediction information.
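For a tabular experiment, a validation prediction with confidence intervals might be launched like this (the experiment and dataset names are hypothetical):

```r
exp_id     <- get_experiment_id_from_name(proj_id, "churn-model")
version_id <- get_experiment_version_id(exp_id, version_number = 1)
pred <- create_prediction(
  experiment_version_id = version_id,
  dataset_id = get_dataset_id_from_name(proj_id, "churn-test"),
  confidence = TRUE
)
```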
Create a new project. If check_if_exist is enabled, the function will check if a project with the same name already exists. If yes, it will return a message and the information of the existing project instead of creating a new one.
create_project( name, description = NULL, color = "#a748f5", check_if_exist = FALSE )
name |
name of the project. |
description |
description of the project. |
color |
color of the project among "#4876be", "#4ab6eb", "#49cf7d", "#dc8218", "#ecba35", "#f45b69", "#a748f5", "#b34ca2" or "#2fe6d0" ("#a748f5" by default). |
check_if_exist |
boolean (FALSE by default). If TRUE, checks whether a project with the same name already exists. |
list - information of the created project.
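A quick sketch (the name, description and color are examples, not requirements):

```r
project <- create_project(
  name           = "churn-analysis",
  description    = "Customer churn experiments",
  color          = "#4876be",
  check_if_exist = TRUE   # reuse an existing project with this name
)
```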
Add a user to an existing project.
create_project_user(project_id, user_mail, user_role)
project_id |
id of the project, can be obtained with get_projects(). |
user_mail |
email of the user to be added. |
user_role |
role to grant to the user among "admin", "contributor", "viewer" or "end_user". |
list - information of project's users.
Delete an existing connector.
delete_connector(connector_id)
connector_id |
id of the connector to be deleted, can be obtained with get_connectors(). |
integer - 200 on success.
Delete an existing contact point.
delete_contact_point(contact_point_id)
contact_point_id |
id of the contact point to be deleted, can be obtained with get_contact_points(). |
integer - 204 on success.
Delete an existing dataset.
delete_dataset(dataset_id)
dataset_id |
id of the dataset, can be obtained with get_datasets(). |
integer - 204 on success.
Delete a datasource.
delete_datasource(datasource_id)
datasource_id |
id of the datasource to be deleted, can be obtained with get_datasources(). |
integer - 200 on success.
Delete an existing deployment.
delete_deployment(deployment_id)
deployment_id |
id of the deployment, can be obtained with get_deployments(). |
integer - 204 on success.
Delete an experiment from the platform.
delete_experiment(experiment_id)
experiment_id |
id of the experiment, can be obtained with get_experiments(). |
integer - 204 on success.
Delete an exporter.
delete_exporter(exporter_id)
exporter_id |
id of the exporter to be deleted, can be obtained with get_exporters(). |
integer - 204 on success.
Delete an existing folder.
delete_folder(folder_id)
folder_id |
id of the folder to be deleted. |
integer - 200 on success.
Delete an existing pipeline.
delete_pipeline(pipeline_id, type)
pipeline_id |
id of the pipeline to be retrieved, can be obtained with get_pipelines(). |
type |
type of the pipeline to be retrieved among "component", "template", "run". |
integer - 204 on success.
Delete a prediction.
delete_prediction(prediction_id)
prediction_id |
id of the prediction to be deleted, can be obtained with get_experiment_version_predictions(). |
integer - 204 on success.
Delete an existing project.
delete_project(project_id)
project_id |
id of the project, can be obtained with get_projects(). |
integer - 204 on success.
Delete a user from an existing project.
delete_project_user(project_id, user_id)
project_id |
id of the project, can be obtained with get_projects(). |
user_id |
user_id of the user to be deleted, can be obtained with get_project_users(). |
integer - 200 on success.
Get the model_id that provides the best predictive performance for a given experiment_version_id. If include_blend is FALSE, it will return the model_id of the best "non-blended" model.
get_best_model_id(experiment_version_id, include_blend = TRUE)
experiment_version_id |
id of the experiment_version, can be obtained with get_experiment_version_id(). |
include_blend |
boolean, indicating if you want to retrieve the best model among blended models too. |
character - model_id.
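Comparing the overall best model with the best non-blended one is a two-call sketch (exp_id is a placeholder experiment id):

```r
version_id  <- get_experiment_version_id(exp_id)
best        <- get_best_model_id(version_id)                        # may be a blend
best_single <- get_best_model_id(version_id, include_blend = FALSE) # non-blended only
```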
Get a connector_id from a connector_name for a given project_id. If the name is duplicated, the first matching connector_id is retrieved.
get_connector_id_from_name(project_id, connector_name)
project_id |
id of the project, can be obtained with get_projects(). |
connector_name |
name of the connector we are searching its id from. |
character - id of the connector if found.
Get information about connector from its id.
get_connector_info(connector_id)
connector_id |
id of the connector to be retrieved, can be obtained with get_connectors(). |
list - parsed content of the connector.
Get information of all connectors available for a given project_id.
get_connectors(project_id)
project_id |
id of the project, can be obtained with get_projects(). |
list - parsed content of all connectors for the supplied project_id.
Get a contact point information from its contact_point_id.
get_contact_point_info(contact_point_id)
contact_point_id |
id of the contact point, can be obtained with get_contact_points(). |
list - information of the contact point.
Get information of all contact points available for a given project_id.
get_contact_points(project_id)
project_id |
id of the project, can be obtained with get_projects(). |
list - parsed content of all contact points for the supplied project_id.
Get a dataset embedding from a dataset_id.
get_dataset_embedding(dataset_id)
dataset_id |
dataset id. |
integer - 200 on success.
Show the head of a dataset from its id.
get_dataset_head(dataset_id)
dataset_id |
id of the dataset, can be obtained with get_datasets(). |
data.frame - head of the dataset.
Get a dataset_id from a dataset_name. If the name is duplicated, the first matching dataset_id is retrieved.
get_dataset_id_from_name(project_id, dataset_name)
project_id |
id of the project, can be obtained with get_projects(). |
dataset_name |
name of the dataset we are searching its id from. Can be obtained with get_datasets(). |
character - id of the dataset if found.
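Combined with create_dataframe_from_dataset(), this gives a name-to-data.frame round trip (the dataset name is hypothetical):

```r
dataset_id <- get_dataset_id_from_name(proj_id, "churn-train")
df <- create_dataframe_from_dataset(dataset_id)
head(df)
```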
Get a dataset from its id.
get_dataset_info(dataset_id)
dataset_id |
id of the dataset, can be obtained with get_datasets(). |
list - parsed content of the dataset.
Get information of all datasets available for a given project_id.
get_datasets(project_id)
project_id |
id of the project, can be obtained with get_projects(). |
list - parsed content of all datasets for the supplied project_id.
Get a datasource_id from a datasource_name. If the name is duplicated, the first matching datasource_id is retrieved.
get_datasource_id_from_name(project_id, datasource_name)
project_id |
id of the project, can be obtained with get_projects(). |
datasource_name |
name of the datasource we are searching its id from. Can be obtained with get_datasources(). |
character - id of the datasource if found.
Get a datasource from its id.
get_datasource_info(datasource_id)
datasource_id |
id of the data_sources to be retrieved, can be obtained with get_datasources(). |
list - parsed content of the data_sources.
Get information of all data sources available for a given project_id.
get_datasources(project_id)
project_id |
id of the project, can be obtained with get_projects(). |
list - parsed content of all data_sources for the supplied project_id.
Get a deployment_alert_id from a name and type for a given deployment_id.
get_deployment_alert_id_from_name(deployment_id, name)
deployment_id |
id of the deployment, can be obtained with get_deployments(). |
name |
name of the deployment_alert we are searching its id from. |
character - id of the deployment_alert if found.
Get information about a deployment_alert for a given deployed model.
get_deployment_alert_info(deployment_id, deployment_alert_id)
deployment_id |
id of the deployment, can be obtained with get_deployments(). |
deployment_alert_id |
id of the deployment_alert to be retrieved, can be obtained with get_deployment_alerts(). |
list - parsed content of the deployment_alert.
Get information of all alerts related to a deployment_id.
get_deployment_alerts(deployment_id)
deployment_id |
id of the deployment, can be obtained with get_deployments(). |
list - parsed content of all alerts for the supplied deployment_id.
Get API keys for a deployed model.
get_deployment_api_keys(deployment_id)
deployment_id |
id of the deployment to get API keys, can be obtained with get_deployments(). |
data.frame - API keys available for deployment_id.
Get logs from a deployed app.
get_deployment_app_logs(deployment_id, log_type)
deployment_id |
id of the deployment to get the log, can be obtained with get_deployments(). |
log_type |
type of logs we want to get among "build", "deploy" or "run". |
list - logs from deployed apps.
Get a deployment_id from a name and type for a given project_id. If the name is duplicated, the first matching deployment_id is retrieved.
get_deployment_id_from_name(project_id, name, type)
project_id |
id of the project, can be obtained with get_projects(). |
name |
name of the deployment we are searching its id from. |
type |
type of the deployment to be retrieved among "model" or "app". |
character - id of the deployment if found.
Get information about a deployment from its id.
get_deployment_info(deployment_id)
deployment_id |
id of the deployment to be retrieved, can be obtained with get_deployments(). |
list - parsed content of the deployment.
Get information related to predictions of a prediction_id.
get_deployment_prediction_info(prediction_id)
prediction_id |
id of the prediction returned by create_deployment_predictions or that can be obtained with get_deployment_predictions(). |
list - prediction information for a deployed model.
Get listing of predictions related to a deployment_id.
get_deployment_predictions(deployment_id)
deployment_id |
id of the deployment, can be obtained with get_deployments(). |
list - predictions available for a deployed model.
Get usage (calls, errors and response time) of the last version of a deployed model.
get_deployment_usage(deployment_id, usage_type)
deployment_id |
id of the deployment to get usage, can be obtained with get_deployments(). |
usage_type |
type of usage to get, among "calls", "errors", "response_time". |
list - plotly object.
Get information of all deployments of a given type available for a given project_id.
get_deployments(project_id, type)
project_id |
id of the project, can be obtained with get_projects(). |
type |
type of the deployment to retrieve among "model" or "app". |
list - parsed content of all deployments of the given type for the supplied project_id.
Get an experiment_id from an experiment_name. If the name is duplicated, the first matching experiment_id is retrieved.
get_experiment_id_from_name(project_id, experiment_name)
project_id |
id of the project, can be obtained with get_projects(). |
experiment_name |
name of the experiment we are searching its id from. Can be obtained with get_experiments(). |
character - id matching the experiment_name if found.
Get an experiment from its experiment_id.
get_experiment_info(experiment_id)
experiment_id |
id of the experiment, can be obtained with get_experiments(). |
list - parsed content of the experiment.
Get features information related to an experiment_version_id.
get_experiment_version_features(experiment_version_id)
experiment_version_id |
id of the experiment_version, can be obtained with get_experiment_version_id(). |
list - parsed content of the experiment_version features information.
Get an experiment version id from an experiment_id and its version number.
get_experiment_version_id(experiment_id, version_number = 1)
experiment_id |
id of the experiment, can be obtained with get_experiments(). |
version_number |
number of the version of the experiment (default: 1). |
character - experiment version id.
Get experiment_version info from its experiment_version_id.
get_experiment_version_info(experiment_version_id)
experiment_version_id |
id of the experiment_version, can be obtained with get_experiment_version_id(). |
list - parsed content of the experiment_version.
Get a model list related to an experiment_version_id.
get_experiment_version_models(experiment_version_id)
experiment_version_id |
id of the experiment_version, can be obtained with get_experiment_version_id(). |
list - parsed content of models attached to experiment_version_id.
Get a list of predictions from an experiment_version_id.
get_experiment_version_predictions( experiment_version_id, generating_type = "user" )
experiment_version_id |
id of the experiment_version, can be obtained with get_experiment_version_id(). |
generating_type |
can be "user" (= user predictions) or "auto" (= hold out predictions). |
list - parsed prediction list items.
Get information of all experiments available for a given project_id.
get_experiments(project_id)
project_id |
id of the project, can be obtained with get_projects(). |
list - parsed content of all experiments for the supplied project_id.
Get all exports done from an exporter_id.
get_exporter_exports(exporter_id)
exporter_id |
id of the exporter to retrieve information, can be obtained with get_exporters(). |
list - list of exports of the supplied exporter_id.
Get an exporter_id from an exporter_name. In case of duplicated names, the first matching exporter_id is retrieved.
get_exporter_id_from_name(project_id, exporter_name)
project_id |
id of the project, can be obtained with get_projects(). |
exporter_name |
name of the exporter whose id is searched. Can be obtained with get_exporters(). |
character - id of the exporter if found.
Get an exporter from its id.
get_exporter_info(exporter_id)
exporter_id |
id of the exporter to be retrieved, can be obtained with get_exporters(). |
list - parsed content of the exporter.
Get information of all exporters available for a given project_id.
get_exporters(project_id)
project_id |
id of the project, can be obtained with get_projects(). |
list - parsed content of all exporters for the supplied project_id.
Get information of a given feature related to an experiment_version_id.
get_features_infos(experiment_version_id, feature_name)
experiment_version_id |
id of the experiment_version, can be obtained with get_experiment_version_id(). |
feature_name |
name of the feature to retrieve information for. |
list - parsed content of the specific feature.
Get a folder from its id.
get_folder(folder_id)
folder_id |
id of the image folder, can be obtained with get_folders(). |
list - parsed content of the folder.
Get a folder_id from a folder_name. In case of duplicated names, the first matching folder_id is retrieved.
get_folder_id_from_name(project_id, folder_name)
project_id |
id of the project, can be obtained with get_projects(). |
folder_name |
name of the folder whose id is searched. Can be obtained with get_folders(). |
character - id of the folder if found.
Get information of all image folders available for a given project_id.
get_folders(project_id)
project_id |
id of the project, can be obtained with get_projects(). |
list - parsed content of all folders.
Get the cross validation file from a specific model.
get_model_cv(model_id)
model_id |
id of the model to get the CV, can be obtained with get_experiment_version_models(). |
data.frame - cross validation data coming from model_id.
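For instance, assuming a model id taken from get_experiment_version_models() (the id below is a hypothetical placeholder), the CV data can be loaded into a data.frame for local analysis:
## Not run: cv <- get_model_cv('my-model-id')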
Get feature importance corresponding to a model_id.
get_model_feature_importance(model_id, mode = "raw")
model_id |
id of the model, can be obtained with get_experiment_version_models(). |
mode |
character indicating the type of feature importance among "raw" (default) or "engineered". |
data.frame - dataset of the model's feature importance.
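A sketch of retrieving both flavours of feature importance for the same model (the id is a hypothetical placeholder):
## Not run: fi_raw <- get_model_feature_importance('my-model-id', mode = 'raw')
## Not run: fi_eng <- get_model_feature_importance('my-model-id', mode = 'engineered')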
Get hyperparameters corresponding to a model_id.
get_model_hyperparameters(model_id)
model_id |
id of the model, can be obtained with get_experiment_version_models(). |
list - parsed content of the model's hyperparameters.
Get model information corresponding to a model_id.
get_model_infos(model_id)
model_id |
id of the model, can be obtained with get_experiment_version_models(). |
list - parsed content of the model.
Get a pipeline_id from a pipeline_name and type for a given project_id. In case of duplicated names, the first matching pipeline_id is retrieved.
get_pipeline_id_from_name(project_id, name, type)
project_id |
id of the project, can be obtained with get_projects(). |
name |
name of the pipeline whose id is searched. |
type |
type of the pipeline to be retrieved among "component", "template", "run". |
character - id of the pipeline if found.
Get information about a pipeline from its id and its type.
get_pipeline_info(pipeline_id, type)
pipeline_id |
id of the pipeline to be retrieved, can be obtained with get_pipelines(). |
type |
type of the pipeline to be retrieved among "component", "template", "run". |
list - parsed content of the pipeline.
Get information of all pipelines of a given type available for a given project_id.
get_pipelines(project_id, type)
project_id |
id of the project, can be obtained with get_projects(). |
type |
type of the pipeline to retrieve among "component", "template", or "run". |
list - parsed content of all pipelines of the given type for the supplied project_id.
Get a specific prediction from a prediction_id. Waits until time_out is reached, pausing wait_time seconds between each retry.
get_prediction(prediction_id, prediction_type, time_out = 3600, wait_time = 10)
prediction_id |
id of the prediction to be retrieved, can be obtained with get_experiment_version_predictions(). |
prediction_type |
type of prediction among "validation" (not deployed model) and "deployment" (deployed model). |
time_out |
maximum number of seconds to wait for the prediction. 3600 by default. |
wait_time |
number of seconds to wait between each retry. 10 by default. |
data.frame - predictions coming from prediction_id.
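For example, a validation prediction can be polled with tighter settings than the defaults (the id is a hypothetical placeholder):
## Not run: pred <- get_prediction('my-prediction-id', prediction_type = 'validation', time_out = 600, wait_time = 5)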
Get information about a prediction_id.
get_prediction_infos(prediction_id)
prediction_id |
id of the prediction to be retrieved, can be obtained with get_experiment_version_predictions(). |
list - parsed prediction information.
Get a project_id from a project_name. In case of duplicated names, the first matching project_id is retrieved.
get_project_id_from_name(project_name)
project_name |
name of the project whose id is searched. Can be obtained with get_projects(). |
character - project_id of the project_name if found.
Get a project from its project_id.
get_project_info(project_id)
project_id |
id of the project, can be obtained with get_projects(). |
list - information of the project.
Get users from a project.
get_project_users(project_id)
project_id |
id of the project, can be obtained with get_projects(). |
list - information of project's users.
Retrieves all projects.
get_projects()
list - list of existing projects.
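A typical entry point is to list the projects, then resolve a known name to its id (the project name is a hypothetical placeholder):
## Not run: projects <- get_projects()
## Not run: project_id <- get_project_id_from_name('my-project')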
Get metrics on a CV file retrieved by Prevision.io for a binary classification use case.
helper_cv_classif_analysis(actual, predicted, fold, thresh = NULL, step = 1000)
actual |
target coming from the cross validation dataframe retrieved by Prevision.io |
predicted |
prediction coming from the cross validation dataframe retrieved by Prevision.io |
fold |
fold number coming from the cross validation dataframe retrieved by Prevision.io |
thresh |
threshold to use. If not provided, the optimal threshold according to the F1 score will be computed |
step |
number of iterations used to find the optimal threshold (1000 by default = 0.1% resolution per fold) |
data.frame - metrics computed between actual and predicted vectors.
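A sketch of chaining this helper with get_model_cv(); the id and the CV column names used below are assumptions and may differ on your instance:
## Not run: cv <- get_model_cv('my-model-id')
## Not run: helper_cv_classif_analysis(actual = cv$actual, predicted = cv$pred, fold = cv$fold)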
[BETA] Return a data.frame that contains features, a boolean indicating if the feature may have a different distribution between the submitted datasets (if p-value < threshold), their exact p-value and the test used to compute it.
helper_drift_analysis(dataset_1, dataset_2, p_value = 0.05, features = NULL)
dataset_1 |
the first data set |
dataset_2 |
the second data set |
p_value |
p-value used as the decision criterion for flagging a feature as suspicious. 5% by default |
features |
a vector of feature names that should be tested. If NULL, only the intersection of the names() of both datasets will be kept |
vector - a vector of suspicious features.
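For example, assuming two local data.frames train_df and test_df sharing column names, drift can be checked at the default 5% level:
## Not run: suspicious <- helper_drift_analysis(train_df, test_df)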
[BETA] Compute the optimal prediction for each row in a data frame, given a model, a list of actionable features and a number of samples to be tested for each feature.
helper_optimal_prediction( project_id, experiment_id, model_id, df, actionable_features, nb_sample, maximize, zip = FALSE, version = 1 )
project_id |
id of the project containing the use case. |
experiment_id |
id of the experiment to be predicted on. |
model_id |
id of the model to be predicted on. |
df |
a data frame to be predicted on. |
actionable_features |
a list of actionable features contained in the names of the data frame. |
nb_sample |
a vector giving the number of samples for each actionable feature. |
maximize |
a boolean indicating if we maximize or minimize the predicted target. |
zip |
a boolean indicating if the data frame to predict should be zipped prior to sending to the instance. |
version |
version of the use case we want to make the prediction on. |
data.frame - optimal values and the associated prediction for each row in the original data frame.
Plot RECALL, PRECISION & F1 SCORE versus top n predictions for a binary classification use case.
helper_plot_classif_analysis(actual, predicted, top, compute_every_n = 1)
actual |
true value (0 or 1 only) |
predicted |
prediction vector (probability) |
top |
number of top individuals to analyse |
compute_every_n |
compute indicators every n individuals (1 by default) |
data.frame - metrics computed between actual and predicted vectors.
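A sketch of plotting the metrics for the 500 top-scored individuals (y_true and y_prob are hypothetical local vectors):
## Not run: helper_plot_classif_analysis(actual = y_true, predicted = y_prob, top = 500)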
Pause a running experiment_version on the platform.
pause_experiment_version(experiment_version_id)
experiment_version_id |
id of the experiment_version, can be obtained with get_experiment_version_id(). |
integer - 200 on success.
Download resources according to specific parameters.
pio_download(endpoint, tempFile)
endpoint |
end of the url of the API call. |
tempFile |
temporary file to download. |
list - response from the request.
Initialization of the connection to your instance Prevision.io.
pio_init(token, url)
token |
your master token, can be found on your instance on the "API KEY" page. |
url |
the url of your instance. |
list - url and token needed for connecting to the Prevision.io environment.
## Not run: pio_init('eyJhbGciOiJIUz', 'https://xxx.prevision.io')
Convert a list returned from APIs to a dataframe. Only works for consistent lists (same naming and number of columns).
pio_list_to_df(list)
list |
named list coming from an API call. |
data.frame - cast a consistent list to a data.frame.
Send a request to the platform. Given an endpoint, a method and optional data, an API call is built and executed.
pio_request(endpoint, method, data = NULL, upload = FALSE)
endpoint |
end of the url of the API call. |
method |
the HTTP method required by the API (available: POST, GET, DELETE). |
data |
object to upload when using method POST. |
upload |
parameter used internally when uploading a dataset (for encoding in the API call); do not set it manually. |
list - response from the request.
## Not run: pio_request(paste0('/jobs/', experiment$jobId), DELETE)
Resume a paused experiment_version on the platform.
resume_experiment_version(experiment_version_id)
experiment_version_id |
id of the experiment_version, can be obtained with get_experiment_version_id(). |
integer - 200 on success.
Stop a running or paused experiment_version on the platform.
stop_experiment_version(experiment_version_id)
experiment_version_id |
id of the experiment_version, can be obtained with get_experiment_version_id(). |
integer - 200 on success.
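pause_experiment_version(), resume_experiment_version() and stop_experiment_version() all take the same id, so a version's lifecycle can be driven like this (the id is a hypothetical placeholder):
## Not run: pause_experiment_version('my-experiment-version-id')
## Not run: resume_experiment_version('my-experiment-version-id')
## Not run: stop_experiment_version('my-experiment-version-id')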
Test an existing connector.
test_connector(connector_id)
connector_id |
id of the connector to be tested, can be obtained with get_connectors(). |
integer - 200 on success.
Test an existing contact point.
test_contact_point(contact_point_id)
contact_point_id |
id of the contact point to be tested, can be obtained with get_contact_points(). |
integer - 200 on success.
Test a datasource.
test_datasource(datasource_id)
datasource_id |
id of the datasource to be tested, can be obtained with get_datasources(). |
integer - 200 on success.
Check if a deployment type is supported.
test_deployment_type(type)
type |
type of the deployment among "model" or "app". |
no return value, called for side effects.
Check if a pipeline type is supported.
test_pipeline_type(type)
type |
type of the pipeline among "component", "template", "run". |
no return value, called for side effects.
Update the description of a given experiment_version_id.
update_experiment_version_description(experiment_version_id, description = "")
experiment_version_id |
id of the experiment_version, can be obtained with get_experiment_version_id(). |
description |
Description of the experiment. |
integer - 200 on success.
Update a user's role in an existing project.
update_project_user_role(project_id, user_id, user_role)
project_id |
id of the project, can be obtained with get_projects(). |
user_id |
user_id of the user whose role is to be updated, can be obtained with get_project_users(). |
user_role |
role to grant to the user among "admin", "contributor", "viewer" and "end_user". |
list - information of project's users.
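For instance, granting the "viewer" role to a user (ids are hypothetical placeholders):
## Not run: update_project_user_role('my-project-id', 'my-user-id', 'viewer')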