Oracle Cloud Infrastructure v1.41.0 published on Wednesday, Jun 19, 2024 by Pulumi

oci.AiLanguage.getModel


    This data source provides details about a specific Model resource in the Oracle Cloud Infrastructure AI Language service.

    Gets a model by identifier

    Example Usage

    Coming soon!
    
    package generated_program;
    
    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.oci.AiLanguage.AiLanguageFunctions;
    import com.pulumi.oci.AiLanguage.inputs.GetModelArgs;
    
    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }
    
        public static void stack(Context ctx) {
            // testModelOciAiLanguageModel refers to an existing oci.AiLanguage.Model
            // resource defined elsewhere in the program.
            final var testModel = AiLanguageFunctions.getModel(GetModelArgs.builder()
                .modelId(testModelOciAiLanguageModel.id())
                .build());
        }
    }
    
    variables:
      testModel:
        fn::invoke:
          Function: oci:AiLanguage:getModel
          Arguments:
            modelId: ${testModelOciAiLanguageModel.id}
    

    Using getModel

    Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

    function getModel(args: GetModelArgs, opts?: InvokeOptions): Promise<GetModelResult>
    function getModelOutput(args: GetModelOutputArgs, opts?: InvokeOptions): Output<GetModelResult>
    def get_model(id: Optional[str] = None,
                  opts: Optional[InvokeOptions] = None) -> GetModelResult
    def get_model_output(id: Optional[pulumi.Input[str]] = None,
                         opts: Optional[InvokeOptions] = None) -> Output[GetModelResult]
    func GetModel(ctx *Context, args *GetModelArgs, opts ...InvokeOption) (*GetModelResult, error)
    func GetModelOutput(ctx *Context, args *GetModelOutputArgs, opts ...InvokeOption) GetModelResultOutput

    > Note: This function is named GetModel in the Go SDK.

    public static class GetModel 
    {
        public static Task<GetModelResult> InvokeAsync(GetModelArgs args, InvokeOptions? opts = null)
        public static Output<GetModelResult> Invoke(GetModelInvokeArgs args, InvokeOptions? opts = null)
    }
    public static CompletableFuture<GetModelResult> getModel(GetModelArgs args, InvokeOptions options)
    // Output-based functions aren't available in Java yet
    
    fn::invoke:
      function: oci:AiLanguage/getModel:getModel
      arguments:
        # arguments dictionary

    The following arguments are supported:

    Id string
    The OCID of the model, a unique identifier that is immutable on creation
    Id string
    The OCID of the model, a unique identifier that is immutable on creation
    id String
    The OCID of the model, a unique identifier that is immutable on creation
    id string
    The OCID of the model, a unique identifier that is immutable on creation
    id str
    The OCID of the model, a unique identifier that is immutable on creation
    id String
    The OCID of the model, a unique identifier that is immutable on creation

    getModel Result

    The following output properties are available:

    CompartmentId string
    The OCID for the model's compartment.
    DefinedTags Dictionary<string, object>
    Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
    Description string
    A short description of the Model.
    DisplayName string
    A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
    EvaluationResults List<GetModelEvaluationResult>
    model training results of different models
    FreeformTags Dictionary<string, object>
    Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
    Id string
    The OCID of the model, a unique identifier that is immutable on creation
    LifecycleDetails string
    A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
    ModelDetails List<GetModelModelDetail>
    Possible model types
    ProjectId string
    The OCID of the project to associate with the model.
    State string
    The state of the model.
    SystemTags Dictionary<string, object>
    Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
    TestStrategies List<GetModelTestStrategy>
    Possible strategy for the testing and validation (optional) datasets.
    TimeCreated string
    The time the model was created. An RFC3339 formatted datetime string.
    TimeUpdated string
    The time the model was updated. An RFC3339 formatted datetime string.
    TrainingDatasets List<GetModelTrainingDataset>
    Possible data set type
    Version string
    For pre-trained models, this identifies the model type version used at creation. For custom models, where identifying the model by model ID alone is difficult, this parameter provides an easier identifier for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
    CompartmentId string
    The OCID for the model's compartment.
    DefinedTags map[string]interface{}
    Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
    Description string
    A short description of the Model.
    DisplayName string
    A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
    EvaluationResults []GetModelEvaluationResult
    model training results of different models
    FreeformTags map[string]interface{}
    Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
    Id string
    The OCID of the model, a unique identifier that is immutable on creation
    LifecycleDetails string
    A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
    ModelDetails []GetModelModelDetail
    Possible model types
    ProjectId string
    The OCID of the project to associate with the model.
    State string
    The state of the model.
    SystemTags map[string]interface{}
    Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
    TestStrategies []GetModelTestStrategy
    Possible strategy for the testing and validation (optional) datasets.
    TimeCreated string
    The time the model was created. An RFC3339 formatted datetime string.
    TimeUpdated string
    The time the model was updated. An RFC3339 formatted datetime string.
    TrainingDatasets []GetModelTrainingDataset
    Possible data set type
    Version string
    For pre-trained models, this identifies the model type version used at creation. For custom models, where identifying the model by model ID alone is difficult, this parameter provides an easier identifier for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
    compartmentId String
    The OCID for the model's compartment.
    definedTags Map<String,Object>
    Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
    description String
    A short description of the Model.
    displayName String
    A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
    evaluationResults List<GetModelEvaluationResult>
    model training results of different models
    freeformTags Map<String,Object>
    Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
    id String
    The OCID of the model, a unique identifier that is immutable on creation
    lifecycleDetails String
    A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
    modelDetails List<GetModelModelDetail>
    Possible model types
    projectId String
    The OCID of the project to associate with the model.
    state String
    The state of the model.
    systemTags Map<String,Object>
    Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
    testStrategies List<GetModelTestStrategy>
    Possible strategy for the testing and validation (optional) datasets.
    timeCreated String
    The time the model was created. An RFC3339 formatted datetime string.
    timeUpdated String
    The time the model was updated. An RFC3339 formatted datetime string.
    trainingDatasets List<GetModelTrainingDataset>
    Possible data set type
    version String
    For pre-trained models, this identifies the model type version used at creation. For custom models, where identifying the model by model ID alone is difficult, this parameter provides an easier identifier for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
    compartmentId string
    The OCID for the model's compartment.
    definedTags {[key: string]: any}
    Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
    description string
    A short description of the Model.
    displayName string
    A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
    evaluationResults GetModelEvaluationResult[]
    model training results of different models
    freeformTags {[key: string]: any}
    Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
    id string
    The OCID of the model, a unique identifier that is immutable on creation
    lifecycleDetails string
    A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
    modelDetails GetModelModelDetail[]
    Possible model types
    projectId string
    The OCID of the project to associate with the model.
    state string
    The state of the model.
    systemTags {[key: string]: any}
    Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
    testStrategies GetModelTestStrategy[]
    Possible strategy for the testing and validation (optional) datasets.
    timeCreated string
    The time the model was created. An RFC3339 formatted datetime string.
    timeUpdated string
    The time the model was updated. An RFC3339 formatted datetime string.
    trainingDatasets GetModelTrainingDataset[]
    Possible data set type
    version string
    For pre-trained models, this identifies the model type version used at creation. For custom models, where identifying the model by model ID alone is difficult, this parameter provides an easier identifier for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
    compartment_id str
    The OCID for the model's compartment.
    defined_tags Mapping[str, Any]
    Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
    description str
    A short description of the Model.
    display_name str
    A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
    evaluation_results Sequence[ailanguage.GetModelEvaluationResult]
    model training results of different models
    freeform_tags Mapping[str, Any]
    Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
    id str
    The OCID of the model, a unique identifier that is immutable on creation
    lifecycle_details str
    A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
    model_details Sequence[ailanguage.GetModelModelDetail]
    Possible model types
    project_id str
    The OCID of the project to associate with the model.
    state str
    The state of the model.
    system_tags Mapping[str, Any]
    Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
    test_strategies Sequence[ailanguage.GetModelTestStrategy]
    Possible strategy for the testing and validation (optional) datasets.
    time_created str
    The time the model was created. An RFC3339 formatted datetime string.
    time_updated str
    The time the model was updated. An RFC3339 formatted datetime string.
    training_datasets Sequence[ailanguage.GetModelTrainingDataset]
    Possible data set type
    version str
    For pre-trained models, this identifies the model type version used at creation. For custom models, where identifying the model by model ID alone is difficult, this parameter provides an easier identifier for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
    compartmentId String
    The OCID for the model's compartment.
    definedTags Map<Any>
    Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
    description String
    A short description of the Model.
    displayName String
    A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
    evaluationResults List<Property Map>
    model training results of different models
    freeformTags Map<Any>
    Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
    id String
    The OCID of the model, a unique identifier that is immutable on creation
    lifecycleDetails String
    A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
    modelDetails List<Property Map>
    Possible model types
    projectId String
    The OCID of the project to associate with the model.
    state String
    The state of the model.
    systemTags Map<Any>
    Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
    testStrategies List<Property Map>
    Possible strategy for the testing and validation (optional) datasets.
    timeCreated String
    The time the model was created. An RFC3339 formatted datetime string.
    timeUpdated String
    The time the model was updated. An RFC3339 formatted datetime string.
    trainingDatasets List<Property Map>
    Possible data set type
    version String
    For pre-trained models, this identifies the model type version used at creation. For custom models, where identifying the model by model ID alone is difficult, this parameter provides an easier identifier for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
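    The version format above can be unpacked programmatically. A minimal sketch, not part of the provider API; the field names are illustrative assumptions, since the placeholder pattern does not name them:

```python
# Split an AI Language model version string of the form
# <>::<>_<>::<>, e.g. "ai-lang::NER_V1::CUSTOM-V0".
# The dictionary keys below are illustrative assumptions.
def parse_model_version(version: str) -> dict:
    service, model_type, custom = version.split("::")
    base, _, type_version = model_type.rpartition("_")
    return {
        "service": service,            # e.g. "ai-lang"
        "model_type": base,            # e.g. "NER"
        "type_version": type_version,  # e.g. "V1"
        "custom_version": custom,      # e.g. "CUSTOM-V0"
    }

print(parse_model_version("ai-lang::NER_V1::CUSTOM-V0"))
```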

    Supporting Types

    GetModelEvaluationResult

    ClassMetrics List<GetModelEvaluationResultClassMetric>
    List of text classification metrics
    ConfusionMatrix string
    class level confusion matrix
    EntityMetrics List<GetModelEvaluationResultEntityMetric>
    List of entity metrics
    Labels List<string>
    labels
    Metrics List<GetModelEvaluationResultMetric>
    Model level named entity recognition metrics
    ModelType string
    Model type
    ClassMetrics []GetModelEvaluationResultClassMetric
    List of text classification metrics
    ConfusionMatrix string
    class level confusion matrix
    EntityMetrics []GetModelEvaluationResultEntityMetric
    List of entity metrics
    Labels []string
    labels
    Metrics []GetModelEvaluationResultMetric
    Model level named entity recognition metrics
    ModelType string
    Model type
    classMetrics List<GetModelEvaluationResultClassMetric>
    List of text classification metrics
    confusionMatrix String
    class level confusion matrix
    entityMetrics List<GetModelEvaluationResultEntityMetric>
    List of entity metrics
    labels List<String>
    labels
    metrics List<GetModelEvaluationResultMetric>
    Model level named entity recognition metrics
    modelType String
    Model type
    classMetrics GetModelEvaluationResultClassMetric[]
    List of text classification metrics
    confusionMatrix string
    class level confusion matrix
    entityMetrics GetModelEvaluationResultEntityMetric[]
    List of entity metrics
    labels string[]
    labels
    metrics GetModelEvaluationResultMetric[]
    Model level named entity recognition metrics
    modelType string
    Model type
    class_metrics Sequence[ailanguage.GetModelEvaluationResultClassMetric]
    List of text classification metrics
    confusion_matrix str
    class level confusion matrix
    entity_metrics Sequence[ailanguage.GetModelEvaluationResultEntityMetric]
    List of entity metrics
    labels Sequence[str]
    labels
    metrics Sequence[ailanguage.GetModelEvaluationResultMetric]
    Model level named entity recognition metrics
    model_type str
    Model type
    classMetrics List<Property Map>
    List of text classification metrics
    confusionMatrix String
    class level confusion matrix
    entityMetrics List<Property Map>
    List of entity metrics
    labels List<String>
    labels
    metrics List<Property Map>
    Model level named entity recognition metrics
    modelType String
    Model type

    GetModelEvaluationResultClassMetric

    F1 double
    F1-score is a measure of a model’s accuracy on a dataset
    Label string
    Entity label
    Precision double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    Recall double
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    Support double
    Number of samples in the test set
    F1 float64
    F1-score is a measure of a model’s accuracy on a dataset
    Label string
    Entity label
    Precision float64
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    Recall float64
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    Support float64
    Number of samples in the test set
    f1 Double
    F1-score is a measure of a model’s accuracy on a dataset
    label String
    Entity label
    precision Double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    recall Double
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    support Double
    Number of samples in the test set
    f1 number
    F1-score is a measure of a model’s accuracy on a dataset
    label string
    Entity label
    precision number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    recall number
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    support number
    Number of samples in the test set
    f1 float
    F1-score is a measure of a model’s accuracy on a dataset
    label str
    Entity label
    precision float
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    recall float
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    support float
    Number of samples in the test set
    f1 Number
    F1-score is a measure of a model’s accuracy on a dataset
    label String
    Entity label
    precision Number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    recall Number
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    support Number
    Number of samples in the test set
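    The precision, recall, and F1 definitions above can be reproduced from raw prediction counts. A minimal sketch for intuition; the tp/fp/fn counts are hypothetical inputs, not fields of the API response:

```python
# Compute the class metrics described above from raw counts:
# precision = TP / (TP + FP), recall = TP / (TP + FN),
# F1 = harmonic mean of precision and recall.
def class_metrics(tp: int, fp: int, fn: int) -> dict:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

print(class_metrics(tp=8, fp=2, fn=2))
```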

    GetModelEvaluationResultEntityMetric

    F1 double
    F1-score is a measure of a model’s accuracy on a dataset
    Label string
    Entity label
    Precision double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    Recall double
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    F1 float64
    F1-score is a measure of a model’s accuracy on a dataset
    Label string
    Entity label
    Precision float64
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    Recall float64
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    f1 Double
    F1-score is a measure of a model’s accuracy on a dataset
    label String
    Entity label
    precision Double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    recall Double
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    f1 number
    F1-score is a measure of a model’s accuracy on a dataset
    label string
    Entity label
    precision number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    recall number
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    f1 float
    F1-score is a measure of a model’s accuracy on a dataset
    label str
    Entity label
    precision float
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    recall float
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    f1 Number
    F1-score is a measure of a model’s accuracy on a dataset
    label String
    Entity label
    precision Number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    recall Number
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.

    GetModelEvaluationResultMetric

    Accuracy double
    The fraction of the labels that were correctly recognised.
    MacroF1 double
    F1-score is a measure of a model’s accuracy on a dataset
    MacroPrecision double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    MacroRecall double
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    MicroF1 double
    F1-score is a measure of a model’s accuracy on a dataset
    MicroPrecision double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    MicroRecall double
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    WeightedF1 double
    F1-score is a measure of a model’s accuracy on a dataset
    WeightedPrecision double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    WeightedRecall double
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    Accuracy float64
    The fraction of the labels that were correctly recognised.
    MacroF1 float64
    F1-score is a measure of a model’s accuracy on a dataset
    MacroPrecision float64
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    MacroRecall float64
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    MicroF1 float64
    F1-score is a measure of a model’s accuracy on a dataset
    MicroPrecision float64
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    MicroRecall float64
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    WeightedF1 float64
    F1-score is a measure of a model’s accuracy on a dataset
    WeightedPrecision float64
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    WeightedRecall float64
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    accuracy Double
    The fraction of the labels that were correctly recognised.
    macroF1 Double
    F1-score is a measure of a model’s accuracy on a dataset
    macroPrecision Double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    macroRecall Double
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    microF1 Double
    F1-score is a measure of a model’s accuracy on a dataset
    microPrecision Double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    microRecall Double
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    weightedF1 Double
    F1-score is a measure of a model’s accuracy on a dataset
    weightedPrecision Double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    weightedRecall Double
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    accuracy number
    The fraction of the labels that were correctly recognised.
    macroF1 number
    F1-score is a measure of a model’s accuracy on a dataset
    macroPrecision number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    macroRecall number
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    microF1 number
    F1-score is a measure of a model’s accuracy on a dataset
    microPrecision number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    microRecall number
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    weightedF1 number
    F1-score is a measure of a model’s accuracy on a dataset
    weightedPrecision number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    weightedRecall number
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    accuracy float
    The fraction of the labels that were correctly recognised.
    macro_f1 float
    F1-score is a measure of a model’s accuracy on a dataset
    macro_precision float
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    macro_recall float
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    micro_f1 float
    F1-score is a measure of a model’s accuracy on a dataset
    micro_precision float
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    micro_recall float
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    weighted_f1 float
    F1-score is a measure of a model’s accuracy on a dataset
    weighted_precision float
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    weighted_recall float
    Measures the model's ability to find actual positive classes. It is the ratio between the predicted true positives and what was actually tagged, revealing how many of the actual positive classes the model correctly identified.
    accuracy Number
    The fraction of the labels that were correctly recognised.
    macroF1 Number
    F1-score is a measure of a model’s accuracy on a dataset
    macroPrecision Number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    macroRecall Number
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    microF1 Number
    F1-score, is a measure of a model’s accuracy on a dataset
    microPrecision Number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    microRecall Number
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    weightedF1 Number
    F1-score, is a measure of a model’s accuracy on a dataset
    weightedPrecision Number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    weightedRecall Number
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
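    The macro, micro, and weighted variants above differ only in how per-class scores are averaged: macro averages the per-class scores equally, weighted averages them by class support, and micro pools the raw counts before scoring. A minimal sketch of the three schemes, using hypothetical per-class true-positive/false-positive/false-negative counts (the class names and numbers are illustrative, not values from this API):

    ```python
    # Illustrative per-class counts: (tp, fp, fn) -- hypothetical, not from the API.
    counts = {
        "positive": (50, 10, 5),
        "negative": (30, 5, 10),
        "neutral":  (10, 5, 15),
    }

    def prf(tp, fp, fn):
        """Precision, recall, and F1 for a single class."""
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f1

    # Macro: unweighted mean of per-class scores.
    per_class = [prf(*c) for c in counts.values()]
    macro_f1 = sum(f for _, _, f in per_class) / len(per_class)

    # Weighted: mean of per-class scores weighted by class support (tp + fn).
    support = [tp + fn for tp, _, fn in counts.values()]
    weighted_f1 = sum(f * s for (_, _, f), s in zip(per_class, support)) / sum(support)

    # Micro: pool all counts first, then compute one global score.
    TP = sum(tp for tp, _, _ in counts.values())
    FP = sum(fp for _, fp, _ in counts.values())
    FN = sum(fn for _, _, fn in counts.values())
    _, _, micro_f1 = prf(TP, FP, FN)
    ```

    With imbalanced classes the three values diverge, which is why the data source reports all of them.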

    GetModelModelDetail

    ClassificationModes List<GetModelModelDetailClassificationMode>
    Classification modes.
    LanguageCode string
    Supported language; the default value is en.
    ModelType string
    Model type
    Version string
    For pretrained models, this identifies the model type and version used for model creation. For custom models, identifying the model by model ID alone is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
    ClassificationModes []GetModelModelDetailClassificationMode
    Classification modes.
    LanguageCode string
    Supported language; the default value is en.
    ModelType string
    Model type
    Version string
    For pretrained models, this identifies the model type and version used for model creation. For custom models, identifying the model by model ID alone is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
    classificationModes List<GetModelModelDetailClassificationMode>
    Classification modes.
    languageCode String
    Supported language; the default value is en.
    modelType String
    Model type
    version String
    For pretrained models, this identifies the model type and version used for model creation. For custom models, identifying the model by model ID alone is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
    classificationModes GetModelModelDetailClassificationMode[]
    Classification modes.
    languageCode string
    Supported language; the default value is en.
    modelType string
    Model type
    version string
    For pretrained models, this identifies the model type and version used for model creation. For custom models, identifying the model by model ID alone is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
    classification_modes Sequence[ailanguage.GetModelModelDetailClassificationMode]
    Classification modes.
    language_code str
    Supported language; the default value is en.
    model_type str
    Model type
    version str
    For pretrained models, this identifies the model type and version used for model creation. For custom models, identifying the model by model ID alone is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
    classificationModes List<Property Map>
    Classification modes.
    languageCode String
    Supported language; the default value is en.
    modelType String
    Model type
    version String
    For pretrained models, this identifies the model type and version used for model creation. For custom models, identifying the model by model ID alone is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
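    The version string described above is a series of "::"-separated parts, as in the documented example ai-lang::NER_V1::CUSTOM-V0. A sketch of unpacking it with plain string splitting (the part names "service", "model", and "custom_version" are illustrative labels, not fields defined by the API):

    ```python
    def parse_model_version(version: str) -> dict:
        """Split a version string of the documented shape,
        e.g. 'ai-lang::NER_V1::CUSTOM-V0', into its '::'-separated parts."""
        service, model, custom = version.split("::")
        return {"service": service, "model": model, "custom_version": custom}

    # Using the example value from the field description above.
    parsed = parse_model_version("ai-lang::NER_V1::CUSTOM-V0")
    ```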

    GetModelModelDetailClassificationMode

    ClassificationMode string
    The classification mode.
    Version string
    For pretrained models, this identifies the model type and version used for model creation. For custom models, identifying the model by model ID alone is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
    ClassificationMode string
    The classification mode.
    Version string
    For pretrained models, this identifies the model type and version used for model creation. For custom models, identifying the model by model ID alone is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
    classificationMode String
    The classification mode.
    version String
    For pretrained models, this identifies the model type and version used for model creation. For custom models, identifying the model by model ID alone is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
    classificationMode string
    The classification mode.
    version string
    For pretrained models, this identifies the model type and version used for model creation. For custom models, identifying the model by model ID alone is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
    classification_mode str
    The classification mode.
    version str
    For pretrained models, this identifies the model type and version used for model creation. For custom models, identifying the model by model ID alone is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
    classificationMode String
    The classification mode.
    version String
    For pretrained models, this identifies the model type and version used for model creation. For custom models, identifying the model by model ID alone is difficult, so this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0

    GetModelTestStrategy

    StrategyType string
    Defines the test strategy: different datasets for testing and an optional validation dataset.
    TestingDatasets List<GetModelTestStrategyTestingDataset>
    Possible dataset types.
    ValidationDatasets List<GetModelTestStrategyValidationDataset>
    Possible dataset types.
    StrategyType string
    Defines the test strategy: different datasets for testing and an optional validation dataset.
    TestingDatasets []GetModelTestStrategyTestingDataset
    Possible dataset types.
    ValidationDatasets []GetModelTestStrategyValidationDataset
    Possible dataset types.
    strategyType String
    Defines the test strategy: different datasets for testing and an optional validation dataset.
    testingDatasets List<GetModelTestStrategyTestingDataset>
    Possible dataset types.
    validationDatasets List<GetModelTestStrategyValidationDataset>
    Possible dataset types.
    strategyType string
    Defines the test strategy: different datasets for testing and an optional validation dataset.
    testingDatasets GetModelTestStrategyTestingDataset[]
    Possible dataset types.
    validationDatasets GetModelTestStrategyValidationDataset[]
    Possible dataset types.
    strategy_type str
    Defines the test strategy: different datasets for testing and an optional validation dataset.
    testing_datasets Sequence[ailanguage.GetModelTestStrategyTestingDataset]
    Possible dataset types.
    validation_datasets Sequence[ailanguage.GetModelTestStrategyValidationDataset]
    Possible dataset types.
    strategyType String
    Defines the test strategy: different datasets for testing and an optional validation dataset.
    testingDatasets List<Property Map>
    Possible dataset types.
    validationDatasets List<Property Map>
    Possible dataset types.
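    As returned through the data source, a test strategy is a nested structure: a strategy type plus a list of testing datasets and an optional list of validation datasets, each dataset carrying its own location details. A sketch of that shape as plain dictionaries, with a small helper that walks it (every value here is an illustrative placeholder, including the strategy-type string; none are real OCIDs or guaranteed enum values):

    ```python
    # Illustrative shape of a test strategy as returned by the data source.
    test_strategy = {
        "strategy_type": "TEST_AND_VALIDATION_DATASET",  # illustrative value
        "testing_datasets": [{
            "dataset_type": "OBJECT_STORAGE",            # illustrative value
            "location_details": [{
                "location_type": "OBJECT_LIST",          # illustrative value
                "namespace": "example-namespace",        # illustrative
                "bucket": "example-bucket",              # illustrative
                "object_names": ["test.csv"],
            }],
        }],
        "validation_datasets": [],  # optional; may be empty
    }

    def dataset_objects(strategy: dict) -> list:
        """Collect every object name referenced by the strategy's testing datasets."""
        return [
            name
            for ds in strategy["testing_datasets"]
            for loc in ds["location_details"]
            for name in loc["object_names"]
        ]
    ```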

    GetModelTestStrategyTestingDataset

    DatasetId string
    Data Science Labelling Service OCID
    DatasetType string
    Possible dataset types.
    LocationDetails List<GetModelTestStrategyTestingDatasetLocationDetail>
    Possible object storage location types
    DatasetId string
    Data Science Labelling Service OCID
    DatasetType string
    Possible dataset types.
    LocationDetails []GetModelTestStrategyTestingDatasetLocationDetail
    Possible object storage location types
    datasetId String
    Data Science Labelling Service OCID
    datasetType String
    Possible dataset types.
    locationDetails List<GetModelTestStrategyTestingDatasetLocationDetail>
    Possible object storage location types
    datasetId string
    Data Science Labelling Service OCID
    datasetType string
    Possible dataset types.
    locationDetails GetModelTestStrategyTestingDatasetLocationDetail[]
    Possible object storage location types
    dataset_id str
    Data Science Labelling Service OCID
    dataset_type str
    Possible dataset types.
    location_details Sequence[ailanguage.GetModelTestStrategyTestingDatasetLocationDetail]
    Possible object storage location types
    datasetId String
    Data Science Labelling Service OCID
    datasetType String
    Possible dataset types.
    locationDetails List<Property Map>
    Possible object storage location types

    GetModelTestStrategyTestingDatasetLocationDetail

    Bucket string
    Object storage bucket name
    LocationType string
    Possible object storage location types
    Namespace string
    Object storage namespace
    ObjectNames List<string>
    Array of files which need to be processed in the bucket
    Bucket string
    Object storage bucket name
    LocationType string
    Possible object storage location types
    Namespace string
    Object storage namespace
    ObjectNames []string
    Array of files which need to be processed in the bucket
    bucket String
    Object storage bucket name
    locationType String
    Possible object storage location types
    namespace String
    Object storage namespace
    objectNames List<String>
    Array of files which need to be processed in the bucket
    bucket string
    Object storage bucket name
    locationType string
    Possible object storage location types
    namespace string
    Object storage namespace
    objectNames string[]
    Array of files which need to be processed in the bucket
    bucket str
    Object storage bucket name
    location_type str
    Possible object storage location types
    namespace str
    Object storage namespace
    object_names Sequence[str]
    Array of files which need to be processed in the bucket
    bucket String
    Object storage bucket name
    locationType String
    Possible object storage location types
    namespace String
    Object storage namespace
    objectNames List<String>
    Array of files which need to be processed in the bucket

    GetModelTestStrategyValidationDataset

    DatasetId string
    Data Science Labelling Service OCID
    DatasetType string
    Possible dataset types.
    LocationDetails List<GetModelTestStrategyValidationDatasetLocationDetail>
    Possible object storage location types
    DatasetId string
    Data Science Labelling Service OCID
    DatasetType string
    Possible dataset types.
    LocationDetails []GetModelTestStrategyValidationDatasetLocationDetail
    Possible object storage location types
    datasetId String
    Data Science Labelling Service OCID
    datasetType String
    Possible dataset types.
    locationDetails List<GetModelTestStrategyValidationDatasetLocationDetail>
    Possible object storage location types
    datasetId string
    Data Science Labelling Service OCID
    datasetType string
    Possible dataset types.
    locationDetails GetModelTestStrategyValidationDatasetLocationDetail[]
    Possible object storage location types
    dataset_id str
    Data Science Labelling Service OCID
    dataset_type str
    Possible dataset types.
    location_details Sequence[ailanguage.GetModelTestStrategyValidationDatasetLocationDetail]
    Possible object storage location types
    datasetId String
    Data Science Labelling Service OCID
    datasetType String
    Possible dataset types.
    locationDetails List<Property Map>
    Possible object storage location types

    GetModelTestStrategyValidationDatasetLocationDetail

    Bucket string
    Object storage bucket name
    LocationType string
    Possible object storage location types
    Namespace string
    Object storage namespace
    ObjectNames List<string>
    Array of files which need to be processed in the bucket
    Bucket string
    Object storage bucket name
    LocationType string
    Possible object storage location types
    Namespace string
    Object storage namespace
    ObjectNames []string
    Array of files which need to be processed in the bucket
    bucket String
    Object storage bucket name
    locationType String
    Possible object storage location types
    namespace String
    Object storage namespace
    objectNames List<String>
    Array of files which need to be processed in the bucket
    bucket string
    Object storage bucket name
    locationType string
    Possible object storage location types
    namespace string
    Object storage namespace
    objectNames string[]
    Array of files which need to be processed in the bucket
    bucket str
    Object storage bucket name
    location_type str
    Possible object storage location types
    namespace str
    Object storage namespace
    object_names Sequence[str]
    Array of files which need to be processed in the bucket
    bucket String
    Object storage bucket name
    locationType String
    Possible object storage location types
    namespace String
    Object storage namespace
    objectNames List<String>
    Array of files which need to be processed in the bucket

    GetModelTrainingDataset

    DatasetId string
    Data Science Labelling Service OCID
    DatasetType string
    Possible dataset types.
    LocationDetails List<GetModelTrainingDatasetLocationDetail>
    Possible object storage location types
    DatasetId string
    Data Science Labelling Service OCID
    DatasetType string
    Possible dataset types.
    LocationDetails []GetModelTrainingDatasetLocationDetail
    Possible object storage location types
    datasetId String
    Data Science Labelling Service OCID
    datasetType String
    Possible dataset types.
    locationDetails List<GetModelTrainingDatasetLocationDetail>
    Possible object storage location types
    datasetId string
    Data Science Labelling Service OCID
    datasetType string
    Possible dataset types.
    locationDetails GetModelTrainingDatasetLocationDetail[]
    Possible object storage location types
    dataset_id str
    Data Science Labelling Service OCID
    dataset_type str
    Possible dataset types.
    location_details Sequence[ailanguage.GetModelTrainingDatasetLocationDetail]
    Possible object storage location types
    datasetId String
    Data Science Labelling Service OCID
    datasetType String
    Possible dataset types.
    locationDetails List<Property Map>
    Possible object storage location types

    GetModelTrainingDatasetLocationDetail

    Bucket string
    Object storage bucket name
    LocationType string
    Possible object storage location types
    Namespace string
    Object storage namespace
    ObjectNames List<string>
    Array of files which need to be processed in the bucket
    Bucket string
    Object storage bucket name
    LocationType string
    Possible object storage location types
    Namespace string
    Object storage namespace
    ObjectNames []string
    Array of files which need to be processed in the bucket
    bucket String
    Object storage bucket name
    locationType String
    Possible object storage location types
    namespace String
    Object storage namespace
    objectNames List<String>
    Array of files which need to be processed in the bucket
    bucket string
    Object storage bucket name
    locationType string
    Possible object storage location types
    namespace string
    Object storage namespace
    objectNames string[]
    Array of files which need to be processed in the bucket
    bucket str
    Object storage bucket name
    location_type str
    Possible object storage location types
    namespace str
    Object storage namespace
    object_names Sequence[str]
    Array of files which need to be processed in the bucket
    bucket String
    Object storage bucket name
    locationType String
    Possible object storage location types
    namespace String
    Object storage namespace
    objectNames List<String>
    Array of files which need to be processed in the bucket
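    Each location detail identifies its files by namespace, bucket, and object names. A sketch that assembles percent-encoded Object Storage request paths from one such record (the namespace, bucket, and object names are illustrative; the /n/{namespace}/b/{bucket}/o/{object} path shape follows the Object Storage API convention):

    ```python
    from urllib.parse import quote

    def object_paths(location: dict) -> list:
        """Build Object Storage API request paths for every object in a
        location detail, percent-encoding each path segment."""
        ns, bucket = location["namespace"], location["bucket"]
        return [
            f"/n/{quote(ns, safe='')}/b/{quote(bucket, safe='')}/o/{quote(obj, safe='')}"
            for obj in location["object_names"]
        ]

    # Illustrative location detail (placeholder values, not a real tenancy).
    loc = {
        "location_type": "OBJECT_LIST",
        "namespace": "example-ns",
        "bucket": "training-data",
        "object_names": ["labels/train 1.jsonl"],
    }
    ```

    Encoding each segment matters because object names may contain slashes and spaces, as in the example above.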

    Package Details

    Repository
    oci pulumi/pulumi-oci
    License
    Apache-2.0
    Notes
    This Pulumi package is based on the oci Terraform Provider.