Probing Models

Available Models

Linear Model

Model Name: probe_ably.core.models.linear.LinearModel

class probe_ably.core.models.linear.LinearModel(params: Dict)[source]
__init__(params: Dict)[source]

Initiate the Linear Model

Parameters

params (Dict) –

Contains the parameters for initialization. Params data format is

{
    'representation_size': Dimension of the representation,
    'dropout': Dropout of module,
    'n_classes': Number of classes for classification,
    'alpha': Alpha value to calculate the complexity of the module
}

forward(representation: torch.Tensor, labels: torch.Tensor, eps=1e-05, **kwargs) → Dict[str, torch.Tensor][source]

Forward method

Parameters
  • representation (Tensor) – Representation tensors

  • labels (Tensor) – Prediction labels

Returns

Return dictionary of {‘loss’: loss, ‘preds’: preds }

Return type

Dict[str, Tensor]

get_complexity(**kwargs) → Dict[str, float][source]

Computes the Nuclear Norm complexity

Returns

Returns the complexity value of {‘norm’: nuclear norm score of model}

Return type

Dict[str, float]

Here representation_size and n_classes are provided at runtime by the trainer, initialized from the data. The static value ranges for dropout and alpha are provided in the config/params/linear.json file. The probe_ably.core.utils.grid_model_factory (see Grid Model Factory) creates multiple models with the prescribed values for probing.
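For illustration, a fully assembled params dictionary matching the format above might look like the following sketch. The concrete values here are hypothetical; in practice the grid model factory fills in dropout and alpha from the config ranges, and the trainer supplies representation_size and n_classes from the data:

```python
# Hypothetical params dict for LinearModel. The values below are
# illustrative only; the grid model factory builds this dict at runtime.
params = {
    "representation_size": 768,  # e.g. a BERT-base hidden size (assumed)
    "dropout": 0.25,             # sampled from the configured float range
    "n_classes": 3,              # determined by the probing task labels
    "alpha": 2 ** -4.0,          # sampled exponent, transformed by 2**x
}

# model = LinearModel(params)  # requires probe_ably to be installed
print(sorted(params))
```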

config/params/linear.json is defined as follows:

{
    "probe_ably.core.models.linear.LinearModel":{
        "params":[
            {
                "name":"dropout",
                "type":"float_range",
                "options":[
                0.0,
                0.51
                ]
            },
            {
                "name":"alpha",
                "transform":"2**x",
                "type":"float_range",
                "options":[
                -10.0,
                3.0
                ]
            }
        ]
    }
}

Here the ranges of the __init__ static params are as follows:

  • dropout is of float range between 0.0 and 0.51

  • alpha is of float range between 2^-10.0 and 2^3.0 (the exponent is sampled from -10.0 to 3.0 and transformed by 2**x)
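As a quick check of the 2**x transform, the endpoints of the alpha range work out as follows (plain-Python sketch, no probe_ably dependency):

```python
# The config samples an exponent x in [-10.0, 3.0] and applies 2**x,
# so the effective alpha range is [2**-10, 2**3].
low, high = -10.0, 3.0
alpha_min = 2 ** low    # 1/1024
alpha_max = 2 ** high
print(alpha_min, alpha_max)
```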

Multi Layer Perceptron (MLP) Model

Model Name: probe_ably.core.models.mlp.MLPModel

class probe_ably.core.models.mlp.MLPModel(params: Dict)[source]
__init__(params: Dict)[source]

Initiate the MLP Model

Parameters

params (Dict) –

Contains the parameters for initialization. Params data format is

{
    'representation_size': Dimension of the representation,
    'dropout': Dropout of module,
    'hidden_size': Hidden layer size of MLP,
    'n_layers': Number of MLP Layers,
    'n_classes': Number of classes for classification,
}

forward(representation: torch.Tensor, labels: torch.Tensor, **kwargs) → Dict[str, torch.Tensor][source]

Forward method

Parameters
  • representation (Tensor) – Representation tensors

  • labels (Tensor) – Prediction labels

Returns

Return dictionary of {‘loss’: loss, ‘preds’: preds }

Return type

Dict[str, Tensor]

get_complexity(**kwargs) → Dict[str, float][source]

Computes the number-of-parameters complexity

Returns

Returns the complexity value as {‘n_params’: number of parameters in model}

Return type

Dict[str, float]

Here representation_size and n_classes are provided at runtime by the trainer, initialized from the data. The static value ranges for dropout, hidden_size and n_layers are provided in the config/params/mlp.json file. The probe_ably.core.utils.grid_model_factory (see Grid Model Factory) creates multiple models with the prescribed values for probing.
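The factory's behaviour on int_range and float_range entries can be approximated with a plain-Python sketch. The uniform-sampling strategy below is an assumption made for illustration; see Grid Model Factory for how models are actually generated:

```python
import random

# Hypothetical sketch of sampling per-model params from the int_range /
# float_range entries in config/params/mlp.json. The real
# grid_model_factory may enumerate a grid rather than sample randomly.
config = {
    "hidden_size": ("int_range", 32, 1024),
    "n_layers": ("int_range", 1, 10),
    "dropout": ("float_range", 0.0, 0.5),
}

def sample_params(config, seed=0):
    rng = random.Random(seed)
    params = {}
    for name, (kind, low, high) in config.items():
        if kind == "int_range":
            params[name] = rng.randint(low, high)
        else:
            params[name] = rng.uniform(low, high)
    return params

print(sample_params(config))
```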

config/params/mlp.json is defined as follows:

{
    "probe_ably.core.models.mlp.MLPModel":{
    "params":[
        {
            "name":"hidden_size",
            "type":"int_range",
            "options":[
                32,
                1024
            ]
        },
        {
            "name":"n_layers",
            "type":"int_range",
            "options":[
                1,
                10
            ]
        },
        {
            "name":"dropout",
            "type":"float_range",
            "options":[
                0.0,
                0.5
            ]
        }
    ]
    }
}

Here the ranges of the __init__ static params are as follows:

  • dropout is of float range between 0.0 and 0.5

  • n_layers is of int range between 1 and 10

  • hidden_size is of int range between 32 and 1024

Implementing New Probing Models

To implement a new Probing Model, there are two steps:

Step 1: Implementing model by extending the abstract Model

class probe_ably.core.models.abstract_model.AbstractModel(params)[source]
__init__(params)[source]

Abstract class initialization method

Parameters

params (Dict) –

Contains the parameters for initialization. Params data format is

{
    'representation_size': Dimension of the representation,
    'n_classes': Number of classes for classification,
}

abstract forward(representation: torch.Tensor, labels: torch.Tensor, **kwargs) → Dict[str, torch.Tensor][source]

Abstract class forward method

Parameters
  • representation (Tensor) – Representation tensors

  • labels (Tensor) – Prediction labels

Returns

Return dictionary of {‘loss’: loss, ‘preds’: preds }

Return type

Dict[str, Tensor]

abstract get_complexity(**kwargs) → Dict[str, float][source]

Computes the complexity

Returns

Returns dictionary of {“complexity_measure1”: value1, “complexity_measure2”: value2}

Return type

Dict[str,float]

Here representation_size and n_classes are provided at runtime by the trainer, initialized from the data.
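A minimal sketch of Step 1 is shown below. To keep the example self-contained, it uses a stand-in base class and plain Python lists instead of torch tensors; in real code you would extend probe_ably.core.models.abstract_model.AbstractModel (a torch.nn.Module) and the majority-label "probe" logic here is purely illustrative:

```python
from typing import Dict

# Stand-in for probe_ably.core.models.abstract_model.AbstractModel so the
# sketch runs without probe_ably installed. The real base class subclasses
# torch.nn.Module and declares forward / get_complexity as abstract.
class AbstractModel:
    def __init__(self, params: Dict):
        self.params = params

class MyProbeModel(AbstractModel):
    """Toy probe that 'predicts' the majority label. Illustrative only."""

    def forward(self, representation, labels, **kwargs) -> Dict:
        # Real implementations compute a loss and predictions from torch
        # tensors; here both are faked to show the required dict shape.
        preds = [max(set(labels), key=labels.count)] * len(labels)
        loss = sum(p != y for p, y in zip(preds, labels)) / len(labels)
        return {"loss": loss, "preds": preds}

    def get_complexity(self, **kwargs) -> Dict[str, float]:
        # Return any dict of named complexity measures.
        return {"n_params": 0.0}

model = MyProbeModel({"representation_size": 4, "n_classes": 2})
out = model.forward(representation=None, labels=[0, 1, 1])
print(out["loss"], model.get_complexity())
```

The key contract is the shape of the return values: forward must return {‘loss’: …, ‘preds’: …} and get_complexity must return a dict of named complexity scores.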

Step 2: Create a JSON configuration file with the ranges used to create models. Refer to config/params/mlp.json and config/params/linear.json for examples.
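For example, a configuration file for a new model might look like the following sketch. The module path my_package.models.MyProbeModel and the single dropout parameter are hypothetical; use your own model's import path and static params:

```json
{
    "my_package.models.MyProbeModel":{
        "params":[
            {
                "name":"dropout",
                "type":"float_range",
                "options":[
                    0.0,
                    0.5
                ]
            }
        ]
    }
}
```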