Feature Request: make SoftMax application optional in runInference #69

Open
@avyan-k

Description

  • How it works currently: an applySoftmax field is set to true in the runInference base method (see here), so the SoftMax function is always applied to the model's output to generate class probabilities
  • How it would work ideally: offer a way to decide whether SoftMax is applied, either through new method overloads or a global option, so that existing functionality is not disrupted (see incoming PR for a simplified implementation of the former option)
  • Why I believe it is important/should be added: I would like to run inference using my own models, and they already apply softmax to their output. Since the SoftMax function is not idempotent, applying it a second time would "uniformise" the resulting class probabilities. To avoid this, I currently remove the last activation layer from my models, but this makes the model architecture unclear and the TorchScript inconsistent with the base PyTorch model. I believe it is more consistent if runInference simply measures the output of a model without modifying it in any unexpected way, just as it does not modify the input beyond what is specified in the config.json file. Not applying SoftMax automatically would also allow runInference to be used for other kinds of measurements beyond class probabilities (e.g. a Sigmoid applied to each class for Multi-Label/Multi-Task classification), or for other output activation functions to be applied instead of SoftMax.
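To make the non-idempotence point concrete, here is a minimal pure-Python sketch (not tied to this repo's code) showing that applying softmax a second time flattens an already-normalized probability distribution:

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 1.0, 0.0]

once = softmax(logits)   # a sharply peaked distribution, ~[0.94, 0.05, 0.02]
twice = softmax(once)    # re-applying softmax flattens it toward uniform

print(once)
print(twice)
```

Because the inputs to the second call all lie in [0, 1], their exponentials are close together, so the result drifts toward a uniform distribution; this is exactly what happens when a model that already ends in a softmax layer has runInference apply SoftMax again.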
