
Training (or Start/End of epoch) signal to Loss function #178

Open
@liamaltarac

Description

If you open a GitHub issue, here is our policy:

  • It must be a bug, a feature request, or a significant problem with the documentation (for small docs fixes please send a PR instead).
  • The form below must be filled out.

Here's why we have that policy:

Keras developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.

System information.

  • TensorFlow version (you are using):
  • Are you willing to contribute it (Yes/No):

Describe the feature and the current behavior/state.

Describe the feature clearly here. Be sure to convey why the requested feature is needed; a brief description of the use case would help.

I am developing a custom loss function that, throughout training, stores in memory the loss value from each mini-batch and uses those past values to compute the new loss (similar to reward calculations in RL).

The problem I am facing is that, as far as I can tell, there is no way to tell the loss function whether it is in the training phase or the validation phase. As a result, all the validation loss values get indiscriminately stored in memory as well and are then used during training in later epochs.
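To make the use case concrete, here is a minimal sketch of such a stateful loss (the class name and blending scheme are illustrative, not my actual code):

```python
import tensorflow as tf

class HistoryAwareLoss(tf.keras.losses.Loss):
    """Blends each new mini-batch loss with a running history of past losses."""

    def __init__(self, decay=0.9, name="history_aware_loss"):
        super().__init__(name=name)
        self.decay = decay
        # Running blend of past loss values, updated on every call.
        self.past_loss = tf.Variable(0.0, trainable=False)

    def call(self, y_true, y_pred):
        current = tf.reduce_mean(tf.square(y_true - y_pred))
        blended = self.decay * self.past_loss + (1.0 - self.decay) * current
        # This update runs for *every* batch -- training and validation
        # alike -- because the loss cannot tell which phase it is in.
        self.past_loss.assign(blended)
        return blended
```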

I'm trying to make my loss function as user-friendly as possible, so ideally it could simply be called from model.fit() without any need for extra callbacks.
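The workaround available today looks roughly like this: toggle a flag on the loss object from a callback around each validation pass (is_training here is a hypothetical attribute added to the loss, not a Keras API):

```python
class PhaseFlagCallback(tf.keras.callbacks.Callback):
    def __init__(self, loss_fn):
        super().__init__()
        self.loss_fn = loss_fn

    def on_test_begin(self, logs=None):
        # fit() calls this when it starts a validation pass.
        self.loss_fn.is_training = False

    def on_test_end(self, logs=None):
        self.loss_fn.is_training = True

# loss_fn = HistoryAwareLoss()
# model.compile(optimizer="adam", loss=loss_fn)
# model.fit(x, y, validation_data=(x_val, y_val),
#           callbacks=[PhaseFlagCallback(loss_fn)])
```

This works, but it forces every user of the loss to remember to attach the callback, which is exactly the friction I would like to avoid.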

If possible, I think it would be nice if Loss.__call__() had an extra parameter (training) that would automatically be set by model.fit() during the training and validation phases, similar to the training argument in tf.keras.layers.Layer.call().
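A sketch of what that could look like (the training argument is the proposed change, not the current API, and model.fit() would be responsible for passing it):

```python
class PhaseAwareLoss(tf.keras.losses.Loss):
    def __init__(self, name="phase_aware_loss"):
        super().__init__(name=name)
        self.history = []  # past training-batch losses only

    def call(self, y_true, y_pred):
        return tf.reduce_mean(tf.square(y_true - y_pred))

    def __call__(self, y_true, y_pred, sample_weight=None, training=None):
        loss = super().__call__(y_true, y_pred, sample_weight)
        if training:
            # With the proposed argument, the history is updated only for
            # training batches; validation batches leave it untouched.
            self.history.append(loss)
        return loss
```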

Will this change the current api? How?
It would change the signature of tf.keras.losses.Loss.__call__ by adding an extra (optional) training parameter.

Who will benefit from this feature?
Anyone who writes a custom loss function that requires different behaviour during training than during validation/testing.

Contributing

  • Do you want to contribute a PR? (yes/no): No
  • If yes, please read this page for instructions
  • Briefly describe your candidate solution (if contributing):
