
Additions #1

Open

Kathleendeboer wants to merge 2 commits into main

Conversation

Kathleendeboer

No description provided.

@Kathleendeboer Kathleendeboer requested a review from Veldhoen April 2, 2024 13:09
@Veldhoen (Member) left a comment:

I've looked at the separate files now and added quite a few remarks. In general, I think your code needs some refactoring and cleaning up.

I am not sure whether you really need to create this fairseq release, or whether you could add your own functionality in the worker repo instead. Most of the items you added are additions rather than modifications, except fairseq/modules/transformer_layer, where your modifications are so impactful that it seems like you are actually creating new/different objects.

I did not try to run your code yet. Please provide a description of what to run and how when you create a PR :)
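For illustration, something along these lines could live in the worker repo instead of in a fairseq fork. This is only a minimal sketch: the class name is hypothetical and the exact forward signature may differ between fairseq versions.

from fairseq.modules import TransformerEncoderLayer


class MultimodalEncoderLayer(TransformerEncoderLayer):
    """Hypothetical subclass that keeps upstream fairseq untouched."""

    def forward(self, x, encoder_padding_mask, attn_mask=None):
        # Add the new behaviour here, and fall back to the unmodified
        # parent implementation for everything else.
        return super().forward(x, encoder_padding_mask, attn_mask=attn_mask)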

preds = list(logging_output)[0] # First value
probabilities = list(logging_output)[1] # Second value

return preds, probabilities
Member:

You are modifying the signature of the function here, which affects compatibility. Is that really necessary, or could you instead create a new function with your desired return type?
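Roughly what I mean, as a sketch (the logging_output keys and the new method name are assumptions, not your actual code): keep the existing contract and add a separate method with the new return type.

def valid_step(self, sample, model, criterion):
    # Unchanged contract: existing callers still get all three values.
    loss, sample_size, logging_output = criterion(model, sample)
    return loss, sample_size, logging_output

def predict_step(self, sample, model, criterion):
    # New function with the return type you need, so compatibility is preserved.
    _loss, _sample_size, logging_output = criterion(model, sample)
    preds = logging_output.get("preds")          # hypothetical key
    probabilities = logging_output.get("probs")  # hypothetical key
    return preds, probabilities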

loss, sample_size, logging_output = criterion(model, sample)
return loss, sample_size, logging_output
logging_output = criterion(model, sample) #calling the same criterion loss function #loss deleted MODIFIED
#print("Prediction", sample_size) #MODIFIED
Member:

Remove the spurious comments (old, commented-out print statements).

@@ -0,0 +1,130 @@
# Copyright (c) Facebook, Inc. and its affiliates.
Member:

Is this Facebook code or your own?


from fairseq import models

model = models.build_model(args, self) #/home/gsir059/Documents/PhD/MulFie/fairseq/fairseq/models/__init__.py'
Member:

Weird comment: this is a stray absolute path from another machine; please remove it.

Member:

It seems like you based your version on an existing example file. It's clearer if you first commit the example file and then commit your changes to it; that makes it easier to review your additions/modifications.
Could you point me to the file you used as a starting point? Or, even better, try to insert it post hoc? See https://printf2linux.wordpress.com/2012/04/09/insert-a-commit-in-the-past-git/ for instructions on inserting a commit into your history.

Comment on lines +189 to +193
# self.emotion_dictionary = { #modei senti 2 class

# '0': 0,
# '1':1
# }
Member:

An alternative encoding is commented out here. Should this be configurable?
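A rough sketch of what configurable could look like (the option name and helper method are hypothetical): expose the number of classes as a task argument instead of switching between hard-coded dictionaries.

@staticmethod
def add_args(parser):
    # Hypothetical option; lets the 2-class and 7-class encodings coexist
    # without commenting code in and out.
    parser.add_argument("--num-emotion-classes", type=int, default=7,
                        help="number of emotion/sentiment classes")

def build_emotion_dictionary(self, args):
    # Same string-to-index mapping as the hard-coded version, but sized
    # from the command-line argument.
    return {str(i): i for i in range(args.num_emotion_classes)}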

Comment on lines +304 to +306
#print("DEBUG dataset sources length")
#print(len(sources))
#print(sources)
Member:

Spurious comments

Member:

Here as well, please explicitly provide the basis file that you extended.

Also, the functions are quite complex: refactor them and break them down into meaningful, testable pieces, and provide a description (ideally a docstring) explaining what each part should do.
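For instance (illustrative names, not taken from your code), pulling a single responsibility out into a small documented function makes it testable on its own:

def binarize_scores(scores, threshold=0.0):
    """Map real-valued sentiment scores to binary classes.

    Args:
        scores: iterable of real-valued sentiment scores.
        threshold: scores strictly greater than this become class 1.

    Returns:
        A list of ints (0 or 1), one per input score.
    """
    return [1 if s > threshold else 0 for s in scores]

A helper like this can then be covered by a one-line unit test, e.g. assert binarize_scores([-0.5, 0.3]) == [0, 1].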

Comment on lines +88 to +93
# pred_real_i='pred_mos_real'
# truth_real_i='truth_mos_real'

# logging_output.update({truth_real_i : targets.view(-1).cpu().detach().numpy()})
# logging_output.update({pred_real_i : logits.view(-1).cpu().detach().numpy()})

Member:

Spurious comments


exclude_zero=True

#This gives a problem when running with with batch size of one and that batch consist of a '0' as the truth
Member:

It's not clear what you are trying to accomplish with exclude_zero.
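If the goal is to drop samples whose ground truth is exactly zero before computing the metric, a guarded sketch like the following (variable names assumed) would make both the intent and the batch-size-one edge case explicit:

import numpy as np

def apply_exclude_zero(preds, truths, exclude_zero=True):
    # Optionally drop samples whose ground truth is exactly 0.
    preds, truths = np.asarray(preds), np.asarray(truths)
    if exclude_zero:
        mask = truths != 0
        preds, truths = preds[mask], truths[mask]
    # Guard against everything being excluded, e.g. a batch of size one
    # whose only truth value is 0.
    if truths.size == 0:
        return None, None
    return preds, truths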
