
Use resultElementType for tensor creation for AtenMemoryFormatOp with none dtype. #3941

Open
wants to merge 1 commit into main

Conversation

sahas3
Contributor

@sahas3 sahas3 commented Jan 6, 2025

Prior to the change in this PR, torch-mlir-opt --convert-torch-to-linalg was failing with the following error:

error: 'tensor.cast' op operand type 'tensor<200x200x26xf32>' and result type 'tensor<200x200x26xf64>' are cast incompatible
    %1 = torch.aten.empty.memory_format %0, %none, %none, %none, %false, %none : !torch.list<int>, !torch.none, !torch.none, !torch.none, !torch.bool, !torch.none -> !torch.vtensor<[200,200,26],f64>
         ^
note: see current operation: %12 = "tensor.cast"(%11) : (tensor<200x200x26xf32>) -> tensor<200x200x26xf64>

This is because when the dtype operand of aten.empty.memory_format is none, f32 was selected by default as the element type of the created tensor, which does not match the actual element type (f64 in the example above) of the op's result.
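For context, a small PyTorch snippet (not part of this PR) showing why the result type of aten.empty.memory_format with dtype=None is not always f32: when dtype is None, PyTorch resolves it against the global default dtype, which the user can change.

```python
import torch

# With dtype=None, aten.empty.memory_format falls back to the global
# default dtype, which is float32 unless the user has changed it.
a = torch.ops.aten.empty.memory_format([2, 3])
print(a.dtype)  # torch.float32 under default settings

# Changing the default dtype changes what dtype=None resolves to, so the
# op's result tensor can legitimately be f64 even with a none dtype.
torch.set_default_dtype(torch.float64)
b = torch.ops.aten.empty.memory_format([2, 3])
print(b.dtype)  # torch.float64

torch.set_default_dtype(torch.float32)  # restore the default
```

This is why the lowering should trust the result element type already present on the op, rather than hard-coding f32.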

@sahas3 sahas3 requested a review from ramiro050 January 6, 2025 04:53

sahas3 commented Jan 6, 2025

Hi @ramiro050, I see that I am reverting your change here. It's not clear to me why the resultElementType was changed in that commit. I verified that the test you added still passes with my change. Can you please take a look at this PR? Thanks!


sahas3 commented Jan 22, 2025

Hello @ramiro050 and @vivekkhandelwal1, can you please review this PR when you have a chance? Thanks!

Comment on lines +365 to +373
func.func @torch.aten.empty.memory_format$noneDtype() -> !torch.vtensor<[200,200,26],f64> attributes {torch.assume_strict_symbolic_shapes} {
%int200 = torch.constant.int 200
%int26 = torch.constant.int 26
%false = torch.constant.bool false
%none = torch.constant.none
%0 = torch.prim.ListConstruct %int200, %int200, %int26 : (!torch.int, !torch.int, !torch.int) -> !torch.list<int>
%1 = torch.aten.empty.memory_format %0, %none, %none, %none, %false, %none : !torch.list<int>, !torch.none, !torch.none, !torch.none, !torch.bool, !torch.none -> !torch.vtensor<[200,200,26],f64>
return %1 : !torch.vtensor<[200,200,26],f64>
}
Collaborator


Hi @sahas3, the above test case is invalid, since the tensor element type is f32 when none is specified for the dtype of the aten.empty.memory_format op.

I just tried it. See:

>>> a = torch.ops.aten.empty.memory_format([2, 3])
>>> a
tensor([[-3.4054e+29,  4.2543e-41, -3.4085e+29],
        [ 4.2543e-41, -3.4086e+29,  4.2543e-41]])
>>> a.dtype
torch.float32

Collaborator


Also, the changes you've made are not correct, so this patch should not be merged.
