Use module attribute to specify target arch. #3387
Conversation
Force-pushed from c531872 to b5eeb9e
op = op->getParentOfType<ModuleOp>();
auto arch = op->getAttrOfType<StringAttr>(
    triton::gpu::intel::TritonIntelGPUDialect::getTargetArchAttrName());
assert(arch);
Add an assert message, please.
Done
@@ -46,6 +47,16 @@ inline unsigned getNumElementsPerThread(
inline bool applyTransposedReduction() {
  return tools::getBoolEnv("TRITON_INTEL_REDUCE_TRANSPOSE");
}

// Check if module's target arch is SPIRV.
inline bool hasSpirvTargetArch(Operation *op) {
Should we require the user to pass a ModuleOp rather than a generic operation?
I thought accepting any nested op would be more convenient.
Signed-off-by: Ilya Enkovich <[email protected]>
Force-pushed from b5eeb9e to a7f5a86
My preference is to fall back to the optional attribute, as the original design is to rely on feature attributes given either by PyTorch or
This new attribute is orthogonal to the set of features: there is a set of features, and a target arch that affects how you access those features. We shouldn't use this attribute to assume HW features, and I don't see how optionality resolves your concern. We should simply make sure nobody uses the target arch attribute to make feature assumptions, because different target archs can be used for the same target HW.
Having a target arch attribute increases the chance that someone uses it to assume HW features, but we can try to prevent that in code review, so I am not against this change.
@whitneywhtsang @etiotto Layout-related tests fail because they use TTGIR as input, and it doesn't have the required attribute. I can add more test modifications to resolve this, but I think we'd better go back to the optional attribute. What do you think?
This variant makes the target arch attribute mandatory for conversion to the LLVM dialect. We can fall back to the optional attribute if it looks too intrusive.