
Conversation

jorickert (Contributor) commented on Sep 3, 2025:

This is done to reduce the rank of tile ops, making downstream consumption easier.

Comment on lines +112 to +114
/* Rewrites a reshape that adds 1-sized dims and is followed by a tile along
 * one of those 1-sized dims into a tile on the original shape followed by a
 * reshape. This is done to reduce the rank of tile ops. */
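
To illustrate the rewrite, here is a hypothetical before/after sketch in ONNX-dialect-style MLIR; the op names, shapes, and exact syntax are assumptions for illustration, not code from this PR:

```mlir
// Before: a reshape adds a 1-sized dim, and the tile repeats only along it.
%s = "onnx.Constant"() {value = dense<[2, 1, 3]> : tensor<3xi64>} : () -> tensor<3xi64>
%r = "onnx.Reshape"(%x, %s) : (tensor<2x3xf32>, tensor<3xi64>) -> tensor<2x1x3xf32>
%rep = "onnx.Constant"() {value = dense<[1, 4, 1]> : tensor<3xi64>} : () -> tensor<3xi64>
%t = "onnx.Tile"(%r, %rep) : (tensor<2x1x3xf32>, tensor<3xi64>) -> tensor<2x4x3xf32>

// After: tile at the original rank 2, then reshape to the tiled shape.
%rep2 = "onnx.Constant"() {value = dense<[1, 4]> : tensor<2xi64>} : () -> tensor<2xi64>
%t2 = "onnx.Tile"(%x, %rep2) : (tensor<2x3xf32>, tensor<2xi64>) -> tensor<2x12xf32>
%s2 = "onnx.Constant"() {value = dense<[2, 4, 3]> : tensor<3xi64>} : () -> tensor<3xi64>
%r2 = "onnx.Reshape"(%t2, %s2) : (tensor<2x12xf32>, tensor<3xi64>) -> tensor<2x4x3xf32>
```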
Reviewer (Contributor) commented:
Is the core canonicalization to minimize the number of dimensions needed to express the tile operation? If so, it would be better to do that regardless of whether there is a directly connected reshape, and instead insert reshapes before and after as appropriate.

If this is not the core goal, then I question having this as a canonicalization, as code matching tile would still need a guarantee that the tile op is in canonicalized form.
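
As a hypothetical sketch of that alternative, in the same illustrative ONNX-dialect style as above (names and syntax assumed): a standalone tile that repeats only a 1-sized dim would be rank-reduced unconditionally, with reshapes inserted on both sides.

```mlir
// Before: a standalone rank-3 tile that repeats only the 1-sized middle dim.
%rep = "onnx.Constant"() {value = dense<[1, 4, 1]> : tensor<3xi64>} : () -> tensor<3xi64>
%t = "onnx.Tile"(%x, %rep) : (tensor<2x1x3xf32>, tensor<3xi64>) -> tensor<2x4x3xf32>

// After: drop the 1-sized dim, tile at rank 2, reshape back to the tiled shape.
%s0 = "onnx.Constant"() {value = dense<[2, 3]> : tensor<2xi64>} : () -> tensor<2xi64>
%r0 = "onnx.Reshape"(%x, %s0) : (tensor<2x1x3xf32>, tensor<2xi64>) -> tensor<2x3xf32>
%rep2 = "onnx.Constant"() {value = dense<[1, 4]> : tensor<2xi64>} : () -> tensor<2xi64>
%t2 = "onnx.Tile"(%r0, %rep2) : (tensor<2x3xf32>, tensor<2xi64>) -> tensor<2x12xf32>
%s1 = "onnx.Constant"() {value = dense<[2, 4, 3]> : tensor<3xi64>} : () -> tensor<3xi64>
%r1 = "onnx.Reshape"(%t2, %s1) : (tensor<2x12xf32>, tensor<3xi64>) -> tensor<2x4x3xf32>
```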

jorickert (Contributor, Author) replied:

I do not think it's a good idea to always do the tile rank reduction, as the inserted reshapes may prevent other patterns from matching.

I originally wrote this to target a specific case:

reshape: 1x2x3 -> 1x2x1x3
tile: 1x2x1x3 -> 1x2x4x3
reshape: 1x2x4x3 -> 1x2x12

which could be canonicalized to:

tile: 1x2x3 -> 1x2x12

In this case the rank of the tile is reduced and the reshapes cancel out completely.
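
In IR terms, the same illustrative ONNX-dialect sketch applied to this case; here the trailing reshape produced by the rewrite merges with the existing reshape to 1x2x12, so both reshapes disappear (again, op names and syntax are assumptions, not code from this PR):

```mlir
// Before: reshape -> tile -> reshape.
%s0 = "onnx.Constant"() {value = dense<[1, 2, 1, 3]> : tensor<4xi64>} : () -> tensor<4xi64>
%r0 = "onnx.Reshape"(%x, %s0) : (tensor<1x2x3xf32>, tensor<4xi64>) -> tensor<1x2x1x3xf32>
%rep = "onnx.Constant"() {value = dense<[1, 1, 4, 1]> : tensor<4xi64>} : () -> tensor<4xi64>
%t = "onnx.Tile"(%r0, %rep) : (tensor<1x2x1x3xf32>, tensor<4xi64>) -> tensor<1x2x4x3xf32>
%s1 = "onnx.Constant"() {value = dense<[1, 2, 12]> : tensor<3xi64>} : () -> tensor<3xi64>
%r1 = "onnx.Reshape"(%t, %s1) : (tensor<1x2x4x3xf32>, tensor<3xi64>) -> tensor<1x2x12xf32>

// After: a single rank-3 tile; both reshapes have canceled.
%rep2 = "onnx.Constant"() {value = dense<[1, 1, 4]> : tensor<3xi64>} : () -> tensor<3xi64>
%t2 = "onnx.Tile"(%x, %rep2) : (tensor<1x2x3xf32>, tensor<3xi64>) -> tensor<1x2x12xf32>
```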

I marked this PR as a draft, as I think it generally needs more investigation into when it makes sense to do this and when it does not.

jorickert force-pushed the jrickert.canonicalize_tile branch from e11eb99 to 7276ee6 on October 6, 2025, with the commit message:

…tile on the one-sized dim to a tile on the original shape followed by a reshape. This is done to reduce the rank of tile ops.