
Unable to run Llama3.1_(8B)-GRPO.ipynb on Titan V #1705

Open
simusid opened this issue Feb 14, 2025 · 1 comment
@simusid commented Feb 14, 2025

Trying to run the notebook announced here on a recently updated Ubuntu 22.04 system. The GPUs are two Titan V cards (Volta architecture, not Ampere).

The notebook runs fine unmodified all the way to trainer.train(), where I get the error:
python: /project/lib/Analysis/Allocation.cpp:47: std::pair<llvm::SmallVector, llvm::SmallVector > mlir::triton::getCvtOrder(mlir::Attribute, mlir::Attribute): Assertion `!(srcMmaLayout && dstMmaLayout && !srcMmaLayout.isAmpere()) && "mma -> mma layout conversion is only supported on Ampere"' failed.

As stated, I have Volta, not Ampere, so I just thought "oh well, I'm out of luck". But per Reddit user yoracale: "Mmm odd, it should definitely work. I'm unsure why there is an error, would you be kind enough to make an issue on GitHub? Thanks"

@danielhanchen (Contributor) commented

We do support the Tesla T4 (Turing, CUDA compute capability 7.5) 100%.

I think V100s work (compute capability 7.0?), and the Titan V is also 7.0, so I thought it should work.
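
A quick way to confirm the compute capability that PyTorch (and hence Triton) sees on the affected machine is to query it directly. This is a minimal sketch, not part of the original comment; the values it prints are the standard compute capabilities for these cards:

```python
import torch

# Print the CUDA compute capability of each visible GPU.
# Titan V (Volta) reports (7, 0); Tesla T4 (Turing) reports (7, 5);
# Ampere cards such as the A100 report (8, 0).
for i in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(i)
    name = torch.cuda.get_device_name(i)
    print(f"GPU {i}: {name} -> compute capability {major}.{minor}")
```

The Triton assertion in the traceback fires whenever an mma-to-mma layout conversion is requested on a pre-Ampere device, which is consistent with the output above showing 7.0 on the Titan V.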
