Enable fine tuning on HPU #59
splotnikv wants to merge 2 commits into Red-Hat-AI-Innovation-Team:main from
Conversation
Signed-off-by: Sergey Plotnikov <sergey.plotnikov@intel.com>
HPU does not natively support torch.linalg.svd(); the last commit adds CPU-level parallelization for the fallback.
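Since HPU lacks a native `torch.linalg.svd()` kernel, one way to handle this is to run the SVD on the CPU and move the results back to the accelerator, parallelizing across matrices with a thread pool (LAPACK-backed SVD releases the GIL, so threads give real CPU parallelism). This is only a minimal sketch of that pattern; the helper names `svd_on_cpu` and `batched_svd_on_cpu` are hypothetical and not taken from the PR itself.

```python
from concurrent.futures import ThreadPoolExecutor

import torch


def svd_on_cpu(t: torch.Tensor):
    """Compute torch.linalg.svd on CPU and move the factors back to the
    tensor's original device, for accelerators without an SVD kernel."""
    device = t.device
    # Fall back to the CPU implementation of SVD.
    u, s, vh = torch.linalg.svd(t.cpu(), full_matrices=False)
    return u.to(device), s.to(device), vh.to(device)


def batched_svd_on_cpu(tensors, max_workers=4):
    """Run several CPU SVDs concurrently; the underlying LAPACK call
    releases the GIL, so a thread pool parallelizes across matrices."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(svd_on_cpu, tensors))


m = torch.randn(8, 4)
u, s, vh = svd_on_cpu(m)
# The factors reconstruct the original matrix up to numerical error.
print(torch.allclose(u @ torch.diag(s) @ vh, m, atol=1e-4))
```

On a machine with an HPU, `m` would live on the `hpu` device and the helper would round-trip it through the CPU transparently; here the example runs entirely on CPU.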
Thanks for the contribution and for thinking of mini_trainer! After some consideration, we've decided to keep this project narrowly scoped to CUDA/NVIDIA hardware. The main reason is maintainability: we don't have access to HPU hardware for testing, and adding support for additional accelerators creates code paths we can't realistically validate or maintain long-term. Untested code paths tend to bit-rot and create a poor experience for users who rely on them.

If you're interested in HPU support, you're welcome to maintain a fork, or if there's community interest, a separate mini_trainer-hpu project could be a good approach. Appreciate your understanding, and thanks again for the interest in the project!
This PR adds HPU support to mini_trainer. It should be used with Red-Hat-AI-Innovation-Team/training_hub#10.