
Dependency issues with fairseq==0.2.1 and CUDA sm_120 (RTX 5090 Blackwell) for Seamless M4T TTS #567

@aishwary-intellifAI

Description


Hi team,

I am trying to run Seamless M4T TTS on a system with an NVIDIA RTX 5090 (Blackwell) GPU. The repository pins fairseq==0.2.1, which is only compatible with torch==2.2.2. However, torch 2.2.2 does not support the sm_120 compute capability used by the 5090, so I am hitting dependency conflicts and failed builds.
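For reference, a quick way to check whether an installed torch build ships kernels for a given GPU architecture is `torch.cuda.get_arch_list()`. A minimal diagnostic sketch (the helper name `supported_archs` is just for illustration):

```python
# Diagnostic sketch: list the CUDA architectures this torch build was compiled for.
# For an RTX 5090 (Blackwell), "sm_120" must appear in the list.
import importlib.util


def supported_archs():
    """Return torch's compiled CUDA arch list, or None if torch is unavailable."""
    if importlib.util.find_spec("torch") is None:
        return None
    import torch
    return torch.cuda.get_arch_list()


archs = supported_archs()
if archs is None:
    print("torch is not installed")
else:
    print("compiled arch list:", archs)
    print("Blackwell (sm_120) supported:", "sm_120" in archs)
```

On torch 2.2.2, the list stops well short of sm_120, which matches the build failures described above.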

Could you please advise:

  • Is there a recommended workaround or patch for running Seamless M4T TTS on recent Blackwell GPUs?
  • Can fairseq be upgraded for newer torch/CUDA compatibility without breaking the integration?
  • What steps would you suggest for making the current pipeline work with sm_120?
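In case it helps the discussion: assuming the fairseq pin could be relaxed, one possible direction (untested here) would be a newer torch wheel built against CUDA 12.8, since those include sm_120 kernels for Blackwell. A sketch of what that environment setup might look like:

```shell
# Untested sketch, assuming the fairseq==0.2.1 / torch==2.2.2 pins can be lifted:
# install a torch build compiled with CUDA 12.8, which targets sm_120 (Blackwell).
pip install --upgrade torch --index-url https://download.pytorch.org/whl/cu128
```

Whether the rest of the Seamless M4T pipeline still works against a newer torch is exactly the open question above.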

Thanks in advance for your help!
