With AMD’s most powerful GPU now supporting the hugely popular open-source AI program, just think of the possibilities if an AMD APU could run Stable Diffusion right out of the box.

AMD is using open source technology to challenge Nvidia.

AMD has added PyTorch support for its Radeon RX 7900 XTX and Radeon Pro W7900 graphics cards in an effort to make AI more accessible to developers and researchers.

Both cards are among the best GPUs available and are built on the RDNA 3 GPU architecture. They can now give customers a private, economical workflow for machine learning training and inference, work that previously often meant relying on cloud access to suitable GPUs.

Dan Wood, vice president of Radeon product management, stated, “We are excited to offer the AI community new support for machine learning development using PyTorch built on the AMD Radeon RX 7900 XTX and Radeon Pro W7900 GPUs and the ROCm open software platform.” He added that this is the first such implementation on the RDNA 3 architecture and that AMD is keen to work with the community on it.

Maximising ROCm’s Potential
By supporting the PyTorch machine learning framework on its most powerful graphics cards, AMD hopes to make AI workloads more accessible to people who lack the resources or infrastructure those workloads usually demand.

The Radeon Open Compute (ROCm) software stack for GPUs, which covers heterogeneous computing, high-performance computing (HPC), and general-purpose computing, is also available to anyone seeking to utilise PyTorch.

Users of computers with AMD Instinct MI series accelerators, CDNA GPUs, and RDNA 3-based GPUs can also run PyTorch with AMD ROCm 5.7.
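
As a rough illustration of what that looks like in practice, the sketch below assumes a ROCm build of PyTorch has been installed (for example, the ROCm 5.7 wheels offered through the selector on pytorch.org) on a machine with a supported Radeon GPU; the exact install command depends on the platform.

```python
import torch

# ROCm builds of PyTorch expose AMD GPUs through the familiar "cuda"
# device API, so most existing CUDA-oriented code runs unchanged.
print(torch.__version__)              # a ROCm-tagged build, e.g. "2.x.x+rocm5.7"
print(torch.cuda.is_available())      # True if the Radeon GPU is detected
print(torch.cuda.get_device_name(0))  # e.g. "AMD Radeon RX 7900 XTX"

# Quick sanity check: a small matrix multiply on the GPU.
x = torch.randn(1024, 1024, device="cuda")
y = torch.randn(1024, 1024, device="cuda")
print((x @ y).sum().item())
```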

Since ROCm is open source, programmers are free to add support for their own unique AI processing requirements and explore a wide range of avenues. For instance, there is a great deal of interest in running Stable Diffusion on AMD accelerated processing units (APUs).

One user, for example, uploaded a YouTube video demonstrating how to turn a Ryzen 5 4600G APU into what is effectively a GPU with 16GB of VRAM carved out of system memory, capable of running AI workloads, Stable Diffusion included, with little to no difficulty.
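
Once PyTorch can see the device, whether a discrete Radeon card or an APU with enough memory set aside for graphics, Stable Diffusion is commonly driven through the Hugging Face diffusers library. The snippet below is one illustrative route rather than the exact tooling used in the video, and it assumes the diffusers and transformers packages are installed alongside a ROCm build of PyTorch; the checkpoint name is just an example.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint; half precision keeps memory use
# within reach of a card (or APU) with roughly 16GB available.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint, swap in any SD model
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # ROCm devices are also addressed as "cuda" here

image = pipe("a watercolour painting of a red panda").images[0]
image.save("output.png")
```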
