AMD adds support for super-popular open source AI tool to its most powerful GPU – now imagine what would happen if an AMD APU could run Stable Diffusion out of the box
AMD is taking the fight to Nvidia with open source
AMD is hoping to make AI more accessible to developers and researchers by adding support for PyTorch to its Radeon RX 7900 XTX and Radeon Pro W7900 graphics cards.
Based on AMD's RDNA 3 GPU architecture, these are some of the best GPUs out there, and they now let users establish a private, cost-effective workflow for machine learning training and inference. Previously, users may have needed to rely on cloud access to compatible GPUs for such AI workloads.
“We are excited to offer the AI community new support for machine learning development using PyTorch built on the AMD Radeon RX 7900 XTX and Radeon Pro W7900 GPUs and the ROCm open software platform,” said Dan Wood, VP of Radeon product management. “This is our first RDNA 3 architecture-based implementation, and we are looking forward to partnering with the community.”
Making the most of ROCm
With the PyTorch machine learning framework now supported on its most powerful graphics cards, AMD is hoping to crack open access to AI workloads for users who lack the means or infrastructure such workloads would otherwise require.
Anybody hoping to take advantage of PyTorch can also use the Radeon Open Compute (ROCm) software stack for GPUs, which spans general-purpose computing, high-performance computing (HPC), and heterogeneous computing.
With AMD ROCm 5.7, users of machines with RDNA 3-based GPUs, as well as CDNA-based GPUs and AMD Instinct MI-series accelerators, can also use PyTorch.
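As a rough sketch of what getting started might look like (the wheel index URL in the comment reflects AMD's ROCm 5.7 release, and the helper function name is our own), a developer could install a ROCm build of PyTorch and check that the GPU is visible. Note that on ROCm builds, PyTorch reuses the familiar `torch.cuda` API for AMD hardware:

```python
# Sketch: confirm a ROCm-enabled PyTorch build can see an AMD GPU.
# Assumes PyTorch was installed from the ROCm wheel index, e.g.:
#   pip install torch --index-url https://download.pytorch.org/whl/rocm5.7
import importlib.util


def rocm_device_report() -> str:
    """Return a short description of the available PyTorch backend."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch

    # On ROCm builds, torch.version.hip is set to the HIP version string,
    # and the torch.cuda.* API addresses AMD GPUs.
    if getattr(torch.version, "hip", None) and torch.cuda.is_available():
        return f"ROCm device: {torch.cuda.get_device_name(0)}"
    return "no ROCm GPU visible"


print(rocm_device_report())
```

The exact output depends on the installed build and hardware, which is why the check degrades gracefully rather than assuming a GPU is present.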
Because ROCm is open source, developers may wish to take it in all kinds of different directions and add support for their own particular AI processing needs. There is, for example, a huge amount of appetite out there to get Stable Diffusion running on AMD accelerated processing units (APUs).
One user, for example, turned a 4600G APU into a GPU with 16GB of VRAM that could run AI workloads – including Stable Diffusion – without too much of a hitch, according to a video they posted on YouTube.
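For readers curious what such a workflow involves, here is a minimal, hypothetical sketch using the Hugging Face diffusers library (the checkpoint name is a commonly used public one, not something from the article). On ROCm builds of PyTorch, the AMD GPU is addressed through the usual `"cuda"` device string:

```python
# Hypothetical sketch: loading Stable Diffusion on an AMD GPU via the
# Hugging Face diffusers library. Guarded so it degrades gracefully
# when diffusers is not installed.
import importlib.util


def make_pipeline(model_id: str = "runwayml/stable-diffusion-v1-5"):
    """Build a Stable Diffusion pipeline if the required libraries exist."""
    if importlib.util.find_spec("diffusers") is None:
        return None  # diffusers not installed; nothing to build
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(model_id)
    if torch.cuda.is_available():  # also true on ROCm-enabled AMD GPUs
        pipe = pipe.to("cuda")
    return pipe
```

Whether this runs acceptably on an APU depends on how much system memory is carved out as VRAM, which is exactly the trick the video demonstrates.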
Keumars Afifi-Sabet is the Technology Editor for Live Science. He has written for a variety of publications including ITPro, The Week Digital and ComputerActive. He has worked as a technology journalist for more than five years, having previously held the role of features editor with ITPro, where he oversaw the commissioning and publishing of long-form content in areas including AI, cyber security, cloud computing and digital transformation.