Written by Robin Andersson (Software Engineer), Jim Dowling (CEO), and Theofilos Kakantousis (VP of Product)

October 14, 2019


Welcoming AMD/ROCm to Hopsworks

With Hopsworks 1.0, we have added support for AMD GPUs through ROCm. This enables you to take your TensorFlow programs and run them, unchanged, on Hopsworks. Radeon Open Compute (ROCm) is an open-source framework for GPU computing that supports multi-GPU computing to scale out training and reduce the time needed to train models.
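"Unchanged" is the key word: because ROCm plugs in beneath TensorFlow's device layer, ordinary TensorFlow code never references CUDA or ROCm directly. A minimal sketch (assuming TensorFlow 2.x; the same script runs on an NVIDIA GPU, an AMD GPU under ROCm, or a plain CPU):

```python
import numpy as np
import tensorflow as tf

# TensorFlow reports whatever accelerators the installed backend
# (CUDA build or ROCm build) exposes; the model code below is
# identical in every case.
gpus = tf.config.list_physical_devices("GPU")
print(f"Visible GPUs: {len(gpus)}")

# A tiny regression model -- no device-specific code anywhere.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="sgd", loss="mse")

x = np.random.rand(64, 4).astype("float32")
y = x.sum(axis=1, keepdims=True)
history = model.fit(x, y, epochs=1, verbose=0)
loss = history.history["loss"][0]
print(f"loss after one epoch: {loss:.4f}")
```

If no GPU is visible, TensorFlow silently falls back to the CPU; the portability claim is precisely that the script itself does not change.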

ROCm is significant for data scientists because, until now, they have had little choice in GPU hardware when training models. With the recent upstreaming of ROCm changes to TensorFlow, ROCm is now a first-class citizen in the TensorFlow ecosystem. This enables Enterprise AI platforms, such as Hopsworks, to run TensorFlow applications, unchanged, on AMD GPU hardware. To further enable deep learning on many GPUs in a cluster, Logical Clocks has also added support for resource scheduling of AMD GPUs in Hopsworks clusters with Hops YARN. With Hopsworks and AMD GPUs, developers can now train deep learning models much faster on frameworks like TensorFlow, using tens or hundreds of GPUs in parallel.
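Scaling out across the GPUs that the scheduler allocates is likewise hardware-agnostic at the application level. A hedged sketch using `tf.distribute.MirroredStrategy` (a standard TensorFlow API, not Hopsworks-specific): the strategy replicates the model on every GPU visible to the process, whether CUDA or ROCm, and falls back to a single device when none are available.

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy mirrors the model on every visible GPU and
# averages gradients across replicas on each training step.
strategy = tf.distribute.MirroredStrategy()
print(f"Replicas in sync: {strategy.num_replicas_in_sync}")

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

x = np.random.rand(128, 4).astype("float32")
y = x.mean(axis=1, keepdims=True)
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```

On a Hopsworks cluster, Hops YARN decides how many (and which) AMD GPUs the container sees; the training code asks for no particular vendor.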

To learn more, read our whitepaper, ROCm in Hopsworks, or see our talk and demo from the Databricks Summit 2019 or the O'Reilly AI Conference 2019.

Hopsworks supports AMD GPUs by adding ROCm support to Hops YARN, Hopsworks' resource scheduler.