Welcoming AMD/ROCm to Hopsworks

October 14, 2019, in Product Updates, by Robin Andersson, Jim Dowling, and Theofilos Kakantousis

ROCm support now in Hopsworks 1.0

With Hopsworks 1.0, we have added support for AMD GPUs with ROCm. This means you can take your TensorFlow programs and run them, unchanged, on Hopsworks. Radeon Open Compute (ROCm) is an open-source framework for GPU computing that supports multi-GPU training, letting you scale out training and reduce the time needed to train models.
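
As an illustration, the sketch below is an ordinary tf.keras training script on synthetic data (the dataset and layer sizes are made up); it contains nothing ROCm-specific, because the ROCm build of TensorFlow exposes the same Python API as the CUDA build.

```python
import numpy as np
import tensorflow as tf

# AMD GPUs exposed through ROCm appear as ordinary GPU devices.
print("GPUs:", tf.config.experimental.list_physical_devices("GPU"))

# A tiny, illustrative tf.keras model on synthetic data. The same script
# runs unchanged on a CUDA or a ROCm build of TensorFlow.
x = np.random.rand(1024, 32).astype("float32")
y = np.random.randint(0, 2, size=(1024, 1))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=64)
```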

ROCm is significant for data scientists because, until now, they have had little choice of GPU hardware when training models. With the recent upstreaming of ROCm changes to TensorFlow, ROCm is now a first-class citizen in the TensorFlow ecosystem. This enables Enterprise AI platforms, such as Hopsworks, to run TensorFlow applications, unchanged, on AMD GPU hardware. To further enable deep learning on many GPUs in a cluster, Logical Clocks has also added support for resource scheduling of AMD GPUs in Hopsworks clusters, with Hops YARN. With Hopsworks and AMD GPUs, developers can now train deep learning models much faster on frameworks like TensorFlow using tens or hundreds of GPUs in parallel.
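
To sketch what this looks like in practice (illustrative code, not a Hopsworks-specific API): once the scheduler has allocated GPUs to a job, a standard tf.distribute strategy can replicate training across them; the model here is a placeholder.

```python
import tensorflow as tf

# After the resource scheduler has granted the GPUs (NVIDIA or AMD) to the
# job, MirroredStrategy replicates the model across all local GPUs and
# all-reduces gradients between them.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

# model.fit(...) then proceeds as usual, with one replica per allocated GPU.
```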

To learn more, read our whitepaper, ROCm in Hopsworks, or see our talk and demo from the Databricks Summit 2019 or the O'Reilly AI Conference 2019.

Hopsworks supports AMD GPUs by adding ROCm support to Hops YARN, Hopsworks' resource scheduler.

