Published by dowlingj



Distributed Filesystems for Deep Learning

More training data gives predictable gains in prediction accuracy

tl;dr When you train deep learning models with lots of high quality training data, you can beat state-of-the-art prediction models in a wide array of domains (image classification, voice recognition, and machine translation). Distributed filesystems are becoming increasingly indispensable as a central store for training data, logs, model serving, and checkpoints. HopsFS is a great choice, as it has native support for the main Python frameworks for Data Science: Pandas, TensorFlow/Keras, PySpark, and Arrow.

Prediction Performance Improves Predictably with Dataset Size

Baidu showed that the improvement in prediction accuracy (or reduction in generalization error) for deep learning models is predictable from the amount of training data. The decrease in generalization error with increasing training dataset size follows a power law (seen as the straight lines in the log-log graph below). This astonishing result came from a large-scale study across the different application domains of machine translation, language modeling, image classification, and speech recognition. Given that the result holds in such vastly different application domains, there is a good chance it also holds for your particular application domain. This result is important for companies considering investing in deep learning: if it costs $X to collect or generate a new GB of high-quality training data, you can predict the improvement in prediction accuracy for your model, given the slope, Y, of the log-log graph you have observed while training.

[Figure: learning curves on a log-log scale, from Baidu Research]

Predictable ROI in the Power-Law Region

This predictable return on investment (ROI) for collecting or generating more training data is slightly more complex than described above. You first need to collect enough training data to get beyond the “Small Data Region” in the diagram below. That is, you can only make predictions once you have enough data to be in the “Power-Law Region”.

[Figure: learning-curve regions, from Baidu 2018]

You can determine this by graphing the reduction in your generalization error as a function of training dataset size on a log-log scale. Once you start observing a straight line for your model, calculate the exponent of your power-law curve (the slope of the line). Baidu’s empirically collected learning curves showed exponents in the range [-0.35, -0.07], suggesting that models learn real-world data more slowly than theory suggests (theoretical models indicate the power-law exponent should be -0.5).
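To make the slope calculation concrete, here is a minimal sketch (using NumPy, with synthetic, purely illustrative numbers, not data from the article) of estimating the power-law exponent from a log-log learning curve:

```python
import numpy as np

# Hypothetical learning-curve measurements: training set sizes and the
# generalization error observed at each size (illustrative numbers only).
train_sizes = np.array([1e4, 3e4, 1e5, 3e5, 1e6])
gen_errors = 2.0 * train_sizes ** -0.3  # synthetic data on an exact power law

# In the power-law region, error = a * size^beta, so log(error) is linear
# in log(size), and the slope of that line is the exponent beta.
beta, log_a = np.polyfit(np.log(train_sizes), np.log(gen_errors), 1)
print(round(beta, 3))  # recovers the exponent used to generate the data: -0.3
```

With real measurements the points will not sit exactly on a line, so the fit should only be trusted once the curve has visibly entered the power-law region.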

Still, if you observe the power-law region, increasing your training dataset size will give you a predictable decrease in generalization error. For example, if you are training an image classifier for a self-driving vehicle, the number of hours your cars have driven autonomously determines your training dataset size. So, going from 2m hours to 6m hours of autonomous driving should reduce errors in your image classifier by a predictable amount. This is important in giving businesses a level of certainty in the improvements they can expect when making large investments in new data collection or generation.
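The arithmetic behind such a prediction can be sketched as follows, assuming a measured exponent of -0.3 (an illustrative value inside Baidu’s reported range, not a number from the article):

```python
# error scales as size^beta in the power-law region, so the ratio of
# new error to old error depends only on the ratio of dataset sizes.
beta = -0.3                        # assumed measured exponent
old_hours, new_hours = 2e6, 6e6    # 2m -> 6m hours of autonomous driving

error_ratio = (new_hours / old_hours) ** beta
print(round(error_ratio, 3))  # ~0.719, i.e. roughly a 28% error reduction
```

The absolute error values cancel out; only the dataset-size ratio and the fitted exponent matter, which is what makes the ROI calculation possible before the new data is collected.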

Need for a Distributed Filesystem

The TensorFlow team says a distributed filesystem is a must for deep learning: datasets are getting larger, GPUs are disaggregated from storage, and workers with GPUs need to coordinate for model checkpointing, hyperparameter optimization, and model-architecture search. Your system may grow beyond a single server, or you may serve your models from different servers than the ones you train on. A distributed filesystem is the glue that holds together the different stages of your machine learning workflows, and it enables teams to share both GPU hardware and data. What is important is that the distributed filesystem works with your choice of programming language and deep learning framework(s).

A distributed filesystem is needed for managing logs and TensorBoard event files, coordinating GPUs across experiments, storing checkpoints during training, and storing/serving models.
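As a toy illustration of this coordination pattern, the sketch below has multiple workers write checkpoints under one shared root, from which any worker (or a serving node) can pick up the latest model. A local temporary directory stands in for a distributed-filesystem mount, and the file names and JSON layout are entirely hypothetical:

```python
import json
import os
import tempfile

# A local temp directory stands in for a shared distributed-filesystem
# mount (in practice this would be e.g. a HopsFS project path).
shared_root = tempfile.mkdtemp()

def save_checkpoint(worker_id, step, weights):
    """Each worker writes its checkpoint under the common shared root."""
    path = os.path.join(shared_root, f"checkpoint-{step}-worker{worker_id}.json")
    with open(path, "w") as f:
        json.dump({"step": step, "weights": weights}, f)
    return path

def latest_checkpoint():
    """Any node can find the most recent checkpoint by step number."""
    ckpts = sorted(os.listdir(shared_root),
                   key=lambda name: int(name.split("-")[1]))
    return ckpts[-1] if ckpts else None

save_checkpoint(0, 100, [0.1, 0.2])
save_checkpoint(1, 200, [0.3, 0.4])
print(latest_checkpoint())  # checkpoint-200-worker1.json
```

The point is not the file format but the shared namespace: because every worker sees the same paths, checkpointing, hyperparameter search, and serving can all hand off artifacts without any extra coordination service.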

HopsFS is a great choice as a distributed filesystem, as it is a drop-in replacement for HDFS. HopsFS/HDFS are supported by the major Python frameworks: Pandas, PySpark DataFrames, TensorFlow Data, and so on. In Hopsworks, we provide built-in HopsFS/HDFS support via the pydoop library. HopsFS has one additional feature aimed at machine learning workloads: improved throughput and lower latency when reading/writing small files. In a peer-reviewed paper at Middleware 2018, we showed throughput improvements of up to 66X over HDFS for small files.

Python Support in Distributed Filesystems

As we can see from the table below, the choice of distributed filesystem affects which Python frameworks you can use to read and write your data.

| Filesystem/Store | PySpark | Pandas | TensorFlow | NVMe Support |
| --- | --- | --- | --- | --- |
| HopsFS/HDFS | | | | |
| S3 / GCS | | | | |
| Local FS | | | | |

Python Support in HopsFS

We now give some simple examples of how to write Python code to use datasets in HopsFS. Complete notebooks can be found here.

Pandas with HopsFS

import pandas as pd
import hops.hdfs as hdfs
cols = ["Age", "Occupation", "Sex", ..., "Country"]  # "..." elides the remaining columns
h = hdfs.get_fs()
with h.open_file(hdfs.project_path() + "/TestJob/data/census/", "r") as f:
    train_data = pd.read_csv(f, names=cols, sep=",")  # separator was truncated in the source; "," is assumed

In Pandas, the only change we need to make to our code, compared to a local filesystem, is to replace the built-in open(..) with h.open_file(..), where h is a file handle to HDFS/HopsFS.

PySpark with HopsFS

from mmlspark import ImageTransformer
images = spark.readImages(IMAGE_PATH, recursive=True, sampleRatio=0.1).cache()
tr = (ImageTransformer().setOutputCol("transformed")  # output column name assumed
      .resize(height=200, width=200)
      .crop(0, 0, height=180, width=180))
smallImgs = tr.transform(images).select("transformed")
smallImgs.write.save("/Projects/myProj/Resources/small_imgs", format="parquet")

TensorFlow Datasets with HopsFS

def input_fn(batch_sz):
    files = tf.data.Dataset.list_files(FILENAME_PATTERN)  # file pattern elided in the source
    def tfrecord_dataset(f):
        return tf.data.TFRecordDataset(f, num_parallel_reads=32, buffer_size=8*1024*1024)
    dataset = files.apply(tf.data.experimental.parallel_interleave(
        tfrecord_dataset, cycle_length=32))  # cycle_length value assumed
    dataset = dataset.batch(batch_sz).prefetch(4)  # prefetch depth assumed
    return dataset


Follow us on twitter.
Hops on GitHub.