How we secure your data with Hopsworks
Integrate with third-party security standards and take advantage of our project-based multi-tenancy model to host data in a single shared cluster.
Unifying Single-host and Distributed Machine Learning with Maggy
Try out Maggy for hyperparameter optimization or ablation studies now on Hopsworks.ai to access a new way of writing machine learning applications.
Manage your own Feature Store on Kubeflow with Hopsworks
Learn how to integrate Kubeflow with Hopsworks and take advantage of its Feature Store and scale-out deep learning capabilities.
How to Build your own Feature Store
Given the increasing interest in feature stores, we share our own experience of building one to help others who are considering following us down the same path.
Hopsworks Feature Store for AWS SageMaker
Integrate AWS SageMaker with Hopsworks to manage, discover and use features for creating training datasets and for serving features to operational models.
Hopsworks Feature Store for Databricks
This article introduces the Hopsworks Feature Store for Databricks, and how it can accelerate and govern your model development and operations on Databricks.
ExtremeEarth scales AI to the Earth Observation Community with Hopsworks
How ExtremeEarth brings large-scale AI to the Earth Observation community with Hopsworks, the data-intensive AI platform.
MLOps with a Feature Store
How the Feature Store enables monolithic end-to-end ML pipelines to be decomposed into feature pipelines and model training pipelines.
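The decomposition described above can be sketched in plain Python. This is a minimal illustration of the idea, not the Hopsworks API: all names here (the in-memory store, the pipeline functions, the toy model) are hypothetical.

```python
# Minimal sketch: a monolithic ML pipeline decomposed into a feature
# pipeline and a model-training pipeline that meet at a feature store.
# All names are illustrative stand-ins, not the Hopsworks API.

feature_store = {}  # stand-in for a real feature store


def feature_pipeline(raw_rows):
    """Transform raw data into engineered features and write them to the store."""
    features = [
        {"amount": r["amount"], "is_large": r["amount"] > 100}
        for r in raw_rows
    ]
    feature_store["transactions"] = features
    return features


def training_pipeline():
    """Read features back from the store and train a (toy) model."""
    features = feature_store["transactions"]
    # Toy "model": a mean-based threshold over one feature.
    threshold = sum(f["amount"] for f in features) / len(features)
    return {"threshold": threshold}


raw = [{"amount": 50}, {"amount": 150}]
feature_pipeline(raw)
model = training_pipeline()
```

The point of the split is that the two pipelines share no code and run on independent schedules; the feature store is the only contract between them.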
AI & Deep Learning for Fraud & AML
Anomaly detection and deep learning for identifying money laundering. Fewer false positives and higher accuracy than traditional rule-based approaches.
Guide to File Formats for Machine Learning: Columnar, Training, and Inferencing
This is a guide to file formats for machine learning in Python. The Feature Store can store training/test data in a file format of choice on a file system of choice.
Hello Asynchronous Search for PySpark
Hopsworks supports easy hyperparameter optimization (both synchronous and asynchronous search) and distributed training using PySpark, TensorFlow, and GPUs.
Optimizing GPU utilization in Hops
How we use dynamic executors in PySpark to ensure GPUs are allocated to executors only when they are training neural networks.
Why you need a Distributed Filesystem for Deep Learning
When you train deep learning models with lots of high quality training data, you can beat state-of-the-art prediction models in a wide array of domains.
When Deep Learning with GPUs, use a Cluster Manager
If you employ a team of data scientists for deep learning, a cluster manager that shares GPUs across the team will maximize their utilization.
Feature Store: The Missing Data Layer in ML Pipelines?
A Feature Store stores curated features for training and serving ML models. We go through data management for deep learning and present the first open-source feature store, now in Hopsworks' ML Platform.