
Healthcare & Pharmaceuticals

Secure storage, analytics and machine learning for sensitive data in a
user-friendly platform that runs on your infrastructure or in the cloud.

A secure, low-cost platform for managing large volumes of sensitive data

Karolinska Institutet, home to Scandinavia’s largest university hospital, is one of the world's leading medical universities, known for its high-quality research and education and accounting for over 40% of the medical academic research in Sweden.

Hopsworks’ Multi-tenant Security Model helped Karolinska Institutet enable collaboration between researchers to manage, share, and use genomic data without compromising data security or GDPR compliance.

SECURE, LOW-COST, AND SCALABLE GENOMIC DATA

Challenge: Data preparation, cataloging, and feature management for a massive genomic dataset containing sensitive information.

At Karolinska Institutet’s center for cervical cancer prevention, sequencing machines have generated over 800 TB of next-generation sequencing data, requiring both low-cost storage and secure, large-scale processing by researchers.

The organisation uses large-scale processing on Apache Spark and deep learning on TensorFlow to analyze these large, sensitive datasets: identifying novel viruses, performing large cohort studies, and finding genetic mutations that cause disease. However:

  • Neither Kubernetes- nor Hadoop-based platforms support storing and processing sensitive data on a shared cluster, as research studies require, to prevent cross-linking with data outside a study or copying data in and out of it. Running one cluster per research study introduces excessive cost and administration overhead.
  • Infrastructure was too complex and expensive to administer without a dedicated IT operations team.
  • Researchers required a data science platform that supports everything from small-scale analyses in Python notebooks, to large-scale processing with Spark/PySpark, to deep learning with GPUs.

Key Results

90% Cost Reduction

Cost savings from storing large volumes of data, as well as from the compute (CPU) and GPU resources needed to process it.

Integrated Data Science Platform

Easy collaboration between researchers when managing, sharing, and processing genomic data.

Faster Data Processing

A massively parallel data processing pipeline for large genomic datasets.

Solution: User-friendly and secure deep learning on low-cost commodity infrastructure.

Karolinska Institutet deployed Hopsworks as a secure, scalable platform to manage genomic data and run research studies. Hopsworks is built around projects, providing a GDPR-compliant environment that enables secure collaboration between researchers on medical studies within a shared cluster.
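To illustrate the project-based isolation described above, here is a minimal, hypothetical sketch (not the Hopsworks implementation): every dataset belongs to exactly one project, and a user may only read datasets in projects they are a member of, which prevents cross-linking data between studies on a shared cluster.

```python
from dataclasses import dataclass, field


@dataclass
class Project:
    """A study on the shared cluster; names and fields are illustrative."""
    name: str
    members: set = field(default_factory=set)
    datasets: set = field(default_factory=set)


class SharedCluster:
    def __init__(self):
        self.projects = {}

    def add_project(self, name, members):
        self.projects[name] = Project(name, set(members))

    def add_dataset(self, project, dataset):
        self.projects[project].datasets.add(dataset)

    def read(self, user, project, dataset):
        # Access is scoped to project membership, never to the whole cluster.
        p = self.projects[project]
        if user not in p.members:
            raise PermissionError(f"{user} is not a member of {project}")
        if dataset not in p.datasets:
            raise FileNotFoundError(dataset)
        return f"contents of {dataset}"


cluster = SharedCluster()
cluster.add_project("cervical-cancer-study", {"alice"})
cluster.add_project("cohort-study", {"bob"})
cluster.add_dataset("cervical-cancer-study", "ngs_reads.fastq")

print(cluster.read("alice", "cervical-cancer-study", "ngs_reads.fastq"))

# bob cannot read data from a study he is not a member of:
try:
    cluster.read("bob", "cervical-cancer-study", "ngs_reads.fastq")
except PermissionError as e:
    print("denied:", e)
```

The point of the design is that both studies share one physical cluster (and its cost), while the membership check keeps their data logically separate.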

Hopsworks is optimized for commodity hardware and runs in any data center. Clusters can be expanded by adding capacity when needed, enabling a low-cost solution for up to petabytes of data. Similarly, Hopsworks supports both commodity and enterprise GPUs for deep learning.

Hopsworks’ user-friendly web interface enables researchers to run programs and manage and access data without software-administration skills.


The key Hopsworks capabilities used in this deployment are:

  • Multi-tenant Security Model to ensure the integrity and privacy of sensitive research data in a shared cluster;
  • Python/Jupyter notebooks for small scale studies;
  • Spark for scalable processing of genomic data;
  • TensorFlow/PyTorch for deep learning on genomic data;
  • Custom Metadata Designer to manage and search for genomic data with free-text search;
  • Commodity hardware to reduce the costs of storing large data volumes and of the many GPUs required.
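The Custom Metadata Designer mentioned above attaches searchable metadata to files. As a hypothetical sketch of the idea (the names and the dict-based index are illustrative, not the real implementation, which indexes metadata for free-text search in a search engine), a tiny inverted index over metadata values looks like this:

```python
class MetadataIndex:
    """Toy free-text search over file metadata (illustrative only)."""

    def __init__(self):
        self.index = {}  # token -> set of file paths

    def tag(self, path, metadata):
        # Index every whitespace-separated token of every metadata value.
        for value in metadata.values():
            for token in str(value).lower().split():
                self.index.setdefault(token, set()).add(path)

    def search(self, query):
        # A file matches only if it carries every token in the query.
        results = [self.index.get(t.lower(), set()) for t in query.split()]
        return set.intersection(*results) if results else set()


idx = MetadataIndex()
idx.tag("/proj/ngs/sample_001.bam",
        {"assay": "whole genome sequencing", "tissue": "cervical"})
idx.tag("/proj/ngs/sample_002.bam",
        {"assay": "RNA sequencing", "tissue": "cervical"})

print(idx.search("cervical sequencing"))  # matches both samples
print(idx.search("genome"))               # matches only sample_001
```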

Scaling ML with the Hopsworks Feature Store

Hopsworks is the world’s first horizontally scalable data platform for machine learning to provide a feature store. It helps clean data and prepare features, and it makes those features reusable by other teams.

The Hopsworks Feature Store acts as an effective API between teams working on data engineering (pulling data from backend data warehouses and data lakes) and teams working on data science (model building, training, and evaluation).
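The split described above can be sketched in a few lines. This is a hypothetical, dict-backed stand-in, not the real Hopsworks client API: the data-engineering side publishes versioned feature groups, and the data-science side joins them by entity id into training data without ever touching the backend warehouse.

```python
class FeatureStore:
    """Toy feature store (illustrative, not the Hopsworks client)."""

    def __init__(self):
        # (group name, version) -> {entity_id: {feature: value}}
        self._groups = {}

    # --- data-engineering side: write a versioned feature group ---
    def insert(self, name, version, rows):
        self._groups[(name, version)] = rows

    # --- data-science side: join feature groups on entity id ---
    def get_training_data(self, groups):
        ids = set.intersection(*(set(self._groups[g]) for g in groups))
        return {
            i: {k: v for g in groups for k, v in self._groups[g][i].items()}
            for i in sorted(ids)
        }


fs = FeatureStore()
# Data engineers populate feature groups from backend sources:
fs.insert("patient_demographics", 1, {"p1": {"age": 54}, "p2": {"age": 61}})
fs.insert("hpv_markers", 1, {"p1": {"hpv16": 1}, "p2": {"hpv16": 0}})

# Data scientists assemble training data by name and version only:
train = fs.get_training_data([("patient_demographics", 1),
                              ("hpv_markers", 1)])
print(train["p1"])  # {'age': 54, 'hpv16': 1}
```

Versioning the groups is what makes features reusable across teams: a model can pin the exact feature versions it was trained on.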


Security by design: Data scientists can be given sandboxed access to sensitive data, complying with GDPR and stronger security requirements.


Scale-out deep learning: Distributed Deep Learning over 10s or 100s of GPUs for parallel experiments and distributed training.


Provenance support for ML pipelines: Enables fully reproducible models, easier debugging, and comprehensive data governance for pipelines.
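Provenance of the kind described above amounts to a dependency graph over pipeline artifacts. As a hedged sketch (the artifact names and the class are illustrative, not the Hopsworks provenance API), recording each artifact's inputs and walking the graph backwards is enough to audit or reproduce a model:

```python
class LineageGraph:
    """Toy provenance tracker for ML pipeline artifacts (illustrative)."""

    def __init__(self):
        self.parents = {}  # artifact -> list of direct inputs

    def record(self, artifact, inputs):
        self.parents[artifact] = list(inputs)

    def lineage(self, artifact):
        # Depth-first walk over all upstream dependencies of an artifact.
        seen, stack = [], [artifact]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.append(node)
                stack.extend(self.parents.get(node, []))
        return seen


g = LineageGraph()
g.record("features_v3", ["raw_reads_2021", "clean_reads.py"])
g.record("model_v7", ["features_v3", "train.py"])

# Everything model_v7 transitively depends on:
print(g.lineage("model_v7"))
```

Given such a graph, "fully reproducible" means every leaf of the lineage (raw data and code versions) can be fetched and the pipeline re-run.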


Integration with third-party platforms: Seamless integration with data science platforms such as AWS SageMaker, Databricks, and Kubeflow. Hopsworks also integrates with data lakes such as S3, Hadoop, and Delta Lake, and supports single sign-on with ActiveDirectory, LDAP, and OAuth2.


Hopsworks at a glance

Efficiency & Performance

  • Feature Store: data warehouse for ML
  • Distributed Deep Learning: faster with more GPUs
  • HopsFS: NVMe speed with big data
  • Horizontally Scalable: ingestion, data prep, training, serving

Development & Operations

  • Notebooks for Development: first-class Python support
  • Version Everything: code, infrastructure, data
  • Model Serving on Kubernetes: TF Serving, MLeap, SkLearn
  • End-to-End ML Pipelines: orchestrated by Airflow

Governance & Compliance

  • Secure Multi-tenancy: project-based restricted access
  • Encryption At-Rest and In-Motion: TLS/SSL everywhere
  • AI-Asset Governance: models, experiments, data, GPUs
  • Data/Model/Feature Lineage: discover and track dependencies
