Quick Start Series
Welcome to the NVIDIA FLARE Quick Start Series! This guide provides a set of hello-world examples to help you quickly learn how to build federated learning programs using NVIDIA FLARE.
Make sure you have completed the Installation steps before proceeding.
Run Modes
FLARE supports three modes for different stages of your workflow:
Simulator (NVIDIA FLARE FL Simulator) – Runs jobs on a single system for fast testing and algorithm development.
POC (Proof of Concept Command) – Simulates a deployment on a single host, with separate processes for the server and each client.
Production (Provisioning and startup package distribution) – Distributed deployment using startup kits from provisioning.
Start with the Simulator for development, then validate with POC before going to Production.
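As a rough illustration, each mode has its own CLI entry point. Exact flags vary by FLARE version, and the workspace, job, and project paths below are placeholders:

```shell
# Simulator: run a job locally with 2 clients and 2 threads
nvflare simulator -n 2 -t 2 -w /tmp/workspace ./jobs/my-job

# POC: prepare a local multi-process deployment with 2 clients, then start it
nvflare poc prepare -n 2
nvflare poc start

# Production: generate startup kits from a provisioning project file
nvflare provision -p project.yml
```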
Convert Your ML Code to Federated
Converting existing training code to federated learning requires just 3 changes:
Step 1: Add FLARE imports to your training script
import nvflare.client as flare
Step 2: Initialize FLARE and wrap your training loop
flare.init()

while flare.is_running():
    input_model = flare.receive()  # receive global model
    model.load_state_dict(input_model.params)

    # ... your existing training code here ...

    output_model = flare.FLModel(
        params=model.cpu().state_dict(),
        metrics={"accuracy": accuracy},
    )
    flare.send(output_model)  # send updated model back
Step 3: Create a job recipe to define the FL workflow
from nvflare.app_opt.pt.recipes import FedAvgRecipe
recipe = FedAvgRecipe(
    name="my-fedavg-job",
    min_clients=2,
    num_rounds=5,
    train_script="train.py",
)
recipe.execute()
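To see how the pieces fit together without a FLARE deployment, the receive/train/send contract from Step 2 can be mocked in plain Python. The `FakeFlare` class below is a hypothetical stand-in for `nvflare.client`, not part of the FLARE API:

```python
class FakeFlare:
    """Hypothetical stand-in for nvflare.client: serves a fixed number of rounds."""

    def __init__(self, num_rounds, init_params):
        self.round = 0
        self.num_rounds = num_rounds
        self.global_params = dict(init_params)

    def is_running(self):
        # True while there are rounds left to run
        return self.round < self.num_rounds

    def receive(self):
        # Server sends a copy of the current global model
        return dict(self.global_params)

    def send(self, params):
        # With a single client, the server simply adopts the returned model
        self.global_params = params
        self.round += 1


flare = FakeFlare(num_rounds=3, init_params={"w": 0.0})

while flare.is_running():
    params = flare.receive()
    params["w"] += 1.0  # stand-in for a local training step
    flare.send(params)

print(flare.global_params)  # {'w': 3.0}
```

Each iteration of the loop corresponds to one federated round: receive the global model, update it locally, and send the result back.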
That’s it. Your training logic stays the same – FLARE handles the communication, aggregation, and orchestration. For the full Client API reference, see Client API. For pre-built recipes, see Available Recipes.
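On the server side, what a FedAvg workflow does each round is, in essence, a weighted average of the clients' parameters. A minimal sketch in plain Python (not FLARE code; the function name is illustrative):

```python
def fedavg(client_params, client_weights):
    """Weighted average of per-client parameter dicts (FedAvg aggregation)."""
    total = sum(client_weights)
    keys = client_params[0].keys()
    return {
        k: sum(w * p[k] for p, w in zip(client_params, client_weights)) / total
        for k in keys
    }


# Two clients with different dataset sizes contribute proportionally.
avg = fedavg(
    client_params=[{"w": 1.0, "b": 0.0}, {"w": 3.0, "b": 1.0}],
    client_weights=[100, 300],  # e.g. number of local training samples
)
print(avg)  # {'w': 2.5, 'b': 0.75}
```

Weighting by local dataset size keeps clients with more data from being diluted by clients with less; in real models the dict values are tensors rather than floats, but the arithmetic is the same.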
Hello-world Examples
The following hello-world examples demonstrate different federated learning algorithms and workflows. Each example includes instructions and code to help you get started.
Hello PyTorch - Federated averaging with PyTorch models and training loops.
Hello Lightning - Example using PyTorch Lightning for streamlined model training.
Hello Differential Privacy - Federated learning with differential privacy using PyTorch and Opacus for privacy-preserving training.
Hello TensorFlow - Federated averaging using TensorFlow models.
Hello JAX - Federated averaging using JAX, Flax, and Optax on MNIST.
Hello Logistic Regression - Federated logistic regression example using scikit-learn.
Hello Cyclic - Cyclic federated learning workflow example.
Hello Tabular Statistics - Federated statistics computation example.
Hello Flower - Running Flower apps in FLARE.
Hello XGBoost - Federated XGBoost example demonstrating gradient boosting for tabular data in a federated setting.
Let’s start with Hello PyTorch.