Quick Start Series

Welcome to the NVIDIA FLARE Quick Start Series! This guide provides a set of hello-world examples to help you quickly learn how to build federated learning programs using NVIDIA FLARE.

Make sure you have completed the Installation steps before proceeding.

Run Modes

FLARE supports three run modes for different stages of your workflow:

  1. Simulator - runs the server and all clients locally in a single process, for rapid development and debugging.

  2. POC (proof of concept) - runs the server and clients as separate local processes to mimic a real deployment.

  3. Production - a fully provisioned, distributed deployment with secure communication.

Start with the Simulator for development, then validate with POC before going to Production.

Convert Your ML Code to Federated

Converting existing training code to federated learning requires just three changes:

Step 1: Add FLARE imports to your training script

import nvflare.client as flare

Step 2: Initialize FLARE and wrap your training loop

flare.init()

while flare.is_running():
    input_model = flare.receive()           # receive global model
    model.load_state_dict(input_model.params)

    # ... your existing training code here ...

    output_model = flare.FLModel(
        params=model.cpu().state_dict(),
        metrics={"accuracy": accuracy},
    )
    flare.send(output_model)                # send updated model back

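To see the shape of this loop without a FLARE deployment, here is a hypothetical mock (MockFlare is not part of nvflare) that imitates the receive/train/send round trip for a few rounds:

```python
# Hypothetical mock of the Client API loop -- MockFlare is NOT part of nvflare;
# it only imitates the receive/train/send round trip shown above.
class MockFlare:
    def __init__(self, num_rounds):
        self.num_rounds = num_rounds
        self.round = 0
        self.global_params = {"w": 0.0}  # stand-in for a model state dict

    def is_running(self):
        return self.round < self.num_rounds

    def receive(self):
        return self.global_params

    def send(self, params):
        # A real server would aggregate updates from all clients here.
        self.global_params = params
        self.round += 1

flare = MockFlare(num_rounds=3)
while flare.is_running():
    params = dict(flare.receive())   # "receive the global model"
    params["w"] += 1.0               # stand-in for local training
    flare.send(params)               # "send the updated model back"

print(flare.global_params)  # {'w': 3.0}
```

In the real Client API, flare.init() connects your script to the FLARE runtime and flare.receive() returns an FLModel object rather than a bare dict; the control flow, however, is exactly this loop.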
Step 3: Create a job recipe to define the FL workflow

from nvflare.app_opt.pt.recipes import FedAvgRecipe

recipe = FedAvgRecipe(
    name="my-fedavg-job",
    min_clients=2,
    num_rounds=5,
    train_script="train.py",
)
recipe.execute()

That’s it. Your training logic stays the same – FLARE handles the communication, aggregation, and orchestration. For the full Client API reference, see Client API. For pre-built recipes, see Available Recipes.
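The aggregation FLARE performs for FedAvg is, mathematically, a sample-weighted average of the client parameters. A minimal pure-Python sketch of that formula (an illustration, not FLARE's implementation):

```python
def fedavg(updates):
    """Weighted-average client parameter dicts by number of local samples.

    updates: list of (params_dict, num_samples) tuples.
    Plain-Python sketch of the FedAvg formula, not FLARE's code.
    """
    total = sum(n for _, n in updates)
    keys = updates[0][0].keys()
    return {
        k: sum(params[k] * n for params, n in updates) / total
        for k in keys
    }

# Two clients with different dataset sizes:
clients = [
    ({"w": 1.0, "b": 0.0}, 100),   # client 1 trained on 100 samples
    ({"w": 3.0, "b": 1.0}, 300),   # client 2 trained on 300 samples
]
print(fedavg(clients))  # {'w': 2.5, 'b': 0.75}
```

The larger client pulls the average toward its parameters; in FLARE this weighting comes from metadata the clients report alongside their updates.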

Hello-world Examples

The following hello-world examples demonstrate different federated learning algorithms and workflows. Each example includes instructions and code to help you get started.

  1. Hello PyTorch - Federated averaging with PyTorch models and training loops.

  2. Hello Lightning - Example using PyTorch Lightning for streamlined model training.

  3. Hello Differential Privacy - Federated learning with differential privacy using PyTorch and Opacus for privacy-preserving training.

  4. Hello TensorFlow - Federated averaging using TensorFlow models.

  5. Hello JAX - Federated averaging using JAX, Flax, and Optax on MNIST.

  6. Hello Logistic Regression - Federated logistic regression example using scikit-learn.

  7. Hello Cyclic - Cyclic federated learning workflow example.

  8. Hello Tabular Statistics - Federated statistics computation example.

  9. Hello Flower - Running Flower apps in FLARE.

  10. Hello XGBoost - Federated XGBoost example demonstrating gradient boosting for tabular data in a federated setting.

Let’s start with Hello PyTorch.