# Zero Redundancy Optimizer and ZeRO Offload

The Zero Redundancy Optimizer (ZeRO) removes the memory redundancies across data-parallel processes by partitioning three model states (optimizer states, gradients, and parameters) instead of replicating them. By doing so, memory efficiency is boosted drastically compared to classic data parallelism while the computational granularity and communication efficiency are retained.

1. ZeRO Level 1: The optimizer states (e.g., for the Adam optimizer, the 32-bit weights and the first and second moment estimates) are partitioned across the processes, so that each process updates only its own partition.

2. ZeRO Level 2: The reduced 32-bit gradients for updating the model weights are also partitioned such that each process only stores the gradients corresponding to its partition of the optimizer states.

3. ZeRO Level 3: The 16-bit model parameters are partitioned across the processes. ZeRO-3 will automatically collect and partition them during the forward and backward passes.
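As a rough illustration of the savings, the per-GPU memory for these model states can be estimated following the analysis in the ZeRO paper: mixed-precision training with Adam keeps 2 bytes/parameter of fp16 weights, 2 bytes/parameter of fp16 gradients, and about 12 bytes/parameter of optimizer state. The helper below is a back-of-the-envelope sketch of that calculation (the function name and the constant `k=12` are illustrative, not part of Colossal-AI):

```python
def zero_memory_per_gpu(num_params, num_gpus, level, k=12):
    """Approximate per-GPU memory (in bytes) for model states under ZeRO.

    Mixed-precision Adam keeps fp16 parameters (2 bytes/param), fp16
    gradients (2 bytes/param), and optimizer states (k bytes/param:
    fp32 master weights + first and second moments = 12 bytes/param).
    """
    params = 2 * num_params
    grads = 2 * num_params
    opt = k * num_params
    if level >= 1:
        opt /= num_gpus     # level 1: partition optimizer states
    if level >= 2:
        grads /= num_gpus   # level 2: also partition gradients
    if level >= 3:
        params /= num_gpus  # level 3: also partition parameters
    return params + grads + opt
```

For example, a 7.5B-parameter model needs roughly 16 bytes/parameter, i.e. about 120 GB of model states per GPU with classic data parallelism; at level 3 across 64 GPUs this drops to about 1.9 GB per GPU, matching the figures reported in the ZeRO paper.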

## Getting Started with ZeRO

If you are training models with Colossal-AI, enabling ZeRO-DP and offloading is easy: just add several lines to your configuration file. We support configuration for levels 2 and 3. You have to use the PyTorch native implementation for the level 1 optimizer. Below are a few examples of ZeRO-3 configurations.
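For level 1, a minimal sketch of the PyTorch native `ZeroRedundancyOptimizer` (available in `torch.distributed.optim` since PyTorch 1.8) is shown below; it assumes a distributed process group has already been initialized by your launcher:

```python
import torch
from torch.distributed.optim import ZeroRedundancyOptimizer

# Assumes torch.distributed.init_process_group(...) has already been
# called (e.g. by torchrun). ZeroRedundancyOptimizer shards the
# optimizer states (ZeRO level 1) across the ranks in that group,
# while parameters and gradients stay replicated.
model = torch.nn.Linear(2048, 2048).cuda()
optimizer = ZeroRedundancyOptimizer(
    model.parameters(),
    optimizer_class=torch.optim.Adam,  # the wrapped local optimizer
    lr=1e-3,
)
```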

### Example of ZeRO-3 Configurations

You can refer to the DeepSpeed configuration for details. Here we use Adam as the initial optimizer.

1. Use ZeRO to partition the optimizer states (level 1), gradients (level 2), and parameters (level 3).

```python
zero = dict(
    level=3,
    dynamic_loss_scale=True,
)
```


2. Additionally offload the optimizer states to the CPU memory.

```python
zero = dict(
    level=3,
    # offload the partitioned optimizer states to CPU
    offload_optimizer_config=dict(
        device='cpu',
        pin_memory=True,
        fast_init=True
    ),
    ...
)
```

3. Save even more memory by offloading parameters to the CPU memory.

```python
zero = dict(
    level=3,
    offload_optimizer_config=dict(
        device='cpu',
        pin_memory=True,
        fast_init=True
    ),
    # additionally offload the partitioned parameters to CPU
    offload_param_config=dict(
        device='cpu',
        pin_memory=True,
    ),
    ...
)
```


4. Save even more memory by offloading to NVMe (if available on your system).

```python
zero = dict(
    level=3,
    offload_optimizer_config=dict(
        device='nvme',
        pin_memory=True,
        fast_init=True,
        nvme_path='/nvme_data'
    ),
    offload_param_config=dict(
        device='nvme',
        pin_memory=True,
        nvme_path='/nvme_data'
    ),
    ...
)
```


Note that fp16 is automatically enabled when using ZeRO. This relies on `AMP_TYPE.NAIVE` in the Colossal-AI AMP module.

### Training

Note that if your model is too large to fit within the memory when using ZeRO-3, you should use `colossalai.zero.zero3_model_context` to construct your model:

```python
from colossalai.zero import zero3_model_context

# Build the model inside the context so its parameters are
# partitioned at construction time instead of being materialized
# in full on every process.
with zero3_model_context():
    model = Model()
```


Once you have completed your configuration, just use `colossalai.initialize()` to initialize your training.
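Putting it together, below is a minimal sketch of a training step driven by the engine returned from `colossalai.initialize()`. The `model`, `optimizer`, `criterion`, and `train_dataloader` names are placeholders assumed to be built beforehand, and the exact return signature may vary between Colossal-AI versions:

```python
import colossalai

# Assumes the distributed environment has been launched (e.g. with
# colossalai.launch_from_torch) and model, optimizer, criterion and
# train_dataloader have been constructed.
engine, train_dataloader, _, _ = colossalai.initialize(
    model, optimizer, criterion, train_dataloader
)

engine.train()
for img, label in train_dataloader:
    engine.zero_grad()
    output = engine(img)
    loss = engine.criterion(output, label)
    engine.backward(loss)  # ZeRO reduces and partitions the gradients
    engine.step()          # each rank updates only its optimizer shard
```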