# Migration
Switch to Soup from other fine-tuning tools in one command. Soup automatically converts your existing config files.
## Supported Tools
| Source | Config format | Command |
|---|---|---|
| LLaMA-Factory | YAML config | `soup migrate --from llamafactory config.yaml` |
| Axolotl | YAML config | `soup migrate --from axolotl config.yml` |
| Unsloth | Jupyter notebook | `soup migrate --from unsloth finetune.ipynb` |
## Usage
```bash
# Convert a LLaMA-Factory config to soup.yaml
soup migrate --from llamafactory llama3_lora_sft.yaml

# Convert an Axolotl config
soup migrate --from axolotl axolotl_config.yml

# Convert an Unsloth notebook (AST-only, no code execution)
soup migrate --from unsloth finetune.ipynb

# Preview without writing (dry run)
soup migrate --from llamafactory config.yaml --dry-run

# Custom output path
soup migrate --from axolotl config.yml --output my-soup.yaml
```

## What Gets Mapped
Soup maps all major config fields automatically:
- Model: `model_name_or_path`/`base_model` -> `base`
- Task: `stage`/`rl` -> `task` (`pt` -> `pretrain`, `rm` -> `reward_model`, `sft`/`dpo`/`kto`/`ppo`/`grpo`)
- LoRA: rank, alpha, dropout, target modules, DoRA
- Training: epochs, lr, batch size, optimizer, scheduler, warmup, quantization
- Data: dataset path, format, max length
- Output: output directory
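As a rough illustration, here is how a minimal LLaMA-Factory config might translate. The left-hand field names follow the mapping list above; the exact layout of the generated `soup.yaml` is an assumption for this sketch.

```yaml
# Hypothetical LLaMA-Factory input (llama3_lora_sft.yaml):
#   model_name_or_path: meta-llama/Meta-Llama-3-8B
#   stage: sft
#   lora_rank: 16
#   lora_alpha: 32
#   num_train_epochs: 3.0
#   output_dir: saves/llama3-8b/lora/sft

# Possible soup.yaml output (structure assumed for illustration):
base: meta-llama/Meta-Llama-3-8B
task: sft
lora:
  rank: 16
  alpha: 32
training:
  epochs: 3.0
output: saves/llama3-8b/lora/sft
```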
## Unsupported Features
Soup will warn about features that don't have a direct equivalent:
- `sample_packing` (Axolotl) — no Soup equivalent yet
- freeze training (LLaMA-Factory) — partial parameter freezing is not supported
- `use_rslora` (Unsloth) — now supported via `training.use_rslora` in Soup v0.21.0+
- `max_steps` — Soup uses epochs; `max_steps` is emitted as a comment
- DeepSpeed / W&B — emitted as CLI flag comments
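For example, an unsupported `max_steps` setting would survive in the generated file as a comment next to the converted training block rather than being dropped silently. The exact comment wording and YAML layout below are assumptions for illustration:

```yaml
training:
  epochs: 3.0
  # max_steps: 1000  (no direct Soup equivalent; adjust `epochs` instead)
```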
## Security
- Input files are validated to be within the current working directory
- Unsloth notebooks are parsed via AST only — no code is executed
- Output paths are confined to the current directory
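A containment check of the kind described above can be sketched in Python. This is a hypothetical helper, not Soup's actual implementation; the same logic could back both the input validation and the output-path confinement.

```python
from pathlib import Path

def is_within_cwd(user_path: str, cwd=None) -> bool:
    """Return True if user_path resolves to a location inside cwd.

    Rejects traversal sequences (e.g. ../../etc/passwd) and absolute
    paths that point outside the working directory.
    """
    base = (Path(cwd) if cwd is not None else Path.cwd()).resolve()
    # Joining with an absolute user_path discards `base`, so absolute
    # inputs are still checked against the resolved target below.
    target = (base / user_path).resolve()
    # Path.is_relative_to is available from Python 3.9
    return target.is_relative_to(base)
```

For instance, `is_within_cwd("config.yaml")` accepts a file in the working directory, while `is_within_cwd("../outside.yaml")` rejects a traversal outside it.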