
Commit d4b0a9d (1 parent: d4bcf7e)

Reorganized the documentation.

4 files changed: 8 additions and 4 deletions


docs/docs/examples/Getting Started.md

Lines changed: 3 additions & 3 deletions

@@ -63,8 +63,8 @@ Plato supports both Linux with NVIDIA GPUs and macOS with M1/M2/M3/M4 GPUs. It w
 
 - [Model Pruning Algorithms](algorithms/13.%20Model%20Pruning%20Algorithms.md)
 
-## Examples of using Plato's API
+## Case Studies
 
-- Composable Trainer API
+- [Federated LoRA Fine-Tuning](case-studies/1.%20LoRA.md)
 
-A simple example located at `examples/composable_trainer` has been provided to demonstrate the composable trainer API design with strategy composition.
+- [Composable Trainer API](case-studies/2.%20Composable%20Trainer.md)

docs/docs/examples/algorithms/15. LoRA Federated Fine-Tuning.md renamed to docs/docs/examples/case-studies/1. LoRA.md

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 # Federated LoRA Fine-Tuning
 
-Plato now provides first-class support for parameter-efficient LoRA fine-tuning of HuggingFace causal language models. The previous standalone example has been folded into the core framework.
+Plato now provides first-class support for parameter-efficient LoRA fine-tuning of HuggingFace causal language models.
 
 ---
 
docs/docs/examples/case-studies/2. Composable Trainer.md (new file)

Lines changed: 1 addition & 0 deletions

@@ -0,0 +1 @@
+A simple example located at `examples/composable_trainer` has been provided to demonstrate the composable trainer API design with strategy composition.
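The `examples/composable_trainer` example itself is not reproduced in this commit, but the strategy-composition pattern it demonstrates can be sketched roughly as follows. Everything in this snippet (`Trainer`, `with_strategy`, `scale_loss`, `log_loss`) is a hypothetical illustration of the design idea, not Plato's actual API: a trainer is assembled from small, independent strategy hooks that each transform the training state in turn.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# A "strategy" is any callable that takes the training state and
# returns the (possibly modified) state.
Strategy = Callable[[Dict], Dict]

@dataclass
class Trainer:
    strategies: List[Strategy] = field(default_factory=list)

    def with_strategy(self, strategy: Strategy) -> "Trainer":
        # returning self lets strategies be chained fluently
        self.strategies.append(strategy)
        return self

    def run_step(self, state: Dict) -> Dict:
        # strategies run in the order they were composed
        for strategy in self.strategies:
            state = strategy(state)
        return state

def scale_loss(factor: float) -> Strategy:
    def strategy(state: Dict) -> Dict:
        state["loss"] = state["loss"] * factor
        return state
    return strategy

def log_loss(log: List[float]) -> Strategy:
    def strategy(state: Dict) -> Dict:
        log.append(state["loss"])
        return state
    return strategy
```

The appeal of this design is that behaviors such as loss scaling, logging, or gradient clipping stay independent of one another and of the trainer core, so they can be mixed per experiment instead of subclassing the trainer.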

docs/mkdocs.yml

Lines changed: 3 additions & 0 deletions

@@ -54,6 +54,9 @@ nav:
     - Poisoning Detection: examples/algorithms/12. Poisoning Detection Algorithms.md
     - Model Pruning: examples/algorithms/13. Model Pruning Algorithms.md
     - Gradient Leakage Attacks and Defences: examples/algorithms/14. Gradient Leakage Attacks and Defences.md
+  - Case Studies:
+    - Federated LoRA Fine-Tuning: examples/case-studies/1. LoRA.md
+    - Composable Trainer API: examples/case-studies/2. Composable Trainer.md
   - Configuration Settings:
     - Overview: configurations/overview.md
     - General: configurations/general.md
