From 4f26370f8e32d5ac1e76081f4880813282cbadbd Mon Sep 17 00:00:00 2001
From: Adarsh N <44583199+adarshn656@users.noreply.github.com>
Date: Sun, 10 Aug 2025 22:57:58 +0530
Subject: [PATCH] docs: improve getting started readme

- Fixed an incomplete sentence in the finetuning section.
- Corrected typos and wrong alt text for several image badges.
- Added missing alt text to improve accessibility for the Llama API badges.
---
 getting-started/README.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/getting-started/README.md b/getting-started/README.md
index 754daa62d..d7722ded1 100644
--- a/getting-started/README.md
+++ b/getting-started/README.md
@@ -1,7 +1,7 @@
@@ -11,8 +11,8 @@
 If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory-level notebooks across different techniques relating to Meta Llama.
@@ -21,5 +21,5 @@ If you are new to developing with Meta Llama models, this is where you should st
 * The [Build_with_Llama API](./build_with_llama_api.ipynb) notebook highlights some of the features of [Llama API](https://llama.developer.meta.com?utm_source=llama-cookbook&utm_medium=readme&utm_campaign=getting_started).
 * The [inference](./inference/) folder contains scripts to deploy Llama for inference on server and mobile. See also [3p_integrations/vllm](../3p-integrations/vllm/) and [3p_integrations/tgi](../3p-integrations/tgi/) for hosting Llama on open-source model servers.
 * The [RAG](./RAG/) folder contains a simple Retrieval-Augmented Generation application using Llama.
-* The [finetuning](./finetuning/) folder contains resources to help you finetune Llama on your custom datasets, for both single- and multi-GPU setups. The scripts use the native llama-cookbook finetuning code found in [finetuning.py](../src/llama_cookbook/finetuning.py) which supports these features:
+* The [finetuning](./finetuning/) folder contains resources to help you finetune Llama on your custom datasets, for both single- and multi-GPU setups. The scripts use the native llama-cookbook finetuning code found in [finetuning.py](../src/llama_cookbook/finetuning.py) which supports these features.
 * The [llama-tools](./llama-tools/) folder contains resources to help you use Llama tools, such as [llama-prompt-ops](../llama-tools/llama-prompt-ops_101.ipynb).