Sanyam Bhutani 4 months ago
Parent
Commit
f7c8a8f41e

+ 1 - 1
.github/ISSUE_TEMPLATE/bug.yml

@@ -6,7 +6,7 @@ body:
     attributes:
       value: >
         #### Before submitting a bug, please make sure the issue hasn't been already addressed by searching through [the
-        existing and past issues](https://github.com/facebookresearch/llama-recipes/issues), the [FAQ](https://github.com/facebookresearch/llama-recipes/blob/main/docs/FAQ.md) 
+        existing and past issues](https://github.com/meta-llama/llama-cookbook/issues), the [FAQ](https://github.com/meta-llama/llama-cookbook/blob/main/src/docs/FAQ.md) 
 
   - type: textarea
     id: system-info

+ 1 - 1
.github/ISSUE_TEMPLATE/feature-request.yml

@@ -1,5 +1,5 @@
 name: 🚀 Feature request
-description: Submit a proposal/request for a new llama-recipes feature
+description: Submit a proposal/request for a new llama-cookbook feature
 
 body:
 - type: textarea

+ 1 - 1
.github/PULL_REQUEST_TEMPLATE.md

@@ -28,7 +28,7 @@ Logs for Test B
 
 ## Before submitting
 - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
-- [ ] Did you read the [contributor guideline](https://github.com/facebookresearch/llama-recipes/blob/main/CONTRIBUTING.md#pull-requests),
+- [ ] Did you read the [contributor guideline](https://github.com/meta-llama/llama-cookbook/blob/main/CONTRIBUTING.md),
       Pull Request section?
 - [ ] Was this discussed/approved via a Github issue? Please add a link
       to it if that's the case.

+ 5 - 5
.github/workflows/pytest_cpu_gha_runner.yaml

@@ -1,4 +1,4 @@
-name: "[GHA][CPU] llama-recipes Pytest tests on CPU GitHub hosted runner."
+name: "[GHA][CPU] llama-cookbook Pytest tests on CPU GitHub hosted runner."
 on:
   pull_request:
     branches:
@@ -35,7 +35,7 @@ jobs:
         run: |
             cat /etc/os-release
 
-      - name: "Checkout 'facebookresearch/llama-recipes' repository"
+      - name: "Checkout 'meta-llama/llama-cookbook' repository"
         id: checkout
         uses: actions/checkout@v4
 
@@ -53,10 +53,10 @@ jobs:
           pip3 install setuptools
 
 
-      - name: "Installing 'llama-recipes' project"
-        id: install_llama_recipes_package
+      - name: "Installing 'llama-cookbook' project"
+        id: install_llama_cookbook_package
         run: |
-          echo "Installing 'llama-recipes' project (re: https://github.com/facebookresearch/llama-recipes?tab=readme-ov-file#install-with-optional-dependencies)"
+          echo "Installing 'llama-cookbook' project (re: https://github.com/meta-llama/llama-cookbook/tree/main/src?tab=readme-ov-file#install-with-optional-dependencies)"
           pip install --extra-index-url ${PYTORCH_WHEEL_URL} -e '.[tests]'
 
 

+ 6 - 6
CONTRIBUTING.md

@@ -1,4 +1,4 @@
-# Contributing to llama-recipes
+# Contributing to llama-cookbook
 We want to make contributing to this project as easy and transparent as
 possible.
 
@@ -27,18 +27,18 @@ disclosure of security bugs. In those cases, please go through the process
 outlined on that page and do not file a public issue.
 
 ## License
-By contributing to llama-recipes, you agree that your contributions will be licensed
+By contributing to llama-cookbook, you agree that your contributions will be licensed
 under the LICENSE file in the root directory of this source tree.
 
 ## Tests
-Llama-recipes currently comes with a basic set of unit tests (covering the parts of the main training script and training loop) but we strive to increase our test coverage in the future in order to mitigate silent errors.
+Llama-cookbook currently comes with a basic set of unit tests (covering the parts of the main training script and training loop) but we strive to increase our test coverage in the future in order to mitigate silent errors.
 When submitting a new feature PR please make sure to cover the newly added code with a unit test.
 Run the tests locally to ensure the new feature does not break an old one.
-We use **pytest** for our unit tests and to run them locally you need to install llama-recipes with optional [tests] dependencies enabled:
+We use **pytest** for our unit tests and to run them locally you need to install llama-cookbook with optional [tests] dependencies enabled:
 ```
-pip install --extra-index-url https://download.pytorch.org/whl/test/cu118 llama-recipes[tests]
+pip install --extra-index-url https://download.pytorch.org/whl/test/cu118 llama-cookbook[tests]
 ```
-For development and contributing to llama-recipes please install from source with all optional dependencies:
+For development and contributing to llama-cookbook please install from source with all optional dependencies:
 ```
 pip install -U pip setuptools
 pip install --extra-index-url https://download.pytorch.org/whl/test/cu118 -e .[tests,auditnlg,vllm]

+ 3 - 3
end-to-end-use-cases/email_agent/README.md

@@ -114,8 +114,8 @@ source emailagent\Scripts\activate # on Windows
 
 Then install the required Python libraries:
 ```
-git clone https://github.com/meta-llama/llama-recipes
-cd llama-recipes/end-to-end-use-cases/email_agent
+git clone https://github.com/meta-llama/llama-cookbook
+cd llama-cookbook/end-to-end-use-cases/email_agent
 pip install -r requirements.txt
 ```
 
@@ -329,7 +329,7 @@ Tool calling returned: [{'message_id': '1936ef72ad3f30e8', 'sender': 'xxx@gmail.
 5. Letta's blog [The AI agents stack](https://www.letta.com/blog/ai-agents-stack)
 6. Microsoft's multi-agent system [Magentic-One](https://www.microsoft.com/en-us/research/articles/magentic-one-a-generalist-multi-agent-system-for-solving-complex-tasks)
 7. Amazon's [Multi-Agent Orchestrator framework](https://awslabs.github.io/multi-agent-orchestrator/)
-8. Deeplearning.ai's [agent related courses](https://www.deeplearning.ai/courses/?courses_date_desc%5Bquery%5D=agents) (Meta, AWS, Microsoft, LangChain, LlamaIndex, crewAI, AutoGen, Letta) and some [lessons ported to using Llama](https://github.com/meta-llama/llama-recipes/tree/main/recipes/quickstart/agents/DeepLearningai_Course_Notebooks). 
+8. Deeplearning.ai's [agent related courses](https://www.deeplearning.ai/courses/?courses_date_desc%5Bquery%5D=agents) (Meta, AWS, Microsoft, LangChain, LlamaIndex, crewAI, AutoGen, Letta) and some [lessons ported to using Llama](https://github.com/meta-llama/llama-cookbook/tree/main/end-to-end-use-cases/agents/DeepLearningai_Course_Notebooks). 
 9. Felicis's [The Agentic Web](https://www.felicis.com/insight/the-agentic-web)
 10. A pretty complete [list of AI agents](https://github.com/e2b-dev/awesome-ai-agents), not including [/dev/agents](https://sdsa.ai/), a very new startup building the next-gen OS for AI agents, though.
 11. Sequoia's [post](https://www.linkedin.com/posts/konstantinebuhler_the-ai-landscape-is-shifting-from-simple-activity-7270111755710672897-ZHnr/) on 2024 being the year of AI agents and 2025 networks of AI agents.

+ 4 - 4
pyproject.toml

@@ -3,14 +3,14 @@ requires = ["hatchling", "hatch-requirements-txt"]
 build-backend = "hatchling.build"
 
 [project]
-name = "llama-recipes"
+name = "llama-cookbook"
 version = "0.0.4.post1"
 authors = [
   { name="Hamid Shojanazeri", email="hamidnazeri@meta.com" },
   { name="Matthias Reso", email="mreso@meta.com" },
   { name="Geeta Chauhan", email="gchauhan@meta.com" },
 ]
-description = "Llama-recipes is a companion project to the Llama models. It's goal is to provide examples to quickly get started with fine-tuning for domain adaptation and how to run inference for the fine-tuned models."
+description = "Llama-cookbook is a companion project to the Llama models. It's goal is to provide examples to quickly get started with fine-tuning for domain adaptation and how to run inference for the fine-tuned models."
 readme = "README.md"
 requires-python = ">=3.8"
 classifiers = [
@@ -27,8 +27,8 @@ auditnlg = ["auditnlg"]
 langchain = ["langchain_openai", "langchain", "langchain_community"]
 
 [project.urls]
-"Homepage" = "https://github.com/facebookresearch/llama-recipes/"
-"Bug Tracker" = "https://github.com/facebookresearch/llama-recipes/issues"
+"Homepage" = "https://github.com/meta-llama/llama-cookbook"
+"Bug Tracker" = "https://github.com/meta-llama/llama-cookbook/issues"
 
 [tool.hatch.build]
 exclude = [

+ 1 - 1
src/llama_recipes/configs/datasets.py

@@ -28,7 +28,7 @@ class alpaca_dataset:
 @dataclass
 class custom_dataset:
     dataset: str = "custom_dataset"
-    file: str = "recipes/quickstart/finetuning/datasets/custom_dataset.py"
+    file: str = "getting-started/finetuning/datasets/custom_dataset.py"
     train_split: str = "train"
     test_split: str = "validation"
     data_path: str = ""
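The `custom_dataset.file` default updated above is the same field the test suite overrides with dotted keys such as `"custom_dataset.file"`. A minimal sketch of how such dotted-key overrides can map onto the dataclass (the `apply_overrides` helper is illustrative, not the library's verbatim API):

```python
from dataclasses import dataclass

# Mirror of the custom_dataset config in src/llama_recipes/configs/datasets.py
@dataclass
class custom_dataset:
    dataset: str = "custom_dataset"
    file: str = "getting-started/finetuning/datasets/custom_dataset.py"
    train_split: str = "train"
    test_split: str = "validation"
    data_path: str = ""

def apply_overrides(cfg, **kwargs):
    # Map dotted keys like "custom_dataset.file" onto fields of cfg,
    # keyed by the dataclass name (a sketch of the override mechanism).
    prefix = type(cfg).__name__ + "."
    for key, value in kwargs.items():
        if key.startswith(prefix):
            setattr(cfg, key[len(prefix):], value)
    return cfg

# Same override the unit tests pass as kwargs
cfg = apply_overrides(custom_dataset(),
                      **{"custom_dataset.train_split": "validation"})
print(cfg.train_split)  # validation
```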

+ 3 - 3
src/tests/datasets/test_custom_dataset.py

@@ -52,7 +52,7 @@ def test_custom_dataset(step_lr, optimizer, get_model, tokenizer, train, mocker,
     kwargs = {
         "dataset": "custom_dataset",
         "model_name": llama_version,
-        "custom_dataset.file": "recipes/quickstart/finetuning/datasets/custom_dataset.py",
+        "custom_dataset.file": "getting-started/finetuning/datasets/custom_dataset.py",
         "custom_dataset.train_split": "validation",
         "batch_size_training": 2,
         "val_batch_size": 4,
@@ -111,7 +111,7 @@ def test_unknown_dataset_error(step_lr, optimizer, tokenizer, get_model, get_con
 
     kwargs = {
         "dataset": "custom_dataset",
-        "custom_dataset.file": "recipes/quickstart/finetuning/datasets/custom_dataset.py:get_unknown_dataset",
+        "custom_dataset.file": "getting-started/finetuning/datasets/custom_dataset.py:get_unknown_dataset",
         "batch_size_training": 1,
         "use_peft": False,
         }
@@ -121,7 +121,7 @@ def test_unknown_dataset_error(step_lr, optimizer, tokenizer, get_model, get_con
 @pytest.mark.skip_missing_tokenizer
 @patch('llama_recipes.finetuning.AutoTokenizer')
 def test_tokenize_dialog(tokenizer, monkeypatch, setup_tokenizer, llama_version):
-    monkeypatch.syspath_prepend("recipes/quickstart/finetuning/datasets/")
+    monkeypatch.syspath_prepend("getting-started/finetuning/datasets/")
     from custom_dataset import tokenize_dialog
 
     setup_tokenizer(tokenizer)

+ 1 - 1
src/tests/test_chat_completion.py

@@ -8,7 +8,7 @@ import torch
 from llama_recipes.inference.chat_utils import read_dialogs_from_file
 
 ROOT_DIR = Path(__file__).parents[2]
-CHAT_COMPLETION_DIR = ROOT_DIR / "recipes/quickstart/inference/local_inference/chat_completion/"
+CHAT_COMPLETION_DIR = ROOT_DIR / "getting-started/inference/local_inference/chat_completion/"
 
 sys.path = [CHAT_COMPLETION_DIR.as_posix()] + sys.path
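The test above prepends the renamed `chat_completion` directory to `sys.path` so its modules can be imported directly. A standalone sketch of that pattern (using `Path.cwd()` as a stand-in for `Path(__file__).parents[2]`, since the repo root depends on where the test file lives):

```python
import sys
from pathlib import Path

# Stand-in for ROOT_DIR = Path(__file__).parents[2] in the test file
ROOT_DIR = Path.cwd()
CHAT_COMPLETION_DIR = ROOT_DIR / "getting-started/inference/local_inference/chat_completion"

# Prepending puts this directory first on the import path, so its
# modules shadow any same-named modules found later on sys.path.
sys.path = [CHAT_COMPLETION_DIR.as_posix()] + sys.path
print(sys.path[0])
```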