@@ -49,9 +49,9 @@
"1. **Set up an account with Llama**: You can use a Llama API key with a model such as `Llama-4-Maverick-17B-128E-Instruct-FP8`. You're not limited to this, however; you can choose any other inference provider's endpoint and its respective Llama models.\n",
"2. **Choose a Llama model or alternative**: Select a suitable Llama model for inference, such as `Llama-4-Maverick-17B-128E-Instruct-FP8`, or explore other Llama models available from your chosen inference provider.\n",
"3. **Create a Qdrant account**: Sign up for a Qdrant account and generate an access token.\n",
- "4. **Set up a Qdrant collection**: Use the provided script (`qdrant_setup_partial.py`) to create and populate a Qdrant collection.\n",
+ "4. **Set up a Qdrant collection**: Use the provided script (`setup_qdrant_collection.py`) to create and populate a Qdrant collection.\n",
"\n",
- "For more information on setting up a Qdrant collection, refer to the `qdrant_setup_partial.py` script. This script demonstrates how to process files, split them into chunks, and store them in a Qdrant collection.\n",
+ "For more information on setting up a Qdrant collection, refer to the `setup_qdrant_collection.py` script. This script demonstrates how to process files, split them into chunks, and store them in a Qdrant collection.\n",
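As a rough illustration of what a script like `setup_qdrant_collection.py` could do (the function and collection names here are hypothetical, and the embedding step is stubbed out; adapt it to your own files and embedding model):

```python
# Hypothetical sketch of processing files, splitting them into chunks, and
# storing them in a Qdrant collection. Names such as chunk_text, embed, and
# the "docs" collection are illustrative, not taken from the original script.

def chunk_text(text: str, chunk_size: int = 512, overlap: int = 64) -> list[str]:
    """Split text into overlapping character-based chunks."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks


def main() -> None:
    # Assumes the qdrant-client package and an embedding function of your choice.
    from qdrant_client import QdrantClient
    from qdrant_client.models import Distance, PointStruct, VectorParams

    client = QdrantClient(url="https://YOUR-CLUSTER-URL", api_key="YOUR_QDRANT_TOKEN")
    client.create_collection(
        collection_name="docs",
        vectors_config=VectorParams(size=384, distance=Distance.COSINE),
    )

    with open("my_file.txt", encoding="utf-8") as f:
        text = f.read()

    # embed() stands in for whatever embedding model you use; it must return
    # a vector matching the collection's configured size (384 here).
    points = [
        PointStruct(id=i, vector=embed(chunk), payload={"text": chunk})
        for i, chunk in enumerate(chunk_text(text))
    ]
    client.upsert(collection_name="docs", points=points)


if __name__ == "__main__":
    main()
```

The chunk size, overlap, vector size, and distance metric are all tunable; the values above are placeholders.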
"\n",
"Once you've completed these steps, you can define your configuration variables as follows:"
]
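The configuration variables referenced above might look something like the following sketch; the environment variable names and the base URL are assumptions, so substitute the values for your chosen inference provider and Qdrant cluster:

```python
import os

# Assumed environment variable names; adjust to your own setup.
LLAMA_API_KEY = os.environ.get("LLAMA_API_KEY", "")
# Placeholder endpoint; use your inference provider's actual base URL.
LLAMA_BASE_URL = "https://YOUR-PROVIDER-ENDPOINT/v1"
MODEL = "Llama-4-Maverick-17B-128E-Instruct-FP8"

QDRANT_URL = os.environ.get("QDRANT_URL", "")
QDRANT_API_KEY = os.environ.get("QDRANT_API_KEY", "")
```

Keeping the keys in environment variables (rather than hard-coding them in the notebook) avoids accidentally committing credentials.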