GPT Crawler

Crawl a site to generate knowledge files to create your own custom GPT from one or multiple URLs

[Gif showing the crawl run]

Example

Here is a custom GPT that I quickly made to help answer questions about how to use and integrate Builder.io by simply providing the URL to the Builder docs.

This project crawled the docs and generated the file that I uploaded as the basis for the custom GPT.

Try it out yourself by asking questions about how to integrate Builder.io into a site.

Note that you may need a paid ChatGPT plan to access this feature

Get started

Running locally

Clone the repository

Be sure you have Node.js >= 16 installed.

git clone https://github.com/builderio/gpt-crawler

Install dependencies

npm i

Configure the crawler

Open config.ts and edit the url and selector properties to match your needs.

E.g. to crawl the Builder.io docs to make our custom GPT you can use:

export const defaultConfig: Config = {
  url: "https://www.builder.io/c/docs/developers",
  match: "https://www.builder.io/c/docs/**",
  selector: `.docs-builder-container`,
  maxPagesToCrawl: 50,
  outputFileName: "output.json",
};

See config.ts for all available options. Here is a sample of the common config options:

type Config = {
  /** URL to start the crawl */
  url: string;
  /** Pattern to match against for links on a page to subsequently crawl */
  match: string;
  /** Selector to grab the inner text from */
  selector: string;
  /** Don't crawl more than this many pages */
  maxPagesToCrawl: number;
  /** File name for the finished data */
  outputFileName: string;
};
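
For example, to restrict a crawl to a single documentation subtree, point url at that section's landing page and give match a glob that only covers the subtree. This is a minimal sketch; the site, selector, and limits below are illustrative, not taken from this repository:

export const defaultConfig: Config = {
  // Illustrative starting page; replace with the docs you want to crawl
  url: "https://example.com/docs",
  // Only links matching this glob are followed, keeping the crawl inside /docs/
  match: "https://example.com/docs/**",
  // CSS selector whose inner text becomes the knowledge file content
  selector: "main",
  maxPagesToCrawl: 100,
  outputFileName: "output.json",
};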

Run your crawler

npm start

Alternative methods

Running in a container with Docker

To obtain output.json from a containerized run, go into the containerapp directory and modify config.ts as described above. The output.json file will be generated in the data folder. Note: the outputFileName property in the config.ts file inside the containerapp folder is preconfigured to work with the container.
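
A minimal sketch of that workflow, assuming a standard Docker setup; the image tag and the in-container mount path are assumptions, so check the repository's Dockerfile before relying on them:

cd containerapp
# Build the image (the tag gpt-crawler is illustrative)
docker build -t gpt-crawler .
# Mount the local data folder so output.json lands on the host;
# /app/data is an assumed container path, verify it against the Dockerfile
docker run -v "$(pwd)/data:/app/data" gpt-crawler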

Upload your data to OpenAI

The crawl will generate a file called output.json at the root of this project. Upload that to OpenAI to create your custom assistant or custom GPT.
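
Illustratively, each crawled page becomes one entry in a JSON array. The exact field names here are an assumption and may vary between crawler versions, but the file looks roughly like this:

[
  {
    "title": "Intro to Builder.io",
    "url": "https://www.builder.io/c/docs/developers",
    "html": "Text extracted from the element matched by selector..."
  }
]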

Create a custom GPT

Use this option for UI access to your generated knowledge that you can easily share with others.

Note: you may need a paid ChatGPT plan to create and use custom GPTs right now

  1. Go to https://chat.openai.com/
  2. Click your name in the bottom left corner
  3. Choose "My GPTs" in the menu
  4. Choose "Create a GPT"
  5. Choose "Configure"
  6. Under "Knowledge" choose "Upload a file" and upload the file you generated

[Gif of how to upload a custom GPT]

Create a custom assistant

Use this option for API access to your generated knowledge that you can integrate into your product.

  1. Go to https://platform.openai.com/assistants
  2. Click "+ Create"
  3. Choose "upload" and upload the file you generated

[Gif of how to upload to an assistant]
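
If you prefer to script this step, here is a minimal sketch using the openai Node SDK. It follows the original Assistants API shape (a retrieval tool plus file_ids); newer API versions replace this with vector stores, so treat the exact field names as version-dependent:

import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function createAssistant() {
  // Upload the crawler output as a knowledge file
  const file = await openai.files.create({
    file: fs.createReadStream("output.json"),
    purpose: "assistants",
  });

  // Create an assistant that can retrieve answers from the uploaded file
  const assistant = await openai.beta.assistants.create({
    name: "Docs Helper", // illustrative name
    model: "gpt-4-1106-preview",
    tools: [{ type: "retrieval" }],
    file_ids: [file.id],
  });

  console.log("Created assistant:", assistant.id);
}

createAssistant().catch(console.error);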

Contributing

Know how to make this project better? Send a PR!



Made with love by Builder.io