diff --git a/.gitignore b/.gitignore index ec389aa6..452e2aa7 100644 --- a/.gitignore +++ b/.gitignore @@ -6,4 +6,7 @@ node_modules build/ dist/ out/ -.next/ \ No newline at end of file +.next/ +build +package-lock.json +.env \ No newline at end of file diff --git a/build/1.0.0/category/agent-frameworks-and-apps/index.html b/build/1.0.0/category/agent-frameworks-and-apps/index.html index 3d8184d8..0829f77d 100644 --- a/build/1.0.0/category/agent-frameworks-and-apps/index.html +++ b/build/1.0.0/category/agent-frameworks-and-apps/index.html @@ -2,7 +2,7 @@ - + Agent frameworks and apps | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,6 +31,6 @@ -
Skip to main content
Version: 1.0.0

Agent frameworks and apps

How to use a GaiaNet node as the backend service for an agent framework or app, replacing the OpenAI API.

๐Ÿ“„๏ธ Obsidian

Obsidian is a note-taking application that enables users to create, link, and visualize ideas directly on their devices. With Obsidian, you can seamlessly sync notes across devices, publish your work, and collaborate with others. The app is highly customizable, allowing users to enhance functionality through a wide range of plugins and themes. Its unique features include a graph view to visualize connections between notes, making it ideal for managing complex information and fostering creativity. Obsidian also emphasizes data privacy by storing notes locally.

+
Skip to main content
Version: 1.0.0

Agent frameworks and apps

How to use a GaiaNet node as the backend service for an agent framework or app, replacing the OpenAI API.
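Because a Gaia node serves an OpenAI-compatible API, pointing an app at it is usually just a matter of swapping the API base URL. A minimal sketch, assuming a hypothetical node at `https://YOUR-NODE-ID.gaia.domains`:

```shell
# Query a Gaia node's OpenAI-compatible chat endpoint.
# The node URL below is a placeholder -- substitute your own node's address.
curl -X POST https://YOUR-NODE-ID.gaia.domains/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "What is GaiaNet?"}
        ]
      }'
```

Most OpenAI-ecosystem apps expose a configurable API base URL; setting it to the node's `/v1` endpoint is typically all that is required.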

๐Ÿ“„๏ธ Obsidian

Obsidian is a note-taking application that enables users to create, link, and visualize ideas directly on their devices. With Obsidian, you can seamlessly sync notes across devices, publish your work, and collaborate with others. The app is highly customizable, allowing users to enhance functionality through a wide range of plugins and themes. Its unique features include a graph view to visualize connections between notes, making it ideal for managing complex information and fostering creativity. Obsidian also emphasizes data privacy by storing notes locally.

\ No newline at end of file diff --git a/build/1.0.0/category/creator-guide/index.html b/build/1.0.0/category/creator-guide/index.html index 2eb96918..a2d179d5 100644 --- a/build/1.0.0/category/creator-guide/index.html +++ b/build/1.0.0/category/creator-guide/index.html @@ -2,7 +2,7 @@ - + Creator Guide | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,6 +31,6 @@ -
Skip to main content
+
Skip to main content
\ No newline at end of file diff --git a/build/1.0.0/category/domain-operator-guide/index.html b/build/1.0.0/category/domain-operator-guide/index.html index 844b26f7..17aa53ec 100644 --- a/build/1.0.0/category/domain-operator-guide/index.html +++ b/build/1.0.0/category/domain-operator-guide/index.html @@ -2,7 +2,7 @@ - + Domain Operator Guide | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,6 +31,6 @@ -
Skip to main content
+
Skip to main content
\ No newline at end of file diff --git a/build/1.0.0/category/gaianet-node-with-finetuned-llms/index.html b/build/1.0.0/category/gaianet-node-with-finetuned-llms/index.html index 2da43a93..b3b899c9 100644 --- a/build/1.0.0/category/gaianet-node-with-finetuned-llms/index.html +++ b/build/1.0.0/category/gaianet-node-with-finetuned-llms/index.html @@ -2,7 +2,7 @@ - + GaiaNet Node with finetuned LLMs | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,6 +31,6 @@ -
Skip to main content
+
Skip to main content
\ No newline at end of file diff --git a/build/1.0.0/category/how-do-i-/index.html b/build/1.0.0/category/how-do-i-/index.html index 626fcc89..000cd3f1 100644 --- a/build/1.0.0/category/how-do-i-/index.html +++ b/build/1.0.0/category/how-do-i-/index.html @@ -2,7 +2,7 @@ - + How do I ... | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,6 +31,6 @@ -
Skip to main content
+
Skip to main content
\ No newline at end of file diff --git a/build/1.0.0/category/knowledge-bases/index.html b/build/1.0.0/category/knowledge-bases/index.html index 00929b86..f943af4a 100644 --- a/build/1.0.0/category/knowledge-bases/index.html +++ b/build/1.0.0/category/knowledge-bases/index.html @@ -2,7 +2,7 @@ - + Knowledge bases | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,6 +31,6 @@ -
Skip to main content
Version: 1.0.0

Knowledge bases

How to create vector collections based on your own proprietary and private knowledge

+
Skip to main content
Version: 1.0.0

Knowledge bases

How to create vector collections based on your own proprietary and private knowledge

\ No newline at end of file diff --git a/build/1.0.0/category/node-operator-guide/index.html b/build/1.0.0/category/node-operator-guide/index.html index 691c6da1..f1479180 100644 --- a/build/1.0.0/category/node-operator-guide/index.html +++ b/build/1.0.0/category/node-operator-guide/index.html @@ -2,7 +2,7 @@ - + Node Operator Guide | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,6 +31,6 @@ -
Skip to main content
+
Skip to main content
\ No newline at end of file diff --git a/build/1.0.0/category/tutorial/index.html b/build/1.0.0/category/tutorial/index.html index 0b007dc3..74b26820 100644 --- a/build/1.0.0/category/tutorial/index.html +++ b/build/1.0.0/category/tutorial/index.html @@ -2,7 +2,7 @@ - + Tutorial | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,6 +31,6 @@ -
Skip to main content
+
Skip to main content
\ No newline at end of file diff --git a/build/1.0.0/category/user-guide/index.html b/build/1.0.0/category/user-guide/index.html index 5894d9c1..bb4614c8 100644 --- a/build/1.0.0/category/user-guide/index.html +++ b/build/1.0.0/category/user-guide/index.html @@ -2,7 +2,7 @@ - + User Guide | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,6 +31,6 @@ -
Skip to main content
+
Skip to main content
\ No newline at end of file diff --git a/build/1.0.0/creator-guide/finetune/intro/index.html b/build/1.0.0/creator-guide/finetune/intro/index.html index e498383c..bdd2ed30 100644 --- a/build/1.0.0/creator-guide/finetune/intro/index.html +++ b/build/1.0.0/creator-guide/finetune/intro/index.html @@ -2,7 +2,7 @@ - + Fine-tune LLMs | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
Skip to main content
Version: 1.0.0

Fine-tune LLMs

+
Version: 1.0.0

Fine-tune LLMs

You could fine-tune an open-source LLM to

  • Teach it to follow conversations.
  • diff --git a/build/1.0.0/creator-guide/finetune/llamacpp/index.html b/build/1.0.0/creator-guide/finetune/llamacpp/index.html index aced42f8..9389c107 100644 --- a/build/1.0.0/creator-guide/finetune/llamacpp/index.html +++ b/build/1.0.0/creator-guide/finetune/llamacpp/index.html @@ -2,7 +2,7 @@ - + llama.cpp | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
    Version: 1.0.0

    llama.cpp

    +
    Version: 1.0.0

    llama.cpp

    The popular llama.cpp tool comes with a finetune utility. It works well on CPUs! This fine-tune guide is reproduced with permission from Tony Yuan's Finetune an open-source LLM for the chemistry subject project.

    Build the fine-tune utility from llama.cpp
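As a sketch, the build follows llama.cpp's standard CMake flow; the `finetune` target name matches the era of llama.cpp when the finetune example shipped in-tree, and may differ in newer releases:

```shell
# Clone llama.cpp and build the finetune example (CPU build).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build && cd build
cmake ..
cmake --build . --config Release --target finetune
# in a typical CMake layout the binary lands under build/bin/
```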

    diff --git a/build/1.0.0/creator-guide/knowledge/concepts/index.html b/build/1.0.0/creator-guide/knowledge/concepts/index.html index 988d7409..a0de99b5 100644 --- a/build/1.0.0/creator-guide/knowledge/concepts/index.html +++ b/build/1.0.0/creator-guide/knowledge/concepts/index.html @@ -2,7 +2,7 @@ - + Gaia nodes with long-term knowledge | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
    Version: 1.0.0

    Gaia nodes with long-term knowledge

    +
    Version: 1.0.0

    Gaia nodes with long-term knowledge

    An LLM app requires both long-term and short-term memory. Long-term memory includes factual knowledge, historical facts, background stories, etc. Such material is best added to the context as complete chapters, rather than as small chunks of text, to maintain the internal consistency of the knowledge.

    RAG is an important technique to inject contextual knowledge into an LLM application. It improves accuracy and reduces the hallucination of LLMs. @@ -46,7 +47,7 @@

    For example, if you ask ChatGPT the question "What is Layer 2?", the answer is that Layer 2 is a concept from computer networking. However, if you ask a blockchain person, they will answer that Layer 2 is a way to scale the original Ethereum network. That is the difference between a generic LLM and knowledge-supplemented LLMs.

    -

    We will cover how to prepare external knowledge and how a knowledge-supplemented LLM completes a conversation. If you already know how a RAG application works, go to Build a RAG application with Gaia to start building one.

    +

    We will cover how to prepare external knowledge and how a knowledge-supplemented LLM completes a conversation. If you already know how a RAG application works, go to Build a RAG application with Gaia to start building one.

    1. Create embeddings for your own knowledge as the long-term memory.
    2. Lifecycle of a user query on a knowledge-supplemented LLM.
    3. diff --git a/build/1.0.0/creator-guide/knowledge/csv/index.html b/build/1.0.0/creator-guide/knowledge/csv/index.html index 5e65ecfd..5c5dda04 100644 --- a/build/1.0.0/creator-guide/knowledge/csv/index.html +++ b/build/1.0.0/creator-guide/knowledge/csv/index.html @@ -2,7 +2,7 @@ - + Knowledge base from source / summary pairs | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
      Version: 1.0.0

      Knowledge base from source / summary pairs

      +
      Version: 1.0.0

      Knowledge base from source / summary pairs

      In this section, we will discuss how to create a vector collection snapshot for optimal retrieval of long-form text documents. The approach is to create two columns of text in a CSV file.

        diff --git a/build/1.0.0/creator-guide/knowledge/firecrawl/index.html b/build/1.0.0/creator-guide/knowledge/firecrawl/index.html index cdc9c633..88c82401 100644 --- a/build/1.0.0/creator-guide/knowledge/firecrawl/index.html +++ b/build/1.0.0/creator-guide/knowledge/firecrawl/index.html @@ -2,7 +2,7 @@ - + Knowledge base from a URL | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
        Version: 1.0.0

        Knowledge base from a URL

        +
        Version: 1.0.0

        Knowledge base from a URL

        In this section, we will discuss how to create a vector collection snapshot from a Web URL. First, we will parse the URL to a structured markdown file. Then, we will follow the steps from Knowledge base from a markdown file to create embeddings for your URL content.

        Parse the URL content to a markdown file

        Firecrawl can crawl and convert any website into LLM-ready markdown or structured data. It also supports crawling a URL and all accessible subpages.
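As a sketch, Firecrawl's hosted API can be called over HTTP. The endpoint path and response shape below follow Firecrawl's v0 API and may have changed since, so check the current Firecrawl docs:

```shell
# Hypothetical Firecrawl scrape call; requires a FIRECRAWL_API_KEY env variable.
curl -X POST https://api.firecrawl.dev/v0/scrape \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{"url": "https://example.com/docs/page"}'
# the response JSON carries the page content as LLM-ready markdown
```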

        diff --git a/build/1.0.0/creator-guide/knowledge/markdown/index.html b/build/1.0.0/creator-guide/knowledge/markdown/index.html index ad106180..6d7c877f 100644 --- a/build/1.0.0/creator-guide/knowledge/markdown/index.html +++ b/build/1.0.0/creator-guide/knowledge/markdown/index.html @@ -2,7 +2,7 @@ - + Knowledge base from a markdown file | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
        Version: 1.0.0

        Knowledge base from a markdown file

        +
        Version: 1.0.0

        Knowledge base from a markdown file

        In this section, we will discuss how to create a vector collection snapshot from a markdown file. The snapshot file can then be loaded by a Gaia node as its knowledge base.

        The markdown file is segmented into multiple sections by headings. See an example. Each section is turned into a vector, and when diff --git a/build/1.0.0/creator-guide/knowledge/pdf/index.html b/build/1.0.0/creator-guide/knowledge/pdf/index.html index 500eca92..42aff690 100644 --- a/build/1.0.0/creator-guide/knowledge/pdf/index.html +++ b/build/1.0.0/creator-guide/knowledge/pdf/index.html @@ -2,7 +2,7 @@ - + Knowledge base from a PDF file | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -

        Version: 1.0.0

        Knowledge base from a PDF file

        +
        Version: 1.0.0

        Knowledge base from a PDF file

        In this section, we will discuss how to create a vector collection snapshot from a PDF file. First, we will parse the unstructured PDF file to a structured markdown file. Then, we will follow the steps from Knowledge base from a markdown file to create embeddings for your PDF files.

        Tools to convert a PDF file to a markdown file

        Tool #1: LlamaParse

        diff --git a/build/1.0.0/creator-guide/knowledge/text/index.html b/build/1.0.0/creator-guide/knowledge/text/index.html index 610a4393..c535955b 100644 --- a/build/1.0.0/creator-guide/knowledge/text/index.html +++ b/build/1.0.0/creator-guide/knowledge/text/index.html @@ -2,7 +2,7 @@ - + Knowledge base from a plain text file | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
        Version: 1.0.0

        Knowledge base from a plain text file

        +
        Version: 1.0.0

        Knowledge base from a plain text file

        In this section, we will discuss how to create a vector collection snapshot from a plain text file. The snapshot file can then be loaded by a Gaia node as its knowledge base.

        The text file is segmented into multiple chunks by blank lines. See an example. Each chunk is turned into a vector, and when diff --git a/build/1.0.0/creator-guide/knowledge/web-tool/index.html b/build/1.0.0/creator-guide/knowledge/web-tool/index.html index 1ab4a371..d4566455 100644 --- a/build/1.0.0/creator-guide/knowledge/web-tool/index.html +++ b/build/1.0.0/creator-guide/knowledge/web-tool/index.html @@ -2,7 +2,7 @@ - + Build a knowledge base using Gaia web tool | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -

        Version: 1.0.0

        Build a knowledge base using Gaia web tool

        +
        Version: 1.0.0

        Build a knowledge base using Gaia web tool

        GaiaNet has developed a tool for making vector collection snapshot files, so everyone can easily create their own knowledge base.

        Access it here: https://tools.gaianet.xyz/

        Segment your text file

        diff --git a/build/1.0.0/domain-guide/quick-start/index.html b/build/1.0.0/domain-guide/quick-start/index.html index f4971fb9..9901a5c5 100644 --- a/build/1.0.0/domain-guide/quick-start/index.html +++ b/build/1.0.0/domain-guide/quick-start/index.html @@ -2,7 +2,7 @@ - + Quick Start with Launching Gaia Domain | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
        Version: 1.0.0

        Quick Start with Launching Gaia Domain

        +
        Version: 1.0.0

        Quick Start with Launching Gaia Domain

        This guide provides all the information you need to quickly set up and run a Gaia Domain.

        Note: Ensure that you are the owner of a Gaia Domain Name before proceeding. You can verify your Gaia Domain Name in the "Assets" section of your profile.

        diff --git a/build/1.0.0/intro/index.html b/build/1.0.0/intro/index.html index 9d52e8d5..21dfabdf 100644 --- a/build/1.0.0/intro/index.html +++ b/build/1.0.0/intro/index.html @@ -2,7 +2,7 @@ - + Overview | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
        Version: 1.0.0

        Overview

        +
        Version: 1.0.0

        Overview

        GaiaNet is a decentralized computing infrastructure that enables anyone, from individuals to businesses, to create, deploy, scale, and monetize their own AI agents that reflect their styles, values, knowledge, and expertise. Each GaiaNet node provides:

          diff --git a/build/1.0.0/litepaper/index.html b/build/1.0.0/litepaper/index.html index 15d996e3..a1a4c384 100644 --- a/build/1.0.0/litepaper/index.html +++ b/build/1.0.0/litepaper/index.html @@ -2,7 +2,7 @@ - + GaiaNet: GenAI Agent Network | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
          Version: 1.0.0

          GaiaNet: GenAI Agent Network

          +
          Version: 1.0.0

          GaiaNet: GenAI Agent Network

          Abstract

          Specialized, finetuned and RAG-enhanced open-source Large Language Models are key elements in emerging AI agent applications. However, those agent apps also present unique challenges to the traditional cloud computing and SaaS infrastructure, including new requirements for application portability, virtualization, security isolation, costs, data privacy, and ownership.

          GaiaNet is a decentralized computing infrastructure that enables everyone to create, deploy, scale, and monetize their own AI agents that reflect their styles, values, knowledge, and expertise. A GaiaNet node consists of a high-performance and cross-platform application runtime, a finetuned LLM, a knowledge embedding model, a vector database, a prompt manager, an open API server, and a plugin system for calling external tools and functions using LLM outputs. It can be deployed by any knowledge worker as a digital twin and offered as a web API service. A new class of tradeable assets and a marketplace could be created from individualized knowledge bases and components. Similar GaiaNet nodes are organized into GaiaNet domains, which offer trusted and reliable AI agent services to the public. GaiaNet nodes and domains are governed by the GaiaNet DAO (Decentralized Autonomous Organization). Through Purpose Bound Money smart contracts, the GaiaNet network is a decentralized marketplace for AI agent services.

          diff --git a/build/1.0.0/node-guide/cli-options/index.html b/build/1.0.0/node-guide/cli-options/index.html index 97b75707..456d45dd 100644 --- a/build/1.0.0/node-guide/cli-options/index.html +++ b/build/1.0.0/node-guide/cli-options/index.html @@ -2,7 +2,7 @@ - + GaiaNet CLI options | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
          Version: 1.0.0

          GaiaNet CLI options

          +
          Version: 1.0.0

          GaiaNet CLI options

          After installing the GaiaNet software, you can use the gaianet CLI to manage the node. The following are the CLI options.

          help

          You can use gaianet --help to check all the available CLI options.
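As a sketch, the subcommands below are the ones commonly used to manage a node; treat the inline descriptions as assumptions and rely on the `--help` output as the authoritative list:

```shell
gaianet --help
# Commonly used subcommands (verify against the --help output):
gaianet init     # download the model and knowledge-base files named in config.json
gaianet start    # start the node
gaianet stop     # stop the node
gaianet info     # print the node ID and device ID
```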

          diff --git a/build/1.0.0/node-guide/customize/index.html b/build/1.0.0/node-guide/customize/index.html index 2178483b..aa646d18 100644 --- a/build/1.0.0/node-guide/customize/index.html +++ b/build/1.0.0/node-guide/customize/index.html @@ -2,7 +2,7 @@ - + Customize Your GaiaNet Node | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
          Version: 1.0.0

          Customize Your GaiaNet Node

          +
          Version: 1.0.0

          Customize Your GaiaNet Node

          A key goal of the GaiaNet project is to enable each individual to create and run his or her own agent service node using finetuned LLMs and proprietary knowledge. In all likelihood, you are not going to run a node with the default Llama 3.2 3B LLM and Paris guidebook knowledge base. diff --git a/build/1.0.0/node-guide/install_uninstall/index.html b/build/1.0.0/node-guide/install_uninstall/index.html index e86fb840..dff30839 100644 --- a/build/1.0.0/node-guide/install_uninstall/index.html +++ b/build/1.0.0/node-guide/install_uninstall/index.html @@ -2,7 +2,7 @@ - + Install and uninstall | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -

          Version: 1.0.0

          Install and uninstall

          +
          Version: 1.0.0

          Install and uninstall

          The GaiaNet node software is version-controlled in its source GitHub repo. You can check out the GaiaNet node versions on the release page.

          Install

          You can install the WasmEdge Runtime on any generic Linux or macOS platform.

          diff --git a/build/1.0.0/node-guide/quick-start/index.html b/build/1.0.0/node-guide/quick-start/index.html index 05e4a44d..44a75200 100644 --- a/build/1.0.0/node-guide/quick-start/index.html +++ b/build/1.0.0/node-guide/quick-start/index.html @@ -2,7 +2,7 @@ - + Quick start with GaiaNet Node | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
          Version: 1.0.0

          Quick start with GaiaNet Node

          +
          Version: 1.0.0

          Quick start with GaiaNet Node

          This guide provides everything you need to quickly install and run a GaiaNet node.
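The quick start condenses to a few commands. The installer URL below is the release asset path the project has used; verify it against the current docs before running:

```shell
# Download and run the GaiaNet installer, then initialize and start the node.
curl -sSfL 'https://github.com/GaiaNet-AI/gaianet-node/releases/latest/download/install.sh' | bash
gaianet init
gaianet start
```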

          Prerequisites

          Before you get started, ensure that you have the following on your system:

          diff --git a/build/1.0.0/node-guide/register/index.html b/build/1.0.0/node-guide/register/index.html index 8204f145..b8d42c7b 100644 --- a/build/1.0.0/node-guide/register/index.html +++ b/build/1.0.0/node-guide/register/index.html @@ -2,7 +2,7 @@ - + Joining the Gaia Protocol | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
          Version: 1.0.0

          Joining the Gaia Protocol

          +
          Version: 1.0.0

          Joining the Gaia Protocol

          To join the Gaia protocol, you will need to complete the following two tasks.

          • Bind your node by connecting your node ID and device ID.
          • diff --git a/build/1.0.0/node-guide/system-requirements/index.html b/build/1.0.0/node-guide/system-requirements/index.html index 6a86f7ba..9c6eea28 100644 --- a/build/1.0.0/node-guide/system-requirements/index.html +++ b/build/1.0.0/node-guide/system-requirements/index.html @@ -2,7 +2,7 @@ - + System requirements | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
            Version: 1.0.0

            System requirements

            +
            Version: 1.0.0

            System requirements

            You can install a GaiaNet node on a wide variety of devices and operating systems, with or without GPUs. The node installation and operating instructions work on devices ranging from Raspberry Pis and MacBooks to Linux servers, Windows desktops, and cloud-based Nvidia H100 clusters. For institutional operators, we recommend either of the following for a GaiaNet node.

            • Mac desktop or server computers (e.g., iMac, Mini, Studio, or Pro) with Apple Silicon (M1 to M4), and at least 16GB of RAM (32GB or more recommended).
            • diff --git a/build/1.0.0/node-guide/tasks/aws/index.html b/build/1.0.0/node-guide/tasks/aws/index.html index 45c2cce4..19c416c4 100644 --- a/build/1.0.0/node-guide/tasks/aws/index.html +++ b/build/1.0.0/node-guide/tasks/aws/index.html @@ -2,7 +2,7 @@ - + Start a node on AWS using AMI images | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
              Version: 1.0.0

              Start a node on AWS using AMI images

              +
              Version: 1.0.0

              Start a node on AWS using AMI images

              We have created a series of public AMIs for you to start GaiaNet nodes in AWS with just a few clicks.

              Three AMI images are currently available in the Asia Pacific (Osaka) region and in all the US regions, including N. Virginia, Ohio, N. California, and Oregon.

              AMI Image Name | Architecture | Regions
              GaiaNet_ubuntu22.04_amd64_cuda12 | GPU | N. Virginia, Ohio, N. California, Oregon, and Osaka
              GaiaNet_ubuntu22.04_amd64 | x86 CPU machines | N. Virginia, Ohio, N. California, Oregon, and Osaka
              GaiaNet_ubuntu22.04_arm64 | ARM CPU machines | N. Virginia, Ohio, N. California, Oregon, and Osaka
              diff --git a/build/1.0.0/node-guide/tasks/cuda/index.html b/build/1.0.0/node-guide/tasks/cuda/index.html index 62e112b1..8ab09d51 100644 --- a/build/1.0.0/node-guide/tasks/cuda/index.html +++ b/build/1.0.0/node-guide/tasks/cuda/index.html @@ -2,7 +2,7 @@ - + Install CUDA on Linux | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
              Version: 1.0.0

              Install CUDA on Linux

              +
              Version: 1.0.0

              Install CUDA on Linux

              If you are using an Nvidia-enabled VM instance from a public cloud, you should probably use the VM image provided by the cloud, which typically has the correct versions of the Nvidia driver and CUDA toolkit already installed. Read on if you need to install the Nvidia driver and CUDA toolkit on your own machine.

              Ubuntu 22.04

              diff --git a/build/1.0.0/node-guide/tasks/docker/index.html b/build/1.0.0/node-guide/tasks/docker/index.html index a85ffa6e..645d68a7 100644 --- a/build/1.0.0/node-guide/tasks/docker/index.html +++ b/build/1.0.0/node-guide/tasks/docker/index.html @@ -2,7 +2,7 @@ - + Start a node with Docker | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
              Version: 1.0.0

              Start a node with Docker

              +
              Version: 1.0.0

              Start a node with Docker

              You can run all the commands in this document without any change on any machine with the latest Docker and at least 8GB of RAM available to the container. By default, the container uses the CPU to perform computations, which could be slow for large LLMs. For GPUs,

                diff --git a/build/1.0.0/node-guide/tasks/local/index.html b/build/1.0.0/node-guide/tasks/local/index.html index a770364b..21601112 100644 --- a/build/1.0.0/node-guide/tasks/local/index.html +++ b/build/1.0.0/node-guide/tasks/local/index.html @@ -2,7 +2,7 @@ - + Run a local-only node | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
                Version: 1.0.0

                Run a local-only node

                +
                Version: 1.0.0

                Run a local-only node

                By default, the GaiaNet node registers itself with a GaiaNet domain and is accessible to the public. For many users, it can also be important to start a local server for testing. To do that, you just need to pass the --local-only option.
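A minimal sketch of starting a node for local testing:

```shell
# Serve only on localhost; the node is not registered with a public GaiaNet domain.
gaianet start --local-only
```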

                diff --git a/build/1.0.0/node-guide/tasks/multiple/index.html b/build/1.0.0/node-guide/tasks/multiple/index.html index ce88e27d..e2e8182b 100644 --- a/build/1.0.0/node-guide/tasks/multiple/index.html +++ b/build/1.0.0/node-guide/tasks/multiple/index.html @@ -2,7 +2,7 @@ - + Install multiple nodes on a single machine | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
                Version: 1.0.0

                Install multiple nodes on a single machine

                +
                Version: 1.0.0

                Install multiple nodes on a single machine

                The default GaiaNet installer installs the node into the $HOME/gaianet base directory. You could install multiple nodes on the same machine. Each node has its own "base directory". To do that, you just need to use the --base option.
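A sketch of running a second node from a non-default base directory; the `$HOME/node-2` path is hypothetical, and each node's config.json would also need to listen on a distinct port:

```shell
# Hypothetical second node rooted at $HOME/node-2 instead of the default $HOME/gaianet.
gaianet init --base $HOME/node-2
gaianet start --base $HOME/node-2
```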

                diff --git a/build/1.0.0/node-guide/tasks/protect/index.html b/build/1.0.0/node-guide/tasks/protect/index.html index f6d315d2..665cdf94 100644 --- a/build/1.0.0/node-guide/tasks/protect/index.html +++ b/build/1.0.0/node-guide/tasks/protect/index.html @@ -2,7 +2,7 @@ - + Protect the server process | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
                Version: 1.0.0

                Protect the server process

                +
                Version: 1.0.0

                Protect the server process

                Sometimes, the OS could kill the wasmedge process on the GaiaNet node if it consumes too many resources. For production servers, you should protect the server process.

                Use Supervise

                diff --git a/build/1.0.0/node-guide/troubleshooting/index.html b/build/1.0.0/node-guide/troubleshooting/index.html index 7b39d7ea..f34e9dbd 100644 --- a/build/1.0.0/node-guide/troubleshooting/index.html +++ b/build/1.0.0/node-guide/troubleshooting/index.html @@ -2,7 +2,7 @@ - + Troubleshooting | Gaia @@ -11,18 +11,19 @@ + + + - - - - - + + + @@ -30,7 +31,7 @@ -
                Version: 1.0.0

                Troubleshooting

                +
                Version: 1.0.0

                Troubleshooting

                The system cannot find CUDA libraries

                Sometimes, the CUDA toolkit is installed in a non-standard location. The error message in this case is often something like not able to find libcu*12. For example, you might have CUDA installed with your Python setup. The following commands would install CUDA into Python's environment.

                sudo apt install python3-pip -y
                pip3 install --upgrade fschat accelerate autoawq vllm
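If the loader still cannot find the libraries, one workaround is to point LD_LIBRARY_PATH at the pip-installed CUDA runtime. The `nvidia/.../lib` subdirectories reflect how the Nvidia pip wheels are typically laid out under site-packages, so verify the paths on your system:

```shell
# Locate pip-installed CUDA runtime libs and expose them to the dynamic loader.
SITE=$(python3 -c 'import site; print(site.getsitepackages()[0])')
export LD_LIBRARY_PATH="$SITE/nvidia/cuda_runtime/lib:$SITE/nvidia/cublas/lib:$LD_LIBRARY_PATH"
```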
                diff --git a/build/1.0.0/search-index.json b/build/1.0.0/search-index.json index f9cabbec..1b247cce 100644 --- a/build/1.0.0/search-index.json +++ b/build/1.0.0/search-index.json @@ -1 +1 @@ -[{"documents":[{"i":884,"t":"Knowledge base from a URL","u":"/1.0.0/creator-guide/knowledge/firecrawl","b":["Creator Guide","Knowledge bases"]},{"i":890,"t":"Knowledge base from source / summary pairs","u":"/1.0.0/creator-guide/knowledge/csv","b":["Creator Guide","Knowledge bases"]},{"i":904,"t":"Gaia nodes with long-term knowledge","u":"/1.0.0/creator-guide/knowledge/concepts","b":["Creator Guide","Knowledge bases"]},{"i":916,"t":"Knowledge base from a markdown file","u":"/1.0.0/creator-guide/knowledge/markdown","b":["Creator Guide","Knowledge bases"]},{"i":930,"t":"Knowledge base from a PDF file","u":"/1.0.0/creator-guide/knowledge/pdf","b":["Creator Guide","Knowledge bases"]},{"i":939,"t":"Fine-tune LLMs","u":"/1.0.0/creator-guide/finetune/intro","b":["Creator Guide","GaiaNet Node with finetuned LLMs"]},{"i":941,"t":"Overview","u":"/1.0.0/intro","b":[]},{"i":952,"t":"Quick Start with Launching Gaia Domain","u":"/1.0.0/domain-guide/quick-start","b":["Domain Operator Guide"]},{"i":956,"t":"llama.cpp","u":"/1.0.0/creator-guide/finetune/llamacpp","b":["Creator Guide","GaiaNet Node with finetuned LLMs"]},{"i":968,"t":"Build a knowledge base using Gaia web tool","u":"/1.0.0/creator-guide/knowledge/web-tool","b":["Creator Guide","Knowledge bases"]},{"i":976,"t":"GaiaNet: GenAI Agent Network","u":"/1.0.0/litepaper","b":[]},{"i":993,"t":"Knowledge base from a plain text file","u":"/1.0.0/creator-guide/knowledge/text","b":["Creator Guide","Knowledge bases"]},{"i":1007,"t":"GaiaNet CLI options","u":"/1.0.0/node-guide/cli-options","b":["Node Operator Guide"]},{"i":1023,"t":"Install and uninstall","u":"/1.0.0/node-guide/install_uninstall","b":["Node Operator Guide"]},{"i":1039,"t":"System requirements","u":"/1.0.0/node-guide/system-requirements","b":["Node Operator 
Guide"]},{"i":1049,"t":"Quick start with GaiaNet Node","u":"/1.0.0/node-guide/quick-start","b":["Node Operator Guide"]},{"i":1057,"t":"Start a node on AWS using AMI images","u":"/1.0.0/node-guide/tasks/aws","b":["Node Operator Guide","How do I ..."]},{"i":1063,"t":"Customize Your GaiaNet Node","u":"/1.0.0/node-guide/customize","b":["Node Operator Guide"]},{"i":1077,"t":"Run a local-only node","u":"/1.0.0/node-guide/tasks/local","b":["Node Operator Guide","How do I ..."]},{"i":1079,"t":"Install CUDA on Linux","u":"/1.0.0/node-guide/tasks/cuda","b":["Node Operator Guide","How do I ..."]},{"i":1088,"t":"Joining the Gaia Protocol","u":"/1.0.0/node-guide/register","b":["Node Operator Guide"]},{"i":1102,"t":"Troubleshooting","u":"/1.0.0/node-guide/troubleshooting","b":["Node Operator Guide"]},{"i":1121,"t":"Start a node with Docker","u":"/1.0.0/node-guide/tasks/docker","b":["Node Operator Guide","How do I ..."]},{"i":1133,"t":"Working with Coinbase AgentKit","u":"/1.0.0/tutorial/coinbase","b":["Tutorial"]},{"i":1139,"t":"Agentic translation on GaiaNet","u":"/1.0.0/tutorial/translator-agent","b":["Tutorial"]},{"i":1165,"t":"Protect the server process","u":"/1.0.0/node-guide/tasks/protect","b":["Node Operator Guide","How do I ..."]},{"i":1171,"t":"Install multiple nodes on a single machine","u":"/1.0.0/node-guide/tasks/multiple","b":["Node Operator Guide","How do I ..."]},{"i":1173,"t":"Working with eliza","u":"/1.0.0/tutorial/eliza","b":["Tutorial"]},{"i":1179,"t":"Anything LLM","u":"/1.0.0/user-guide/apps/anything_llm","b":["User Guide","Agent frameworks and apps"]},{"i":1183,"t":"Calling external tools","u":"/1.0.0/tutorial/tool-call","b":["Tutorial"]},{"i":1193,"t":"CodeGPT","u":"/1.0.0/user-guide/apps/codegpt","b":["User Guide","Agent frameworks and apps"]},{"i":1203,"t":"Cursor AI IDE","u":"/1.0.0/user-guide/apps/cursor","b":["User Guide","Agent frameworks and apps"]},{"i":1211,"t":"Dify + GaiaNet","u":"/1.0.0/user-guide/apps/dify","b":["User Guide","Agent frameworks 
and apps"]},{"i":1215,"t":"FlowiseAI RAG chat","u":"/1.0.0/user-guide/apps/flowiseai","b":["User Guide","Agent frameworks and apps"]},{"i":1235,"t":"API Reference","u":"/1.0.0/user-guide/api-reference","b":["User Guide"]},{"i":1251,"t":"A planning agent","u":"/1.0.0/user-guide/apps/gpt-planner","b":["User Guide","Agent frameworks and apps"]},{"i":1257,"t":"OpenAI ecosystem apps","u":"/1.0.0/user-guide/apps/intro","b":["User Guide","Agent frameworks and apps"]},{"i":1263,"t":"AI coding assistant: Continue","u":"/1.0.0/user-guide/apps/continue","b":["User Guide","Agent frameworks and apps"]},{"i":1273,"t":"Agent Zero","u":"/1.0.0/user-guide/apps/agent-zero","b":["User Guide","Agent frameworks and apps"]},{"i":1291,"t":"FlowiseAI tool call","u":"/1.0.0/user-guide/apps/flowiseai-tool-call","b":["User Guide","Agent frameworks and apps"]},{"i":1301,"t":"LlamaCloud","u":"/1.0.0/user-guide/apps/llamaparse","b":["User Guide","Agent frameworks and apps"]},{"i":1307,"t":"LlamaEdgeBook","u":"/1.0.0/user-guide/apps/llamaedgebook","b":["User Guide","Agent frameworks and apps"]},{"i":1311,"t":"LobeChat","u":"/1.0.0/user-guide/apps/lobechat","b":["User Guide","Agent frameworks and apps"]},{"i":1315,"t":"LlamaTutor","u":"/1.0.0/user-guide/apps/llamatutor","b":["User Guide","Agent frameworks and apps"]},{"i":1321,"t":"Obsidian","u":"/1.0.0/user-guide/apps/obsidian","b":["User Guide","Agent frameworks and apps"]},{"i":1346,"t":"Open WebUI","u":"/1.0.0/user-guide/apps/openwebui","b":["User Guide","Agent frameworks and apps"]},{"i":1356,"t":"Stockbot","u":"/1.0.0/user-guide/apps/stockbot","b":["User Guide","Agent frameworks and apps"]},{"i":1362,"t":"Translation Agent + GaiaNet","u":"/1.0.0/user-guide/apps/translation-agent","b":["User Guide","Agent frameworks and apps"]},{"i":1370,"t":"Zed","u":"/1.0.0/user-guide/apps/zed","b":["User Guide","Agent frameworks and apps"]},{"i":1378,"t":"LlamaCoder","u":"/1.0.0/user-guide/apps/llamacoder","b":["User Guide","Agent frameworks and 
apps"]},{"i":1384,"t":"Public GaiaNet nodes","u":"/1.0.0/user-guide/nodes","b":["User Guide"]},{"i":1407,"t":"Use my GaiaNet node","u":"/1.0.0/user-guide/mynode","b":["User Guide"]}],"index":{"version":"2.3.9","fields":["t"],"fieldVectors":[["t/884",[0,1.902,1,2.041,2,3.468]],["t/890",[0,1.482,1,1.59,3,2.702,4,2.702,5,2.702]],["t/904",[0,1.482,6,1.869,7,1.303,8,2.702,9,2.702]],["t/916",[0,1.666,1,1.788,10,3.037,11,2.315]],["t/930",[0,1.666,1,1.788,11,2.315,12,3.037]],["t/939",[13,3.468,14,3.468,15,2.971]],["t/941",[16,4.839]],["t/952",[6,1.869,17,2.315,18,1.869,19,2.702,20,2.702]],["t/956",[21,4.04,22,4.04]],["t/968",[0,1.214,1,1.303,6,1.531,23,2.213,24,1.687,25,2.213,26,1.687]],["t/976",[27,1.465,28,3.037,29,1.93,30,3.037]],["t/993",[0,1.482,1,1.59,11,2.06,31,2.702,32,2.702]],["t/1007",[27,1.672,33,3.468,34,3.468]],["t/1023",[35,3.08,36,4.04]],["t/1039",[37,4.04,38,4.04]],["t/1049",[7,1.465,17,2.602,18,2.101,27,1.465]],["t/1057",[7,1.173,18,1.683,24,1.855,39,2.433,40,2.433,41,2.433]],["t/1063",[7,1.672,27,1.672,42,3.468]],["t/1077",[7,1.672,43,3.468,44,3.468]],["t/1079",[35,2.643,45,3.468,46,3.468]],["t/1088",[6,2.399,47,3.468,48,3.468]],["t/1102",[49,4.839]],["t/1121",[7,1.672,18,2.399,50,3.468]],["t/1133",[51,2.971,52,3.468,53,3.468]],["t/1139",[27,1.672,29,2.204,54,2.971]],["t/1165",[55,3.468,56,3.468,57,3.468]],["t/1171",[7,1.303,35,2.06,58,2.702,59,2.702,60,2.702]],["t/1173",[51,3.461,61,4.04]],["t/1179",[15,3.461,62,4.04]],["t/1183",[26,2.643,63,2.971,64,3.468]],["t/1193",[65,4.839]],["t/1203",[66,3.468,67,2.971,68,3.468]],["t/1211",[27,1.948,69,4.04]],["t/1215",[70,2.971,71,3.468,72,3.468]],["t/1235",[73,4.04,74,4.04]],["t/1251",[29,2.568,75,4.04]],["t/1257",[76,3.468,77,3.468,78,3.468]],["t/1263",[67,2.602,79,3.037,80,3.037,81,3.037]],["t/1273",[29,2.568,82,4.04]],["t/1291",[26,2.643,63,2.971,70,2.971]],["t/1301",[83,4.839]],["t/1307",[84,4.839]],["t/1311",[85,4.839]],["t/1315",[86,4.839]],["t/1321",[87,4.839]],["t/1346",[88,4.04,89,4.04]],["t/1356",[90,4.8
39]],["t/1362",[27,1.672,29,2.204,54,2.971]],["t/1370",[91,4.839]],["t/1378",[92,4.839]],["t/1384",[7,1.672,27,1.672,93,3.468]],["t/1407",[7,1.672,24,2.643,27,1.672]]],"invertedIndex":[["agent",{"_index":29,"t":{"976":{"position":[[15,5]]},"1139":{"position":[[0,7]]},"1251":{"position":[[11,5]]},"1273":{"position":[[0,5]]},"1362":{"position":[[12,5]]}}}],["agentkit",{"_index":53,"t":{"1133":{"position":[[22,8]]}}}],["ai",{"_index":67,"t":{"1203":{"position":[[7,2]]},"1263":{"position":[[0,2]]}}}],["ami",{"_index":40,"t":{"1057":{"position":[[26,3]]}}}],["anyth",{"_index":62,"t":{"1179":{"position":[[0,8]]}}}],["api",{"_index":73,"t":{"1235":{"position":[[0,3]]}}}],["app",{"_index":78,"t":{"1257":{"position":[[17,4]]}}}],["assist",{"_index":80,"t":{"1263":{"position":[[10,9]]}}}],["aw",{"_index":39,"t":{"1057":{"position":[[16,3]]}}}],["base",{"_index":1,"t":{"884":{"position":[[10,4]]},"890":{"position":[[10,4]]},"916":{"position":[[10,4]]},"930":{"position":[[10,4]]},"968":{"position":[[18,4]]},"993":{"position":[[10,4]]}}}],["build",{"_index":23,"t":{"968":{"position":[[0,5]]}}}],["call",{"_index":63,"t":{"1183":{"position":[[0,7]]},"1291":{"position":[[15,4]]}}}],["chat",{"_index":72,"t":{"1215":{"position":[[14,4]]}}}],["cli",{"_index":33,"t":{"1007":{"position":[[8,3]]}}}],["code",{"_index":79,"t":{"1263":{"position":[[3,6]]}}}],["codegpt",{"_index":65,"t":{"1193":{"position":[[0,7]]}}}],["coinbas",{"_index":52,"t":{"1133":{"position":[[13,8]]}}}],["continu",{"_index":81,"t":{"1263":{"position":[[21,8]]}}}],["cpp",{"_index":22,"t":{"956":{"position":[[6,3]]}}}],["cuda",{"_index":45,"t":{"1079":{"position":[[8,4]]}}}],["cursor",{"_index":66,"t":{"1203":{"position":[[0,6]]}}}],["custom",{"_index":42,"t":{"1063":{"position":[[0,9]]}}}],["difi",{"_index":69,"t":{"1211":{"position":[[0,4]]}}}],["docker",{"_index":50,"t":{"1121":{"position":[[18,6]]}}}],["domain",{"_index":20,"t":{"952":{"position":[[32,6]]}}}],["ecosystem",{"_index":77,"t":{"1257":{"position":[[7,9]
]}}}],["eliza",{"_index":61,"t":{"1173":{"position":[[13,5]]}}}],["extern",{"_index":64,"t":{"1183":{"position":[[8,8]]}}}],["file",{"_index":11,"t":{"916":{"position":[[31,4]]},"930":{"position":[[26,4]]},"993":{"position":[[33,4]]}}}],["fine",{"_index":13,"t":{"939":{"position":[[0,4]]}}}],["flowiseai",{"_index":70,"t":{"1215":{"position":[[0,9]]},"1291":{"position":[[0,9]]}}}],["gaia",{"_index":6,"t":{"904":{"position":[[0,4]]},"952":{"position":[[27,4]]},"968":{"position":[[29,4]]},"1088":{"position":[[12,4]]}}}],["gaianet",{"_index":27,"t":{"976":{"position":[[0,7]]},"1007":{"position":[[0,7]]},"1049":{"position":[[17,7]]},"1063":{"position":[[15,7]]},"1139":{"position":[[23,7]]},"1211":{"position":[[7,7]]},"1362":{"position":[[21,7]]},"1384":{"position":[[7,7]]},"1407":{"position":[[7,7]]}}}],["genai",{"_index":28,"t":{"976":{"position":[[9,5]]}}}],["id",{"_index":68,"t":{"1203":{"position":[[10,3]]}}}],["imag",{"_index":41,"t":{"1057":{"position":[[30,6]]}}}],["instal",{"_index":35,"t":{"1023":{"position":[[0,7]]},"1079":{"position":[[0,7]]},"1171":{"position":[[0,7]]}}}],["join",{"_index":47,"t":{"1088":{"position":[[0,7]]}}}],["knowledg",{"_index":0,"t":{"884":{"position":[[0,9]]},"890":{"position":[[0,9]]},"904":{"position":[[26,9]]},"916":{"position":[[0,9]]},"930":{"position":[[0,9]]},"968":{"position":[[8,9]]},"993":{"position":[[0,9]]}}}],["launch",{"_index":19,"t":{"952":{"position":[[17,9]]}}}],["linux",{"_index":46,"t":{"1079":{"position":[[16,5]]}}}],["llama",{"_index":21,"t":{"956":{"position":[[0,5]]}}}],["llamacloud",{"_index":83,"t":{"1301":{"position":[[0,10]]}}}],["llamacod",{"_index":92,"t":{"1378":{"position":[[0,10]]}}}],["llamaedgebook",{"_index":84,"t":{"1307":{"position":[[0,13]]}}}],["llamatutor",{"_index":86,"t":{"1315":{"position":[[0,10]]}}}],["llm",{"_index":15,"t":{"939":{"position":[[10,4]]},"1179":{"position":[[9,3]]}}}],["lobechat",{"_index":85,"t":{"1311":{"position":[[0,8]]}}}],["local",{"_index":44,"t":{"1077":{"position":[[
6,5]]}}}],["long",{"_index":8,"t":{"904":{"position":[[16,4]]}}}],["machin",{"_index":60,"t":{"1171":{"position":[[35,7]]}}}],["markdown",{"_index":10,"t":{"916":{"position":[[22,8]]}}}],["multipl",{"_index":58,"t":{"1171":{"position":[[8,8]]}}}],["network",{"_index":30,"t":{"976":{"position":[[21,7]]}}}],["node",{"_index":7,"t":{"904":{"position":[[5,5]]},"1049":{"position":[[25,4]]},"1057":{"position":[[8,4]]},"1063":{"position":[[23,4]]},"1077":{"position":[[17,4]]},"1121":{"position":[[8,4]]},"1171":{"position":[[17,5]]},"1384":{"position":[[15,5]]},"1407":{"position":[[15,4]]}}}],["obsidian",{"_index":87,"t":{"1321":{"position":[[0,8]]}}}],["open",{"_index":88,"t":{"1346":{"position":[[0,4]]}}}],["openai",{"_index":76,"t":{"1257":{"position":[[0,6]]}}}],["option",{"_index":34,"t":{"1007":{"position":[[12,7]]}}}],["overview",{"_index":16,"t":{"941":{"position":[[0,8]]}}}],["pair",{"_index":5,"t":{"890":{"position":[[37,5]]}}}],["pdf",{"_index":12,"t":{"930":{"position":[[22,3]]}}}],["plain",{"_index":31,"t":{"993":{"position":[[22,5]]}}}],["plan",{"_index":75,"t":{"1251":{"position":[[2,8]]}}}],["process",{"_index":57,"t":{"1165":{"position":[[19,7]]}}}],["protect",{"_index":55,"t":{"1165":{"position":[[0,7]]}}}],["protocol",{"_index":48,"t":{"1088":{"position":[[17,8]]}}}],["public",{"_index":93,"t":{"1384":{"position":[[0,6]]}}}],["quick",{"_index":17,"t":{"952":{"position":[[0,5]]},"1049":{"position":[[0,5]]}}}],["rag",{"_index":71,"t":{"1215":{"position":[[10,3]]}}}],["refer",{"_index":74,"t":{"1235":{"position":[[4,9]]}}}],["requir",{"_index":38,"t":{"1039":{"position":[[7,12]]}}}],["run",{"_index":43,"t":{"1077":{"position":[[0,3]]}}}],["server",{"_index":56,"t":{"1165":{"position":[[12,6]]}}}],["singl",{"_index":59,"t":{"1171":{"position":[[28,6]]}}}],["sourc",{"_index":3,"t":{"890":{"position":[[20,6]]}}}],["start",{"_index":18,"t":{"952":{"position":[[6,5]]},"1049":{"position":[[6,5]]},"1057":{"position":[[0,5]]},"1121":{"position":[[0,5]]}}}],["stockbo
t",{"_index":90,"t":{"1356":{"position":[[0,8]]}}}],["summari",{"_index":4,"t":{"890":{"position":[[29,7]]}}}],["system",{"_index":37,"t":{"1039":{"position":[[0,6]]}}}],["term",{"_index":9,"t":{"904":{"position":[[21,4]]}}}],["text",{"_index":32,"t":{"993":{"position":[[28,4]]}}}],["tool",{"_index":26,"t":{"968":{"position":[[38,4]]},"1183":{"position":[[17,5]]},"1291":{"position":[[10,4]]}}}],["translat",{"_index":54,"t":{"1139":{"position":[[8,11]]},"1362":{"position":[[0,11]]}}}],["troubleshoot",{"_index":49,"t":{"1102":{"position":[[0,15]]}}}],["tune",{"_index":14,"t":{"939":{"position":[[5,4]]}}}],["uninstal",{"_index":36,"t":{"1023":{"position":[[12,9]]}}}],["url",{"_index":2,"t":{"884":{"position":[[22,3]]}}}],["us",{"_index":24,"t":{"968":{"position":[[23,5]]},"1057":{"position":[[20,5]]},"1407":{"position":[[0,3]]}}}],["web",{"_index":25,"t":{"968":{"position":[[34,3]]}}}],["webui",{"_index":89,"t":{"1346":{"position":[[5,5]]}}}],["work",{"_index":51,"t":{"1133":{"position":[[0,7]]},"1173":{"position":[[0,7]]}}}],["zed",{"_index":91,"t":{"1370":{"position":[[0,3]]}}}],["zero",{"_index":82,"t":{"1273":{"position":[[6,4]]}}}]],"pipeline":["stemmer"]}},{"documents":[{"i":886,"t":"Parse the URL content to a markdown file","u":"/1.0.0/creator-guide/knowledge/firecrawl","h":"#parse-the-url-content-to-a-markdown-file","p":884},{"i":888,"t":"Create embeddings from the markdown files","u":"/1.0.0/creator-guide/knowledge/firecrawl","h":"#create-embeddings-from-the-markdown-files","p":884},{"i":892,"t":"Prerequisites","u":"/1.0.0/creator-guide/knowledge/csv","h":"#prerequisites","p":890},{"i":894,"t":"Start a vector database","u":"/1.0.0/creator-guide/knowledge/csv","h":"#start-a-vector-database","p":890},{"i":896,"t":"Create the vector collection snapshot","u":"/1.0.0/creator-guide/knowledge/csv","h":"#create-the-vector-collection-snapshot","p":890},{"i":898,"t":"Options","u":"/1.0.0/creator-guide/knowledge/csv","h":"#options","p":890},{"i":900,"t":"Create a vector 
snapshot","u":"/1.0.0/creator-guide/knowledge/csv","h":"#create-a-vector-snapshot","p":890},{"i":902,"t":"Next steps","u":"/1.0.0/creator-guide/knowledge/csv","h":"#next-steps","p":890},{"i":906,"t":"Workflow for creating knowledge embeddings","u":"/1.0.0/creator-guide/knowledge/concepts","h":"#workflow-for-creating-knowledge-embeddings","p":904},{"i":908,"t":"Lifecycle of a user query on a knowledge-supplemented LLM","u":"/1.0.0/creator-guide/knowledge/concepts","h":"#lifecycle-of-a-user-query-on-a-knowledge-supplemented-llm","p":904},{"i":910,"t":"Ask a question","u":"/1.0.0/creator-guide/knowledge/concepts","h":"#ask-a-question","p":904},{"i":912,"t":"Retrieve similar embeddings","u":"/1.0.0/creator-guide/knowledge/concepts","h":"#retrieve-similar-embeddings","p":904},{"i":914,"t":"Response to the user query","u":"/1.0.0/creator-guide/knowledge/concepts","h":"#response-to-the-user-query","p":904},{"i":918,"t":"Prerequisites","u":"/1.0.0/creator-guide/knowledge/markdown","h":"#prerequisites","p":916},{"i":920,"t":"Start a vector database","u":"/1.0.0/creator-guide/knowledge/markdown","h":"#start-a-vector-database","p":916},{"i":922,"t":"Create the vector collection snapshot","u":"/1.0.0/creator-guide/knowledge/markdown","h":"#create-the-vector-collection-snapshot","p":916},{"i":924,"t":"Options","u":"/1.0.0/creator-guide/knowledge/markdown","h":"#options","p":916},{"i":926,"t":"Create a vector snapshot","u":"/1.0.0/creator-guide/knowledge/markdown","h":"#create-a-vector-snapshot","p":916},{"i":928,"t":"Next steps","u":"/1.0.0/creator-guide/knowledge/markdown","h":"#next-steps","p":916},{"i":932,"t":"Tools to convert a PDF file to a markdown file","u":"/1.0.0/creator-guide/knowledge/pdf","h":"#tools-to-convert-a-pdf-file-to-a-markdown-file","p":930},{"i":933,"t":"Tool #1: LlamaParse","u":"/1.0.0/creator-guide/knowledge/pdf","h":"#tool-1-llamaparse","p":930},{"i":935,"t":"Tool #2: 
GPTPDF","u":"/1.0.0/creator-guide/knowledge/pdf","h":"#tool-2-gptpdf","p":930},{"i":937,"t":"Create embeddings from the markdown files","u":"/1.0.0/creator-guide/knowledge/pdf","h":"#create-embeddings-from-the-markdown-files","p":930},{"i":943,"t":"Next steps:","u":"/1.0.0/intro","h":"#next-steps","p":941},{"i":944,"t":"Users","u":"/1.0.0/intro","h":"#users","p":941},{"i":946,"t":"Node operators","u":"/1.0.0/intro","h":"#node-operators","p":941},{"i":948,"t":"Domain operators","u":"/1.0.0/intro","h":"#domain-operators","p":941},{"i":950,"t":"Creators","u":"/1.0.0/intro","h":"#creators","p":941},{"i":954,"t":"Steps to Launch Your Gaia Domain","u":"/1.0.0/domain-guide/quick-start","h":"#steps-to-launch-your-gaia-domain","p":952},{"i":958,"t":"Build the fine-tune utility from llama.cpp","u":"/1.0.0/creator-guide/finetune/llamacpp","h":"#build-the-fine-tune-utility-from-llamacpp","p":956},{"i":960,"t":"Get the base model","u":"/1.0.0/creator-guide/finetune/llamacpp","h":"#get-the-base-model","p":956},{"i":962,"t":"Create a question and answer set for fine-tuning","u":"/1.0.0/creator-guide/finetune/llamacpp","h":"#create-a-question-and-answer-set-for-fine-tuning","p":956},{"i":964,"t":"Finetune!","u":"/1.0.0/creator-guide/finetune/llamacpp","h":"#finetune","p":956},{"i":966,"t":"Merge","u":"/1.0.0/creator-guide/finetune/llamacpp","h":"#merge","p":956},{"i":970,"t":"Segment your text file","u":"/1.0.0/creator-guide/knowledge/web-tool","h":"#segment-your-text-file","p":968},{"i":972,"t":"Generate the snapshot file","u":"/1.0.0/creator-guide/knowledge/web-tool","h":"#generate-the-snapshot-file","p":968},{"i":974,"t":"Update the node config","u":"/1.0.0/creator-guide/knowledge/web-tool","h":"#update-the-node-config","p":968},{"i":977,"t":"Abstract","u":"/1.0.0/litepaper","h":"#abstract","p":976},{"i":979,"t":"Introduction","u":"/1.0.0/litepaper","h":"#introduction","p":976},{"i":981,"t":"Open-source and 
decentralization","u":"/1.0.0/litepaper","h":"#open-source-and-decentralization","p":976},{"i":983,"t":"GaiaNet node","u":"/1.0.0/litepaper","h":"#gaianet-node","p":976},{"i":985,"t":"GaiaNet network","u":"/1.0.0/litepaper","h":"#gaianet-network","p":976},{"i":987,"t":"GaiaNet token","u":"/1.0.0/litepaper","h":"#gaianet-token","p":976},{"i":989,"t":"Component marketplace for AI assets","u":"/1.0.0/litepaper","h":"#component-marketplace-for-ai-assets","p":976},{"i":991,"t":"Conclusion","u":"/1.0.0/litepaper","h":"#conclusion","p":976},{"i":995,"t":"Prerequisites","u":"/1.0.0/creator-guide/knowledge/text","h":"#prerequisites","p":993},{"i":997,"t":"Start a vector database","u":"/1.0.0/creator-guide/knowledge/text","h":"#start-a-vector-database","p":993},{"i":999,"t":"Create the vector collection snapshot","u":"/1.0.0/creator-guide/knowledge/text","h":"#create-the-vector-collection-snapshot","p":993},{"i":1001,"t":"Options","u":"/1.0.0/creator-guide/knowledge/text","h":"#options","p":993},{"i":1003,"t":"Create a vector snapshot","u":"/1.0.0/creator-guide/knowledge/text","h":"#create-a-vector-snapshot","p":993},{"i":1005,"t":"Next steps","u":"/1.0.0/creator-guide/knowledge/text","h":"#next-steps","p":993},{"i":1009,"t":"help","u":"/1.0.0/node-guide/cli-options","h":"#help","p":1007},{"i":1011,"t":"version","u":"/1.0.0/node-guide/cli-options","h":"#version","p":1007},{"i":1013,"t":"init","u":"/1.0.0/node-guide/cli-options","h":"#init","p":1007},{"i":1015,"t":"start","u":"/1.0.0/node-guide/cli-options","h":"#start","p":1007},{"i":1017,"t":"stop","u":"/1.0.0/node-guide/cli-options","h":"#stop","p":1007},{"i":1019,"t":"config","u":"/1.0.0/node-guide/cli-options","h":"#config","p":1007},{"i":1021,"t":"base","u":"/1.0.0/node-guide/cli-options","h":"#base","p":1007},{"i":1025,"t":"Install","u":"/1.0.0/node-guide/install_uninstall","h":"#install","p":1023},{"i":1027,"t":"Install the latest version of GaiaNet 
node","u":"/1.0.0/node-guide/install_uninstall","h":"#install-the-latest-version-of-gaianet-node","p":1023},{"i":1029,"t":"Install the specific version of GaiaNet Node","u":"/1.0.0/node-guide/install_uninstall","h":"#install-the-specific-version-of-gaianet-node","p":1023},{"i":1031,"t":"Update the current Gaianet node","u":"/1.0.0/node-guide/install_uninstall","h":"#update-the-current-gaianet-node","p":1023},{"i":1033,"t":"Uninstall","u":"/1.0.0/node-guide/install_uninstall","h":"#uninstall","p":1023},{"i":1035,"t":"What's installed","u":"/1.0.0/node-guide/install_uninstall","h":"#whats-installed","p":1023},{"i":1037,"t":"CLI options for the installer","u":"/1.0.0/node-guide/install_uninstall","h":"#cli-options-for-the-installer","p":1023},{"i":1041,"t":"Supported on","u":"/1.0.0/node-guide/system-requirements","h":"#supported-on","p":1039},{"i":1043,"t":"GPU","u":"/1.0.0/node-guide/system-requirements","h":"#gpu","p":1039},{"i":1045,"t":"CPU","u":"/1.0.0/node-guide/system-requirements","h":"#cpu","p":1039},{"i":1047,"t":"Oses","u":"/1.0.0/node-guide/system-requirements","h":"#oses","p":1039},{"i":1051,"t":"Prerequisites","u":"/1.0.0/node-guide/quick-start","h":"#prerequisites","p":1049},{"i":1053,"t":"Installing the node","u":"/1.0.0/node-guide/quick-start","h":"#installing-the-node","p":1049},{"i":1055,"t":"Next steps","u":"/1.0.0/node-guide/quick-start","h":"#next-steps","p":1049},{"i":1059,"t":"Running an Nvidia GPU-enabled AWS instance","u":"/1.0.0/node-guide/tasks/aws","h":"#running-an-nvidia-gpu-enabled-aws-instance","p":1057},{"i":1061,"t":"Running a CPU-only AWS instance","u":"/1.0.0/node-guide/tasks/aws","h":"#running-a-cpu-only-aws-instance","p":1057},{"i":1065,"t":"Pre-set configurations","u":"/1.0.0/node-guide/customize","h":"#pre-set-configurations","p":1063},{"i":1067,"t":"The config subcommand","u":"/1.0.0/node-guide/customize","h":"#the-config-subcommand","p":1063},{"i":1069,"t":"Select an 
LLM","u":"/1.0.0/node-guide/customize","h":"#select-an-llm","p":1063},{"i":1071,"t":"Select a knowledge base","u":"/1.0.0/node-guide/customize","h":"#select-a-knowledge-base","p":1063},{"i":1073,"t":"Customize prompts","u":"/1.0.0/node-guide/customize","h":"#customize-prompts","p":1063},{"i":1075,"t":"Next steps","u":"/1.0.0/node-guide/customize","h":"#next-steps","p":1063},{"i":1081,"t":"Ubuntu 22.04","u":"/1.0.0/node-guide/tasks/cuda","h":"#ubuntu-2204","p":1079},{"i":1082,"t":"1 Install the Nvidia driver.","u":"/1.0.0/node-guide/tasks/cuda","h":"#1-install-the-nvidia-driver","p":1079},{"i":1084,"t":"2 Install the CUDA toolkit.","u":"/1.0.0/node-guide/tasks/cuda","h":"#2-install-the-cuda-toolkit","p":1079},{"i":1086,"t":"More resources","u":"/1.0.0/node-guide/tasks/cuda","h":"#more-resources","p":1079},{"i":1090,"t":"Bind your node","u":"/1.0.0/node-guide/register","h":"#bind-your-node","p":1088},{"i":1092,"t":"Protect your node ID and device ID","u":"/1.0.0/node-guide/register","h":"#protect-your-node-id-and-device-id","p":1088},{"i":1094,"t":"Join a Domain","u":"/1.0.0/node-guide/register","h":"#join-a-domain","p":1088},{"i":1096,"t":"Steps to Join a Domain from Your Node Management Page","u":"/1.0.0/node-guide/register","h":"#steps-to-join-a-domain-from-your-node-management-page","p":1088},{"i":1098,"t":"Steps to Join a Domain from the AI Agent Domains page","u":"/1.0.0/node-guide/register","h":"#steps-to-join-a-domain-from-the-ai-agent-domains-page","p":1088},{"i":1100,"t":"Important Notes","u":"/1.0.0/node-guide/register","h":"#important-notes","p":1088},{"i":1103,"t":"The system cannot find CUDA libraries","u":"/1.0.0/node-guide/troubleshooting","h":"#the-system-cannot-find-cuda-libraries","p":1102},{"i":1105,"t":"Failed to recover from collection snapshot on Windows WSL","u":"/1.0.0/node-guide/troubleshooting","h":"#failed-to-recover-from-collection-snapshot-on-windows-wsl","p":1102},{"i":1107,"t":"Failed to start the node with an error message Port 8080 
is in use. Exit ...","u":"/1.0.0/node-guide/troubleshooting","h":"#failed-to-start-the-node-with-an-error-message-port-8080-is-in-use-exit-","p":1102},{"i":1109,"t":"Load library failed: libgomp.so.1: cannot open shared object file: No such file or directory","u":"/1.0.0/node-guide/troubleshooting","h":"#load-library-failed-libgompso1-cannot-open-shared-object-file-no-such-file-or-directory","p":1102},{"i":1111,"t":"Failed to remove the default collection","u":"/1.0.0/node-guide/troubleshooting","h":"#failed-to-remove-the-default-collection","p":1102},{"i":1113,"t":"File I/O error","u":"/1.0.0/node-guide/troubleshooting","h":"#file-io-error","p":1102},{"i":1115,"t":"The \"Failed to open the file\" Error","u":"/1.0.0/node-guide/troubleshooting","h":"#the-failed-to-open-the-file-error","p":1102},{"i":1117,"t":"The \"Too many open files\" Error on macOS","u":"/1.0.0/node-guide/troubleshooting","h":"#the-too-many-open-files-error-on-macos","p":1102},{"i":1119,"t":"Permission denied when use the installer script to install WasmEdge","u":"/1.0.0/node-guide/troubleshooting","h":"#permission-denied-when-use-the-installer-script-to-install-wasmedge","p":1102},{"i":1123,"t":"Quick start","u":"/1.0.0/node-guide/tasks/docker","h":"#quick-start","p":1121},{"i":1125,"t":"Stop and re-start","u":"/1.0.0/node-guide/tasks/docker","h":"#stop-and-re-start","p":1121},{"i":1127,"t":"Make changes to the node","u":"/1.0.0/node-guide/tasks/docker","h":"#make-changes-to-the-node","p":1121},{"i":1129,"t":"Change the node ID","u":"/1.0.0/node-guide/tasks/docker","h":"#change-the-node-id","p":1121},{"i":1131,"t":"Build a node image locally","u":"/1.0.0/node-guide/tasks/docker","h":"#build-a-node-image-locally","p":1121},{"i":1135,"t":"Quickstart","u":"/1.0.0/tutorial/coinbase","h":"#quickstart","p":1133},{"i":1137,"t":"A Telegram bot for AgentKit","u":"/1.0.0/tutorial/coinbase","h":"#a-telegram-bot-for-agentkit","p":1133},{"i":1141,"t":"Introduction to the LLM Translation 
Agent","u":"/1.0.0/tutorial/translator-agent","h":"#introduction-to-the-llm-translation-agent","p":1139},{"i":1143,"t":"Demo 1: Running Translation Agents with Llama-3-8B","u":"/1.0.0/tutorial/translator-agent","h":"#demo-1-running-translation-agents-with-llama-3-8b","p":1139},{"i":1145,"t":"Step 1.1: Run a Llama-3-8B GaiaNet node","u":"/1.0.0/tutorial/translator-agent","h":"#step-11-run-a-llama-3-8b-gaianet-node","p":1139},{"i":1147,"t":"Step 1.2 Run the Translation Agent on top of Llama-3-8B","u":"/1.0.0/tutorial/translator-agent","h":"#step-12-run-the-translation-agent-on-top-of-llama-3-8b","p":1139},{"i":1149,"t":"Demo 2: Running Translation Agents with gemma-2-27b","u":"/1.0.0/tutorial/translator-agent","h":"#demo-2-running-translation-agents-with-gemma-2-27b","p":1139},{"i":1151,"t":"Step 2.1 Run a gemma-2-27b GaiaNet node","u":"/1.0.0/tutorial/translator-agent","h":"#step-21-run-a-gemma-2-27b-gaianet-node","p":1139},{"i":1153,"t":"Step 2.2 Run the Translation Agent to run on top of gemma-2-27b","u":"/1.0.0/tutorial/translator-agent","h":"#step-22-run-the-translation-agent-to-run-on-top-of-gemma-2-27b","p":1139},{"i":1155,"t":"Demo 3: Running Translation Agents with Phi-3-Medium long context model","u":"/1.0.0/tutorial/translator-agent","h":"#demo-3-running-translation-agents-with-phi-3-medium-long-context-model","p":1139},{"i":1157,"t":"Step 3.1: Run a Phi-3-medium-128k GaiaNet node","u":"/1.0.0/tutorial/translator-agent","h":"#step-31-run-a-phi-3-medium-128k-gaianet-node","p":1139},{"i":1159,"t":"Step 3.2 Clone and run the Translation Agent on top of Phi-3-medium-128k","u":"/1.0.0/tutorial/translator-agent","h":"#step-32-clone-and-run-the-translation-agent-on-top-of-phi-3-medium-128k","p":1139},{"i":1161,"t":"Evaluation of Translation Quality","u":"/1.0.0/tutorial/translator-agent","h":"#evaluation-of-translation-quality","p":1139},{"i":1163,"t":"Conclusion","u":"/1.0.0/tutorial/translator-agent","h":"#conclusion","p":1139},{"i":1167,"t":"Use 
Supervise","u":"/1.0.0/node-guide/tasks/protect","h":"#use-supervise","p":1165},{"i":1169,"t":"Reduce the nice value","u":"/1.0.0/node-guide/tasks/protect","h":"#reduce-the-nice-value","p":1165},{"i":1175,"t":"Build a Trump agent with eliza and Gaia","u":"/1.0.0/tutorial/eliza","h":"#build-a-trump-agent-with-eliza-and-gaia","p":1173},{"i":1177,"t":"Advanced use case","u":"/1.0.0/tutorial/eliza","h":"#advanced-use-case","p":1173},{"i":1181,"t":"Steps","u":"/1.0.0/user-guide/apps/anything_llm","h":"#steps","p":1179},{"i":1185,"t":"Prerequisites","u":"/1.0.0/tutorial/tool-call","h":"#prerequisites","p":1183},{"i":1187,"t":"Run the demo agent","u":"/1.0.0/tutorial/tool-call","h":"#run-the-demo-agent","p":1183},{"i":1189,"t":"Use the agent","u":"/1.0.0/tutorial/tool-call","h":"#use-the-agent","p":1183},{"i":1191,"t":"Make it robust","u":"/1.0.0/tutorial/tool-call","h":"#make-it-robust","p":1183},{"i":1195,"t":"Prerequisites","u":"/1.0.0/user-guide/apps/codegpt","h":"#prerequisites","p":1193},{"i":1197,"t":"Install CodeGPT","u":"/1.0.0/user-guide/apps/codegpt","h":"#install-codegpt","p":1193},{"i":1199,"t":"Configure CodeGPT","u":"/1.0.0/user-guide/apps/codegpt","h":"#configure-codegpt","p":1193},{"i":1201,"t":"Use the plugin","u":"/1.0.0/user-guide/apps/codegpt","h":"#use-the-plugin","p":1193},{"i":1205,"t":"Prerequisites","u":"/1.0.0/user-guide/apps/cursor","h":"#prerequisites","p":1203},{"i":1207,"t":"Configure Cursor","u":"/1.0.0/user-guide/apps/cursor","h":"#configure-cursor","p":1203},{"i":1209,"t":"Use Cursor","u":"/1.0.0/user-guide/apps/cursor","h":"#use-cursor","p":1203},{"i":1213,"t":"Steps","u":"/1.0.0/user-guide/apps/dify","h":"#steps","p":1211},{"i":1217,"t":"Prerequisites","u":"/1.0.0/user-guide/apps/flowiseai","h":"#prerequisites","p":1215},{"i":1219,"t":"Start a FlowiseAI server","u":"/1.0.0/user-guide/apps/flowiseai","h":"#start-a-flowiseai-server","p":1215},{"i":1221,"t":"Build a documents QnA 
chatbot","u":"/1.0.0/user-guide/apps/flowiseai","h":"#build-a-documents-qna-chatbot","p":1215},{"i":1223,"t":"Get the Flowise Docs QnA template","u":"/1.0.0/user-guide/apps/flowiseai","h":"#get-the-flowise-docs-qna-template","p":1215},{"i":1225,"t":"Connect the chat model API","u":"/1.0.0/user-guide/apps/flowiseai","h":"#connect-the-chat-model-api","p":1215},{"i":1227,"t":"Connect the embedding model API","u":"/1.0.0/user-guide/apps/flowiseai","h":"#connect-the-embedding-model-api","p":1215},{"i":1229,"t":"Set up your documents","u":"/1.0.0/user-guide/apps/flowiseai","h":"#set-up-your-documents","p":1215},{"i":1231,"t":"Give it a try","u":"/1.0.0/user-guide/apps/flowiseai","h":"#give-it-a-try","p":1215},{"i":1233,"t":"More examples","u":"/1.0.0/user-guide/apps/flowiseai","h":"#more-examples","p":1215},{"i":1236,"t":"Introduction","u":"/1.0.0/user-guide/api-reference","h":"#introduction","p":1235},{"i":1238,"t":"Endpoints","u":"/1.0.0/user-guide/api-reference","h":"#endpoints","p":1235},{"i":1239,"t":"Chat","u":"/1.0.0/user-guide/api-reference","h":"#chat","p":1235},{"i":1241,"t":"Embedding","u":"/1.0.0/user-guide/api-reference","h":"#embedding","p":1235},{"i":1243,"t":"Retrieve","u":"/1.0.0/user-guide/api-reference","h":"#retrieve","p":1235},{"i":1245,"t":"Get the model","u":"/1.0.0/user-guide/api-reference","h":"#get-the-model","p":1235},{"i":1247,"t":"Get the node info","u":"/1.0.0/user-guide/api-reference","h":"#get-the-node-info","p":1235},{"i":1249,"t":"Status Codes","u":"/1.0.0/user-guide/api-reference","h":"#status-codes","p":1235},{"i":1253,"t":"Prerequisites","u":"/1.0.0/user-guide/apps/gpt-planner","h":"#prerequisites","p":1251},{"i":1255,"t":"Run the agent","u":"/1.0.0/user-guide/apps/gpt-planner","h":"#run-the-agent","p":1251},{"i":1259,"t":"The OpenAI Python library","u":"/1.0.0/user-guide/apps/intro","h":"#the-openai-python-library","p":1257},{"i":1261,"t":"The OpenAI Node API 
library","u":"/1.0.0/user-guide/apps/intro","h":"#the-openai-node-api-library","p":1257},{"i":1265,"t":"Prerequisites","u":"/1.0.0/user-guide/apps/continue","h":"#prerequisites","p":1263},{"i":1267,"t":"Install Continue","u":"/1.0.0/user-guide/apps/continue","h":"#install-continue","p":1263},{"i":1269,"t":"Configure Continue","u":"/1.0.0/user-guide/apps/continue","h":"#configure-continue","p":1263},{"i":1271,"t":"Use the plugin","u":"/1.0.0/user-guide/apps/continue","h":"#use-the-plugin","p":1263},{"i":1275,"t":"Prerequisites","u":"/1.0.0/user-guide/apps/agent-zero","h":"#prerequisites","p":1273},{"i":1277,"t":"Configure the agent","u":"/1.0.0/user-guide/apps/agent-zero","h":"#configure-the-agent","p":1273},{"i":1279,"t":"Run the agent","u":"/1.0.0/user-guide/apps/agent-zero","h":"#run-the-agent","p":1273},{"i":1281,"t":"Example 1","u":"/1.0.0/user-guide/apps/agent-zero","h":"#example-1","p":1273},{"i":1283,"t":"Example 2","u":"/1.0.0/user-guide/apps/agent-zero","h":"#example-2","p":1273},{"i":1285,"t":"Example 3","u":"/1.0.0/user-guide/apps/agent-zero","h":"#example-3","p":1273},{"i":1287,"t":"Example 4","u":"/1.0.0/user-guide/apps/agent-zero","h":"#example-4","p":1273},{"i":1289,"t":"Example 5","u":"/1.0.0/user-guide/apps/agent-zero","h":"#example-5","p":1273},{"i":1293,"t":"Prerequisites","u":"/1.0.0/user-guide/apps/flowiseai-tool-call","h":"#prerequisites","p":1291},{"i":1295,"t":"Start a FlowiseAI server","u":"/1.0.0/user-guide/apps/flowiseai-tool-call","h":"#start-a-flowiseai-server","p":1291},{"i":1297,"t":"Build a chatbot for realtime IP lookup","u":"/1.0.0/user-guide/apps/flowiseai-tool-call","h":"#build-a-chatbot-for-realtime-ip-lookup","p":1291},{"i":1299,"t":"Give it a 
try","u":"/1.0.0/user-guide/apps/flowiseai-tool-call","h":"#give-it-a-try","p":1291},{"i":1303,"t":"Prerequisites","u":"/1.0.0/user-guide/apps/llamaparse","h":"#prerequisites","p":1301},{"i":1305,"t":"Steps","u":"/1.0.0/user-guide/apps/llamaparse","h":"#steps","p":1301},{"i":1309,"t":"Steps","u":"/1.0.0/user-guide/apps/llamaedgebook","h":"#steps","p":1307},{"i":1313,"t":"Steps","u":"/1.0.0/user-guide/apps/lobechat","h":"#steps","p":1311},{"i":1317,"t":"Prerequisites","u":"/1.0.0/user-guide/apps/llamatutor","h":"#prerequisites","p":1315},{"i":1319,"t":"Run the agent","u":"/1.0.0/user-guide/apps/llamatutor","h":"#run-the-agent","p":1315},{"i":1323,"t":"Prerequisites","u":"/1.0.0/user-guide/apps/obsidian","h":"#prerequisites","p":1321},{"i":1325,"t":"Obsidian-local-gpt Plugin Setup","u":"/1.0.0/user-guide/apps/obsidian","h":"#obsidian-local-gpt-plugin-setup","p":1321},{"i":1327,"t":"Install the Obsidian-local-gpt Plugin","u":"/1.0.0/user-guide/apps/obsidian","h":"#install-the-obsidian-local-gpt-plugin","p":1321},{"i":1329,"t":"Configure the Plugin","u":"/1.0.0/user-guide/apps/obsidian","h":"#configure-the-plugin","p":1321},{"i":1331,"t":"Configure Obsidian Hotkey","u":"/1.0.0/user-guide/apps/obsidian","h":"#configure-obsidian-hotkey","p":1321},{"i":1333,"t":"Use Cases","u":"/1.0.0/user-guide/apps/obsidian","h":"#use-cases","p":1321},{"i":1334,"t":"Text Continuation","u":"/1.0.0/user-guide/apps/obsidian","h":"#text-continuation","p":1321},{"i":1336,"t":"Summarization","u":"/1.0.0/user-guide/apps/obsidian","h":"#summarization","p":1321},{"i":1338,"t":"Spelling and Grammar Check","u":"/1.0.0/user-guide/apps/obsidian","h":"#spelling-and-grammar-check","p":1321},{"i":1340,"t":"Extract Action Items","u":"/1.0.0/user-guide/apps/obsidian","h":"#extract-action-items","p":1321},{"i":1342,"t":"General Assistance","u":"/1.0.0/user-guide/apps/obsidian","h":"#general-assistance","p":1321},{"i":1344,"t":"Try it 
now!","u":"/1.0.0/user-guide/apps/obsidian","h":"#try-it-now","p":1321},{"i":1348,"t":"Prerequisites","u":"/1.0.0/user-guide/apps/openwebui","h":"#prerequisites","p":1346},{"i":1350,"t":"Start the Open WebUI on your machine","u":"/1.0.0/user-guide/apps/openwebui","h":"#start-the-open-webui-on-your-machine","p":1346},{"i":1352,"t":"Use Open WebUI as a Chatbot UI","u":"/1.0.0/user-guide/apps/openwebui","h":"#use-open-webui-as-a-chatbot-ui","p":1346},{"i":1354,"t":"Use Open WebUI as a client-side RAG tool","u":"/1.0.0/user-guide/apps/openwebui","h":"#use-open-webui-as-a-client-side-rag-tool","p":1346},{"i":1358,"t":"Prerequisites","u":"/1.0.0/user-guide/apps/stockbot","h":"#prerequisites","p":1356},{"i":1360,"t":"Run the agent","u":"/1.0.0/user-guide/apps/stockbot","h":"#run-the-agent","p":1356},{"i":1364,"t":"Prepare the environment","u":"/1.0.0/user-guide/apps/translation-agent","h":"#prepare-the-environment","p":1362},{"i":1366,"t":"Prepare your translation task","u":"/1.0.0/user-guide/apps/translation-agent","h":"#prepare-your-translation-task","p":1362},{"i":1368,"t":"Translate","u":"/1.0.0/user-guide/apps/translation-agent","h":"#translate","p":1362},{"i":1372,"t":"Prerequisites","u":"/1.0.0/user-guide/apps/zed","h":"#prerequisites","p":1370},{"i":1374,"t":"Configure Zed","u":"/1.0.0/user-guide/apps/zed","h":"#configure-zed","p":1370},{"i":1376,"t":"Use Zed","u":"/1.0.0/user-guide/apps/zed","h":"","p":1370},{"i":1380,"t":"Prerequisites","u":"/1.0.0/user-guide/apps/llamacoder","h":"#prerequisites","p":1378},{"i":1382,"t":"Run the agent","u":"/1.0.0/user-guide/apps/llamacoder","h":"#run-the-agent","p":1378},{"i":1386,"t":"Public Gaia domains","u":"/1.0.0/user-guide/nodes","h":"#public-gaia-domains","p":1384},{"i":1387,"t":"LLM: Llama 8b","u":"/1.0.0/user-guide/nodes","h":"#llm-llama-8b","p":1384},{"i":1389,"t":"Voice-to-text: Whisper v2 large","u":"/1.0.0/user-guide/nodes","h":"#voice-to-text-whisper-v2-large","p":1384},{"i":1391,"t":"Text-to-image: Realistic 
vision","u":"/1.0.0/user-guide/nodes","h":"#text-to-image-realistic-vision","p":1384},{"i":1393,"t":"Text-to-voice: GPT-SoVITS","u":"/1.0.0/user-guide/nodes","h":"#text-to-voice-gpt-sovits","p":1384},{"i":1395,"t":"Coding assistant agents","u":"/1.0.0/user-guide/nodes","h":"#coding-assistant-agents","p":1384},{"i":1396,"t":"Coder","u":"/1.0.0/user-guide/nodes","h":"#coder","p":1384},{"i":1398,"t":"Rust Coder","u":"/1.0.0/user-guide/nodes","h":"#rust-coder","p":1384},{"i":1400,"t":"Alternative LLM domains","u":"/1.0.0/user-guide/nodes","h":"#alternative-llm-domains","p":1384},{"i":1401,"t":"Llama 3b","u":"/1.0.0/user-guide/nodes","h":"#llama-3b","p":1384},{"i":1403,"t":"Qwen 7b","u":"/1.0.0/user-guide/nodes","h":"#qwen-7b","p":1384},{"i":1405,"t":"Qwen 72b","u":"/1.0.0/user-guide/nodes","h":"#qwen-72b","p":1384},{"i":1409,"t":"Web-based chatbot","u":"/1.0.0/user-guide/mynode","h":"#web-based-chatbot","p":1407},{"i":1411,"t":"OpenAI API replacement","u":"/1.0.0/user-guide/mynode","h":"#openai-api-replacement","p":1407}],"index":{"version":"2.3.9","fields":["t"],"fieldVectors":[["t/886",[0,3.844,1,3.844,2,3.844,3,2.997,4,2.343]],["t/888",[3,3.363,4,2.629,5,2.629,6,3.044]],["t/892",[7,3.304]],["t/894",[8,2.995,9,3.094,10,4.078]],["t/896",[5,2.629,9,2.716,11,3.189,12,2.812]],["t/898",[13,5.307]],["t/900",[5,2.995,9,3.094,12,3.203]],["t/902",[14,4.028,15,2.713]],["t/906",[5,2.629,6,3.044,16,4.314,17,3.58]],["t/908",[17,2.877,18,3.467,19,2.877,20,3.111,21,3.467,22,2.563]],["t/910",[23,5.708,24,5.123]],["t/912",[6,3.468,25,4.41,26,4.914]],["t/914",[19,4.078,20,4.41,27,4.914]],["t/918",[7,3.304]],["t/920",[8,2.995,9,3.094,10,4.078]],["t/922",[5,2.629,9,2.716,11,3.189,12,2.812]],["t/924",[13,5.307]],["t/926",[5,2.995,9,3.094,12,3.203]],["t/928",[14,4.028,15,2.713]],["t/932",[3,2.702,4,3.211,28,2.702,29,3.467,30,3.467]],["t/933",[28,3.83,31,3.094,32,4.914]],["t/935",[28,3.83,33,3.203,34,4.914]],["t/937",[3,3.363,4,2.629,5,2.629,6,3.044]],["t/943",[14,4.028,15,2.713]],["t/944",
[19,5.651]],["t/946",[35,2.77,36,5.123]],["t/948",[36,5.123,37,3.864]],["t/950",[38,6.808]],["t/954",[15,2.05,37,2.92,39,4.314,40,3.58]],["t/958",[41,2.563,42,3.111,43,3.111,44,3.467,45,2.447,46,3.467]],["t/960",[47,4.45,48,4.22]],["t/962",[5,2.113,24,3.111,42,3.111,43,3.111,49,3.467,50,2.877]],["t/964",[51,6.808]],["t/966",[52,6.808]],["t/970",[4,2.995,53,4.914,54,3.633]],["t/972",[4,2.995,12,3.203,55,4.41]],["t/974",[35,2.385,56,4.41,57,4.078]],["t/977",[58,6.808]],["t/979",[59,5.651]],["t/981",[60,3.327,61,4.914,62,4.914]],["t/983",[35,2.77,63,3.594]],["t/985",[63,3.594,64,5.708]],["t/987",[63,3.594,65,5.708]],["t/989",[66,4.314,67,4.314,68,3.871,69,4.314]],["t/991",[70,6.11]],["t/995",[7,3.304]],["t/997",[8,2.995,9,3.094,10,4.078]],["t/999",[5,2.629,9,2.716,11,3.189,12,2.812]],["t/1001",[13,5.307]],["t/1003",[5,2.995,9,3.094,12,3.203]],["t/1005",[14,4.028,15,2.713]],["t/1009",[71,6.808]],["t/1011",[72,5.651]],["t/1013",[73,6.808]],["t/1015",[8,4.15]],["t/1017",[74,6.11]],["t/1019",[57,5.651]],["t/1021",[47,5.307]],["t/1025",[75,3.912]],["t/1027",[35,1.866,63,2.42,72,3.19,75,2.209,76,3.844]],["t/1029",[35,1.866,63,2.42,72,3.19,75,2.209,77,3.844]],["t/1031",[35,2.093,56,3.871,63,2.716,78,4.314]],["t/1033",[79,6.808]],["t/1035",[75,3.279,80,5.708]],["t/1037",[13,3.83,75,2.823,81,4.914]],["t/1041",[82,6.808]],["t/1043",[83,6.11]],["t/1045",[84,6.11]],["t/1047",[85,6.808]],["t/1051",[7,3.304]],["t/1053",[35,2.77,75,3.279]],["t/1055",[14,4.028,15,2.713]],["t/1059",[83,3.111,86,1.758,87,3.111,88,3.467,89,3.111,90,3.111]],["t/1061",[84,3.871,86,2.187,89,3.871,90,3.871]],["t/1065",[50,4.078,91,4.914,92,3.203]],["t/1067",[57,4.737,93,5.708]],["t/1069",[22,4.22,94,5.123]],["t/1071",[17,4.078,47,3.83,94,4.41]],["t/1073",[95,5.708,96,5.708]],["t/1075",[14,4.028,15,2.713]],["t/1081",[97,4.914,98,4.914,99,4.914]],["t/1082",[31,2.716,75,2.478,87,3.871,100,4.314]],["t/1084",[33,2.812,75,2.478,101,3.871,102,4.314]],["t/1086",[103,5.123,104,5.708]],["t/1090",[35,2.77,105,5.708]],[
"t/1092",[35,1.866,106,3.844,107,5.109,108,3.844]],["t/1094",[37,3.864,109,4.737]],["t/1096",[15,1.648,35,1.682,37,2.347,109,2.877,110,3.467,111,3.111]],["t/1098",[15,1.5,37,3.319,68,2.833,109,2.62,111,2.833,112,1.565]],["t/1100",[113,5.708,114,5.708]],["t/1103",[101,3.871,115,4.314,116,4.314,117,3.363]],["t/1105",[11,2.563,12,2.26,118,2.563,119,3.467,120,3.467,121,3.467]],["t/1107",[8,1.632,35,1.3,118,1.98,122,2.088,123,2.678,124,2.678,125,2.678,126,1.539,127,2.678]],["t/1109",[4,2.218,31,1.374,60,1.477,117,1.701,118,1.613,128,2.182,129,2.182,130,2.182,131,2.182,132,2.182,133,2.182]],["t/1111",[11,3.189,118,3.189,134,4.314,135,4.314]],["t/1113",[4,2.995,122,3.83,136,4.914]],["t/1115",[4,2.629,60,2.92,118,3.189,122,3.363]],["t/1117",[4,2.343,60,2.603,122,2.997,137,3.844,138,3.844]],["t/1119",[75,2.817,126,1.814,139,3.157,140,3.157,141,3.157,142,3.157]],["t/1123",[8,3.479,143,5.708]],["t/1125",[8,2.995,74,4.41,144,4.914]],["t/1127",[35,2.385,145,4.41,146,4.41]],["t/1129",[35,2.385,107,4.41,146,4.41]],["t/1131",[35,2.093,41,3.189,147,3.871,148,3.58]],["t/1135",[149,6.808]],["t/1137",[150,4.914,151,4.914,152,4.914]],["t/1141",[22,3.189,59,3.58,112,2.139,153,2.629]],["t/1143",[31,1.825,45,2.045,86,1.469,112,1.437,153,1.766,154,2.259,155,1.962,156,2.259]],["t/1145",[15,1.273,31,2.71,35,1.3,45,1.89,63,1.686,86,1.358,155,1.813,156,2.088]],["t/1147",[15,1.183,31,1.567,33,1.623,45,1.757,86,1.262,112,1.234,153,1.517,155,1.685,156,1.941,157,2.066]],["t/1149",[33,2.989,86,1.469,112,1.437,153,1.766,154,2.259,158,2.405,159,2.405]],["t/1151",[15,1.273,31,1.686,33,2.806,35,1.3,63,1.686,86,1.358,158,2.223,159,2.223]],["t/1153",[15,1.105,33,3.193,86,1.945,112,1.153,153,1.417,157,1.93,158,1.93,159,1.93]],["t/1155",[48,1.719,86,1.179,112,1.153,153,1.417,154,1.813,155,2.598,160,1.93,161,1.93,162,2.325,163,2.325]],["t/1157",[15,1.183,31,1.567,35,1.208,63,1.567,86,1.262,155,2.747,160,2.066,161,2.066,164,2.234]],["t/1159",[15,1.037,33,1.422,86,1.106,112,1.082,153,1.33,155,2.464,157,1.811,1
60,1.811,161,1.811,164,1.958,165,2.182]],["t/1161",[153,2.995,166,4.914,167,4.914]],["t/1163",[70,6.11]],["t/1167",[126,3.279,168,5.708]],["t/1169",[169,4.914,170,4.914,171,4.914]],["t/1175",[40,3.19,41,2.842,112,1.906,172,3.844,173,3.844]],["t/1177",[126,2.823,174,4.914,175,4.41]],["t/1181",[15,3.236]],["t/1185",[7,3.304]],["t/1187",[86,2.491,112,2.437,154,3.83]],["t/1189",[112,2.83,126,3.279]],["t/1191",[145,5.123,176,5.708]],["t/1195",[7,3.304]],["t/1197",[75,3.279,177,5.123]],["t/1199",[92,3.721,177,5.123]],["t/1201",[126,3.279,178,4.22]],["t/1205",[7,3.304]],["t/1207",[92,3.721,179,5.123]],["t/1209",[126,3.279,179,5.123]],["t/1213",[15,3.236]],["t/1217",[7,3.304]],["t/1219",[8,2.995,180,4.41,181,4.41]],["t/1221",[41,3.189,182,3.871,183,3.871,184,3.363]],["t/1223",[183,3.871,185,4.314,186,4.314,187,4.314]],["t/1225",[48,3.189,188,3.871,189,3.871,190,3.363]],["t/1227",[6,3.044,48,3.189,188,3.871,190,3.363]],["t/1229",[50,4.078,182,4.41,191,4.914]],["t/1231",[192,5.123,193,4.737]],["t/1233",[103,5.123,194,4.028]],["t/1236",[59,5.651]],["t/1238",[195,6.808]],["t/1239",[189,6.11]],["t/1241",[6,4.805]],["t/1243",[25,6.11]],["t/1245",[48,5.033]],["t/1247",[35,2.77,196,5.708]],["t/1249",[197,5.708,198,5.123]],["t/1253",[7,3.304]],["t/1255",[86,2.894,112,2.83]],["t/1259",[117,3.83,199,4.078,200,4.914]],["t/1261",[35,2.093,117,3.363,190,3.363,199,3.58]],["t/1265",[7,3.304]],["t/1267",[75,3.279,201,4.737]],["t/1269",[92,3.721,201,4.737]],["t/1271",[126,3.279,178,4.22]],["t/1275",[7,3.304]],["t/1277",[92,3.721,112,2.83]],["t/1279",[86,2.894,112,2.83]],["t/1281",[31,3.594,194,4.028]],["t/1283",[33,3.721,194,4.028]],["t/1285",[155,3.864,194,4.028]],["t/1287",[194,4.028,202,5.708]],["t/1289",[194,4.028,203,5.708]],["t/1293",[7,3.304]],["t/1295",[8,2.995,180,4.41,181,4.41]],["t/1297",[41,2.842,184,2.997,204,3.844,205,3.844,206,3.844]],["t/1299",[192,5.123,193,4.737]],["t/1303",[7,3.304]],["t/1305",[15,3.236]],["t/1309",[15,3.236]],["t/1313",[15,3.236]],["t/1317",[7,3.304]],["t
/1319",[86,2.894,112,2.83]],["t/1323",[7,3.304]],["t/1325",[148,3.19,178,2.842,207,3.19,208,3.19,209,3.844]],["t/1327",[75,2.209,148,3.19,178,2.842,207,3.19,208,3.19]],["t/1329",[92,3.721,178,4.22]],["t/1331",[92,3.203,207,4.078,210,4.914]],["t/1333",[126,3.279,175,5.123]],["t/1334",[54,4.22,201,4.737]],["t/1336",[211,6.808]],["t/1338",[212,4.914,213,4.914,214,4.914]],["t/1340",[215,4.914,216,4.914,217,4.914]],["t/1342",[55,5.123,218,5.123]],["t/1344",[193,4.737,219,5.708]],["t/1348",[7,3.304]],["t/1350",[8,2.629,60,2.92,220,3.58,221,4.314]],["t/1352",[60,2.603,126,2.209,184,2.997,220,3.19,222,3.844]],["t/1354",[28,2.461,60,2.137,126,1.814,220,2.62,223,3.157,224,3.157,225,3.157]],["t/1358",[7,3.304]],["t/1360",[86,2.894,112,2.83]],["t/1364",[226,5.123,227,5.708]],["t/1366",[153,2.995,226,4.41,228,4.914]],["t/1368",[153,4.15]],["t/1372",[7,3.304]],["t/1374",[92,3.721,229,5.123]],["t/1376",[126,3.279,229,5.123]],["t/1380",[7,3.304]],["t/1382",[86,2.894,112,2.83]],["t/1386",[37,3.327,40,4.078,230,4.914]],["t/1387",[22,3.633,45,3.468,156,3.83]],["t/1389",[54,2.842,231,3.45,232,3.844,233,3.844,234,3.844]],["t/1391",[54,3.189,147,3.871,235,4.314,236,4.314]],["t/1393",[54,3.189,208,3.58,231,3.871,237,4.314]],["t/1395",[112,2.437,198,4.41,218,4.41]],["t/1396",[238,6.11]],["t/1398",[238,5.123,239,5.708]],["t/1400",[22,3.633,37,3.327,240,4.914]],["t/1401",[45,4.028,241,5.708]],["t/1403",[242,5.123,243,5.708]],["t/1405",[242,5.123,244,5.708]],["t/1409",[47,3.83,184,3.83,245,4.914]],["t/1411",[190,3.83,199,4.078,246,4.914]]],"invertedIndex":[["04",{"_index":99,"t":{"1081":{"position":[[10,2]]}}}],["1",{"_index":31,"t":{"933":{"position":[[6,1]]},"1082":{"position":[[0,1]]},"1109":{"position":[[32,1]]},"1143":{"position":[[5,1]]},"1145":{"position":[[5,1],[7,1]]},"1147":{"position":[[5,1]]},"1151":{"position":[[7,1]]},"1157":{"position":[[7,1]]},"1281":{"position":[[8,1]]}}}],["128k",{"_index":164,"t":{"1157":{"position":[[29,4]]},"1159":{"position":[[68,4]]}}}],["2",{"_index":3
3,"t":{"935":{"position":[[6,1]]},"1084":{"position":[[0,1]]},"1147":{"position":[[7,1]]},"1149":{"position":[[5,1],[46,1]]},"1151":{"position":[[5,1],[21,1]]},"1153":{"position":[[5,1],[7,1],[58,1]]},"1159":{"position":[[7,1]]},"1283":{"position":[[8,1]]}}}],["22",{"_index":98,"t":{"1081":{"position":[[7,2]]}}}],["27b",{"_index":159,"t":{"1149":{"position":[[48,3]]},"1151":{"position":[[23,3]]},"1153":{"position":[[60,3]]}}}],["3",{"_index":155,"t":{"1143":{"position":[[46,1]]},"1145":{"position":[[22,1]]},"1147":{"position":[[51,1]]},"1155":{"position":[[5,1],[44,1]]},"1157":{"position":[[5,1],[20,1]]},"1159":{"position":[[5,1],[59,1]]},"1285":{"position":[[8,1]]}}}],["3b",{"_index":241,"t":{"1401":{"position":[[6,2]]}}}],["4",{"_index":202,"t":{"1287":{"position":[[8,1]]}}}],["5",{"_index":203,"t":{"1289":{"position":[[8,1]]}}}],["72b",{"_index":244,"t":{"1405":{"position":[[5,3]]}}}],["7b",{"_index":243,"t":{"1403":{"position":[[5,2]]}}}],["8080",{"_index":125,"t":{"1107":{"position":[[52,4]]}}}],["8b",{"_index":156,"t":{"1143":{"position":[[48,2]]},"1145":{"position":[[24,2]]},"1147":{"position":[[53,2]]},"1387":{"position":[[11,2]]}}}],["abstract",{"_index":58,"t":{"977":{"position":[[0,8]]}}}],["action",{"_index":216,"t":{"1340":{"position":[[8,6]]}}}],["advanc",{"_index":174,"t":{"1177":{"position":[[0,8]]}}}],["agent",{"_index":112,"t":{"1098":{"position":[[35,5]]},"1141":{"position":[[36,5]]},"1143":{"position":[[28,6]]},"1147":{"position":[[29,5]]},"1149":{"position":[[28,6]]},"1153":{"position":[[29,5]]},"1155":{"position":[[28,6]]},"1159":{"position":[[39,5]]},"1175":{"position":[[14,5]]},"1187":{"position":[[13,5]]},"1189":{"position":[[8,5]]},"1255":{"position":[[8,5]]},"1277":{"position":[[14,5]]},"1279":{"position":[[8,5]]},"1319":{"position":[[8,5]]},"1360":{"position":[[8,5]]},"1382":{"position":[[8,5]]},"1395":{"position":[[17,6]]}}}],["agentkit",{"_index":152,"t":{"1137":{"position":[[19,8]]}}}],["ai",{"_index":68,"t":{"989":{"position":[[26,2]]
},"1098":{"position":[[32,2]]}}}],["altern",{"_index":240,"t":{"1400":{"position":[[0,11]]}}}],["answer",{"_index":49,"t":{"962":{"position":[[22,6]]}}}],["api",{"_index":190,"t":{"1225":{"position":[[23,3]]},"1227":{"position":[[28,3]]},"1261":{"position":[[16,3]]},"1411":{"position":[[7,3]]}}}],["ask",{"_index":23,"t":{"910":{"position":[[0,3]]}}}],["asset",{"_index":69,"t":{"989":{"position":[[29,6]]}}}],["assist",{"_index":218,"t":{"1342":{"position":[[8,10]]},"1395":{"position":[[7,9]]}}}],["aw",{"_index":89,"t":{"1059":{"position":[[30,3]]},"1061":{"position":[[19,3]]}}}],["base",{"_index":47,"t":{"960":{"position":[[8,4]]},"1021":{"position":[[0,4]]},"1071":{"position":[[19,4]]},"1409":{"position":[[4,5]]}}}],["bind",{"_index":105,"t":{"1090":{"position":[[0,4]]}}}],["bot",{"_index":151,"t":{"1137":{"position":[[11,3]]}}}],["build",{"_index":41,"t":{"958":{"position":[[0,5]]},"1131":{"position":[[0,5]]},"1175":{"position":[[0,5]]},"1221":{"position":[[0,5]]},"1297":{"position":[[0,5]]}}}],["case",{"_index":175,"t":{"1177":{"position":[[13,4]]},"1333":{"position":[[4,5]]}}}],["chang",{"_index":146,"t":{"1127":{"position":[[5,7]]},"1129":{"position":[[0,6]]}}}],["chat",{"_index":189,"t":{"1225":{"position":[[12,4]]},"1239":{"position":[[0,4]]}}}],["chatbot",{"_index":184,"t":{"1221":{"position":[[22,7]]},"1297":{"position":[[8,7]]},"1352":{"position":[[20,7]]},"1409":{"position":[[10,7]]}}}],["check",{"_index":214,"t":{"1338":{"position":[[21,5]]}}}],["cli",{"_index":81,"t":{"1037":{"position":[[0,3]]}}}],["client",{"_index":223,"t":{"1354":{"position":[[20,6]]}}}],["clone",{"_index":165,"t":{"1159":{"position":[[9,5]]}}}],["code",{"_index":198,"t":{"1249":{"position":[[7,5]]},"1395":{"position":[[0,6]]}}}],["codegpt",{"_index":177,"t":{"1197":{"position":[[8,7]]},"1199":{"position":[[10,7]]}}}],["coder",{"_index":238,"t":{"1396":{"position":[[0,5]]},"1398":{"position":[[5,5]]}}}],["collect",{"_index":11,"t":{"896":{"position":[[18,10]]},"922":{"position":[[18,
10]]},"999":{"position":[[18,10]]},"1105":{"position":[[23,10]]},"1111":{"position":[[29,10]]}}}],["compon",{"_index":66,"t":{"989":{"position":[[0,9]]}}}],["conclus",{"_index":70,"t":{"991":{"position":[[0,10]]},"1163":{"position":[[0,10]]}}}],["config",{"_index":57,"t":{"974":{"position":[[16,6]]},"1019":{"position":[[0,6]]},"1067":{"position":[[4,6]]}}}],["configur",{"_index":92,"t":{"1065":{"position":[[8,14]]},"1199":{"position":[[0,9]]},"1207":{"position":[[0,9]]},"1269":{"position":[[0,9]]},"1277":{"position":[[0,9]]},"1329":{"position":[[0,9]]},"1331":{"position":[[0,9]]},"1374":{"position":[[0,9]]}}}],["connect",{"_index":188,"t":{"1225":{"position":[[0,7]]},"1227":{"position":[[0,7]]}}}],["content",{"_index":2,"t":{"886":{"position":[[14,7]]}}}],["context",{"_index":163,"t":{"1155":{"position":[[58,7]]}}}],["continu",{"_index":201,"t":{"1267":{"position":[[8,8]]},"1269":{"position":[[10,8]]},"1334":{"position":[[5,12]]}}}],["convert",{"_index":29,"t":{"932":{"position":[[9,7]]}}}],["cpp",{"_index":46,"t":{"958":{"position":[[39,3]]}}}],["cpu",{"_index":84,"t":{"1045":{"position":[[0,3]]},"1061":{"position":[[10,3]]}}}],["creat",{"_index":5,"t":{"888":{"position":[[0,6]]},"896":{"position":[[0,6]]},"900":{"position":[[0,6]]},"906":{"position":[[13,8]]},"922":{"position":[[0,6]]},"926":{"position":[[0,6]]},"937":{"position":[[0,6]]},"962":{"position":[[0,6]]},"999":{"position":[[0,6]]},"1003":{"position":[[0,6]]}}}],["creator",{"_index":38,"t":{"950":{"position":[[0,8]]}}}],["cuda",{"_index":101,"t":{"1084":{"position":[[14,4]]},"1103":{"position":[[23,4]]}}}],["current",{"_index":78,"t":{"1031":{"position":[[11,7]]}}}],["cursor",{"_index":179,"t":{"1207":{"position":[[10,6]]},"1209":{"position":[[4,6]]}}}],["custom",{"_index":95,"t":{"1073":{"position":[[0,9]]}}}],["databas",{"_index":10,"t":{"894":{"position":[[15,8]]},"920":{"position":[[15,8]]},"997":{"position":[[15,8]]}}}],["decentr",{"_index":62,"t":{"981":{"position":[[16,16]]}}}],["default",{"_index
":135,"t":{"1111":{"position":[[21,7]]}}}],["demo",{"_index":154,"t":{"1143":{"position":[[0,4]]},"1149":{"position":[[0,4]]},"1155":{"position":[[0,4]]},"1187":{"position":[[8,4]]}}}],["deni",{"_index":140,"t":{"1119":{"position":[[11,6]]}}}],["devic",{"_index":108,"t":{"1092":{"position":[[25,6]]}}}],["directori",{"_index":133,"t":{"1109":{"position":[[83,9]]}}}],["doc",{"_index":186,"t":{"1223":{"position":[[16,4]]}}}],["document",{"_index":182,"t":{"1221":{"position":[[8,9]]},"1229":{"position":[[12,9]]}}}],["domain",{"_index":37,"t":{"948":{"position":[[0,6]]},"954":{"position":[[26,6]]},"1094":{"position":[[7,6]]},"1096":{"position":[[16,6]]},"1098":{"position":[[16,6],[41,7]]},"1386":{"position":[[12,7]]},"1400":{"position":[[16,7]]}}}],["driver",{"_index":100,"t":{"1082":{"position":[[21,6]]}}}],["eliza",{"_index":173,"t":{"1175":{"position":[[25,5]]}}}],["embed",{"_index":6,"t":{"888":{"position":[[7,10]]},"906":{"position":[[32,10]]},"912":{"position":[[17,10]]},"937":{"position":[[7,10]]},"1227":{"position":[[12,9]]},"1241":{"position":[[0,9]]}}}],["enabl",{"_index":88,"t":{"1059":{"position":[[22,7]]}}}],["endpoint",{"_index":195,"t":{"1238":{"position":[[0,9]]}}}],["environ",{"_index":227,"t":{"1364":{"position":[[12,11]]}}}],["error",{"_index":122,"t":{"1107":{"position":[[33,5]]},"1113":{"position":[[9,5]]},"1115":{"position":[[30,5]]},"1117":{"position":[[26,5]]}}}],["evalu",{"_index":166,"t":{"1161":{"position":[[0,10]]}}}],["exampl",{"_index":194,"t":{"1233":{"position":[[5,8]]},"1281":{"position":[[0,7]]},"1283":{"position":[[0,7]]},"1285":{"position":[[0,7]]},"1287":{"position":[[0,7]]},"1289":{"position":[[0,7]]}}}],["exit",{"_index":127,"t":{"1107":{"position":[[68,4]]}}}],["extract",{"_index":215,"t":{"1340":{"position":[[0,7]]}}}],["fail",{"_index":118,"t":{"1105":{"position":[[0,6]]},"1107":{"position":[[0,6]]},"1109":{"position":[[13,6]]},"1111":{"position":[[0,6]]},"1115":{"position":[[5,6]]}}}],["file",{"_index":4,"t":{"886":{"position":[
[36,4]]},"888":{"position":[[36,5]]},"932":{"position":[[23,4],[42,4]]},"937":{"position":[[36,5]]},"970":{"position":[[18,4]]},"972":{"position":[[22,4]]},"1109":{"position":[[61,4],[75,4]]},"1113":{"position":[[0,4]]},"1115":{"position":[[24,4]]},"1117":{"position":[[19,5]]}}}],["find",{"_index":116,"t":{"1103":{"position":[[18,4]]}}}],["fine",{"_index":42,"t":{"958":{"position":[[10,4]]},"962":{"position":[[37,4]]}}}],["finetun",{"_index":51,"t":{"964":{"position":[[0,8]]}}}],["flowis",{"_index":185,"t":{"1223":{"position":[[8,7]]}}}],["flowiseai",{"_index":180,"t":{"1219":{"position":[[8,9]]},"1295":{"position":[[8,9]]}}}],["gaia",{"_index":40,"t":{"954":{"position":[[21,4]]},"1175":{"position":[[35,4]]},"1386":{"position":[[7,4]]}}}],["gaianet",{"_index":63,"t":{"983":{"position":[[0,7]]},"985":{"position":[[0,7]]},"987":{"position":[[0,7]]},"1027":{"position":[[30,7]]},"1029":{"position":[[32,7]]},"1031":{"position":[[19,7]]},"1145":{"position":[[27,7]]},"1151":{"position":[[27,7]]},"1157":{"position":[[34,7]]}}}],["gemma",{"_index":158,"t":{"1149":{"position":[[40,5]]},"1151":{"position":[[15,5]]},"1153":{"position":[[52,5]]}}}],["gener",{"_index":55,"t":{"972":{"position":[[0,8]]},"1342":{"position":[[0,7]]}}}],["give",{"_index":192,"t":{"1231":{"position":[[0,4]]},"1299":{"position":[[0,4]]}}}],["gpt",{"_index":208,"t":{"1325":{"position":[[15,3]]},"1327":{"position":[[27,3]]},"1393":{"position":[[15,3]]}}}],["gptpdf",{"_index":34,"t":{"935":{"position":[[9,6]]}}}],["gpu",{"_index":83,"t":{"1043":{"position":[[0,3]]},"1059":{"position":[[18,3]]}}}],["grammar",{"_index":213,"t":{"1338":{"position":[[13,7]]}}}],["help",{"_index":71,"t":{"1009":{"position":[[0,4]]}}}],["hotkey",{"_index":210,"t":{"1331":{"position":[[19,6]]}}}],["id",{"_index":107,"t":{"1092":{"position":[[18,2],[32,2]]},"1129":{"position":[[16,2]]}}}],["imag",{"_index":147,"t":{"1131":{"position":[[13,5]]},"1391":{"position":[[8,5]]}}}],["import",{"_index":113,"t":{"1100":{"position":[[0,9]]}
}}],["info",{"_index":196,"t":{"1247":{"position":[[13,4]]}}}],["init",{"_index":73,"t":{"1013":{"position":[[0,4]]}}}],["instal",{"_index":75,"t":{"1025":{"position":[[0,7]]},"1027":{"position":[[0,7]]},"1029":{"position":[[0,7]]},"1035":{"position":[[7,9]]},"1037":{"position":[[20,9]]},"1053":{"position":[[0,10]]},"1082":{"position":[[2,7]]},"1084":{"position":[[2,7]]},"1119":{"position":[[31,9],[51,7]]},"1197":{"position":[[0,7]]},"1267":{"position":[[0,7]]},"1327":{"position":[[0,7]]}}}],["instanc",{"_index":90,"t":{"1059":{"position":[[34,8]]},"1061":{"position":[[23,8]]}}}],["introduct",{"_index":59,"t":{"979":{"position":[[0,12]]},"1141":{"position":[[0,12]]},"1236":{"position":[[0,12]]}}}],["ip",{"_index":205,"t":{"1297":{"position":[[29,2]]}}}],["item",{"_index":217,"t":{"1340":{"position":[[15,5]]}}}],["join",{"_index":109,"t":{"1094":{"position":[[0,4]]},"1096":{"position":[[9,4]]},"1098":{"position":[[9,4]]}}}],["knowledg",{"_index":17,"t":{"906":{"position":[[22,9]]},"908":{"position":[[31,9]]},"1071":{"position":[[9,9]]}}}],["larg",{"_index":234,"t":{"1389":{"position":[[26,5]]}}}],["latest",{"_index":76,"t":{"1027":{"position":[[12,6]]}}}],["launch",{"_index":39,"t":{"954":{"position":[[9,6]]}}}],["libgomp",{"_index":129,"t":{"1109":{"position":[[21,7]]}}}],["librari",{"_index":117,"t":{"1103":{"position":[[28,9]]},"1109":{"position":[[5,7]]},"1259":{"position":[[18,7]]},"1261":{"position":[[20,7]]}}}],["lifecycl",{"_index":18,"t":{"908":{"position":[[0,9]]}}}],["llama",{"_index":45,"t":{"958":{"position":[[33,5]]},"1143":{"position":[[40,5]]},"1145":{"position":[[16,5]]},"1147":{"position":[[45,5]]},"1387":{"position":[[5,5]]},"1401":{"position":[[0,5]]}}}],["llamapars",{"_index":32,"t":{"933":{"position":[[9,10]]}}}],["llm",{"_index":22,"t":{"908":{"position":[[54,3]]},"1069":{"position":[[10,3]]},"1141":{"position":[[20,3]]},"1387":{"position":[[0,3]]},"1400":{"position":[[12,3]]}}}],["load",{"_index":128,"t":{"1109":{"position":[[0,4]]}}}],["local
",{"_index":148,"t":{"1131":{"position":[[19,7]]},"1325":{"position":[[9,5]]},"1327":{"position":[[21,5]]}}}],["long",{"_index":162,"t":{"1155":{"position":[[53,4]]}}}],["lookup",{"_index":206,"t":{"1297":{"position":[[32,6]]}}}],["machin",{"_index":221,"t":{"1350":{"position":[[29,7]]}}}],["maco",{"_index":138,"t":{"1117":{"position":[[35,5]]}}}],["make",{"_index":145,"t":{"1127":{"position":[[0,4]]},"1191":{"position":[[0,4]]}}}],["manag",{"_index":110,"t":{"1096":{"position":[[38,10]]}}}],["mani",{"_index":137,"t":{"1117":{"position":[[9,4]]}}}],["markdown",{"_index":3,"t":{"886":{"position":[[27,8]]},"888":{"position":[[27,8]]},"932":{"position":[[33,8]]},"937":{"position":[[27,8]]}}}],["marketplac",{"_index":67,"t":{"989":{"position":[[10,11]]}}}],["medium",{"_index":161,"t":{"1155":{"position":[[46,6]]},"1157":{"position":[[22,6]]},"1159":{"position":[[61,6]]}}}],["merg",{"_index":52,"t":{"966":{"position":[[0,5]]}}}],["messag",{"_index":123,"t":{"1107":{"position":[[39,7]]}}}],["model",{"_index":48,"t":{"960":{"position":[[13,5]]},"1155":{"position":[[66,5]]},"1225":{"position":[[17,5]]},"1227":{"position":[[22,5]]},"1245":{"position":[[8,5]]}}}],["more",{"_index":103,"t":{"1086":{"position":[[0,4]]},"1233":{"position":[[0,4]]}}}],["network",{"_index":64,"t":{"985":{"position":[[8,7]]}}}],["next",{"_index":14,"t":{"902":{"position":[[0,4]]},"928":{"position":[[0,4]]},"943":{"position":[[0,4]]},"1005":{"position":[[0,4]]},"1055":{"position":[[0,4]]},"1075":{"position":[[0,4]]}}}],["nice",{"_index":170,"t":{"1169":{"position":[[11,4]]}}}],["node",{"_index":35,"t":{"946":{"position":[[0,4]]},"974":{"position":[[11,4]]},"983":{"position":[[8,4]]},"1027":{"position":[[38,4]]},"1029":{"position":[[40,4]]},"1031":{"position":[[27,4]]},"1053":{"position":[[15,4]]},"1090":{"position":[[10,4]]},"1092":{"position":[[13,4]]},"1096":{"position":[[33,4]]},"1107":{"position":[[20,4]]},"1127":{"position":[[20,4]]},"1129":{"position":[[11,4]]},"1131":{"position":[[8,4]]},"114
5":{"position":[[35,4]]},"1151":{"position":[[35,4]]},"1157":{"position":[[42,4]]},"1247":{"position":[[8,4]]},"1261":{"position":[[11,4]]}}}],["note",{"_index":114,"t":{"1100":{"position":[[10,5]]}}}],["now",{"_index":219,"t":{"1344":{"position":[[7,3]]}}}],["nvidia",{"_index":87,"t":{"1059":{"position":[[11,6]]},"1082":{"position":[[14,6]]}}}],["o",{"_index":136,"t":{"1113":{"position":[[7,1]]}}}],["object",{"_index":131,"t":{"1109":{"position":[[54,6]]}}}],["obsidian",{"_index":207,"t":{"1325":{"position":[[0,8]]},"1327":{"position":[[12,8]]},"1331":{"position":[[10,8]]}}}],["open",{"_index":60,"t":{"981":{"position":[[0,4]]},"1109":{"position":[[42,4]]},"1115":{"position":[[15,4]]},"1117":{"position":[[14,4]]},"1350":{"position":[[10,4]]},"1352":{"position":[[4,4]]},"1354":{"position":[[4,4]]}}}],["openai",{"_index":199,"t":{"1259":{"position":[[4,6]]},"1261":{"position":[[4,6]]},"1411":{"position":[[0,6]]}}}],["oper",{"_index":36,"t":{"946":{"position":[[5,9]]},"948":{"position":[[7,9]]}}}],["option",{"_index":13,"t":{"898":{"position":[[0,7]]},"924":{"position":[[0,7]]},"1001":{"position":[[0,7]]},"1037":{"position":[[4,7]]}}}],["os",{"_index":85,"t":{"1047":{"position":[[0,4]]}}}],["page",{"_index":111,"t":{"1096":{"position":[[49,4]]},"1098":{"position":[[49,4]]}}}],["pars",{"_index":0,"t":{"886":{"position":[[0,5]]}}}],["pdf",{"_index":30,"t":{"932":{"position":[[19,3]]}}}],["permiss",{"_index":139,"t":{"1119":{"position":[[0,10]]}}}],["phi",{"_index":160,"t":{"1155":{"position":[[40,3]]},"1157":{"position":[[16,3]]},"1159":{"position":[[55,3]]}}}],["plugin",{"_index":178,"t":{"1201":{"position":[[8,6]]},"1271":{"position":[[8,6]]},"1325":{"position":[[19,6]]},"1327":{"position":[[31,6]]},"1329":{"position":[[14,6]]}}}],["port",{"_index":124,"t":{"1107":{"position":[[47,4]]}}}],["pre",{"_index":91,"t":{"1065":{"position":[[0,3]]}}}],["prepar",{"_index":226,"t":{"1364":{"position":[[0,7]]},"1366":{"position":[[0,7]]}}}],["prerequisit",{"_index":7,"t":{"892":
{"position":[[0,13]]},"918":{"position":[[0,13]]},"995":{"position":[[0,13]]},"1051":{"position":[[0,13]]},"1185":{"position":[[0,13]]},"1195":{"position":[[0,13]]},"1205":{"position":[[0,13]]},"1217":{"position":[[0,13]]},"1253":{"position":[[0,13]]},"1265":{"position":[[0,13]]},"1275":{"position":[[0,13]]},"1293":{"position":[[0,13]]},"1303":{"position":[[0,13]]},"1317":{"position":[[0,13]]},"1323":{"position":[[0,13]]},"1348":{"position":[[0,13]]},"1358":{"position":[[0,13]]},"1372":{"position":[[0,13]]},"1380":{"position":[[0,13]]}}}],["prompt",{"_index":96,"t":{"1073":{"position":[[10,7]]}}}],["protect",{"_index":106,"t":{"1092":{"position":[[0,7]]}}}],["public",{"_index":230,"t":{"1386":{"position":[[0,6]]}}}],["python",{"_index":200,"t":{"1259":{"position":[[11,6]]}}}],["qna",{"_index":183,"t":{"1221":{"position":[[18,3]]},"1223":{"position":[[21,3]]}}}],["qualiti",{"_index":167,"t":{"1161":{"position":[[26,7]]}}}],["queri",{"_index":20,"t":{"908":{"position":[[20,5]]},"914":{"position":[[21,5]]}}}],["question",{"_index":24,"t":{"910":{"position":[[6,8]]},"962":{"position":[[9,8]]}}}],["quick",{"_index":143,"t":{"1123":{"position":[[0,5]]}}}],["quickstart",{"_index":149,"t":{"1135":{"position":[[0,10]]}}}],["qwen",{"_index":242,"t":{"1403":{"position":[[0,4]]},"1405":{"position":[[0,4]]}}}],["rag",{"_index":225,"t":{"1354":{"position":[[32,3]]}}}],["re",{"_index":144,"t":{"1125":{"position":[[9,2]]}}}],["realist",{"_index":235,"t":{"1391":{"position":[[15,9]]}}}],["realtim",{"_index":204,"t":{"1297":{"position":[[20,8]]}}}],["recov",{"_index":119,"t":{"1105":{"position":[[10,7]]}}}],["reduc",{"_index":169,"t":{"1169":{"position":[[0,6]]}}}],["remov",{"_index":134,"t":{"1111":{"position":[[10,6]]}}}],["replac",{"_index":246,"t":{"1411":{"position":[[11,11]]}}}],["resourc",{"_index":104,"t":{"1086":{"position":[[5,9]]}}}],["respons",{"_index":27,"t":{"914":{"position":[[0,8]]}}}],["retriev",{"_index":25,"t":{"912":{"position":[[0,8]]},"1243":{"position":[[0,8]]
}}}],["robust",{"_index":176,"t":{"1191":{"position":[[8,6]]}}}],["run",{"_index":86,"t":{"1059":{"position":[[0,7]]},"1061":{"position":[[0,7]]},"1143":{"position":[[8,7]]},"1145":{"position":[[10,3]]},"1147":{"position":[[9,3]]},"1149":{"position":[[8,7]]},"1151":{"position":[[9,3]]},"1153":{"position":[[9,3],[38,3]]},"1155":{"position":[[8,7]]},"1157":{"position":[[10,3]]},"1159":{"position":[[19,3]]},"1187":{"position":[[0,3]]},"1255":{"position":[[0,3]]},"1279":{"position":[[0,3]]},"1319":{"position":[[0,3]]},"1360":{"position":[[0,3]]},"1382":{"position":[[0,3]]}}}],["rust",{"_index":239,"t":{"1398":{"position":[[0,4]]}}}],["s",{"_index":80,"t":{"1035":{"position":[[5,1]]}}}],["script",{"_index":141,"t":{"1119":{"position":[[41,6]]}}}],["segment",{"_index":53,"t":{"970":{"position":[[0,7]]}}}],["select",{"_index":94,"t":{"1069":{"position":[[0,6]]},"1071":{"position":[[0,6]]}}}],["server",{"_index":181,"t":{"1219":{"position":[[18,6]]},"1295":{"position":[[18,6]]}}}],["set",{"_index":50,"t":{"962":{"position":[[29,3]]},"1065":{"position":[[4,3]]},"1229":{"position":[[0,3]]}}}],["setup",{"_index":209,"t":{"1325":{"position":[[26,5]]}}}],["share",{"_index":130,"t":{"1109":{"position":[[47,6]]}}}],["side",{"_index":224,"t":{"1354":{"position":[[27,4]]}}}],["similar",{"_index":26,"t":{"912":{"position":[[9,7]]}}}],["snapshot",{"_index":12,"t":{"896":{"position":[[29,8]]},"900":{"position":[[16,8]]},"922":{"position":[[29,8]]},"926":{"position":[[16,8]]},"972":{"position":[[13,8]]},"999":{"position":[[29,8]]},"1003":{"position":[[16,8]]},"1105":{"position":[[34,8]]}}}],["sourc",{"_index":61,"t":{"981":{"position":[[5,6]]}}}],["sovit",{"_index":237,"t":{"1393":{"position":[[19,6]]}}}],["specif",{"_index":77,"t":{"1029":{"position":[[12,8]]}}}],["spell",{"_index":212,"t":{"1338":{"position":[[0,8]]}}}],["start",{"_index":8,"t":{"894":{"position":[[0,5]]},"920":{"position":[[0,5]]},"997":{"position":[[0,5]]},"1015":{"position":[[0,5]]},"1107":{"position":[[10,5]]},"11
23":{"position":[[6,5]]},"1125":{"position":[[12,5]]},"1219":{"position":[[0,5]]},"1295":{"position":[[0,5]]},"1350":{"position":[[0,5]]}}}],["statu",{"_index":197,"t":{"1249":{"position":[[0,6]]}}}],["step",{"_index":15,"t":{"902":{"position":[[5,5]]},"928":{"position":[[5,5]]},"943":{"position":[[5,5]]},"954":{"position":[[0,5]]},"1005":{"position":[[5,5]]},"1055":{"position":[[5,5]]},"1075":{"position":[[5,5]]},"1096":{"position":[[0,5]]},"1098":{"position":[[0,5]]},"1145":{"position":[[0,4]]},"1147":{"position":[[0,4]]},"1151":{"position":[[0,4]]},"1153":{"position":[[0,4]]},"1157":{"position":[[0,4]]},"1159":{"position":[[0,4]]},"1181":{"position":[[0,5]]},"1213":{"position":[[0,5]]},"1305":{"position":[[0,5]]},"1309":{"position":[[0,5]]},"1313":{"position":[[0,5]]}}}],["stop",{"_index":74,"t":{"1017":{"position":[[0,4]]},"1125":{"position":[[0,4]]}}}],["subcommand",{"_index":93,"t":{"1067":{"position":[[11,10]]}}}],["such",{"_index":132,"t":{"1109":{"position":[[70,4]]}}}],["summar",{"_index":211,"t":{"1336":{"position":[[0,13]]}}}],["supervis",{"_index":168,"t":{"1167":{"position":[[4,9]]}}}],["supplement",{"_index":21,"t":{"908":{"position":[[41,12]]}}}],["support",{"_index":82,"t":{"1041":{"position":[[0,9]]}}}],["system",{"_index":115,"t":{"1103":{"position":[[4,6]]}}}],["task",{"_index":228,"t":{"1366":{"position":[[25,4]]}}}],["telegram",{"_index":150,"t":{"1137":{"position":[[2,8]]}}}],["templat",{"_index":187,"t":{"1223":{"position":[[25,8]]}}}],["text",{"_index":54,"t":{"970":{"position":[[13,4]]},"1334":{"position":[[0,4]]},"1389":{"position":[[9,4]]},"1391":{"position":[[0,4]]},"1393":{"position":[[0,4]]}}}],["token",{"_index":65,"t":{"987":{"position":[[8,5]]}}}],["tool",{"_index":28,"t":{"932":{"position":[[0,5]]},"933":{"position":[[0,4]]},"935":{"position":[[0,4]]},"1354":{"position":[[36,4]]}}}],["toolkit",{"_index":102,"t":{"1084":{"position":[[19,7]]}}}],["top",{"_index":157,"t":{"1147":{"position":[[38,3]]},"1153":{"position":[[45,3]]},"1159
":{"position":[[48,3]]}}}],["translat",{"_index":153,"t":{"1141":{"position":[[24,11]]},"1143":{"position":[[16,11]]},"1147":{"position":[[17,11]]},"1149":{"position":[[16,11]]},"1153":{"position":[[17,11]]},"1155":{"position":[[16,11]]},"1159":{"position":[[27,11]]},"1161":{"position":[[14,11]]},"1366":{"position":[[13,11]]},"1368":{"position":[[0,9]]}}}],["tri",{"_index":193,"t":{"1231":{"position":[[10,3]]},"1299":{"position":[[10,3]]},"1344":{"position":[[0,3]]}}}],["trump",{"_index":172,"t":{"1175":{"position":[[8,5]]}}}],["tune",{"_index":43,"t":{"958":{"position":[[15,4]]},"962":{"position":[[42,6]]}}}],["ubuntu",{"_index":97,"t":{"1081":{"position":[[0,6]]}}}],["ui",{"_index":222,"t":{"1352":{"position":[[28,2]]}}}],["uninstal",{"_index":79,"t":{"1033":{"position":[[0,9]]}}}],["up",{"_index":191,"t":{"1229":{"position":[[4,2]]}}}],["updat",{"_index":56,"t":{"974":{"position":[[0,6]]},"1031":{"position":[[0,6]]}}}],["url",{"_index":1,"t":{"886":{"position":[[10,3]]}}}],["us",{"_index":126,"t":{"1107":{"position":[[63,3]]},"1119":{"position":[[23,3]]},"1167":{"position":[[0,3]]},"1177":{"position":[[9,3]]},"1189":{"position":[[0,3]]},"1201":{"position":[[0,3]]},"1209":{"position":[[0,3]]},"1271":{"position":[[0,3]]},"1333":{"position":[[0,3]]},"1352":{"position":[[0,3]]},"1354":{"position":[[0,3]]},"1376":{"position":[[0,3]]}}}],["user",{"_index":19,"t":{"908":{"position":[[15,4]]},"914":{"position":[[16,4]]},"944":{"position":[[0,5]]}}}],["util",{"_index":44,"t":{"958":{"position":[[20,7]]}}}],["v2",{"_index":233,"t":{"1389":{"position":[[23,2]]}}}],["valu",{"_index":171,"t":{"1169":{"position":[[16,5]]}}}],["vector",{"_index":9,"t":{"894":{"position":[[8,6]]},"896":{"position":[[11,6]]},"900":{"position":[[9,6]]},"920":{"position":[[8,6]]},"922":{"position":[[11,6]]},"926":{"position":[[9,6]]},"997":{"position":[[8,6]]},"999":{"position":[[11,6]]},"1003":{"position":[[9,6]]}}}],["version",{"_index":72,"t":{"1011":{"position":[[0,7]]},"1027":{"position":[[19,
7]]},"1029":{"position":[[21,7]]}}}],["vision",{"_index":236,"t":{"1391":{"position":[[25,6]]}}}],["voic",{"_index":231,"t":{"1389":{"position":[[0,5]]},"1393":{"position":[[8,5]]}}}],["wasmedg",{"_index":142,"t":{"1119":{"position":[[59,8]]}}}],["web",{"_index":245,"t":{"1409":{"position":[[0,3]]}}}],["webui",{"_index":220,"t":{"1350":{"position":[[15,5]]},"1352":{"position":[[9,5]]},"1354":{"position":[[9,5]]}}}],["whisper",{"_index":232,"t":{"1389":{"position":[[15,7]]}}}],["window",{"_index":120,"t":{"1105":{"position":[[46,7]]}}}],["workflow",{"_index":16,"t":{"906":{"position":[[0,8]]}}}],["wsl",{"_index":121,"t":{"1105":{"position":[[54,3]]}}}],["zed",{"_index":229,"t":{"1374":{"position":[[10,3]]},"1376":{"position":[[4,3]]}}}]],"pipeline":["stemmer"]}},{"documents":[{"i":885,"t":"In this section, we will discuss how to create a vector collection snapshot from a Web URL. First, we will parse the URL to a structured markdown file. Then, we will follow the steps from Knowledge base from a markdown file to create embeddings for your URL.","s":"Knowledge base from a URL","u":"/1.0.0/creator-guide/knowledge/firecrawl","h":"","p":884},{"i":887,"t":"Firecrawl can crawl and convert any website into LLM-ready markdown or structured data. It also supports crawling a URL and all accessible subpages. To use Firecrawl, you need to sign up on Firecrawl and get an API key. First, install the dependencies. We are assuming that you already have Node.js 20+ installed. git clone https://github.com/JYC0413/firecrawl-integration.git cd firecrawl-integration npm install Then, export the API key in the terminal. export FIRECRAWL_KEY=\"your_api_key_here\" Next, we can use the following command line to run the service. node crawlWebToMd.js After the application is running successfully, you will see the prompt appear on the terminal. You can now type your URL in the terminal. Here we have two choices. 
Multiple pages: input your link with / at the end; the program will crawl and convert the page and its subpages into one single markdown file. This option consumes a large amount of API tokens. One single page: input your link without / at the end; the program will crawl and convert only the current page into one single markdown file. The output markdown file will be saved in this folder as output.md.","s":"Parse the URL content to a markdown file","u":"/1.0.0/creator-guide/knowledge/firecrawl","h":"#parse-the-url-content-to-a-markdown-file","p":884},{"i":889,"t":"Please follow the tutorial Knowledge base from a markdown file to convert your markdown file to a snapshot of embeddings that can be imported into a GaiaNet node.","s":"Create embeddings from the markdown files","u":"/1.0.0/creator-guide/knowledge/firecrawl","h":"#create-embeddings-from-the-markdown-files","p":884},{"i":891,"t":"In this section, we will discuss how to create a vector collection snapshot for optimal retrieval of long-form text documents. The approach is to create two columns of text in a CSV file. The first column is the long-form source text from the knowledge document, such as a book chapter or a markdown section. The long-form source text is difficult to search. The second column is a \"search-friendly\" summary of the source text. It could contain a list of questions that can be answered by the first column source text. We will create a vector snapshot where each vector is computed from the summary text (second column), but the retrieved source text for that vector is from the first column. The snapshot file can then be loaded by a Gaia node as its knowledge base. We have a simple Python script to build properly formatted CSV files from a set of articles or chapters. See how it works.","s":"Knowledge base from source / summary pairs","u":"/1.0.0/creator-guide/knowledge/csv","h":"","p":890},{"i":893,"t":"Install the WasmEdge Runtime, the cross-platform LLM runtime. 
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install_v2.sh | bash -s Download an embedding model. curl -LO https://huggingface.co/gaianet/Nomic-embed-text-v1.5-Embedding-GGUF/resolve/main/nomic-embed-text-v1.5.f16.gguf The embedding model is a special kind of LLM that turns sentences into vectors. The vectors can then be stored in a vector database and searched later. When the sentences are from a body of text that represents a knowledge domain, that vector database becomes our RAG knowledge base.","s":"Prerequisites","u":"/1.0.0/creator-guide/knowledge/csv","h":"#prerequisites","p":890},{"i":895,"t":"By default, we use Qdrant as the vector database. You can start a Qdrant instance by starting a Gaia node with a knowledge snapshot. note Or, you can start a Qdrant server using Docker. The following command starts it in the background. mkdir qdrant_storage mkdir qdrant_snapshots nohup docker run -d -p 6333:6333 -p 6334:6334 \\ -v $(pwd)/qdrant_storage:/qdrant/storage:z \\ -v $(pwd)/qdrant_snapshots:/qdrant/snapshots:z \\ qdrant/qdrant","s":"Start a vector database","u":"/1.0.0/creator-guide/knowledge/csv","h":"#start-a-vector-database","p":890},{"i":897,"t":"Delete the default collection if it exists. curl -X DELETE 'http://localhost:6333/collections/default' Create a new collection called default. Notice that it is 768 dimensions. That is the output vector size of the embedding model nomic-embed-text-v1.5. If you are using a different embedding model, you should use a dimension that fits the model. curl -X PUT 'http://localhost:6333/collections/default' \\ -H 'Content-Type: application/json' \\ --data-raw '{ \"vectors\": { \"size\": 768, \"distance\": \"Cosine\", \"on_disk\": true } }' Download a program to create embeddings from the CSV file. curl -LO https://github.com/GaiaNet-AI/embedding-tools/raw/main/csv_embed/csv_embed.wasm You can check out the Rust source code here and modify it if you need to use a different CSV layout. 
Next, you can run the program by passing a collection name, vector dimension, and the CSV document. The --ctx_size option matches the embedding model's context window size, which in this case is 8192 tokens, allowing it to process long sections of text. Make sure that Qdrant is running on your local machine. The model is preloaded under the name embedding. The wasm app then uses the embedding model to create the 768-dimension vectors from paris.csv and saves them into the default collection. curl -LO https://huggingface.co/datasets/gaianet/paris/raw/main/paris.csv wasmedge --dir .:. \\ --nn-preload embedding:GGML:AUTO:nomic-embed-text-v1.5.f16.gguf \\ csv_embed.wasm embedding default 768 paris.csv --ctx_size 8192","s":"Create the vector collection snapshot","u":"/1.0.0/creator-guide/knowledge/csv","h":"#create-the-vector-collection-snapshot","p":890},{"i":899,"t":"You can pass the following options to the program. Using -c or --ctx_size to specify the context size of the input. This defaults to 512. Using -m or --maximum_context_length to specify a maximum context length. Each text segment that exceeds this length is truncated, with a warning. Using -s or --start_vector_id to specify the start vector ID. This allows you to run the app multiple times on multiple documents against the same vector collection. Example: the command below repeats the above example, but appends the London guide to the end of the existing collection starting from index 42. wasmedge --dir .:. \\ --nn-preload embedding:GGML:AUTO:nomic-embed-text-v1.5.f16.gguf \\ csv_embed.wasm embedding default 768 london.csv -c 8192 -s 42","s":"Options","u":"/1.0.0/creator-guide/knowledge/csv","h":"#options","p":890},{"i":901,"t":"You can create a snapshot of the collection, which can be shared and loaded into a different Qdrant database. You can find the snapshot file in the qdrant_snapshots directory, or the ~/gaianet/qdrant/snapshots directory in the Gaia node. 
curl -X POST 'http://localhost:6333/collections/default/snapshots' We also recommend compressing the snapshot file. tar czvf my.snapshot.tar.gz my.snapshot Finally, upload the my.snapshot.tar.gz file to Hugging Face so that the Gaia node can download and use it.","s":"Create a vector snapshot","u":"/1.0.0/creator-guide/knowledge/csv","h":"#create-a-vector-snapshot","p":890},{"i":903,"t":"Start a new Gaia node Customize the Gaia node Have fun!","s":"Next steps","u":"/1.0.0/creator-guide/knowledge/csv","h":"#next-steps","p":890},{"i":905,"t":"The LLM app requires both long-term and short-term memories. Long-term memory includes factual knowledge, historical facts, background stories, etc. These are best added to the context as complete chapters instead of small chunks of text to maintain the internal consistency of the knowledge. RAG is an important technique to inject contextual knowledge into an LLM application. It improves accuracy and reduces the hallucination of LLMs. An effective RAG application combines real-time and user-specific short-term memory (chunks) with stable long-term memory (chapters) in the prompt context. Since the application's long-term memory is stable (even immutable), we package it in a vector database tightly coupled with the LLM. The client app assembles the short-term memory in the prompt and is supplemented with the long-term memory on the LLM server. We call this approach \"server-side RAG\". The long context lengths supported by modern LLMs are especially well-suited for long-term knowledge that is best represented by chapters of text. A Gaia node is an OpenAI compatible LLM service that is grounded by long-term knowledge on the server side. The client application can simply chat with it or provide real-time / short-term memory since the LLM is already aware of the domain or background. For example, if you ask ChatGPT the question What is Layer 2, the answer is that Layer 2 is a concept from computer networking. 
However, if you ask a blockchain person, they answer that Layer 2 is a way to scale the original Ethereum network. That's the difference between a generic LLM and knowledge-supplemented LLMs. We will cover the external knowledge preparation and how a knowledge-supplemented LLM completes a conversation. If you have learned how a RAG application works, go to Build a RAG application with Gaia to start building one. Create embeddings for your own knowledge as the long-term memory. Lifecycle of a user query on a knowledge-supplemented LLM. For this solution, we will use: a chat model like Llama-3-8B for generating responses to the user; a text embedding model like nomic-embed-text for creating and retrieving embeddings; and a vector DB like Qdrant for storing embeddings.","s":"Gaia nodes with long-term knowledge","u":"/1.0.0/creator-guide/knowledge/concepts","h":"","p":904},{"i":907,"t":"The first step is to create embeddings for our knowledge base and store the embeddings in a vector DB. First of all, we split the long text into sections (i.e., chunks). All LLMs have a maximum context length. The model can't read the context if the text is too long. The most common rule for a Gaia node is to keep the content of one chapter together. Remember to insert a blank line between two chunks. You can also use other algorithms to chunk your text. After chunking the document, we can convert these chunks into embeddings using the embedding model. The embedding model is trained to create embeddings based on text and search for similar embeddings. We will use the latter function when processing user queries. Additionally, we will need a vector DB to store the embeddings so that we can retrieve them quickly at any time. Finally, we package these embeddings into a database snapshot for a Gaia node to use. 
Check out how to create your embeddings from a plain text file, and from a markdown file.","s":"Workflow for creating knowledge embeddings","u":"/1.0.0/creator-guide/knowledge/concepts","h":"#workflow-for-creating-knowledge-embeddings","p":904},{"i":909,"t":"Next, let's learn the lifecycle of a user query on a knowledge-supplemented LLM. We will take a Gaia Node with Gaia knowledge as an example.","s":"Lifecycle of a user query on a knowledge-supplemented LLM","u":"/1.0.0/creator-guide/knowledge/concepts","h":"#lifecycle-of-a-user-query-on-a-knowledge-supplemented-llm","p":904},{"i":911,"t":"When you send a question in human language to the node, the embedding model will first convert your question to an embedding.","s":"Ask a question","u":"/1.0.0/creator-guide/knowledge/concepts","h":"#ask-a-question","p":904},{"i":913,"t":"Then, the embedding model will search all the embeddings stored in the Qdrant vector DB and retrieve the embeddings that are similar to the question embedding.","s":"Retrieve similar embeddings","u":"/1.0.0/creator-guide/knowledge/concepts","h":"#retrieve-similar-embeddings","p":904},{"i":915,"t":"The embedding node will return the retrieved embeddings to the chat model. The chat model will use the retrieved embeddings plus your input question as the context to finally answer your queries.","s":"Response to the user query","u":"/1.0.0/creator-guide/knowledge/concepts","h":"#response-to-the-user-query","p":904},{"i":917,"t":"In this section, we will discuss how to create a vector collection snapshot from a markdown file. The snapshot file can then be loaded by a Gaia node as its knowledge base. The markdown file is segmented into multiple sections by headings. See an example. 
Each section is turned into a vector, and when retrieved, added to the prompt context for the LLM.","s":"Knowledge base from a markdown file","u":"/1.0.0/creator-guide/knowledge/markdown","h":"","p":916},{"i":919,"t":"Install the WasmEdge Runtime, the cross-platform LLM runtime. curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install_v2.sh | bash -s Download an embedding model. curl -LO https://huggingface.co/gaianet/Nomic-embed-text-v1.5-Embedding-GGUF/resolve/main/nomic-embed-text-v1.5.f16.gguf The embedding model is a special kind of LLM that turns sentences into vectors. The vectors can then be stored in a vector database and searched later. When the sentences are from a body of text that represents a knowledge domain, that vector database becomes our RAG knowledge base.","s":"Prerequisites","u":"/1.0.0/creator-guide/knowledge/markdown","h":"#prerequisites","p":916},{"i":921,"t":"By default, we use Qdrant as the vector database. You can start a Qdrant instance by starting a Gaia node with a knowledge snapshot. note Or, you can start a Qdrant server using Docker. The following command starts it in the background. mkdir qdrant_storage mkdir qdrant_snapshots nohup docker run -d -p 6333:6333 -p 6334:6334 \\ -v $(pwd)/qdrant_storage:/qdrant/storage:z \\ -v $(pwd)/qdrant_snapshots:/qdrant/snapshots:z \\ qdrant/qdrant","s":"Start a vector database","u":"/1.0.0/creator-guide/knowledge/markdown","h":"#start-a-vector-database","p":916},{"i":923,"t":"Delete the default collection if it exists. curl -X DELETE 'http://localhost:6333/collections/default' Create a new collection called default. Notice that it is 768 dimensions. That is the output vector size of the embedding model nomic-embed-text-v1.5. If you are using a different embedding model, you should use a dimension that fits the model. 
curl -X PUT 'http://localhost:6333/collections/default' \\ -H 'Content-Type: application/json' \\ --data-raw '{ \"vectors\": { \"size\": 768, \"distance\": \"Cosine\", \"on_disk\": true } }' Download a program to segment the markdown document and create embeddings. curl -LO https://github.com/GaiaNet-AI/embedding-tools/raw/main/markdown_embed/markdown_embed.wasm It chunks the document based on markdown sections. You can check out the Rust source code here and modify it if you need to use a different chunking strategy. Next, you can run the program by passing a collection name, vector dimension, and the source document. You can pass in the desired markdown heading level for chunking using the --heading_level option. The --ctx_size option matches the embedding model's context window size, which in this case is 8192 tokens, allowing it to process long sections of text. Make sure that Qdrant is running on your local machine. The model is preloaded under the name embedding. The wasm app then uses the embedding model to create the 768-dimension vectors from paris.md and saves them into the default collection. curl -LO https://huggingface.co/datasets/gaianet/paris/raw/main/paris.md wasmedge --dir .:. \\ --nn-preload embedding:GGML:AUTO:nomic-embed-text-v1.5.f16.gguf \\ markdown_embed.wasm embedding default 768 paris.md --heading_level 1 --ctx_size 8192","s":"Create the vector collection snapshot","u":"/1.0.0/creator-guide/knowledge/markdown","h":"#create-the-vector-collection-snapshot","p":916},{"i":925,"t":"You can pass the following options to the program. Using -c or --ctx_size to specify the context size of the input. This defaults to 512. Using -l or --heading_level to specify the markdown heading level for each vector. This defaults to 1. Using -m or --maximum_context_length to specify a maximum context length. Each text segment that exceeds this length is truncated, with a warning. 
Using -s or --start_vector_id to specify the start vector ID. This allows you to run the app multiple times on multiple documents against the same vector collection. Example: the command below repeats the above example, but appends the London guide to the end of the existing collection starting from index 42. wasmedge --dir .:. \\ --nn-preload embedding:GGML:AUTO:nomic-embed-text-v1.5.f16.gguf \\ markdown_embed.wasm embedding default 768 london.md -c 8192 -l 1 -s 42","s":"Options","u":"/1.0.0/creator-guide/knowledge/markdown","h":"#options","p":916},{"i":927,"t":"You can create a snapshot of the collection, which can be shared and loaded into a different Qdrant database. You can find the snapshot file in the qdrant_snapshots directory, or the ~/gaianet/qdrant/snapshots directory in the Gaia node. curl -X POST 'http://localhost:6333/collections/default/snapshots' We also recommend compressing the snapshot file. tar czvf my.snapshot.tar.gz my.snapshot Finally, upload the my.snapshot.tar.gz file to Hugging Face so that the Gaia node can download and use it.","s":"Create a vector snapshot","u":"/1.0.0/creator-guide/knowledge/markdown","h":"#create-a-vector-snapshot","p":916},{"i":929,"t":"Start a new Gaia node Customize the Gaia node Have fun!","s":"Next steps","u":"/1.0.0/creator-guide/knowledge/markdown","h":"#next-steps","p":916},{"i":931,"t":"In this section, we will discuss how to create a vector collection snapshot from a PDF file. First, we will parse the unstructured PDF file to a structured markdown file. Then, we will follow the steps from Knowledge base from a markdown file to create embeddings for your PDF files.","s":"Knowledge base from a PDF file","u":"/1.0.0/creator-guide/knowledge/pdf","h":"","p":930},{"i":934,"t":"LlamaParse is a tool to parse files for optimal RAG. You will need a LlamaCloud key from https://cloud.llamaindex.ai. First, install the dependencies. We are assuming that you already have Node.js 20+ installed. 
git clone https://github.com/alabulei1/llamaparse-integration.git cd llamaparse-integration npm install llamaindex npm install dotenv Then, edit the .env file to set up the PDF file path and LlamaCloud key. In this case, you can ignore the LLM-related settings. After that, run the following command line to parse your PDF into a markdown file. npx tsx transMd.ts The output markdown file, named output.md, will be located in this folder by default. You can change the path in the .env file.","s":"Tool #1: LlamaParse","u":"/1.0.0/creator-guide/knowledge/pdf","h":"#tool-1-llamaparse","p":930},{"i":936,"t":"GPTPDF is an open-source tool that uses GPT-4o to parse PDF into markdown. You will need an OpenAI key here. First, install the gptpdf software. pip install gptpdf Then, enter the Python environment. python Next, use the following command to parse your PDF. from gptpdf import parse_pdf api_key = 'Your OpenAI API Key' content, image_paths = parse_pdf('Your_Pdf_Path', api_key=api_key) print(content) The output markdown file, called output.md, will be located in your root directory.","s":"Tool #2: GPTPDF","u":"/1.0.0/creator-guide/knowledge/pdf","h":"#tool-2-gptpdf","p":930},{"i":938,"t":"Please follow the tutorial Knowledge base from a markdown file to convert your markdown file to a snapshot of embeddings that can be imported into a GaiaNet node.","s":"Create embeddings from the markdown files","u":"/1.0.0/creator-guide/knowledge/pdf","h":"#create-embeddings-from-the-markdown-files","p":930},{"i":940,"t":"You could fine-tune an open-source LLM to Teach it to follow conversations. Teach it to respect and follow instructions. Make it refuse to answer certain questions. Give it a specific \"speaking\" style. Make it respond in certain formats (e.g., JSON). Have it focus on a specific domain area. Teach it certain knowledge. To do that, you need to create a set of question and answer pairs to show the model the prompt and the expected response. 
Then, you can use a fine-tuning tool to perform the training and make the model respond with the expected answer for each question.","s":"Fine-tune LLMs","u":"/1.0.0/creator-guide/finetune/intro","h":"","p":939},{"i":942,"t":"GaiaNet is a decentralized computing infrastructure that enables everyone to create, deploy, scale, and monetize their own AI agents that reflect their styles, values, knowledge, and expertise. It allows individuals and businesses to create AI agents. Each GaiaNet node provides: a web-based chatbot UI Chat with a GaiaNet node that is an expert on the Rust programming language. an OpenAI compatible API. See how to use a GaiaNet node as a drop-in OpenAI replacement in your favorite AI agent app. 100% of today's AI agents are applications in the OpenAI ecosystem. With our API approach, GaiaNet is an alternative to OpenAI. Each GaiaNet node can be customized with a fine-tuned model supplemented by domain knowledge, which eliminates the generic responses many have come to expect. For example, a GaiaNet node for a financial analyst agent can write SQL code to query SEC 10K filings to respond to user questions. Similar GaiaNet nodes are organized into GaiaNet domains, which provide stable services by load balancing across the nodes. GaiaNet domains have public-facing URLs and promote agent services to their communities. When a user or an agent app sends an API request to the domain's API endpoint URL, the domain is responsible for directing the request to a node that is ready.","s":"Overview","u":"/1.0.0/intro","h":"","p":941},{"i":945,"t":"If you are an end user of AI agent applications, you can: Find a list of interesting GaiaNet nodes you can chat with on the web, or access via API. Use a GaiaNet node as the backend AI engine for your favorite AI agent apps.","s":"Users","u":"/1.0.0/intro","h":"#users","p":941},{"i":947,"t":"If you are interested in running GaiaNet nodes, you can: Get started with a GaiaNet node. 
Customize the GaiaNet node with a finetuned model and custom knowledge base. Join the Gaia Protocol","s":"Node operators","u":"/1.0.0/intro","h":"#node-operators","p":941},{"i":949,"t":"If you are a Gaia Domain Name owner, you can Launch your domain.","s":"Domain operators","u":"/1.0.0/intro","h":"#domain-operators","p":941},{"i":951,"t":"If you are a creator or knowledge worker interested in creating your own AI agent service, you can: Create your own knowledge base. Finetune a model to \"speak\" like you.","s":"Creators","u":"/1.0.0/intro","h":"#creators","p":941},{"i":953,"t":"This guide provides all the information you need to quickly set up and run a Gaia Domain. Note: Ensure that you are the owner of a Gaia Domain Name before proceeding. You can verify your Gaia Domain Name in the \"Assets\" section of your profile. Gaia simplifies the process for domain operators to launch and host a Gaia Domain service in just a few clicks.","s":"Quick Start with Launching Gaia Domain","u":"/1.0.0/domain-guide/quick-start","h":"","p":952},{"i":955,"t":"Access the Create Gaia Domain Page Click LAUNCH DOMAIN in the \"Domain\" or \"Assets\" section under your profile. This will take you to the Create Gaia Domain page. Fill in Domain Details Enter the general information for your domain, including: Domain profile Domain Name Description System Prompt Choose a Gaia Domain Name Select a Gaia domain name from your assets. Select a Supplier Currently, Gaia Cloud is the only supplier. Pick a Gaia Domain Tier Choose a tier to enhance your domain's rewards; this step is required. Configure Server and Management Options Confirm the server configuration for running your domain. Set management preferences, such as whether nodes can join automatically and the specific LLM to use. 
After completing these six steps, your Gaia Domain will be successfully launched and other nodes can join your domain.","s":"Steps to Launch Your Gaia Domain","u":"/1.0.0/domain-guide/quick-start","h":"#steps-to-launch-your-gaia-domain","p":952},{"i":957,"t":"The popular llama.cpp tool comes with a finetune utility. It works well on CPUs! This fine-tune guide is reproduced with permission from Tony Yuan's Finetune an open-source LLM for the chemistry subject project.","s":"llama.cpp","u":"/1.0.0/creator-guide/finetune/llamacpp","h":"","p":956},{"i":959,"t":"The finetune utility in llama.cpp can work with quantized GGUF files on CPUs, dramatically reducing the hardware requirements and expenses for fine-tuning LLMs. Check out and download the llama.cpp source code. git clone https://github.com/ggerganov/llama.cpp cd llama.cpp Build the llama.cpp binary. mkdir build cd build cmake .. cmake --build . --config Release If you have an NVIDIA GPU and the CUDA toolkit installed, you should build llama.cpp with CUDA support. mkdir build cd build cmake .. -DLLAMA_CUBLAS=ON -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc cmake --build . --config Release","s":"Build the fine-tune utility from llama.cpp","u":"/1.0.0/creator-guide/finetune/llamacpp","h":"#build-the-fine-tune-utility-from-llamacpp","p":956},{"i":961,"t":"We are going to use Meta's Llama2 chat 13B model as the base model. Note that we are using a Q5 quantized GGUF model file directly to save computing resources. You can use any of the Llama2 compatible GGUF models on Hugging Face. cd .. # change to the llama.cpp directory cd models/ curl -LO https://huggingface.co/gaianet/Llama-2-13B-Chat-GGUF/resolve/main/llama-2-13b-chat.Q5_K_M.gguf","s":"Get the base model","u":"/1.0.0/creator-guide/finetune/llamacpp","h":"#get-the-base-model","p":956},{"i":963,"t":"Next, we came up with 1700+ pairs of QAs for the chemistry subject. They look like the following in a CSV file. Question Answer What is unique about hydrogen? 
It's the most abundant element in the universe, making up over 75% of all matter. What is the main component of Jupiter? Hydrogen is the main component of Jupiter and the other gas giant planets. Can hydrogen be used as fuel? Yes, hydrogen is used as rocket fuel. It can also power fuel cells to generate electricity. What is mercury's atomic number? The atomic number of mercury is 80. What is Mercury? Mercury is a silver-colored metal that is liquid at room temperature. It has an atomic number of 80 on the periodic table. It is toxic to humans. We used GPT-4 to help come up with many of these QAs. Then, we wrote a Python script to convert each row in the CSV file into a sample QA in the Llama2 chat template format. Notice that each QA pair starts with a marker that tells the fine-tune program where a sample begins. The resulting train.txt file can now be used in fine-tuning. Put the train.txt file in the llama.cpp/models directory with the GGUF base model.","s":"Create a question and answer set for fine-tuning","u":"/1.0.0/creator-guide/finetune/llamacpp","h":"#create-a-question-and-answer-set-for-fine-tuning","p":956},{"i":965,"t":"Use the following command to start the fine-tuning process on your CPUs. We put it in the background so that it can run continuously. It could take several days or even a couple of weeks depending on how many CPUs you have. nohup ../build/bin/finetune --model-base llama-2-13b-chat.Q5_K_M.gguf --lora-out lora.bin --train-data train.txt --sample-start '' --adam-iter 1024 & You can check the process every few hours in the nohup.out file. It will report the loss for each iteration. You can stop the process when the loss goes consistently under 0.1. Note 1 If you have multiple CPUs (or CPU cores), you can speed up the fine-tuning process by adding a -t parameter to the above command to use more threads. For example, if you have 60 CPU cores, you could do -t 60 to use all of them. 
Note 2 If your fine-tuning process is interrupted, you can restart it from checkpoint-250.gguf. The next file it outputs is checkpoint-260.gguf. nohup ../build/bin/finetune --model-base llama-2-13b-chat.Q5_K_M.gguf --checkpoint-in checkpoint-250.gguf --lora-out lora.bin --train-data train.txt --sample-start '' --adam-iter 1024 &","s":"Finetune!","u":"/1.0.0/creator-guide/finetune/llamacpp","h":"#finetune","p":956},{"i":967,"t":"The fine-tuning process updates several layers of the LLM's neural network. Those updated layers are saved in a file called lora.bin, and you can now merge them back into the base LLM to create the new fine-tuned LLM. ../build/bin/export-lora --model-base llama-2-13b-chat.Q5_K_M.gguf --lora lora.bin --model-out chemistry-assistant-13b-q5_k_m.gguf The result is this file. curl -LO https://huggingface.co/juntaoyuan/chemistry-assistant-13b/resolve/main/chemistry-assistant-13b-q5_k_m.gguf Note 3 If you want to use a checkpoint to generate a lora.bin file, use the following command. This is needed when you believe the final lora.bin is overfitted. ../build/bin/finetune --model-base llama-2-13b-chat.Q5_K_M.gguf --checkpoint-in checkpoint-250.gguf --only-write-lora --lora-out lora.bin","s":"Merge","u":"/1.0.0/creator-guide/finetune/llamacpp","h":"#merge","p":956},{"i":969,"t":"GaiaNet has developed a tool for making vector collection snapshot files, so everyone can easily create their own knowledge base. Access it here: https://tools.gaianet.xyz/","s":"Build a knowledge base using Gaia web tool","u":"/1.0.0/creator-guide/knowledge/web-tool","h":"","p":968},{"i":971,"t":"First, copy unformatted text into a txt file. Then follow these two rules to chunk your content, i.e., putting similar content together. Each title and its related content form a chunk. There are no blank lines within a chunk. Use a blank line to separate chunks. After that, save it as a txt file. For example, below is your source. After formatting, it will look like the following. 
What is a blockchain? A blockchain is a distributed, cryptographically-secure database structure that allows network participants to establish a trusted and immutable record of transactional data without the need for intermediaries. A blockchain can execute a variety of functions beyond transaction settlement, such as smart contracts. Smart contracts are digital agreements that are embedded in code and can have limitless formats and conditions. Blockchains have proven themselves as superior solutions for securely coordinating data, but they are capable of much more, including tokenization, incentive design, attack-resistance, and reducing counterparty risk. The very first blockchain was the Bitcoin blockchain, which was itself a culmination of over a century of advancements in cryptography and database technology. What is blockchain software? Blockchain software is like any other software. The first of its kind was Bitcoin, which was released as open source software, making it available to anyone to use or change. There are a wide variety of efforts across the blockchain ecosystem to improve upon Bitcoin's original software. Ethereum has its own open source blockchain software. Some blockchain software is proprietary and not available to the public.","s":"Segment your text file","u":"/1.0.0/creator-guide/knowledge/web-tool","h":"#segment-your-text-file","p":968},{"i":973,"t":"Visit this URL: https://tools.gaianet.xyz/ and upload the txt file prepared above. Edit your dbname. Note: Do not include spaces or special characters in the dbname. Choose an embedding model; we suggest nomic-embed-text-v1.5.f16. Click the \"Make RAG\" button and wait. When finished, the chatbot will display the GaiaNet node config info. It is in JSON format, as follows. 
{ \"embedding\": \"https://huggingface.co/gaianet/Nomic-embed-text-v1.5-Embedding-GGUF/resolve/main/nomic-embed-text-v1.5.f16.gguf\", \"embedding_ctx_size\": 768, \"snapshot\": \"https://huggingface.co/datasets/max-id/gaianet-qdrant-snapshot/resolve/main/test/test.snapshot\" }","s":"Generate the snapshot file","u":"/1.0.0/creator-guide/knowledge/web-tool","h":"#generate-the-snapshot-file","p":968},{"i":975,"t":"Run the following gaianet config \\ --snapshot https://huggingface.co/datasets/max-id/gaianet-qdrant-snapshot/resolve/main/test/test.snapshot \\ --embedding-url https://huggingface.co/gaianet/Nomic-embed-text-v1.5-Embedding-GGUF/resolve/main/nomic-embed-text-v1.5.f16.gguf \\ --embedding-ctx-size 768 and then gaianet init gaianet start Have fun!","s":"Update the node config","u":"/1.0.0/creator-guide/knowledge/web-tool","h":"#update-the-node-config","p":968},{"i":978,"t":"Specialized, finetuned and RAG-enhanced open-source Large Language Models are key elements in emerging AI agent applications. However, those agent apps also present unique challenges to the traditional cloud computing and SaaS infrastructure, including new requirements for application portability, virtualization, security isolation, costs, data privacy, and ownership. GaiaNet is a decentralized computing infrastructure that enables everyone to create, deploy, scale, and monetize their own AI agents that reflect their styles, values, knowledge, and expertise. A GaiaNet node consists of a high-performance and cross-platform application runtime, a finetuned LLM, a knowledge embedding model, a vector database, a prompt manager, an open API server, and a plugin system for calling external tools and functions using LLM outputs. It can be deployed by any knowledge worker as a digital twin and offered as a web API service. A new class of tradeable assets and a marketplace could be created from individualized knowledge bases and components. 
Similar GaiaNet nodes are organized into GaiaNet domains, which offer trusted and reliable AI agent services to the public. The GaiaNet nodes and domains are governed by the GaiaNet DAO (Decentralized Autonomous Organization). Through Purpose Bound Money smart contracts, the GaiaNet network is a decentralized marketplace for AI agent services.","s":"Abstract","u":"/1.0.0/litepaper","h":"#abstract","p":976},{"i":980,"t":"The emergence of ChatGPT and Large Language Models (LLMs) has revolutionized how humans produce and consume knowledge. Within a year, AI-native applications have evolved from chatbots to copilots, to agents. AI agents would increasingly evolve from supportive tools (akin to Copilots) to autonomous entities capable of completing tasks independently. — Dr. Andrew Ng at Sequoia Capital AI Ascent 2024 Summit Agents are software applications that can complete tasks on their own, autonomously, like a human. The agent can understand the task, plan the steps to complete the task, execute all the steps, handle errors and exceptions, and deliver the results. While a powerful LLM could act as the “brain” for the agent, we need to connect it to external data sources (eyes and ears), domain-specific knowledge bases and prompts (skills), context stores (memory), and external tools (hands). For agent tasks, we often need to customize the LLM itself to reduce hallucinations in a specific domain. to generate responses in a specific format (e.g., a JSON schema). to answer “politically incorrect” questions (e.g., to analyze CVE exploits for an agent in the security domain). and to answer requests in a specific style (e.g., to mimic a person). Agents are complex software that requires a significant amount of engineering and resources. Today, most agents are closed-source and hosted on SaaS-based LLMs. Popular examples include GPTs and Microsoft/GitHub copilots on OpenAI LLMs, and Duet on Google's Gemini LLMs. 
However, as we discussed, a key requirement for agents is to customize and adapt their underlying LLM and software stack for domain-specific tasks — an area where centralized SaaS platforms perform very poorly. For example, with ChatGPT, every small task must be handled by a very large model. It is also enormously expensive to fine-tune or modify any ChatGPT models. The one-size-fits-all LLMs are detrimental to the agent use case in capabilities, alignment, and cost structure. Furthermore, the SaaS-hosted LLMs lack privacy controls on how the agent's private knowledge might be used and shared. Because of these shortcomings, it is difficult for individual knowledge workers to create and monetize agents for their own domains and tasks on SaaS platforms like OpenAI, Google, Anthropic, Microsoft and AWS. In this paper, we propose a decentralized software platform and protocol network for AI agents for everyone. Specifically, our goals are twofold. Goal #1: Empower individuals to incorporate their private knowledge and expertise into personal LLM agent apps. Those apps aim to perform knowledge tasks and use tools just as the individual would, but also reflect the individual's style and values. Goal #2: Enable individuals to provide and scale their LLM agents as services, and get compensated for their expertise and work. GaiaNet is “YouTube for knowledge and skills.”","s":"Introduction","u":"/1.0.0/litepaper","h":"#introduction","p":976},{"i":982,"t":"As of April 2024, there are over 6000 open-source LLMs published on Hugging Face. Compared with closed-source LLMs, such as GPT-4, open-source LLMs offer advantages in privacy, cost, and systematic bias. Even on general QA performance, open-source LLMs are closing the gap with their closed-source counterparts quickly. For AI agent use cases, it has been demonstrated that smaller but task-specific LLMs often outperform larger general models. 
However, it is difficult for individuals and businesses to deploy and orchestrate multiple finetuned LLMs on their own heterogeneous GPU infrastructure. The complex software stack for agents, as well as the complex interactions with external tools, are fragile and error-prone. Furthermore, LLM agents have entirely different scaling characteristics than past application servers. An LLM is extremely computationally intensive. An LLM agent server can typically only serve one user at a time, and it often blocks for seconds at a time. The scaling need is no longer to handle many async requests on a single server, but to load balance among many discrete servers at internet scale. The GaiaNet project provides a cross-platform and highly efficient SDK and runtime for finetuned open-source LLMs with proprietary knowledge bases, customized prompts, structured responses, and external tools for function calling. A GaiaNet node can be started in minutes on any personal, cloud, or edge device. It can then offer services through an incentivized web3 network.","s":"Open-source and decentralization","u":"/1.0.0/litepaper","h":"#open-source-and-decentralization","p":976},{"i":984,"t":"The basic operational unit in the GaiaNet network is a node. A GaiaNet node is a streamlined software stack that allows any technically competent person to run an AI agent of their own. The software stack on the GaiaNet node consists of the following 7 key components. 1 Application runtime. GaiaNet applications run in a lightweight, secure and high-performance sandbox called WasmEdge. As an open-source project managed by the Linux Foundation and CNCF, the WasmEdge runtime works seamlessly with leading cloud-native tools such as Docker, containerd, CRI-O, Podman and Kubernetes. It is also the virtual machine of choice for leading public blockchains to securely and efficiently execute on-chain and off-chain smart contracts. WasmEdge is a high-performance and cross-platform runtime. 
It can run AI models on almost all CPUs, GPUs, and AI accelerators at native speed, making it an ideal runtime for decentralized AI agents. 2 Finetuned LLM. The GaiaNet node supports almost all open-source LLMs, multimodal models (e.g., Large Vision Models, or LVMs), text-to-image models (e.g., Stable Diffusion) and text-to-video models. That includes all finetuned models using personal or proprietary data. The node owner can finetune open-source models using a wide variety of tools. For example, the node owner can finetune an LLM using personal chat histories so that the finetuned LLM can mimic his own speaking style. He can also finetune an LLM to focus it on a specific knowledge domain to reduce hallucinations and improve answer quality for questions in that domain. A finetuned LLM can be guaranteed to output JSON text that matches a pre-determined schema for use with external tools. Besides LLMs, the node owner could finetune Stable Diffusion models with her own photos to generate images that look like her. 3 Embedding model. The GaiaNet node needs to manage a body of public or proprietary knowledge for the AI agent. It is a key feature that enables the agent to specialize and outperform much larger models in a specific domain. The embedding models are specially trained LLMs that turn input sentences into a vector representation, instead of generating completions. Since the embedding models are trained from LLMs, they can “embed” the “meaning” of the sentences into the vectors so that similar sentences are located close together in the high-dimensional space occupied by those vectors. With the embedding model, a GaiaNet node can ingest a body of text, images, PDFs, web links, audio and video files, and generate a collection of embedding vectors based on their contents. 
The embedding model also turns user questions and conversations into vectors, which allows the GaiaNet node to quickly identify contents in its knowledge base that are relevant to the current conversation. 4 Vector database. The embedding vectors that form the GaiaNet node's knowledge base are stored on the node itself for optimal performance and maximum privacy. The GaiaNet node includes a Qdrant vector database. 5 Custom prompts. Besides finetuning and knowledge augmentation, the easiest way to customize an LLM for new applications is simply to prompt it. Like humans, LLMs are remarkable one-shot learners. You can simply give it an example of how to accomplish a task, and it will learn and do similar tasks on its own. Prompt engineering is a practical field to research and develop such prompts. Furthermore, effective prompts could be highly dependent on the model in use. A prompt that works well for a large model, such as Mixtral 8x22b, is probably not going to work well for a small model like Mistral 7b. The GaiaNet node can support several different prompts that are dynamically chosen and used in applications. For example, the system_prompt is a general introduction to the agent task the node is supposed to perform. It often contains a persona to help the LLM respond with the right tone. For example, the system_prompt for a college teaching assistant could be: “You are a teaching assistant for UC Berkeley's computer science 101 class. Please explain concepts and answer questions in detail. Do not answer any question that is not related to math or computer science.” The rag_prompt is a prefix prompt to be dynamically inserted in front of knowledge base search results in a RAG chat. It could be something like this: “Please answer the question based on facts and opinions in the context below. Do not make up anything that is not in the context. ---------” The LLM community has developed many useful prompts for different application use cases. 
The GaiaNet node allows you to easily manage and experiment with them. Through our developer SDK, GaiaNet owners and operators can customize the logic of dynamic prompt generation in their own way. For example, a GaiaNet node could perform a Google search for any user question, and add the search results into the prompt as context. 6 Function calls and tool use. The LLM is not only great at generating human language, but also excels at generating machine instructions. Through finetuning and prompt engineering, we can get some LLMs to consistently generate structured JSON objects or computer code in many language tasks, such as summarizing and extracting key elements from a paragraph of text. The GaiaNet node allows you to specify the output format of the generated text. You can give it a grammar specification file to enforce that responses will always conform to a pre-defined JSON schema. Once the LLM returns a structured JSON response, the agent typically needs to pass the JSON to a tool that performs the task and comes back with an answer. For example, the user question might be: What is the weather like in Singapore? The LLM generates the following JSON response. {\"tool\":\"get_current_weather\", \"location\":\"Singapore\",\"unit\":\"celsius\"} The GaiaNet node must know which tool is associated with get_current_weather and then invoke it. GaiaNet node owners and operators can configure any number of external tools by mapping a tool name to a web service endpoint. In the above example, the get_current_weather tool might be associated with a web service that takes this JSON data. The GaiaNet node sends the JSON to the web service endpoint via HTTPS POST and receives an answer: 42 It then optionally feeds the answer to the LLM to generate a human language answer. The current weather in Singapore is 42C. Through the GaiaNet node SDK, developers are not limited to using web services. They can write plugins to process LLM responses locally on the node. 
For example, the LLM might return Python code, which can be executed locally in a sandbox, allowing the GaiaNet node to perform a complex operation. 7 The API server. All GaiaNet nodes must have the same API for questions and answers. That allows front-end applications to work with, and potentially be load-balanced across, any GaiaNet node. We choose to support the OpenAI API specification, which enables GaiaNet nodes to become drop-in replacements for OpenAI API endpoints for a large ecosystem of applications. The API server runs securely and cross-platform on the WasmEdge runtime. It ties together all the other components in the GaiaNet node. It receives user requests, generates an embedding from the request, searches the vector database, adds search results to the prompt context, generates an LLM response, and then optionally uses the response to perform function calling. The API server also provides a web-based chatbot UI for users to chat with the RAG-enhanced finetuned LLM on the node.","s":"GaiaNet node","u":"/1.0.0/litepaper","h":"#gaianet-node","p":976},{"i":986,"t":"While each GaiaNet node is already a powerful AI agent capable of answering complex questions and performing actions, individual nodes are not suitable for providing public services. There are several important reasons. For public consumers and users, it is very hard to judge the trustworthiness of individual GaiaNet nodes. Harmful misinformation could be spread by malicious node operators. For GaiaNet node owners and operators, there is no economic incentive to provide such services to the public, which could be very costly to run. The AI agent servers have very different scaling characteristics than traditional internet application servers. When the agent is processing a user request, it typically takes up all the computing resources on the hardware. 
Instead of using software to scale concurrent users on a single server, the challenge of GaiaNet is to scale across many identical nodes for a large application. Those challenges have given rise to the GaiaNet domain, which forms the basis of the GaiaNet web3 network. A GaiaNet domain is a collection of GaiaNet nodes available under a single Internet domain name. The domain operator decides which GaiaNet nodes can be registered under the domain and makes the node services available to the public. For example, a GaiaNet domain might be a Computer Science teaching assistant for UC Berkeley. The domain could provide services through https://cs101.gaianet.berkeley.edu. The domain operator needs to do the following. Verify and admit individual nodes to be registered under the domain. Those nodes must all meet requirements, such as the LLM, knowledge base, and prompts, set by the domain operator to ensure service quality. The node registration on a domain could be done via a whitelist or blacklist. It is up to the domain operator. Monitor each node's performance in real time and remove inactive ones. Promote the “teaching assistant” chatbot apps to the target audience. Set the price for the API services. Load balance between active nodes. Get paid by users. Pay nodes for their services. Each GaiaNet node has a unique node ID in the form of an ETH address. The private key associated with the ETH address is stored on the node. Once a node is successfully registered with a domain, it is entitled to receive payments from both service revenue and network awards from the domain. The domain could send payments directly to the node's ETH address. Or, the domain could provide a mechanism for a node operator to register multiple nodes under a single Metamask address, such as signing a challenge phrase using the node private keys. In that case, the node operator will receive aggregated payments in his Metamask account for all associated nodes. 
Each GaiaNet domain has an associated smart contract that is used for escrow payments. It is similar to OpenAI's credit payment model, where users purchase credits first, and then consume them over time. When the user pays into the smart contract, an access token is automatically issued to him. He uses this token to make API calls to the domain, which are then load-balanced to random nodes in the domain. As the user consumes those services, his funds in the contract deplete and the access token stops working if he no longer has any balance. The pricing and payment of the API service are determined by the domain operator. It is typically denominated in USD stable coins. The domain operator pays a share of the revenue to node operators who provided the services. The GaiaNet network is a decentralized marketplace of agent services. The funds locked in GaiaNet domain contracts are for the single purpose of consuming API services. It is called Purpose Bound Money. A key aspect of the GaiaNet protocol is that the domain operators are “trust providers” in the ecosystem of decentralized nodes. The protocol network is designed to incentivize the trust of the operators through tokenomics designs such as mining and staking. GaiaNet nodes, domains, users, and developers form a DAO to grow the network and benefit all contributors.","s":"GaiaNet network","u":"/1.0.0/litepaper","h":"#gaianet-network","p":976},{"i":988,"t":"The GaiaNet token is a utility token designed to facilitate transactions, support governance, and foster trust in the network. It serves three primary purposes. As a DAO governance token, holders can participate in setting the rules of the network. As a staking token, holders vouch for domain operators' trustworthiness. Stakers get a cut of the domain operator's service revenue. But they could also be slashed if the domain operator misbehaves, such as by spreading misinformation or providing unreliable services. 
As a payment token, the GaiaNet token could be deposited into the domain's escrow contract and be used to pay for services over time. The payment utility of the GaiaNet token is designed to balance the network's supply and demand. The value of the GaiaNet token asset is determined at the time when it enters or leaves the escrow smart contract based on real-time exchange rates. Service consumers could lock in savings from the potential appreciation of the token. For example, if a user deposits $100 worth of GaiaNet tokens into the contract, and when the domain and nodes get paid, the token value has gone up to $110, he would have received $110 worth of agent services. Conversely, if the token price drops, the service providers (domains and nodes) now have an opportunity to “mine” the tokens on the cheap. If the initial $100 of tokens is only worth $90 now, service providers will get more tokens for each unit of electricity and compute they provide. That incentivizes more nodes to join the network and speculate on a later rise in token value. An exercise: OpenAI is projected to reach $5 billion in ARR in 2024. Assume that most enterprise customers pay quarterly; that is $1.25 billion of circulating market cap in addition to OpenAI's current enterprise value if they were to issue a payment token. The overall AI services market size is projected to reach $2 trillion in a few years. That translates to a $500 billion market cap for a payment utility token alone.","s":"GaiaNet token","u":"/1.0.0/litepaper","h":"#gaianet-token","p":976},{"i":990,"t":"GaiaNet is a developer platform to create your own agent services. We provide tools for you to do these. Tools to generate finetuning datasets and perform finetuning on CPU and GPU machines. Tools to ingest documents and create vector embeddings for the knowledge base. A Rust-based SDK to dynamically generate and manage prompts. A Rust-based SDK to extend the agent's capability for invoking tools and software on the node. 
For developers who do not wish to operate nodes, we are building a marketplace for finetuned models, knowledge bases and datasets, and function-calling plugins. All those components are blockchain-based assets represented by NFTs. A node operator could purchase NFTs for the components it wishes to use, and share service revenue with the component developers. That enables diverse and cashflow-generating assets to be issued from the GaiaNet ecosystem.","s":"Component marketplace for AI assets","u":"/1.0.0/litepaper","h":"#component-marketplace-for-ai-assets","p":976},{"i":992,"t":"GaiaNet provides open-source tools for individuals and teams to create agent services using their proprietary knowledge and skills. Developers could create finetuned LLMs, knowledge collections, and plugins for the agent, and issue assets based on those components. The GaiaNet protocol makes those nodes discoverable and accessible through GaiaNet domains.","s":"Conclusion","u":"/1.0.0/litepaper","h":"#conclusion","p":976},{"i":994,"t":"In this section, we will discuss how to create a vector collection snapshot from a plain text file. The snapshot file can then be loaded by a Gaia node as its knowledge base. The text file is segmented into multiple chunks by blank lines. See an example. Each chunk is turned into a vector, and when retrieved, added to the prompt context for the LLM.","s":"Knowledge base from a plain text file","u":"/1.0.0/creator-guide/knowledge/text","h":"","p":993},{"i":996,"t":"Install the WasmEdge Runtime, the cross-platform LLM runtime. curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install_v2.sh | bash -s Download an embedding model. curl -LO https://huggingface.co/gaianet/Nomic-embed-text-v1.5-Embedding-GGUF/resolve/main/nomic-embed-text-v1.5.f16.gguf The embedding model is a special kind of LLM that turns sentences into vectors. The vectors can then be stored in a vector database and searched later. 
When the sentences are from a body of text that represents a knowledge domain, that vector database becomes our RAG knowledge base.","s":"Prerequisites","u":"/1.0.0/creator-guide/knowledge/text","h":"#prerequisites","p":993},{"i":998,"t":"By default, we use Qdrant as the vector database. You can start a Qdrant instance by starting a Gaia node with a knowledge snapshot. note Or, you can start a Qdrant server using Docker. The following command starts it in the background. mkdir qdrant_storage mkdir qdrant_snapshots nohup docker run -d -p 6333:6333 -p 6334:6334 \\ -v $(pwd)/qdrant_storage:/qdrant/storage:z \\ -v $(pwd)/qdrant_snapshots:/qdrant/snapshots:z \\ qdrant/qdrant","s":"Start a vector database","u":"/1.0.0/creator-guide/knowledge/text","h":"#start-a-vector-database","p":993},{"i":1000,"t":"Delete the default collection if it exists. curl -X DELETE 'http://localhost:6333/collections/default' Create a new collection called default. Notice that it is 768 dimensions. That is the output vector size of the embedding model nomic-embed-text-v1.5. If you are using a different embedding model, you should use a dimension that fits the model. curl -X PUT 'http://localhost:6333/collections/default' \\ -H 'Content-Type: application/json' \\ --data-raw '{ \"vectors\": { \"size\": 768, \"distance\": \"Cosine\", \"on_disk\": true } }' Download a program to chunk a document and create embeddings. curl -LO https://github.com/GaiaNet-AI/embedding-tools/raw/main/paragraph_embed/paragraph_embed.wasm It chunks the document based on empty lines. So, you MUST prepare your source document this way -- segment the document into sections of around 200 words with empty lines. You can check out the Rust source code here and modify it if you need to use a different chunking strategy. The paragraph_embed.wasm program would NOT break up code listings even if there are empty lines within the listing. 
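The blank-line chunking behavior can be sketched in Python. This is a simplified illustration of the rule (every run of non-blank lines becomes one chunk), not the actual Rust implementation, which additionally keeps code listings intact:

```python
# Simplified sketch of blank-line chunking: non-blank lines accumulate
# into the current chunk, and blank lines close it off.
def chunk_by_blank_lines(text):
    chunks = []
    current = []
    for line in text.splitlines():
        if line.strip():
            current.append(line)
        elif current:
            chunks.append(' '.join(current))
            current = []
    if current:
        chunks.append(' '.join(current))
    return chunks

doc = chr(10).join([  # chr(10) is a newline
    'What is a blockchain?',
    'A distributed, cryptographically-secure database structure.',
    '',
    'What is blockchain software?',
])
print(chunk_by_blank_lines(doc))
```

Running this on a well-segmented source file yields one chunk per section, which is exactly what the embedding step expects.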
Next, you can run the program by passing a collection name, vector dimension, and the source document. Make sure that Qdrant is running on your local machine. The model is preloaded under the name embedding. The wasm app then uses the embedding model to create the 768-dimension vectors from paris_chunks.txt and saves them into the default collection. curl -LO https://huggingface.co/datasets/gaianet/paris/raw/main/paris_chunks.txt wasmedge --dir .:. \\ --nn-preload embedding:GGML:AUTO:nomic-embed-text-v1.5.f16.gguf \\ paragraph_embed.wasm embedding default 768 paris_chunks.txt -c 8192","s":"Create the vector collection snapshot","u":"/1.0.0/creator-guide/knowledge/text","h":"#create-the-vector-collection-snapshot","p":993},{"i":1002,"t":"You can pass the following options to the program. Use -m or --maximum_context_length to specify a context length in the CLI argument. Text segments that exceed this context length will be truncated, with a warning. Use -s or --start_vector_id to specify the start vector ID in the CLI argument. This will allow us to run this app multiple times on multiple documents on the same vector collection. Use -c or --ctx_size to specify the context size of the input. This defaults to 512. Example: the same command as above, but appending the London guide to the end of an existing collection starting from index 42. wasmedge --dir .:. \\ --nn-preload embedding:GGML:AUTO:nomic-embed-text-v1.5.f16.gguf \\ paragraph_embed.wasm embedding default 768 london.txt -c 8192 -s 42","s":"Options","u":"/1.0.0/creator-guide/knowledge/text","h":"#options","p":993},{"i":1004,"t":"You can create a snapshot of the collection, which can be shared and loaded into a different Qdrant database. You can find the snapshot file in the qdrant_snapshots directory, or the ~/gaianet/qdrant/snapshots directory in the Gaia node. curl -X POST 'http://localhost:6333/collections/default/snapshots' We also recommend compressing the snapshot file. 
tar czvf my.snapshot.tar.gz my.snapshot Finally, upload the my.snapshot.tar.gz file to Huggingface so that the Gaia node can download and use it.","s":"Create a vector snapshot","u":"/1.0.0/creator-guide/knowledge/text","h":"#create-a-vector-snapshot","p":993},{"i":1006,"t":"Start a new Gaia node Customize the Gaia node Have fun!","s":"Next steps","u":"/1.0.0/creator-guide/knowledge/text","h":"#next-steps","p":993},{"i":1008,"t":"After installing the GaiaNet software, you can use the gaianet CLI to manage the node. The following are the CLI options.","s":"GaiaNet CLI options","u":"/1.0.0/node-guide/cli-options","h":"","p":1007},{"i":1010,"t":"You can use gaianet --help to check all the available CLI options. gaianet --help ## Output Usage: gaianet {config|init|run|stop|OPTIONS} Subcommands: config Update the configuration. init Initialize the GaiaNet node. run|start Start the GaiaNet node. stop Stop the GaiaNet node. Options: --help Show this help message","s":"help","u":"/1.0.0/node-guide/cli-options","h":"#help","p":1007},{"i":1012,"t":"You can use gaianet --version to check your GaiaNet version. gaianet --version","s":"version","u":"/1.0.0/node-guide/cli-options","h":"#version","p":1007},{"i":1014,"t":"The gaianet init command initializes the node according to the $HOME/gaianet/config.json file. You can use some of our pre-set configurations. gaianet init will init the default node. It's a RAG application with GaiaNet knowledge. gaianet init --config mua will init a node with the MUA project knowledge. gaianet init --base will init a node in an alternative directory. You can also use gaianet init url_your_config_json to init your customized settings for the node. You can customize your node using the GaiaNet node link. If you're familiar with the GaiaNet config.json, you can create your own manually. See an example here. 
gaianet init --config https://raw.githubusercontent.com/GaiaNet-AI/node-configs/main/pure-llama-3-8b/config.json","s":"init","u":"/1.0.0/node-guide/cli-options","h":"#init","p":1007},{"i":1016,"t":"The gaianet start command starts the node. Use gaianet start to start the node according to the $HOME/gaianet/config.json file. Use gaianet start --base $HOME/gaianet-2.alt to start the node according to the $HOME/gaianet-2/config.json file. Use gaianet start --local-only to start the node for local use according to the $HOME/gaianet/config.json file.","s":"start","u":"/1.0.0/node-guide/cli-options","h":"#start","p":1007},{"i":1018,"t":"The gaianet stop command stops the running node. Use gaianet stop to stop the running node. Use gaianet stop --force to force-stop the GaiaNet node. Use gaianet stop --base $HOME/gaianet-2.alt to stop the node according to the $HOME/gaianet-2/config.json file.","s":"stop","u":"/1.0.0/node-guide/cli-options","h":"#stop","p":1007},{"i":1020,"t":"The gaianet config command can update the key fields defined in the config.json file. gaianet config --help will list all the available arguments. gaianet config --chat-url will change the download link of the chat model. gaianet config --prompt-template