From 2cece27208c4bce715d20000b845794dfb97843d Mon Sep 17 00:00:00 2001 From: Panos Vagenas <35837085+vagenas@users.noreply.github.com> Date: Mon, 28 Oct 2024 14:28:26 +0100 Subject: [PATCH] docs: update LlamaIndex docs for Docling v2 (#182) Signed-off-by: Panos Vagenas <35837085+vagenas@users.noreply.github.com> --- docs/examples/rag_llamaindex.ipynb | 137 +++++++++++++++++------------ docs/integrations/llamaindex.md | 11 +-- 2 files changed, 86 insertions(+), 62 deletions(-) diff --git a/docs/examples/rag_llamaindex.ipynb b/docs/examples/rag_llamaindex.ipynb index e5b8d68d1..0252bc4fd 100644 --- a/docs/examples/rag_llamaindex.ipynb +++ b/docs/examples/rag_llamaindex.ipynb @@ -14,13 +14,6 @@ "# RAG with LlamaIndex đŸĻ™" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "> ℹī¸ 👉 **The LlamaIndex Docling extension update to Docling v2 is ongoing; in the meanwhile, this notebook is showing current extension output, based on Docling v1.**" - ] - }, { "cell_type": "markdown", "metadata": {}, @@ -35,8 +28,8 @@ "This example leverages the official [LlamaIndex Docling extension](../../integrations/llamaindex/).\n", "\n", "Presented extensions `DoclingReader` and `DoclingNodeParser` enable you to:\n", - "- use PDF documents in your LLM applications with ease and speed, and\n", - "- harness Docling's rich format for advanced, document-native grounding." + "- use various document types in your LLM applications with ease and speed, and\n", + "- leverage Docling's rich format for advanced, document-native grounding." 
] }, { @@ -69,7 +62,7 @@ } ], "source": [ - "%pip install -q --progress-bar off --no-warn-conflicts llama-index-core llama-index-readers-docling llama-index-node-parser-docling llama-index-embeddings-huggingface llama-index-llms-huggingface-api llama-index-readers-file python-dotenv" + "%pip install -q --progress-bar off --no-warn-conflicts llama-index-core llama-index-readers-docling llama-index-node-parser-docling llama-index-embeddings-huggingface llama-index-llms-huggingface-api llama-index-vector-stores-milvus llama-index-readers-file python-dotenv" ] }, { @@ -161,7 +154,7 @@ "output_type": "stream", "text": [ "Q: Which are the main AI models in Docling?\n", - "A: 1. A layout analysis model, an accurate object-detector for page elements. 2. TableFormer, a state-of-the-art table structure recognition model.\n", + "A: The main AI models in Docling are a layout analysis model, which is an accurate object-detector for page elements, and TableFormer, a state-of-the-art table structure recognition model.\n", "\n", "Sources:\n" ] @@ -170,11 +163,9 @@ "data": { "text/plain": [ "[('3.2 AI models\\n\\nAs part of Docling, we initially release two highly capable AI models to the open-source community, which have been developed and published recently by our team. The first model is a layout analysis model, an accurate object-detector for page elements [13]. The second model is TableFormer [12, 9], a state-of-the-art table structure recognition model. We provide the pre-trained weights (hosted on huggingface) and a separate package for the inference code as docling-ibm-models . 
Both models are also powering the open-access deepsearch-experience, our cloud-native service for knowledge exploration tasks.',\n", - " {'dl_doc_hash': '556ad9e23b6d2245e36b3208758cf0c8a709382bb4c859eacfe8e73b14e635aa',\n", - " 'Header_2': '3.2 AI models'}),\n", + " {'Header_2': '3.2 AI models'}),\n", " (\"5 Applications\\n\\nThanks to the high-quality, richly structured document conversion achieved by Docling, its output qualifies for numerous downstream applications. For example, Docling can provide a base for detailed enterprise document search, passage retrieval or classification use-cases, or support knowledge extraction pipelines, allowing specific treatment of different structures in the document, such as tables, figures, section structure or references. For popular generative AI application patterns, such as retrieval-augmented generation (RAG), we provide quackling , an open-source package which capitalizes on Docling's feature-rich document output to enable document-native optimized vector embedding and chunking. It plugs in seamlessly with LLM frameworks such as LlamaIndex [8]. Since Docling is fast, stable and cheap to run, it also makes for an excellent choice to build document-derived datasets. With its powerful table structure recognition, it provides significant benefit to automated knowledge-base construction [11, 10]. Docling is also integrated within the open IBM data prep kit [6], which implements scalable data transforms to build large-scale multi-modal training datasets.\",\n", - " {'dl_doc_hash': '556ad9e23b6d2245e36b3208758cf0c8a709382bb4c859eacfe8e73b14e635aa',\n", - " 'Header_2': '5 Applications'})]" + " {'Header_2': '5 Applications'})]" ] }, "metadata": {}, @@ -243,23 +234,41 @@ "data": { "text/plain": [ "[('As part of Docling, we initially release two highly capable AI models to the open-source community, which have been developed and published recently by our team. 
The first model is a layout analysis model, an accurate object-detector for page elements [13]. The second model is TableFormer [12, 9], a state-of-the-art table structure recognition model. We provide the pre-trained weights (hosted on huggingface) and a separate package for the inference code as docling-ibm-models . Both models are also powering the open-access deepsearch-experience, our cloud-native service for knowledge exploration tasks.',\n", - " {'dl_doc_hash': '556ad9e23b6d2245e36b3208758cf0c8a709382bb4c859eacfe8e73b14e635aa',\n", - " 'path': '#/main-text/37',\n", - " 'heading': '3.2 AI models',\n", - " 'page': 3,\n", - " 'bbox': [107.36903381347656,\n", - " 330.07513427734375,\n", - " 506.29705810546875,\n", - " 407.3725280761719]}),\n", + " {'schema_name': 'docling_core.transforms.chunker.DocMeta',\n", + " 'version': '1.0.0',\n", + " 'doc_items': [{'self_ref': '#/texts/34',\n", + " 'parent': {'$ref': '#/body'},\n", + " 'children': [],\n", + " 'label': 'text',\n", + " 'prov': [{'page_no': 3,\n", + " 'bbox': {'l': 107.07593536376953,\n", + " 't': 406.1695251464844,\n", + " 'r': 504.1148681640625,\n", + " 'b': 330.2677307128906,\n", + " 'coord_origin': 'BOTTOMLEFT'},\n", + " 'charspan': [0, 608]}]}],\n", + " 'headings': ['3.2 AI models'],\n", + " 'origin': {'mimetype': 'application/pdf',\n", + " 'binary_hash': 14981478401387673002,\n", + " 'filename': '2408.09869v3.pdf'}}),\n", " ('With Docling , we open-source a very capable and efficient document conversion tool which builds on the powerful, specialized AI models and datasets for layout analysis and table structure recognition we developed and presented in the recent past [12, 13, 9]. Docling is designed as a simple, self-contained python library with permissive license, running entirely locally on commodity hardware. 
Its code architecture allows for easy extensibility and addition of new features and models.',\n", - " {'dl_doc_hash': '556ad9e23b6d2245e36b3208758cf0c8a709382bb4c859eacfe8e73b14e635aa',\n", - " 'path': '#/main-text/10',\n", - " 'heading': '1 Introduction',\n", - " 'page': 1,\n", - " 'bbox': [107.33261108398438,\n", - " 83.3067626953125,\n", - " 504.0033874511719,\n", - " 136.45367431640625]})]" + " {'schema_name': 'docling_core.transforms.chunker.DocMeta',\n", + " 'version': '1.0.0',\n", + " 'doc_items': [{'self_ref': '#/texts/9',\n", + " 'parent': {'$ref': '#/body'},\n", + " 'children': [],\n", + " 'label': 'text',\n", + " 'prov': [{'page_no': 1,\n", + " 'bbox': {'l': 107.0031967163086,\n", + " 't': 136.7283935546875,\n", + " 'r': 504.04998779296875,\n", + " 'b': 83.30133056640625,\n", + " 'coord_origin': 'BOTTOMLEFT'},\n", + " 'charspan': [0, 488]}]}],\n", + " 'headings': ['1 Introduction'],\n", + " 'origin': {'mimetype': 'application/pdf',\n", + " 'binary_hash': 14981478401387673002,\n", + " 'filename': '2408.09869v3.pdf'}})]" ] }, "metadata": {}, @@ -335,7 +344,7 @@ "name": "stderr", "output_type": "stream", "text": [ - "Loading files: 100%|██████████| 1/1 [00:11<00:00, 11.15s/file]\n" + "Loading files: 100%|██████████| 1/1 [00:11<00:00, 11.27s/file]\n" ] }, { @@ -343,7 +352,7 @@ "output_type": "stream", "text": [ "Q: Which are the main AI models in Docling?\n", - "A: The main AI models in Docling are a layout analysis model and TableFormer. The layout analysis model is an accurate object-detector for page elements, and TableFormer is a state-of-the-art table structure recognition model.\n", + "A: 1. A layout analysis model, an accurate object-detector for page elements. 2. 
TableFormer, a state-of-the-art table structure recognition model.\n", "\n", "Sources:\n" ] @@ -352,35 +361,53 @@ "data": { "text/plain": [ "[('As part of Docling, we initially release two highly capable AI models to the open-source community, which have been developed and published recently by our team. The first model is a layout analysis model, an accurate object-detector for page elements [13]. The second model is TableFormer [12, 9], a state-of-the-art table structure recognition model. We provide the pre-trained weights (hosted on huggingface) and a separate package for the inference code as docling-ibm-models . Both models are also powering the open-access deepsearch-experience, our cloud-native service for knowledge exploration tasks.',\n", - " {'file_path': '/var/folders/76/4wwfs06x6835kcwj4186c0nc0000gn/T/tmp4vsev3_r/2408.09869.pdf',\n", + " {'file_path': '/var/folders/76/4wwfs06x6835kcwj4186c0nc0000gn/T/tmp2ooyusg5/2408.09869.pdf',\n", " 'file_name': '2408.09869.pdf',\n", " 'file_type': 'application/pdf',\n", " 'file_size': 5566574,\n", - " 'creation_date': '2024-10-09',\n", - " 'last_modified_date': '2024-10-09',\n", - " 'dl_doc_hash': '556ad9e23b6d2245e36b3208758cf0c8a709382bb4c859eacfe8e73b14e635aa',\n", - " 'path': '#/main-text/37',\n", - " 'heading': '3.2 AI models',\n", - " 'page': 3,\n", - " 'bbox': [107.36903381347656,\n", - " 330.07513427734375,\n", - " 506.29705810546875,\n", - " 407.3725280761719]}),\n", + " 'creation_date': '2024-10-28',\n", + " 'last_modified_date': '2024-10-28',\n", + " 'schema_name': 'docling_core.transforms.chunker.DocMeta',\n", + " 'version': '1.0.0',\n", + " 'doc_items': [{'self_ref': '#/texts/34',\n", + " 'parent': {'$ref': '#/body'},\n", + " 'children': [],\n", + " 'label': 'text',\n", + " 'prov': [{'page_no': 3,\n", + " 'bbox': {'l': 107.07593536376953,\n", + " 't': 406.1695251464844,\n", + " 'r': 504.1148681640625,\n", + " 'b': 330.2677307128906,\n", + " 'coord_origin': 'BOTTOMLEFT'},\n", + " 'charspan': [0, 
608]}]}],\n", + " 'headings': ['3.2 AI models'],\n", + " 'origin': {'mimetype': 'application/pdf',\n", + " 'binary_hash': 14981478401387673002,\n", + " 'filename': '2408.09869.pdf'}}),\n", " ('With Docling , we open-source a very capable and efficient document conversion tool which builds on the powerful, specialized AI models and datasets for layout analysis and table structure recognition we developed and presented in the recent past [12, 13, 9]. Docling is designed as a simple, self-contained python library with permissive license, running entirely locally on commodity hardware. Its code architecture allows for easy extensibility and addition of new features and models.',\n", - " {'file_path': '/var/folders/76/4wwfs06x6835kcwj4186c0nc0000gn/T/tmp4vsev3_r/2408.09869.pdf',\n", + " {'file_path': '/var/folders/76/4wwfs06x6835kcwj4186c0nc0000gn/T/tmp2ooyusg5/2408.09869.pdf',\n", " 'file_name': '2408.09869.pdf',\n", " 'file_type': 'application/pdf',\n", " 'file_size': 5566574,\n", - " 'creation_date': '2024-10-09',\n", - " 'last_modified_date': '2024-10-09',\n", - " 'dl_doc_hash': '556ad9e23b6d2245e36b3208758cf0c8a709382bb4c859eacfe8e73b14e635aa',\n", - " 'path': '#/main-text/10',\n", - " 'heading': '1 Introduction',\n", - " 'page': 1,\n", - " 'bbox': [107.33261108398438,\n", - " 83.3067626953125,\n", - " 504.0033874511719,\n", - " 136.45367431640625]})]" + " 'creation_date': '2024-10-28',\n", + " 'last_modified_date': '2024-10-28',\n", + " 'schema_name': 'docling_core.transforms.chunker.DocMeta',\n", + " 'version': '1.0.0',\n", + " 'doc_items': [{'self_ref': '#/texts/9',\n", + " 'parent': {'$ref': '#/body'},\n", + " 'children': [],\n", + " 'label': 'text',\n", + " 'prov': [{'page_no': 1,\n", + " 'bbox': {'l': 107.0031967163086,\n", + " 't': 136.7283935546875,\n", + " 'r': 504.04998779296875,\n", + " 'b': 83.30133056640625,\n", + " 'coord_origin': 'BOTTOMLEFT'},\n", + " 'charspan': [0, 488]}]}],\n", + " 'headings': ['1 Introduction'],\n", + " 'origin': {'mimetype': 
'application/pdf',\n", + " 'binary_hash': 14981478401387673002,\n", + " 'filename': '2408.09869.pdf'}})]" ] }, "metadata": {}, diff --git a/docs/integrations/llamaindex.md b/docs/integrations/llamaindex.md index af82da318..424532abb 100644 --- a/docs/integrations/llamaindex.md +++ b/docs/integrations/llamaindex.md @@ -2,11 +2,8 @@ Docling is available as an official LlamaIndex extension! -To get started, check out the [step-by-step guide in LlamaIndex \[↗\]](https://docs.llamaindex.ai/en/stable/examples/data_connectors/DoclingReaderDemo/). - -!!! info "Docling v2" - - The LlamaIndex Docling extension update to Docling v2 is ongoing. + +To get started, check out the [step-by-step guide \[↗\]](https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/data_connectors/DoclingReaderDemo.ipynb) ## Components @@ -15,15 +12,15 @@ To get started, check out the [step-by-step guide in LlamaIndex \[↗\]](https:/ Reads document files and uses Docling to populate LlamaIndex `Document` objects — either serializing Docling's data model (losslessly, e.g. as JSON) or exporting to a simplified format (lossily, e.g. as Markdown). - đŸ’ģ [GitHub \[↗\]](https://github.com/run-llama/llama_index/tree/main/llama-index-integrations/readers/llama-index-readers-docling) -- 📖 [API docs \[↗\]](https://docs.llamaindex.ai/en/stable/api_reference/readers/docling/) - đŸ“Ļ [PyPI \[↗\]](https://pypi.org/project/llama-index-readers-docling/) - đŸĻ™ [LlamaHub \[↗\]](https://llamahub.ai/l/readers/llama-index-readers-docling) + ### Docling Node Parser Reads LlamaIndex `Document` objects populated in Docling's format by Docling Reader and, using its knowledge of the Docling format, parses them to LlamaIndex `Node` objects for downstream usage in LlamaIndex applications, e.g. as chunks for embedding. 
- đŸ’ģ [GitHub \[↗\]](https://github.com/run-llama/llama_index/tree/main/llama-index-integrations/node_parser/llama-index-node-parser-docling) -- 📖 [API docs \[↗\]](https://docs.llamaindex.ai/en/stable/api_reference/node_parser/docling/) - đŸ“Ļ [PyPI \[↗\]](https://pypi.org/project/llama-index-node-parser-docling/) - đŸĻ™ [LlamaHub \[↗\]](https://llamahub.ai/l/node_parser/llama-index-node-parser-docling) +
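For readers updating code against this change: the notebook outputs above show that the v2 extension replaces the flat v1 chunk metadata (`dl_doc_hash`, `path`, `heading`, `page`, `bbox`) with a nested `docling_core.transforms.chunker.DocMeta` dict, where provenance lives under `doc_items[*].prov` and section context under `headings`. The following is a minimal sketch of reading the new shape back; the helper name and return structure are illustrative assumptions, not part of the extension API, and the sample dict is abridged from the notebook output above.

```python
def summarize_chunk_meta(meta: dict) -> dict:
    """Collect headings, page numbers, and bounding boxes from a
    Docling v2 chunk-metadata dict (DocMeta), as emitted by the
    updated DoclingReader/DoclingNodeParser outputs shown above."""
    pages = []
    bboxes = []
    # Provenance is nested per document item in v2 (vs. flat 'page'/'bbox' in v1).
    for item in meta.get("doc_items", []):
        for prov in item.get("prov", []):
            pages.append(prov["page_no"])
            b = prov["bbox"]  # dict with l/t/r/b and a coord_origin, not a flat list
            bboxes.append((b["l"], b["t"], b["r"], b["b"]))
    return {
        "headings": meta.get("headings", []),   # v1 exposed a single 'heading' string
        "pages": pages,
        "bboxes": bboxes,
        "filename": meta.get("origin", {}).get("filename"),
    }

# Abridged v2 metadata, taken from the notebook output in this patch.
meta = {
    "schema_name": "docling_core.transforms.chunker.DocMeta",
    "version": "1.0.0",
    "doc_items": [{
        "self_ref": "#/texts/34",
        "label": "text",
        "prov": [{"page_no": 3,
                  "bbox": {"l": 107.076, "t": 406.170,
                           "r": 504.115, "b": 330.268,
                           "coord_origin": "BOTTOMLEFT"},
                  "charspan": [0, 608]}],
    }],
    "headings": ["3.2 AI models"],
    "origin": {"mimetype": "application/pdf", "filename": "2408.09869.pdf"},
}

print(summarize_chunk_meta(meta)["pages"])  # → [3]
```

A migration can route downstream consumers of the old `page`/`heading` fields through a shim like this, since the same information is still present, just relocated into the nested `DocMeta` structure.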