Version: 2.19

HierarchicalDocumentSplitter

Use this component to create a multi-level document structure based on parent-child relationships between text segments.

Most common position in a pipeline: In indexing pipelines, after Converters and DocumentCleaner
Mandatory init variables: "block_sizes": Set of block sizes to split the document into. The blocks are split in descending order.
Mandatory run variables: "documents": A list of documents to split into hierarchical blocks
Output variables: "documents": A list of hierarchical documents
API reference: PreProcessors
GitHub link: https://github.com/deepset-ai/haystack/blob/dae8c7babaf28d2ffab4f2a8dedecd63e2394fb4/haystack/components/preprocessors/hierarchical_document_splitter.py

Overview

The HierarchicalDocumentSplitter divides documents into blocks of different sizes, creating a tree-like structure.

A block is one of the chunks of text that the splitter produces. Think of it as cutting a long piece of text into smaller pieces: each piece is a block. Blocks form a tree structure in which the full document is the root block; as it is split into smaller and smaller pieces, you get child blocks and leaf blocks, down to the smallest size you specify.

The AutoMergingRetriever component then leverages this hierarchical structure to improve document retrieval; a sketch of that retrieval step follows the pipeline example at the end of this page.

To initialize the component, you need to specify block_sizes, the maximum length of each block, measured in the unit set by the split_by parameter. Pass a set of sizes (for example, {20, 5}), and the splitter will:

  • First, split the document into blocks of up to 20 units each (the "parent" blocks).
  • Then, split each of those blocks into blocks of up to 5 units each (the "child" blocks).

This descending order of sizes builds the hierarchy.
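
For example, the sketch below (an illustrative example with a made-up 40-word text, not taken from the official docs) runs the splitter with block_sizes={20, 5} and prints the size of each block it returns, so the parent and child levels are easy to see:

python
from haystack import Document
from haystack.components.preprocessors import HierarchicalDocumentSplitter

text = " ".join(f"word{i}" for i in range(1, 41))  # a 40-word toy document

splitter = HierarchicalDocumentSplitter(block_sizes={20, 5}, split_overlap=0, split_by="word")
result = splitter.run([Document(content=text)])

# Each returned Document is one block of the hierarchy: the full text,
# the 20-word parent blocks, and the 5-word child blocks.
for block in result["documents"]:
    print(len(block.content.split()), "words:", block.content[:50])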

These additional parameters can be set when the component is initialized:

  • split_by can be "word" (default), "sentence", "passage", or "page".
  • split_overlap is an integer specifying the number of overlapping words, sentences, or passages between chunks; the default is 0.
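
Neither of the examples below uses a non-zero split_overlap, so here is a minimal sketch (with made-up text) showing its effect: with a block size of 4 words and an overlap of 2, each block repeats the last two words of the previous one.

python
from haystack import Document
from haystack.components.preprocessors import HierarchicalDocumentSplitter

# A single block size produces one level of blocks beneath the full document.
splitter = HierarchicalDocumentSplitter(block_sizes={4}, split_overlap=2, split_by="word")
result = splitter.run([Document(content="one two three four five six seven eight")])

# Consecutive 4-word blocks overlap by 2 words, e.g. "three four" appears in the first two blocks.
for block in result["documents"]:
    print(repr(block.content))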

Usage

On its own

python
from haystack import Document
from haystack.components.preprocessors import HierarchicalDocumentSplitter

doc = Document(content="This is a simple test document")
splitter = HierarchicalDocumentSplitter(block_sizes={3, 2}, split_overlap=0, split_by="word")
splitter.run([doc])
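
The run method returns a dictionary with a "documents" key: a list that contains the original document together with the parent and leaf blocks derived from it, each as a regular Document. Continuing the example above, you can inspect them like this:

python
result = splitter.run([doc])

for block in result["documents"]:
    print(repr(block.content))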

In a pipeline

This Haystack pipeline processes .md files by converting them to documents, cleaning the text, splitting it into hierarchical sentence-based blocks, and storing the results in an InMemoryDocumentStore.

python
from pathlib import Path

from haystack import Pipeline
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.converters.txt import TextFileToDocument
from haystack.components.preprocessors import DocumentCleaner
from haystack.components.preprocessors import HierarchicalDocumentSplitter
from haystack.components.writers import DocumentWriter

document_store = InMemoryDocumentStore()

pipeline = Pipeline()
pipeline.add_component(instance=TextFileToDocument(), name="text_file_converter")
pipeline.add_component(instance=DocumentCleaner(), name="cleaner")
pipeline.add_component(
    instance=HierarchicalDocumentSplitter(block_sizes={10, 6, 3}, split_overlap=0, split_by="sentence"),
    name="splitter",
)
pipeline.add_component(instance=DocumentWriter(document_store=document_store), name="writer")

pipeline.connect("text_file_converter.documents", "cleaner.documents")
pipeline.connect("cleaner.documents", "splitter.documents")
pipeline.connect("splitter.documents", "writer.documents")

path = "path/to/your/files"
files = list(Path(path).glob("*.md"))
pipeline.run({"text_file_converter": {"sources": files}})
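
The blocks written by this pipeline are what the AutoMergingRetriever mentioned in the Overview consumes at query time. The sketch below is an illustration, not part of the example above: it assumes the splitter marks each block's depth in the "__level" meta field (0 for the root, 1 for parents, 2 for leaves with these block sizes) and that AutoMergingRetriever takes a document store of parent blocks plus a threshold; check the AutoMergingRetriever documentation for the exact API in your version. The idea is to index the leaf blocks for retrieval and let the retriever merge retrieved leaves back into their parent blocks.

python
from haystack import Document
from haystack.components.preprocessors import HierarchicalDocumentSplitter
from haystack.components.retrievers.auto_merging_retriever import AutoMergingRetriever
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

docs = [Document(content="The quick brown fox jumps over the lazy dog. " * 10)]
blocks = HierarchicalDocumentSplitter(block_sizes={10, 3}, split_by="word").run(docs)["documents"]

# Assumption: "__level" records each block's depth (0 = root, 1 = parents, 2 = leaves).
leaf_store = InMemoryDocumentStore()
parent_store = InMemoryDocumentStore()
leaf_store.write_documents([b for b in blocks if b.meta.get("__level") == 2])
parent_store.write_documents([b for b in blocks if b.meta.get("__level") == 1])

# Retrieve leaf blocks, then merge them into their parents when enough of a parent's leaves match.
leaves = InMemoryBM25Retriever(document_store=leaf_store).run(query="lazy dog")["documents"]
merged = AutoMergingRetriever(document_store=parent_store, threshold=0.5).run(documents=leaves)["documents"]
print(merged)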