ZeroxPDFLoader
This notebook provides a quick overview for getting started with the ZeroxPDFLoader document loader. For detailed documentation of all ZeroxPDFLoader features and configurations, head to the API reference.
Overview
ZeroxPDFLoader is a document loader that leverages the Zerox library. Zerox converts PDF documents into images, processes them using a vision-capable language model, and generates a structured Markdown representation. This loader allows for asynchronous operations and provides page-level document extraction.
Integration details
| Class | Package | Local | Serializable | JS support |
|---|---|---|---|---|
| ZeroxPDFLoader | langchain_community | ✅ | ❌ | ❌ |
Loader features
| Source | Document Lazy Loading | Native Async Support | Extract Images | Extract Tables |
|---|---|---|---|---|
| ZeroxPDFLoader | ✅ | ❌ | ✅ | ✅ |
Setup
ZeroxPDFLoader enables PDF text extraction using vision-capable language models by converting each page into an image and processing it asynchronously. To use this loader, you need to specify a model and configure any necessary environment variables for Zerox, such as API keys.
If you're working in an environment like Jupyter Notebook, you may need to handle asynchronous code by using nest_asyncio. You can set this up as follows:
import nest_asyncio
nest_asyncio.apply()
import os
from getpass import getpass

from dotenv import load_dotenv

load_dotenv()

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key =")
Credentials
Appropriate credentials need to be set up in environment variables. The loader supports a number of different models and model providers. See the Usage section below for a few examples, or the Zerox documentation for a full list of supported models.
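For example, to use a provider other than OpenAI, set that provider's key instead. A minimal sketch, assuming the GEMINI_API_KEY variable used by Gemini (variable names differ per provider; check the Zerox documentation):

import os
from getpass import getpass

# Illustrative only: the exact variable name depends on the provider you
# choose (see the Zerox documentation). GEMINI_API_KEY is assumed here.
if not os.environ.get("GEMINI_API_KEY"):
    os.environ["GEMINI_API_KEY"] = getpass("Gemini API key =")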
If you want to get automated best-in-class tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"
Installation
Install langchain_community and py-zerox.
%pip install -qU langchain_community py-zerox
Note: you may need to restart the kernel to use updated packages.
Initialization
Now we can instantiate our loader and load documents:
import warnings

import nest_asyncio

nest_asyncio.apply()

# Ignore the custom-system-prompt warning emitted by py-zerox.
warnings.filterwarnings(
    "ignore",
    module=r"^pyzerox.models.modellitellm$",
    message=r"\s*Custom system prompt was provided which.*",
)
from langchain_community.document_loaders.pdf import ZeroxPDFLoader
file_path = "./example_data/layout-parser-paper.pdf"
loader = ZeroxPDFLoader(file_path)
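By default the loader uses "gpt-4o-mini". If you want another vision model, a minimal sketch using the model parameter described in the API reference below (the model value here is illustrative; extra keyword arguments are forwarded to Zerox):

# A sketch: pick the vision model explicitly, in <provider>/<model> format.
# Any additional keyword arguments are passed through to Zerox itself.
loader = ZeroxPDFLoader(
    file_path,
    model="azure/gpt-4o-mini",  # defaults to "gpt-4o-mini"
)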
Load
docs = loader.load()
docs[0]
Document(metadata={'producer': 'pdfTeX-1.40.21', 'creator': 'LaTeX with hyperref', 'creationdate': '2021-06-22T01:27:10+00:00', 'author': '', 'keywords': '', 'moddate': '2021-06-22T01:27:10+00:00', 'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live 2020) kpathsea version 6.3.2', 'subject': '', 'title': '', 'trapped': 'False', 'total_pages': 16, 'source': './example_data/layout-parser-paper.pdf', 'num_pages': 16, 'page': 0}, page_content='# LayoutParser: A Unified Toolkit for Deep Learning Based Document Image Analysis\n\nZejiong Shen¹, Ruochen Zhang², Melissa Dell³, Benjamin Charles Germain Lee⁴, Jacob Carlson³, and Weining Li⁵\n\n¹ Allen Institute for AI \nshannons@allenai.org \n² Brown University \nruochen_zhang@brown.edu \n³ Harvard University \n{melissadell,jacob.carlson}@fas.harvard.edu \n⁴ University of Washington \nbgc@cs.washington.edu \n⁵ University of Waterloo \nw422ii@uwaterloo.ca \n\n**Abstract.** Recent advances in document image analysis (DIA) have been primarily driven by the application of neural networks. Ideally, research outcomes could be easily deployed in production and extended for further investigation. However, various factors like loosely organized codebases and sophisticated model configurations complicate the easy reuse of important innovations by a wide audience. Though there have been on-going efforts to improve reusability and simplify deep learning (DL) model development in disciplines like natural language processing and computer vision, none of them are optimized for challenges in the domain of DIA. This represents a major gap in the existing toolkit, as DIA is central to academic research across a wide range of disciplines in the social sciences and humanities. This paper introduces LayoutParser, an open-source library for streamlining the usage of DL in DIA research and applications. The core LayoutParser library comes with a set of simple and intuitive interfaces for applying and customizing DL models for layout detection, character recognition, and many other document processing tasks. To promote extensibility, LayoutParser also incorporates a community platform for sharing both pre-trained models and full document digitization pipelines. We demonstrate that LayoutParser is helpful for both lightweight and large-scale digitization pipelines in real-world use cases. The library is publicly available at https://layout-parser.github.io\n\n**Keywords:** Document Image Analysis · Deep Learning · Layout Analysis · Character Recognition · Open Source library · Toolkit.\n\n# 1 Introduction\n\nDeep Learning (DL)-based approaches are the state-of-the-art for a wide range of document image analysis (DIA) tasks including document image classification [11]')
import pprint
pprint.pp(docs[0].metadata)
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'author': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
'2020) kpathsea version 6.3.2',
'subject': '',
'title': '',
'trapped': 'False',
'total_pages': 16,
'source': './example_data/layout-parser-paper.pdf',
'num_pages': 16,
'page': 0}
Lazy Load
pages = []
for doc in loader.lazy_load():
    pages.append(doc)
    if len(pages) >= 10:
        # do some paged operation, e.g.
        # index.upsert(page)

        pages = []
len(pages)
6
print(pages[0].page_content[:100])
pprint.pp(pages[0].metadata)
5.1 A Comprehensive Historical Document Digitization Pipeline
The digitization of historical docume
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'author': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
'2020) kpathsea version 6.3.2',
'subject': '',
'title': '',
'trapped': 'False',
'total_pages': 16,
'source': './example_data/layout-parser-paper.pdf',
'num_pages': 16,
'page': 10}
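As a sketch of what the paged operation above could look like, here is the same batching loop pushing each batch of pages into a vector store. InMemoryVectorStore and OpenAIEmbeddings are illustrative stand-ins, not required by the loader, and assume langchain_openai is installed:

from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

# Illustrative index; any vector store with add_documents would do.
vector_store = InMemoryVectorStore(OpenAIEmbeddings())

pages = []
for doc in loader.lazy_load():
    pages.append(doc)
    if len(pages) >= 10:
        # Index the current batch of pages, then start a new batch.
        vector_store.add_documents(pages)
        pages = []
if pages:
    vector_store.add_documents(pages)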
The metadata attribute contains at least the following keys:
- source
- page (if in mode page)
- total_pages
- creationdate
- creator
- producer
Additional metadata are specific to each parser. These pieces of information can be helpful (to categorize your PDFs for example).
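For example, a quick sketch that groups the loaded documents by their source key (any of the keys above could serve as the grouping criterion):

from collections import defaultdict

# Group documents by originating PDF using the "source" metadata key.
docs_by_source = defaultdict(list)
for doc in docs:
    docs_by_source[doc.metadata["source"]].append(doc)

for source, source_docs in docs_by_source.items():
    print(source, len(source_docs))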
Splitting mode & custom pages delimiter
When loading the PDF file you can split it in two different ways:
- By page
- As a single text flow
By default, ZeroxPDFLoader will split the PDF by page.
Extract the PDF by page. Each page is extracted as a langchain Document object:
loader = ZeroxPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
)
docs = loader.load()
print(len(docs))
pprint.pp(docs[0].metadata)
16
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'author': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
'2020) kpathsea version 6.3.2',
'subject': '',
'title': '',
'trapped': 'False',
'total_pages': 16,
'source': './example_data/layout-parser-paper.pdf',
'num_pages': 16,
'page': 0}
In this mode the PDF is split by pages and the resulting Documents' metadata contains the page number. But in some cases we could want to process the PDF as a single text flow (so we don't cut some paragraphs in half). In this case you can use the single mode:
Extract the whole PDF as a single langchain Document object:
loader = ZeroxPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="single",
)
docs = loader.load()
print(len(docs))
pprint.pp(docs[0].metadata)
1
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'author': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
'2020) kpathsea version 6.3.2',
'subject': '',
'title': '',
'trapped': 'False',
'total_pages': 16,
'source': './example_data/layout-parser-paper.pdf',
'num_pages': 16}
Logically, in this mode, the page metadata disappears. Here's how to clearly identify where pages end in the text flow:
Add a custom pages_delimiter to identify where pages end in single mode:
loader = ZeroxPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="single",
    pages_delimiter="\n-------THIS IS A CUSTOM END OF PAGE-------\n",
)
docs = loader.load()
print(docs[0].page_content[:5780])
# LayoutParser: A Unified Toolkit for Deep Learning Based Document Image Analysis
Zejian Shen¹, Ruochen Zhang², Melissa Dell³, Benjamin Charles Germain Lee⁴, Jacob Carlson³, and Weining Li⁵
¹ Allen Institute for AI
shannon@allenai.org
² Brown University
ruohen_zhang@brown.edu
³ Harvard University
{melissadell, jacob.carlson}@fas.harvard.edu
⁴ University of Washington
bgcl@cs.washington.edu
⁵ University of Waterloo
w422ii@uwaterloo.ca
**Abstract.** Recent advances in document image analysis (DIA) have been primarily driven by the application of neural networks. Ideally, research outcomes could be easily deployed in production and extended for further investigation. However, various factors like loosely organized codebases and sophisticated model configurations complicate the easy reuse of important innovations by a wide audience. Though there have been on-going efforts to improve reusability and simplify deep learning (DL) model development in disciplines like natural language processing and computer vision, none of them are optimized for challenges in the domain of DIA. This represents a major gap in the existing toolkit, as DIA is central to academic research across a wide range of disciplines in the social sciences and humanities. This paper introduces LayoutParser, an open-source library for streamlining the usage of DL in DIA research and applications. The core LayoutParser library comes with a set of simple and intuitive interfaces for applying and customizing DL models for layout detection, character recognition, and many other document processing tasks. To promote extensibility, LayoutParser also incorporates a community platform for sharing both pre-trained models and full document digitization pipelines. We demonstrate that LayoutParser is helpful for both lightweight and large-scale digitization pipelines in real-world use cases. The library is publicly available at [https://layout-parser.github.io](https://layout-parser.github.io).
**Keywords:** Document Image Analysis · Deep Learning · Layout Analysis · Character Recognition · Open Source library · Toolkit.
## 1 Introduction
Deep Learning (DL)-based approaches are the state-of-the-art for a wide range of document image analysis (DIA) tasks including document image classification [11]
-------THIS IS A CUSTOM END OF PAGE-------
[37], layout detection [38, 22], table detection [26], and scene text detection [4]. A generalized learning-based framework dramatically reduces the need for the manual specification of complicated rules, which is the status quo with traditional methods. DL has the potential to transform DIA pipelines and benefit a broad spectrum of large-scale document digitization projects.
However, there are several practical difficulties for taking advantages of recent advances in DL-based methods: 1) DL models are notoriously convoluted for reuse and extension. Existing models are developed using distinct frameworks like TensorFlow [1] or PyTorch [24], and the high-level parameters can be obfuscated by implementation details [8]. It can be a time-consuming and frustrating experience to debug, reproduce, and adapt existing models for DIA, and many researchers who could benefit the most from using these methods lack the technical background to implement them from scratch. 2) Document images contain diverse and disparate patterns across domains, and customized training is often required to achieve a desirable detection accuracy. Currently there is no full-fledged infrastructure for easily curating the target document image datasets and fine-tuning or re-training the models. 3) DIA usually requires a sequence of models and other processing to obtain the final outputs. Often research teams use DL models and then perform further document analyses in separate processes, and these pipelines are not documented in any central location (and often not documented at all). This makes it difficult for research teams to learn about how to reinvent the DIA wheel.
LayoutParser provides a unified toolkit to support DL-based document image analysis and processing. To address the aforementioned challenges, LayoutParser is built with the following components:
1. An off-the-shelf toolkit for applying DL models for layout detection, character recognition, and other DIA tasks (Section 3)
2. A rich repository of pre-trained neural network models (Model Zoo) that underlies the off-the-shelf usage
3. Comprehensive tools for efficient document image data annotation and model tuning to support different levels of customization
4. A DL model hub and community platform for the easy sharing, distribution, and discussion of DIA models and pipelines, to promote reusability, reproducibility, and extensibility (Section 4)
The library implements simple and intuitive Python APIs without sacrificing generalizability and versatility, and can be easily installed via pip. Its convenient functions for handling document image data can be seamlessly integrated with existing DIA pipelines. With detailed documentations and carefully curated tutorials, we hope this tool will benefit a variety of end-users, and will lead to advances in applications in both industry and academic research.
LayoutParser is well aligned with recent efforts for improving DL model reusability in other disciplines like natural language processing [8, 34] and computer vision [35], but with a focus on unique challenges in DIA. We show LayoutParser can be applied in sophisticated and large-scale digitization projects.
-------THIS IS A CUSTOM END OF PAGE-------
2 Related Work
Recently, various DL models and datasets have been developed for layout analysis tasks. The dhSegment utilizes fully convolutional networks for segmentation tasks on histor
This could simply be \n, or \f to clearly indicate a page change, or <!-- PAGE BREAK --> for seamless injection in a Markdown viewer without a visual effect.
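Because the delimiter is a plain marker in the text, the per-page chunks can be recovered later by splitting on it. A quick sketch (the delimiter string matches the one passed above):

# Split the single text flow back into per-page chunks using the
# pages_delimiter injected at load time.
delimiter = "\n-------THIS IS A CUSTOM END OF PAGE-------\n"
page_texts = docs[0].page_content.split(delimiter)
print(len(page_texts))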
Extract images from the PDF
You can extract images from your PDFs with a choice of three different solutions:
- RapidOCR (lightweight Optical Character Recognition tool)
- Tesseract (OCR tool with high precision)
- Multimodal language model
Because Zerox sends each page image to a multimodal LLM, the LLM itself performs the OCR analysis. Nevertheless, to maintain compatibility with the other PDF parsers, the images_parser parameter is accepted and simulated where possible.
You can choose the output format of the extracted images among text, markdown-img or html-img via the images_inner_format parameter.
The result is inserted between the last and the second-to-last paragraphs of text of the page.
%pip install -qU rapidocr-onnxruntime
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers import RapidOCRBlobParser
loader = ZeroxPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="markdown-img",
    images_parser=RapidOCRBlobParser(),
)
docs = loader.load()
print(docs[5].page_content)
6 Z. Shen et al.

Fig. 2: The relationship between the three types of layout data structures.
Coordinate supports three kinds of variation; TextBlock consists of the coordinate information and extra features like block text, types, and reading orders; a Layout object is a list of all possible layout elements, including other Layout objects. They all support the same set of transformation and operation APIs for maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained on 5 different datasets. Description of the training dataset is provided alongside with the trained models such that users can quickly identify the most suitable models for their tasks. Additionally, when such a model is not readily available, LayoutParser also supports training customized layout models and community sharing of the models (detailed in Section 3.5).
3.2 Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data structures and operations that can be used to efficiently process and manipulate the layout elements. In document image analysis pipelines, various post-processing on the layout analysis model outputs is usually required to obtain the final outputs. Traditionally, this requires exporting DL model outputs and then loading the results into other pipelines. All model outputs from LayoutParser will be stored in carefully engineered data types optimized for further processing, which makes it possible to build an end-to-end document digitization pipeline within LayoutParser. There are three key components in the data structure, namely the Coordinate system, the TextBlock, and the Layout. They provide different levels of abstraction for the layout data, and a set of APIs are supported for transformations or operations on these classes.
With this simulation, RapidOCR is not invoked.
Extract images from the PDF with Tesseract:
%pip install -qU pytesseract
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers import TesseractBlobParser
loader = ZeroxPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="html-img",
    images_parser=TesseractBlobParser(),
)
docs = loader.load()
print(docs[5].page_content)
6 Z. Shen et al.

Coordinate supports three kinds of variation; TextBlock consists of the coordinate information and extra features like block text, types, and reading orders; a Layout object is a list of all possible layout elements, including other Layout objects. They all support the same set of transformation and operation APIs for maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained on 5 different datasets. Description of the training dataset is provided alongside with the trained models such that users can quickly identify the most suitable models for their tasks. Additionally, when such a model is not readily available, LayoutParser also supports training customized layout models and community sharing of the models (detailed in Section 3.5).
3.2 Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data structures and operations that can be used to efficiently process and manipulate the layout elements. In document image analysis pipelines, various post-processing on the layout analysis model outputs is usually required to obtain the final outputs. Traditionally, this requires exporting DL model outputs and then loading the results into other pipelines. All model outputs from LayoutParser will be stored in carefully engineered data types optimized for further processing, which makes it possible to build an end-to-end document digitization pipeline within LayoutParser. There are three key components in the data structure, namely the Coordinate system, the TextBlock, and the Layout. They provide different levels of abstraction for the layout data, and a set of APIs are supported for transformations or operations on these classes.
With this simulation, Tesseract is not invoked.
Extract images from the PDF with multimodal model:
%pip install -qU langchain_openai
Note: you may need to restart the kernel to use updated packages.
import os
from dotenv import load_dotenv
load_dotenv()
True
from getpass import getpass
if not os.environ.get("OPENAI_API_KEY"):
os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key =")
from langchain_community.document_loaders.parsers import LLMImageBlobParser
from langchain_openai import ChatOpenAI
loader = ZeroxPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="markdown-img",
    images_parser=LLMImageBlobParser(
        model=ChatOpenAI(model="gpt-4o", max_tokens=1024)
    ),
)
docs = loader.load()
print(docs[5].page_content)

- **Coordinate** supports three kinds of variation;
- **TextBlock** consists of the coordinate information and extra features like block text, types, and reading orders;
- A **Layout** object is a list of all possible layout elements, including other Layout objects.
- They all support the same set of transformation and operation APIs for maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained on 5 different datasets. Description of the training dataset is provided alongside with the trained models such that users can quickly identify the most suitable models for their tasks. Additionally, when such a model is not readily available, LayoutParser also supports training customized layout models and community sharing of the models (detailed in Section 3.5).
### 3.2 Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data structures and operations that can be used to efficiently process and manipulate the layout elements. In document image analysis pipelines, various post-processing on the layout analysis model outputs is usually required to obtain the final outputs. Traditionally, this requires exporting DL model outputs and then loading the results into other pipelines. All model outputs from LayoutParser will be stored in carefully engineered data types optimized for further processing, which makes it possible to build an end-to-end document digitization pipeline within LayoutParser. There are three key components in the data structure, namely the Coordinate system, the TextBlock, and the Layout. They provide different levels of abstraction for the layout data, and a set of APIs are supported for transformations or operations on these classes.
With this simulation, the format is injected into the prompt.
Extract tables from the PDF
With ZeroxPDFLoader you can extract tables from your PDFs in html, markdown or csv format:
loader = ZeroxPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    extract_tables="markdown",
)
docs = loader.load()
print(docs[4].page_content)
| Dataset | Base Model | Large Model | Notes |
|---------------|------------|-------------|-----------------------------------------------------|
| PubLayNet | F / M | M | Layouts of modern scientific documents |
| PRImA | F / M | - | Layouts of scanned modern magazines and scientific reports |
| Newspaper | F | - | Layouts of scanned US newspapers from the 20th century |
| TableBank | F | F | Table regions on modern scientific and business documents |
| HJDataset | F / M | - | Layouts of history Japanese documents |
For each dataset, we train several models of different sizes for different needs (the trade-off between accuracy vs. computational cost). For "base model" and "large model", we refer to using the ResNet 50 or ResNet 101 backbones respectively. One can train models of different architectures, like Faster R-CNN and Mask R-CNN. For example, an F in the Large Model column indicates it has a Faster R-CNN model trained using the ResNet backbone. The platform is maintained and a number of additions will be made to the model zoo in coming months.
### 3.1 Layout Detection Models
In LayoutParser, a layout model takes a document image as an input and generates a list of rectangular boxes for the target content regions. Different from traditional methods, it relies on deep convolutional neural networks rather than manually curated rules to identify content regions. It is formulated as an object detection problem and state-of-the-art models like Faster R-CNN yield prediction results of high accuracy and make it possible to build a concise, generalized interface for layout detection. LayoutParser, built upon Detectron2, provides a minimal API that can perform layout detection with only four lines of Python:
```python
import layoutparser as lp

image = cv2.imread("image_file")  # load images
model = lp.Detectron2LayoutModel(
    "lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config"
)
layout = model.detect(image)
```
LayoutParser provides a wealth of pre-trained model weights using various datasets covering different languages, time periods, and document types. Due to domain shift, the prediction performance can notably drop when models are applied to target samples that are significantly different from the training dataset. As document structures and layouts vary greatly in different domains, it is important to select models trained on a dataset similar to the test samples. A semantic syntax is used for initializing the model weights in LayoutParser, using both the dataset name and model name lp://<dataset-name>/<model-architecture-name>.
Working with Files
Many document loaders involve parsing files. The difference between such loaders usually stems from how the file is parsed, rather than how the file is loaded. For example, you can use open to read the binary content of either a PDF or a markdown file, but you need different parsing logic to convert that binary data into text.
As a result, it can be helpful to decouple the parsing logic from the loading logic, which makes it easier to re-use a given parser regardless of how the data was loaded. You can use this strategy to analyze different files, with the same parsing parameters.
from langchain_community.document_loaders import FileSystemBlobLoader
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers.pdf import ZeroxPDFParser
loader = GenericLoader(
    blob_loader=FileSystemBlobLoader(
        path="./example_data/",
        glob="*.pdf",
    ),
    blob_parser=ZeroxPDFParser(),
)
docs = loader.load()
print(docs[0].page_content)
pprint.pp(docs[0].metadata)
# LayoutParser: A Unified Toolkit for Deep Learning Based Document Image Analysis
Zejing Shen¹, Ruochen Zhang², Melissa Dell³, Benjamin Charles Germain Lee⁴, Jacob Carlson³, and Weining Li⁵
¹ Allen Institute for AI
shannons@allenai.org
² Brown University
ruochen_zhang@brown.edu
³ Harvard University
{melissadell,jacob_carlson}@fas.harvard.edu
⁴ University of Washington
bgcl@cs.washington.edu
⁵ University of Waterloo
w422ii@uwaterloo.ca
**Abstract.** Recent advances in document image analysis (DIA) have been primarily driven by the application of neural networks. Ideally, research outcomes could be easily deployed in production and extended for further investigation. However, various factors like loosely organized codebases and sophisticated model configurations complicate the easy reuse of important innovations by a wide audience. Though there have been on-going efforts to improve reusability and simplify deep learning (DL) model development in disciplines like natural language processing and computer vision, none of them are optimized for challenges in the domain of DIA. This represents a major gap in the existing toolkit, as DIA is central to academic research across a wide range of disciplines in the social sciences and humanities. This paper introduces LayoutParser, an open-source library for streamlining the usage of DL in DIA research and applications. The core LayoutParser library comes with a set of simple and intuitive interfaces for applying and customizing DL models for layout detection, character recognition, and many other document processing tasks. To promote extensibility, LayoutParser also incorporates a community platform for sharing both pre-trained models and full document digitization pipelines. We demonstrate that LayoutParser is helpful for both lightweight and large-scale digitization pipelines in real-world use cases. The library is publicly available at https://layout-parser.github.io
**Keywords:** Document Image Analysis · Deep Learning · Layout Analysis · Character Recognition · Open Source library · Toolkit.
## 1 Introduction
Deep Learning (DL)-based approaches are the state-of-the-art for a wide range of document image analysis (DIA) tasks including document image classification [11].
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'author': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
'2020) kpathsea version 6.3.2',
'subject': '',
'title': '',
'trapped': 'False',
'total_pages': 16,
'source': 'example_data/layout-parser-paper.pdf',
'num_pages': 16,
'page': 0}
It is possible to work with files from cloud storage.
from langchain_community.document_loaders import CloudBlobLoader
from langchain_community.document_loaders.generic import GenericLoader
loader = GenericLoader(
    blob_loader=CloudBlobLoader(
        url="s3://mybucket",  # Supports s3://, az://, gs://, file:// schemes.
        glob="*.pdf",
    ),
    blob_parser=ZeroxPDFParser(),
)
docs = loader.load()
print(docs[0].page_content)
pprint.pp(docs[0].metadata)
API reference
ZeroxPDFLoader
This loader class initializes with a file path and model type, and supports custom configurations via zerox_kwargs for handling Zerox-specific parameters.
Arguments:
- file_path (Union[str, Path]): Path to the PDF file.
- model (str): Vision-capable model to use for processing, in format <provider>/<model>. Some examples of valid values are:
  - model = "gpt-4o-mini" ## openai model
  - model = "azure/gpt-4o-mini"
  - model = "gemini/gpt-4o-mini"
  - model = "claude-3-opus-20240229"
  - model = "vertex_ai/gemini-1.5-flash-001"

  See more details in the Zerox documentation. Defaults to "gpt-4o-mini".
- **zerox_kwargs (dict): Additional Zerox-specific parameters such as API key, endpoint, etc.
Methods:
- lazy_load: Generates an iterator of Document instances, each representing a page of the PDF, along with metadata including page number and source.

See the full API documentation here.
Related
- Document loader conceptual guide
- Document loader how-to guides