pdf-craft converts PDF files into various other formats, with a focus on handling scanned book PDFs.
This project is based on DeepSeek OCR for document recognition. It supports the recognition of complex content such as tables and formulas. With GPU acceleration, pdf-craft can complete the entire conversion process from PDF to Markdown or EPUB locally. During the conversion, pdf-craft automatically identifies document structure, accurately extracts body text, and filters out interfering elements like headers and footers. For academic or technical documents containing footnotes, formulas, and tables, pdf-craft handles them properly, preserving these important elements. The final Markdown or EPUB files maintain the content integrity and readability of the original book.
Starting from the official v1.0.0 release, pdf-craft fully embraces DeepSeek OCR and no longer relies on LLM for text correction. This change brings significant performance improvements: the entire conversion process is completed locally without network requests, eliminating the long waits and occasional network failures of the old version.
However, the new version has also removed the LLM text correction feature. If your use case still requires this functionality, you can continue using the old version v0.2.8.
We provide an online demo platform that lets you experience PDF Craft's conversion capabilities without any installation. You can directly upload PDF files and convert them.
```shell
pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
pip install pdf-craft
```

This project uses DeepSeek OCR, which depends on a CUDA environment. The commands above install a CPU-only build: Python can import pdf-craft and resolve its types without errors, but it cannot actually run OCR recognition. For CUDA environment installation instructions, please refer to the Installation Guide.
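Once a CUDA-enabled PyTorch build is installed, you can verify that a GPU is actually visible before attempting a conversion. A quick sanity check (this only assumes torch is importable; it is not a pdf-craft API):

```python
import torch

# True only when a CUDA-capable GPU and a matching driver/runtime are visible.
# With the CPU-only wheel installed above, this prints False.
print(torch.cuda.is_available())
```

If this prints `False` on a machine with a GPU, the CPU-only wheel is likely still installed.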
```python
from pdf_craft import transform_markdown

transform_markdown(
    pdf_path="input.pdf",
    markdown_path="output.md",
    markdown_assets_path="images",
)
```

```python
from pdf_craft import transform_epub, BookMeta

transform_epub(
    pdf_path="input.pdf",
    epub_path="output.epub",
    book_meta=BookMeta(
        title="Book Title",
        authors=["Author"],
    ),
)
```

```python
from pdf_craft import transform_markdown

transform_markdown(
    pdf_path="input.pdf",
    markdown_path="output.md",
    markdown_assets_path="images",
    analysing_path="temp",  # Optional: specify temporary folder
    model="gundam",  # Optional: tiny, small, base, large, gundam
    models_cache_path="models",  # Optional: model cache path
    includes_footnotes=True,  # Optional: include footnotes
    ignore_fitz_errors=False,  # Optional: continue on PDF rendering errors
    generate_plot=False,  # Optional: generate visualization charts
)
```

```python
from pdf_craft import transform_epub, BookMeta, TableRender, LaTeXRender

transform_epub(
    pdf_path="input.pdf",
    epub_path="output.epub",
    analysing_path="temp",  # Optional: specify temporary folder
    model="gundam",  # Optional: OCR model size
    models_cache_path="models",  # Optional: model cache path
    includes_cover=True,  # Optional: include cover
    includes_footnotes=True,  # Optional: include footnotes
    ignore_fitz_errors=False,  # Optional: continue on PDF rendering errors
    generate_plot=False,  # Optional: generate visualization charts
    book_meta=BookMeta(
        title="Book Title",
        authors=["Author 1", "Author 2"],
        publisher="Publisher",
        language="en",
    ),
    lan="en",  # Optional: language (zh/en)
    table_render=TableRender.HTML,  # Optional: table rendering method
    latex_render=LaTeXRender.MATHML,  # Optional: formula rendering method
)
```

pdf-craft depends on DeepSeek OCR models, which are automatically downloaded from Hugging Face on the first run. You can control model storage and loading behavior through the `models_cache_path` and `local_only` parameters.
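When no `models_cache_path` is given, downloads land in the standard Hugging Face cache. A minimal stdlib sketch of the usual resolution order (this mirrors Hugging Face conventions; `default_hf_cache` is illustrative, not a pdf-craft API):

```python
import os
from pathlib import Path

def default_hf_cache() -> Path:
    """Resolve the default Hugging Face hub cache directory.

    Resolution order (Hugging Face convention):
    HF_HUB_CACHE, then HF_HOME/hub, then ~/.cache/huggingface/hub.
    """
    hub_cache = os.environ.get("HF_HUB_CACHE")
    if hub_cache:
        return Path(hub_cache)
    hf_home = os.environ.get("HF_HOME")
    if hf_home:
        return Path(hf_home) / "hub"
    return Path.home() / ".cache" / "huggingface" / "hub"
```

Passing `models_cache_path` (below) sidesteps this lookup entirely.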
In production environments, it is recommended to download models in advance to avoid downloading on first run:
```python
from pdf_craft import predownload_models

predownload_models(
    models_cache_path="models",  # Specify model cache directory
    revision=None,  # Optional: specify model version
)
```

By default, models are downloaded to the system's Hugging Face cache directory. You can customize the cache location through the `models_cache_path` parameter:
```python
from pdf_craft import transform_markdown

transform_markdown(
    pdf_path="input.pdf",
    markdown_path="output.md",
    models_cache_path="./my_models",  # Custom model cache directory
)
```

If you have pre-downloaded the models, you can use `local_only=True` to disable network downloads and ensure only local models are used:
```python
from pdf_craft import transform_markdown

transform_markdown(
    pdf_path="input.pdf",
    markdown_path="output.md",
    models_cache_path="./my_models",
    local_only=True,  # Use local models only, do not download from network
)
```

The following DeepSeek OCR models are supported:

- `tiny` - smallest model, fastest speed
- `small` - small model
- `base` - base model
- `large` - large model
- `gundam` - largest model, highest quality (default)
Table rendering options (`table_render`):

- `TableRender.HTML` - HTML format (default)
- `TableRender.MARKDOWN` - Markdown format
- `TableRender.TEXT` - plain text format
Formula rendering options (`latex_render`):

- `LaTeXRender.MATHML` - MathML format (default)
- `LaTeXRender.IMAGE` - image format
- `LaTeXRender.TEXT` - plain text format
You can use `ignore_fitz_errors=True` to continue processing when individual pages fail to render: a placeholder message is inserted for each failed page instead of stopping the entire conversion.
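If you run with `local_only=True`, the conversion can only succeed when the cache directory already holds the model files. A hypothetical stdlib helper to check this up front (`cache_is_populated` is illustrative, not part of pdf-craft):

```python
from pathlib import Path

def cache_is_populated(cache_dir: str) -> bool:
    # True when the cache directory exists and contains at least one entry,
    # so a local_only=True run won't fail against an empty cache.
    cache = Path(cache_dir)
    return cache.is_dir() and any(cache.rglob("*"))
```

For example, you might call `predownload_models` when this returns `False` and only then start the conversion.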
epub-translator uses AI large language models to automatically translate EPUB e-books while 100% preserving the original book's format, illustrations, table of contents, and layout. It also generates bilingual versions for convenient language learning or international sharing. When combined with this library, you can convert and translate scanned PDF books. For a demonstration, see this video: Convert PDF scanned books to EPUB format and translate to bilingual books.
This project is licensed under the MIT License. See the LICENSE file for details.
Starting from v1.0.0, pdf-craft has fully migrated to DeepSeek OCR (MIT license), removing the previous AGPL-3.0 dependency, allowing the entire project to be released under the more permissive MIT license. Thanks to the community for their support and contributions!


