Tikara is a modern, type-hinted Python wrapper for Apache Tika, supporting over 1600 file formats for content extraction, metadata analysis, and language detection. It provides direct JNI integration through JPype for optimal performance.
```python
from tikara import Tika

tika = Tika()
content, metadata = tika.parse("document.pdf")
```
- Modern Python 3.12+ with complete type hints
- Direct JVM integration via JPype (no HTTP server required)
- Streaming support for large files
- Recursive document unpacking
- Language detection
- MIME type detection
- Custom parser and detector support
- Comprehensive metadata extraction
- Ships with an embedded Tika JAR: works in air-gapped networks, with no JAR management required
- Opinionated Pydantic wrapper over Tika's metadata model, with access to the raw metadata.
🌈 1682 supported media types and counting!
```bash
pip install tikara
```
- Python 3.12+
- Java Development Kit 11+ (OpenJDK recommended)
- Tesseract OCR (strongly recommended if you process images):

  ```bash
  # Ubuntu
  apt-get install tesseract-ocr
  ```

  Additional language packs for Tesseract (optional):

  ```bash
  # Ubuntu
  apt-get install tesseract-ocr-deu tesseract-ocr-fra tesseract-ocr-ita tesseract-ocr-spa
  ```

- ImageMagick for advanced image processing:

  ```bash
  # Ubuntu
  apt-get install imagemagick
  ```

- FFmpeg for enhanced multimedia file support:

  ```bash
  # Ubuntu
  apt-get install ffmpeg
  ```

- PDFBox for enhanced PDF support:

  ```bash
  # Ubuntu
  apt-get install pdfbox
  ```

- ExifTool for metadata extraction from images:

  ```bash
  # Ubuntu
  apt-get install libimage-exiftool-perl
  ```

- GDAL for geospatial file support:

  ```bash
  # Ubuntu
  apt-get install gdal-bin
  ```

- MSCore fonts for enhanced Office file handling:

  ```bash
  # Ubuntu
  apt-get install xfonts-utils fonts-freefont-ttf fonts-liberation ttf-mscorefonts-installer
  ```
For more OS dependency information including MSCore fonts setup and additional configuration, see the official Apache Tika Dockerfile.
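Before the first run, it can help to verify that the key system dependencies are visible to your process. A minimal sketch using only the standard library (the tool list and warning messages are illustrative, not part of Tikara's API):

```python
import os
import shutil


def missing_dependencies() -> list[str]:
    """Return human-readable warnings for system dependencies that look absent."""
    problems = []
    if not os.environ.get("JAVA_HOME"):
        problems.append("JAVA_HOME is not set (a JDK 11+ is required)")
    optional_tools = [
        ("tesseract", "OCR for images"),
        ("ffmpeg", "multimedia support"),
        ("exiftool", "image metadata extraction"),
    ]
    for tool, why in optional_tools:
        if shutil.which(tool) is None:
            problems.append(f"'{tool}' not found on PATH ({why})")
    return problems


for warning in missing_dependencies():
    print(warning)
```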
```python
from pathlib import Path

from tikara import Tika

tika = Tika()

# Basic string output
content, metadata = tika.parse("document.pdf")

# Stream large files
stream, metadata = tika.parse(
    "large.pdf",
    output_stream=True,
    output_format="txt",
)

# Save to file
output_path, metadata = tika.parse(
    "input.docx",
    output_file=Path("output.txt"),
    output_format="txt",
)
```
```python
from tikara import Tika

tika = Tika()
result = tika.detect_language("El rápido zorro marrón salta sobre el perro perezoso")
print(f"Language: {result.language}, Confidence: {result.confidence}")
```
```python
from tikara import Tika

tika = Tika()
mime_type = tika.detect_mime_type("unknown_file")
print(f"Detected type: {mime_type}")
```
```python
from pathlib import Path

from tikara import Tika

tika = Tika()
results = tika.unpack(
    "container.docx",
    output_dir=Path("extracted"),
    max_depth=3,
)
for item in results:
    print(f"Extracted {item.metadata['Content-Type']} to {item.file_path}")
```
- Ensure that you have the system dependencies installed

- Install uv:

  ```bash
  pip install uv
  ```

- Install the Python dependencies and create the virtual environment:

  ```bash
  uv sync
  ```
```bash
make ruff     # Format and lint code
make test     # Run test suite
make docs     # Generate documentation
make stubs    # Generate Java stubs
make prepush  # Run all checks (ruff, test, coverage, safety)
```
- Python applications needing document processing
- Microservices and containerized environments
- Data processing pipelines (Ray, Dask, Prefect)
- Applications requiring direct Tika integration without HTTP overhead
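For the pipeline use case, a typical stage maps one shared Tika instance over many files with per-file error isolation, so a single bad document does not kill the batch. A sketch (`parse_directory` is a hypothetical helper, not part of Tikara; `tika` is any object exposing Tikara's `parse()` signature):

```python
from pathlib import Path


def parse_directory(tika, root, pattern="*.pdf"):
    """Yield (path, content, metadata) for each matching file under `root`.

    Shares one `tika` instance across all files and skips documents that
    fail to parse instead of aborting the whole run.
    """
    for path in sorted(Path(root).rglob(pattern)):
        try:
            content, metadata = tika.parse(str(path))
            yield path, content, metadata
        except Exception as exc:  # log and continue with the next file
            print(f"skipping {path}: {exc}")
```

A pipeline framework task (Ray, Dask, Prefect) can then wrap `parse_directory` directly, one shared instance per worker process.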
For detailed documentation on:
- Custom parser implementation
- Custom detector creation
- MIME type handling
See the Example Jupyter Notebooks 📔
Tikara builds on the shoulders of giants:
- Apache Tika - The powerful content detection and extraction toolkit
- tika-python - The original HTTP-based Python Tika wrapper that inspired this project
- JPype - The bridge between Python and Java
- No process isolation: a crash in Tika or the JVM can take down the host application
- Memory management: Large documents require careful handling
- JVM startup: Initial overhead for first operation
- Custom implementations: Parser/detector development requires Java interface knowledge
- Use streaming for large files
- Monitor JVM heap usage
- Consider process isolation for critical applications
- Reuse Tika instances
- Use appropriate output formats
- Implement custom parsers for specific needs
- Configure JVM parameters for your use case
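Reusing Tika instances matters because JVM startup dominates the cost of the first call. One common pattern is a lazily created, cached instance (a sketch; `shared_tika` is not part of Tikara's API):

```python
from functools import lru_cache


@lru_cache(maxsize=1)
def shared_tika():
    """Create the Tika instance once and hand the same one to every caller.

    The import is deferred so the JVM only starts when parsing is first needed.
    """
    from tikara import Tika

    return Tika()
```

Callers then write `content, metadata = shared_tika().parse(path)` instead of constructing a fresh `Tika()` per document.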
- Input validation
- Resource limits
- Secure file handling
- Access control for extracted content
- Careful handling of custom parsers
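Input validation can be as simple as a MIME allowlist checked with Tikara's own detector before any parsing happens. A sketch (the allowlist contents and helper name are illustrative):

```python
ALLOWED_TYPES = {
    "application/pdf",
    "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
    "text/plain",
}


def is_allowed(mime_type: str) -> bool:
    """True if the detected MIME type is on the allowlist."""
    return mime_type in ALLOWED_TYPES


# Usage with Tikara (requires a running JVM, so shown as comments):
# from tikara import Tika
# tika = Tika()
# mime = tika.detect_mime_type("upload.bin")
# if is_allowed(mime):
#     content, metadata = tika.parse("upload.bin")
```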
Contributions welcome! The project uses Make for development tasks:
```bash
make prepush  # Run all checks (format, lint, test, coverage, safety)
```
For developing custom parsers/detectors, Java stubs can be generated:
```bash
make stubs  # Generate Java stubs for Apache Tika interfaces
```
Note: Generated stubs are git-ignored but provide IDE support and type hints when implementing custom parsers/detectors.
- Verify Java installation and the `JAVA_HOME` environment variable
- Ensure Tesseract and required language packs are installed
- Check file permissions and paths
- Monitor memory usage when processing large files
- Use streaming output for large documents
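Because the JVM runs in-process via JPype, its heap is part of the Python process's resident set size, so ordinary process-level monitoring works. A standard-library sketch (`peak_rss_mib` is a hypothetical helper; Unix only):

```python
import resource
import sys


def peak_rss_mib() -> float:
    """Peak resident set size of this process in MiB (Linux/macOS only)."""
    usage = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is reported in KiB on Linux and in bytes on macOS.
    divisor = 1024 * 1024 if sys.platform == "darwin" else 1024
    return usage / divisor


print(f"peak RSS: {peak_rss_mib():.1f} MiB")
```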
See API Documentation for complete details.
Apache License 2.0 - See LICENSE for details.