Conversation

@pctablet505 (Collaborator) commented Sep 17, 2025

Added export support for keras-hub models.

This PR requires keras-team/keras#21674 (the export feature in Keras) as a prerequisite and is built on top of it.

Colab notebook for numeric verification.

Verified models:

  • llama3.2_1b
  • gemma3_1b
  • gpt2_base_en
  • resnet_50_imagenet
  • efficientnet_b0_ra_imagenet
  • densenet_121_imagenet
  • mobilenet_v3_small_100_imagenet
  • dfine_nano_coco
  • retinanet_resnet50_fpn_coco
  • deeplab_v3_plus_resnet50_pascalvoc

pctablet505 and others added 11 commits September 1, 2025 19:11
This reverts commit 62d2484.
This reverts commit de830b1.
export working 1st commit
Refactored exporter and registry logic for better type safety and error handling. Improved input signature methods in config classes by extracting sequence length logic. Enhanced LiteRT exporter with clearer verbose handling and stricter error reporting. Registry now conditionally registers LiteRT exporter and extends export method only if dependencies are available.
@github-actions bot added the Gemma (Gemma model specific issues) label Sep 17, 2025
@gemini-code-assist bot left a comment

Summary of Changes

Hello @pctablet505, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a comprehensive and extensible framework for exporting Keras-Hub models to various formats, with an initial focus on LiteRT. The system is designed to seamlessly integrate with Keras-Hub's model architecture, particularly by addressing the unique challenge of handling dictionary-based model inputs during the export process. This enhancement significantly improves the deployability of Keras-Hub models by providing a standardized and robust export pipeline, alongside crucial compatibility fixes for TensorFlow's SavedModel/TFLite export mechanisms.

Highlights

  • New Model Export Framework: Introduced a new, extensible framework for exporting Keras-Hub models, designed to support various formats and model types.
  • LiteRT Export Support: Added specific support for exporting Keras-Hub models to the LiteRT format, verified for models like gemma3, llama3.2, and gpt2.
  • Registry-Based Configuration: Implemented an ExporterRegistry to manage and retrieve appropriate exporter configurations and exporters based on model type and target format.
  • Input Handling for Keras-Hub Models: Developed a KerasHubModelWrapper to seamlessly convert Keras-Hub's dictionary-based inputs to the list-based inputs expected by the underlying Keras LiteRT exporter.
  • TensorFlow Export Compatibility: Added compatibility shims (_get_save_spec and _trackable_children) to Keras-Hub Backbone models to ensure proper functioning with TensorFlow's SavedModel and TFLite export utilities.
  • Automated Export Method Extension: The Task class in Keras-Hub models is now automatically extended with an export method, simplifying the model export process for users.
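Based on the highlights above, end-user usage would presumably look like the following sketch. The preset name, the format argument, and the exact export signature are assumptions drawn from the PR description, not the final API.

import keras_hub

# Load a preset that the PR lists as verified.
causal_lm = keras_hub.models.CausalLM.from_preset("gpt2_base_en")

# The PR extends Task with an `export` method via the exporter registry;
# the `format="litert"` argument is assumed here.
causal_lm.export("gpt2_base_en.tflite", format="litert")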

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a significant new feature: model export to LiteRT. The implementation is well-structured, using a modular and extensible registry pattern. However, there are several areas that require attention. The most critical issue is the complete absence of tests for the new export functionality, which is a direct violation of the repository's style guide stating that testing is non-negotiable. Additionally, I've identified a critical bug in the error handling logic within the lite_rt.py exporter that includes unreachable code. There are also several violations of the style guide regarding the use of type hints in function signatures across all new files. I've provided specific comments and suggestions to address these points, which should help improve the robustness, maintainability, and compliance of this new feature.

Comment on lines 55 to 59
def _get_sequence_length(self) -> int:
    """Get sequence length from model or use default."""
    if hasattr(self.model, 'preprocessor') and self.model.preprocessor:
        return getattr(self.model.preprocessor, 'sequence_length', self.DEFAULT_SEQUENCE_LENGTH)
    return self.DEFAULT_SEQUENCE_LENGTH

medium

The _get_sequence_length method is duplicated across CausalLMExporterConfig, TextClassifierExporterConfig, Seq2SeqLMExporterConfig, and TextModelExporterConfig. To improve maintainability and reduce code duplication, this method should be moved to the base class KerasHubExporterConfig in keras_hub/src/export/base.py.

@pctablet505 (Collaborator, Author) replied:

We have different kinds of models in keras-hub; some deal with text and have a sequence length, while other models don't, so we currently can't generalize this for all models.

@pctablet505 (Collaborator, Author) replied:

This isn't a good choice for now, since models that don't have a sequence_length parameter (for example, image segmentation models) have nothing to generalize over.
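If the duplication becomes a maintenance burden later, one middle ground would be a small mixin shared only by the text configs, leaving image and segmentation configs untouched. This is a sketch only; the class names, the mixin itself, and the default value are assumptions, not code from this PR.

# Hypothetical sketch: share _get_sequence_length across text configs only.
class SequenceLengthMixin:
    DEFAULT_SEQUENCE_LENGTH = 512  # illustrative default, not the PR's value

    def _get_sequence_length(self):
        preprocessor = getattr(self.model, "preprocessor", None)
        if preprocessor is not None:
            return getattr(
                preprocessor, "sequence_length", self.DEFAULT_SEQUENCE_LENGTH
            )
        return self.DEFAULT_SEQUENCE_LENGTH


# Text configs would opt in; image/segmentation configs stay unchanged, e.g.:
# class CausalLMExporterConfig(SequenceLengthMixin, KerasHubExporterConfig): ...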

Introduces the keras_hub.api.export submodule and updates the main API to expose it. The new export module imports various exporter configs and functions from the internal export package, making them available through the public API.
Added ImageClassifierExporterConfig, ImageSegmenterExporterConfig, and ObjectDetectorExporterConfig to the export API. Improved input shape inference and dummy input generation for image-related exporter configs. Refactored LiteRTExporter to better handle model type checks and input signature logic, with improved error handling for input mapping.
Moved the 'import keras' statement to the top of the module and removed redundant local imports within class methods. This improves code clarity and avoids repeated imports.
Deleted the debug_object_detection.py script, which was used for testing object detection model outputs and export issues. This cleanup removes unused debugging code from the repository.
Renames all references of 'LiteRT' to 'Litert' across the codebase, including file names, class names, and function names. Updates exporter registry and API imports to use the new 'litert' naming. Also improves image model exporter configs to dynamically determine input dtype from the model, enhancing flexibility for different input types. Adds support for ImageSegmenter model type detection in the exporter registry.
Refactored InputSpec definitions in exporter configs for improved readability by placing each argument on a separate line. Updated import path in litert.py to import from keras.src.export.litert instead of keras.src.export.litert_exporter.
@divyashreepathihalli (Collaborator) commented:

@pctablet505 can you update the Colab to use the changes from this PR? And keep the demo short: load a model, export it, then reload it and verify numerics.
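A short verification flow of that shape might look like the following sketch. The preset name, the export signature, and the use of tf.lite.Interpreter for reloading are assumptions; whether the exported graph includes the preprocessor is not specified here, so the Keras side calls the model directly on an already-shaped input.

import numpy as np
import keras
import tensorflow as tf
import keras_hub

# Load a preset listed as verified above and export it (signature assumed).
model = keras_hub.models.ImageClassifier.from_preset("resnet_50_imagenet")
model.export("resnet50.tflite", format="litert")

# Reload with the TFLite/LiteRT interpreter.
interpreter = tf.lite.Interpreter(model_path="resnet50.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Run both on the same random input and compare numerics.
x = np.random.rand(*inp["shape"]).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
litert_out = interpreter.get_tensor(out["index"])
keras_out = keras.ops.convert_to_numpy(model(x))
np.testing.assert_allclose(keras_out, litert_out, atol=1e-3)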

@divyashreepathihalli (Collaborator) commented:

Also, if you have addressed the Gemini comments, please respond to them. Is this PR dependent on the PR in the Keras repo, keras-team/keras#21674?

Simplifies and unifies input signature and dummy input generation for text and image models by removing redundant helper methods and centralizing logic. Updates model building in KerasHubExporter to use input signatures and improves error handling. Refactors LiteRT exporter to use the new parameterized input signature and model building approach, reducing code duplication and improving maintainability.
Removed redundant registry initialization in export_model and clarified model building comments in KerasHubExporter. Switched to using warnings.warn for import errors in models/__init__.py instead of print statements for better error reporting.
@pctablet505 (Collaborator, Author) commented:

@gemini-code-assist please review it again

@gemini-code-assist bot left a comment

Code Review

This is a significant and valuable addition to KerasHub, enabling export of models to LiteRT. The overall structure is well-designed, following a modular and extensible pattern similar to Hugging Face's Optimum.

I've reviewed this pull request, focusing on correctness, maintainability, and adherence to the repository's style guide. The style guide was instrumental in this review, particularly the sections on Testing and Validation (lines 401-496) and Docstrings and Type Hinting (lines 364-398).

My main findings are:

  • Critical Issue: Missing Tests

    • The pull request introduces substantial new functionality for model exporting across several new files (base.py, configs.py, litert.py, registry.py), but it lacks corresponding tests.
    • The repository style guide is explicit that "Testing is a non-negotiable part of every contribution" (line 403) and "Every .py file containing logic...must have a corresponding _test.py file" (line 406).
    • Please add comprehensive unit tests for the new export logic, covering different model types, configurations, and edge cases. This is crucial to ensure the robustness and correctness of this feature.
  • Other Findings

    • I've also left several inline comments regarding a bug in model type detection, incorrect dtype handling, and violations of the docstring style guide. Please address these to improve code quality and consistency.

Refined dtype extraction logic in image and object model exporter configs to better handle different dtype representations. Updated LiteRT exporter to use Keras io_utils for progress messages and improved verbose flag handling. Added ObjectDetector and ImageSegmenter to export registry model type checks. Enhanced docstrings for clarity and consistency in base exporter classes.
Eliminates the get_dummy_inputs methods from KerasHubExporterConfig and its subclasses. Model building now relies solely on shape-based initialization, simplifying the export process and removing fallback logic for dummy data.
@pctablet505 (Collaborator, Author) commented:

@gemini-code-assist please review the code changes and see if anything can be improved, optimized, or removed if unnecessary.

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a well-structured and extensible export functionality for Keras-Hub models, with an initial focus on LiteRT. The design, which utilizes configurations, exporters, and a registry, is solid. The changes also include necessary compatibility shims in the Backbone class to support TensorFlow export. My review focuses on enhancing robustness and ensuring adherence to the repository's style guide. I have identified a few areas for improvement, such as replacing fragile error-message parsing with more direct introspection, addressing the use of private APIs, and ensuring docstrings fully conform to the specified style.

Comment on lines +220 to +268
except ValueError as e:
    error_msg = str(e)
    # If that fails, try direct tensor input (positional format)
    if (
        "doesn't match the expected structure" in error_msg
        and "Expected: keras_tensor" in error_msg
    ):
        # The model expects a single tensor, not a dictionary
        if len(inputs) == 1:
            return self.keras_hub_model(
                inputs[0], training=training, mask=mask
            )
        else:
            # Multiple inputs - try as positional arguments
            return self.keras_hub_model(
                *inputs, training=training, mask=mask
            )
    elif "Missing data for input" in error_msg:
        # Extract the actual expected input names from the error
        if "Expected the following keys:" in error_msg:
            # Parse the expected keys from error message
            start = error_msg.find(
                "Expected the following keys: ["
            )
            if start != -1:
                start += len("Expected the following keys: [")
                end = error_msg.find("]", start)
                if end != -1:
                    keys_str = error_msg[start:end]
                    actual_input_names = [
                        k.strip().strip("'\"")
                        for k in keys_str.split(",")
                    ]

                    # Map inputs to actual expected names
                    input_dict = {}
                    for i, actual_name in enumerate(
                        actual_input_names
                    ):
                        if i < len(inputs):
                            input_dict[actual_name] = inputs[i]

                    return self.keras_hub_model(
                        input_dict, training=training, mask=mask
                    )

    # If we still can't figure it out, re-raise the original
    # error
    raise

high

The error handling logic in the call method of KerasHubModelWrapper relies on parsing exception messages (e.g., "doesn't match the expected structure", "Missing data for input"). This is a brittle approach, as the error messages from underlying libraries like Keras or TensorFlow can change in future versions, which would break this export functionality.

A more robust approach would be to programmatically inspect the model's input signature instead of reacting to errors. Here are some suggestions:

  • For functional models, you can use self.keras_hub_model.input_names to get the expected keys.
  • For subclassed models, you could use Python's inspect.signature(self.keras_hub_model.call) to determine the expected arguments.

By proactively determining the input structure, you can avoid this fragile error parsing and make the exporter more resilient to changes in dependencies.
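For illustration, a proactive mapping along those lines might look like the following sketch. Only the keras_hub_model attribute name is taken from the diff above; the helper name and the rest of the logic are assumptions, not the PR's implementation.

import inspect


def map_exported_inputs(wrapper, inputs):
    """Map a flat list of tensors to the structure the wrapped model expects."""
    model = wrapper.keras_hub_model
    # Functional models expose their named inputs (per the suggestion above).
    input_names = getattr(model, "input_names", None)
    if input_names:
        return {name: tensor for name, tensor in zip(input_names, inputs)}
    # Subclassed models: inspect the call signature instead of parsing errors.
    params = [
        p
        for p in inspect.signature(model.call).parameters.values()
        if p.name not in ("self", "training", "mask")
    ]
    if len(params) == 1 and len(inputs) == 1:
        return inputs[0]
    return list(inputs)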

Comment on lines +142 to +148
def register_config(cls, model_type, config_class):
    """Register a configuration class for a model type.

    Args:
        model_type: The model type (e.g., "causal_lm")
        config_class: The configuration class
    """

medium

The docstrings for register_config, register_exporter, get_config_for_model, get_exporter, and _detect_model_type are missing type information in the Args section. According to the style guide, type information should be provided in the format arg_name: type. description. [1]

For example, this docstring should be:

        Args:
            model_type: str. The model type (e.g., "causal_lm").
            config_class: KerasHubExporterConfig. The configuration class.

Please update the docstrings for all class methods in ExporterRegistry to include type information.

Style Guide References

Footnotes

  1. The style guide requires type information to be provided in the Args section of docstrings.

Comment on lines +41 to +44
        Returns:
            Dict[str, Any]: Dictionary mapping input names to their
                specifications
        """

medium

The Returns section of the docstring includes a type hint (Dict[str, Any]). The style guide states that KerasHub does not use type hints, and the provided example for a Returns section does not include a type. [1]

To maintain consistency with the style guide, please remove the type hint from the Returns section. This applies to all get_input_signature methods in this file.

For example:

        Returns:
            A dictionary mapping input names to their specifications.

Suggested change:
-        Returns:
-            Dict[str, Any]: Dictionary mapping input names to their
-                specifications
-        """
+        Returns:
+            Dictionary mapping input names to their
+            specifications

Style Guide References

Footnotes

  1. The style guide specifies that type hints should not be used in signatures, and the examples suggest they should be omitted from docstring return descriptions as well.

Comment on lines +354 to +358
# Import _DictWrapper safely
try:
    from tensorflow.python.trackable.data_structures import _DictWrapper
except ImportError:
    return children

medium

This method imports _DictWrapper from a private TensorFlow module (tensorflow.python.trackable.data_structures). Relying on private APIs is risky because they are not guaranteed to be stable and can be changed or removed without notice in future TensorFlow releases, which could break the export functionality.

While I understand this might be a necessary workaround for current issues with TensorFlow export, it would be ideal to find a solution that uses public APIs if possible. If no public API is available, consider adding a comment here warning future developers about the dependency on a private API and potentially pinning the TensorFlow version more strictly if this is critical.
