
Conversation

@Shekar-77

This PR improves backend compatibility checks for DatasetAdapter.
It raises a clear error when an unsupported backend (anything other than numpy, torch, tf, or jax) is used.

Changes

Added backend-type validation to `DatasetAdapter.__init__` in:

- tf_dataset_adapter.py

- py_dataset_adapter.py

- torch_data_loader_adapter.py
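A minimal sketch of the kind of check the PR describes. Class and attribute names here are illustrative, not the actual Keras source; the real adapters would read the active backend via `keras.src.backend.backend()`, which is stubbed out as a plain parameter below.

```python
# Hypothetical sketch of backend validation in an adapter's __init__.
SUPPORTED_BACKENDS = ("tensorflow", "numpy", "torch", "jax")


class TorchDataLoaderAdapter:
    """Illustrative adapter that rejects unsupported backends at construction."""

    def __init__(self, dataloader, backend_name):
        # Fail fast with an actionable message instead of a confusing
        # downstream error when data is first pulled from the loader.
        if backend_name not in SUPPORTED_BACKENDS:
            raise ValueError(
                f"Unsupported backend: '{backend_name}'. Supported "
                f"backends: {list(SUPPORTED_BACKENDS)}."
            )
        self._dataloader = dataloader
```

Validating in `__init__` (rather than when iteration starts) means the error surfaces at the point where the user wired up the adapter, which is where it is easiest to fix.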

@gemini-code-assist
Contributor

Summary of Changes

Hello @Shekar-77, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request significantly improves the robustness of the DatasetAdapter by introducing explicit backend compatibility checks during its initialization. The changes ensure that only supported backends (TensorFlow, NumPy, PyTorch, and JAX) are used, preventing potential runtime issues and providing clear error messages to developers when an incompatible backend is encountered. This enhancement makes the data adaptation process more reliable and user-friendly.

Highlights

  • Enhanced Backend Compatibility: Implemented robust checks within DatasetAdapter initialization to ensure compatibility with supported backends (TensorFlow, NumPy, PyTorch, JAX).
  • Clearer Error Handling: Introduced specific ValueError exceptions when an unsupported backend is detected, providing immediate and actionable feedback to users.
  • Widespread Validation: Applied backend-type validation across tf_dataset_adapter.py, py_dataset_adapter.py, and torch_data_loader_adapter.py to cover various data loading scenarios.

@gemini-code-assist bot left a comment

Code Review

This PR introduces backend compatibility checks for data adapters, which is a valuable addition. My review focuses on a critical placement issue of one of the checks, code duplication, and improving the clarity of error messages to align with the repository's style guide. Addressing these points will make the implementation more robust and maintainable.

Comment on lines 93 to 99
        backend_name = backend.backend()

        if backend_name not in ("tensorflow", "numpy", "torch", "jax"):
            raise ValueError(
                f"Incompatible backend '{backend_name}'"
                "Supported backends TensorFlow , numpy , torch , jax backend."
            )
critical

This backend compatibility check is placed in the PyDataset class body, which means it will be executed at import time. This is a critical issue because it can prevent the module from being imported if an unsupported backend is configured, even if PyDataset is not used with an adapter. This check should be removed from here and moved into the __init__ method of the PyDatasetAdapter class, which is the component with backend-specific dependencies.

Additionally, the error message is poorly formatted and inconsistent. A better error message would be:
f"Unsupported backend: '{backend.backend()}'. PyDatasetAdapter only supports the following backends: ['tensorflow', 'numpy', 'torch', 'jax']."

This validation logic is also duplicated in other adapters and should ideally be centralized in a utility function.

References
  1. Error messages should be contextual, informative, and actionable, clearly stating what happened, what was expected, and how to fix it. The current error message is poorly formatted and lacks clarity. (link)
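To make the reviewer's placement point concrete, here is a hedged sketch (illustrative names, not the actual Keras source) of the fix being proposed: code in a class body executes when the module is imported, whereas code in `__init__` executes only when an instance is constructed. The `_get_backend_name` stub stands in for `keras.src.backend.backend()`.

```python
SUPPORTED_BACKENDS = ("tensorflow", "numpy", "torch", "jax")


def _get_backend_name():
    # Hypothetical stand-in for keras.src.backend.backend().
    return "numpy"


class PyDatasetAdapter:
    # A check placed directly in this class body would run as soon as
    # the module is imported, even if no adapter is ever created.
    # Moving it into __init__ defers it to instantiation time, as the
    # review recommends.
    def __init__(self, py_dataset):
        backend_name = _get_backend_name()
        if backend_name not in SUPPORTED_BACKENDS:
            raise ValueError(
                f"Unsupported backend: '{backend_name}'. PyDatasetAdapter "
                f"only supports the following backends: "
                f"{list(SUPPORTED_BACKENDS)}."
            )
        self.py_dataset = py_dataset
```

With the check inside `__init__`, merely importing the module never raises, but constructing an adapter under an unsupported backend fails immediately.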

Comment on lines 22 to 27
        backend_name = backend.backend()
        if backend_name not in ("tensorflow", "numpy", "torch", "jax"):
            raise ValueError(
                f"Incompatible backend '{backend_name}'"
                "Supported backends TensorFlow , numpy , torch , jax backend."
            )
medium

The error message can be improved for better readability and to be more informative, as per the style guide (lines 140-143). The current message has formatting issues (missing space, inconsistent casing) and could be clearer.

Additionally, this validation logic is duplicated across py_dataset_adapter.py and torch_data_loader_adapter.py. It would be better to extract this into a shared utility function to avoid code duplication and ensure consistency.

        backend_name = backend.backend()
        supported_backends = ("tensorflow", "numpy", "torch", "jax")
        if backend_name not in supported_backends:
            raise ValueError(
                f"Unsupported backend: '{backend_name}'. `TFDatasetAdapter` "
                f"only supports the following backends: {list(supported_backends)}."
            )
References
  1. Error messages should be contextual, informative, and actionable, clearly stating what happened, what was expected, and how to fix it. The current error message is poorly formatted and lacks clarity. (link)

Comment on lines 15 to 21
        backend_name = backend.backend()

        if backend_name not in ("tensorflow", "numpy", "torch", "jax"):
            raise ValueError(
                f"Incompatible backend '{backend_name}'"
                "Supported backends TensorFlow , numpy , torch , jax backend."
            )
medium

The error message can be improved for better readability and to be more informative, as per the style guide (lines 140-143). The current message has formatting issues (missing space, inconsistent casing) and could be clearer.

Additionally, this validation logic is duplicated across py_dataset_adapter.py and tf_dataset_adapter.py. It would be better to extract this into a shared utility function to avoid code duplication and ensure consistency.

Suggested change
-        backend_name = backend.backend()
-        if backend_name not in ("tensorflow", "numpy", "torch", "jax"):
-            raise ValueError(
-                f"Incompatible backend '{backend_name}'"
-                "Supported backends TensorFlow , numpy , torch , jax backend."
-            )
+        backend_name = backend.backend()
+        supported_backends = ("tensorflow", "numpy", "torch", "jax")
+        if backend_name not in supported_backends:
+            raise ValueError(
+                f"Unsupported backend: '{backend_name}'. `TorchDataLoaderAdapter` "
+                f"only supports the following backends: {list(supported_backends)}."
+            )
References
  1. Error messages should be contextual, informative, and actionable, clearly stating what happened, what was expected, and how to fix it. The current error message is poorly formatted and lacks clarity. (link)
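All three comments flag the same duplicated validation logic; it could be addressed with a single shared helper along these lines (function name and placement are hypothetical, not part of the PR):

```python
SUPPORTED_BACKENDS = ("tensorflow", "numpy", "torch", "jax")


def validate_backend(adapter_name, backend_name):
    """Raise a consistent, actionable ValueError for unsupported backends."""
    if backend_name not in SUPPORTED_BACKENDS:
        raise ValueError(
            f"Unsupported backend: '{backend_name}'. `{adapter_name}` only "
            f"supports the following backends: {list(SUPPORTED_BACKENDS)}."
        )
```

Each adapter's `__init__` would then call something like `validate_backend(type(self).__name__, backend.backend())`, so the supported-backend list and message format live in one place.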

@codecov-commenter commented Dec 8, 2025

Codecov Report

❌ Patch coverage is 50.00000% with 4 lines in your changes missing coverage. Please review.
✅ Project coverage is 76.29%. Comparing base (f0a48a6) to head (524ca7e).

Files with missing lines Patch % Lines
...s/src/trainers/data_adapters/py_dataset_adapter.py 50.00% 1 Missing and 1 partial ⚠️
...rainers/data_adapters/torch_data_loader_adapter.py 50.00% 1 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21905      +/-   ##
==========================================
- Coverage   76.30%   76.29%   -0.01%     
==========================================
  Files         580      580              
  Lines       60029    60037       +8     
  Branches     9432     9434       +2     
==========================================
+ Hits        45803    45807       +4     
- Misses      11750    11752       +2     
- Partials     2476     2478       +2     
Flag Coverage Δ
keras 76.16% <50.00%> (-0.01%) ⬇️
keras-jax 62.12% <50.00%> (-0.01%) ⬇️
keras-numpy 57.31% <50.00%> (-0.01%) ⬇️
keras-openvino 34.30% <37.50%> (+<0.01%) ⬆️
keras-torch 63.21% <50.00%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.
