
Conversation

@LakshmiKalaKadali

GPT-2 model conversion from Keras to the Hugging Face safetensors format. Colab gist.

@gemini-code-assist
Contributor

Summary of Changes

Hello @LakshmiKalaKadali, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly extends the model export capabilities by enabling the conversion of Keras GPT-2 models to the Hugging Face Safetensors format. This allows for seamless interoperability between Keras-based GPT-2 implementations and the broader Hugging Face ecosystem, facilitating easier model sharing and deployment. The changes involve adding specific conversion logic for GPT-2's architecture and tokenizer, integrating this into the existing export framework, and providing a robust testing mechanism to ensure the fidelity of the converted models.

Highlights

  • GPT-2 Safetensors Export: Introduced comprehensive functionality to convert Keras GPT-2 models, including their configurations and weights, into the Hugging Face Safetensors format.
  • Tokenizer Asset Saving: Added a save_assets method to GPT2Tokenizer for proper serialization of vocabulary and merge files, crucial for Hugging Face compatibility.
  • Hugging Face Exporter Integration: Updated the general hf_exporter.py utility to recognize and correctly handle GPT-2 models and their tokenizers, including specific logic for tied weights and different tokenizer asset formats.
  • Conversion Script Enhancement: Modified the convert_gpt2_checkpoints.py script to incorporate the new Safetensors export capability, allowing users to generate Hugging Face compatible GPT-2 models directly.
  • Robust Testing: Included a dedicated test suite (gpt2_test.py) to validate the accuracy of the GPT-2 model conversion, comparing Keras and Hugging Face model configurations and output logits.
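
As a quick sanity check of a converted checkpoint, one could load the export directory back with the transformers library and run a forward pass. This is a minimal sketch, not part of the PR's test suite; output_dir is a hypothetical path, and it assumes the export wrote config.json, tokenizer files, and model.safetensors in the layout from_pretrained expects:

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

output_dir = "./gpt2_hf_export"  # hypothetical export location
model = GPT2LMHeadModel.from_pretrained(output_dir)
tokenizer = GPT2TokenizerFast.from_pretrained(output_dir)

inputs = tokenizer("Hello, world", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # compare against the Keras model's logits
print(logits.shape)  # (1, sequence_length, vocab_size)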

@gemini-code-assist bot left a comment


Code Review

This pull request adds functionality to convert Keras-based GPT-2 models to the Hugging Face safetensors format. This involves a new export script for GPT-2, updates to the generic Hugging Face exporter, and modifications to the GPT-2 tokenizer and checkpoint conversion script. The changes are generally well-structured, but there are some critical issues. I've identified violations of the backend-agnostic principle, a potential runtime error in the exporter logic, and some maintainability concerns. My feedback includes suggestions to use keras.ops for backend-agnostic tensor operations, fix a critical bug, and improve code style and dependency management.
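
To illustrate the backend-agnostic point: converting a Keras weight to NumPy before building torch tensors can go through keras.ops rather than TensorFlow-specific calls. A minimal sketch, assuming the exporter currently calls .numpy() on TF eager tensors (the helper name is hypothetical):

import keras

# Backend-agnostic weight extraction: keras.ops.convert_to_numpy works under
# the TensorFlow, JAX, and PyTorch backends, unlike calling .numpy() on a
# backend-specific tensor object.
def to_numpy(weight):
    return keras.ops.convert_to_numpy(weight)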

Comment on lines +99 to +107

if (
    "lm_head.weight" in weights_dict_torch
    and "transformer.wte.weight" in weights_dict_torch
):
    wte = weights_dict_torch["transformer.wte.weight"]
    lm = weights_dict_torch["lm_head.weight"]

if wte.data_ptr() == lm.data_ptr():
    weights_dict_torch["lm_head.weight"] = lm.clone().contiguous()

critical

There is a potential NameError here. The variables wte and lm are defined within an if block but used in a subsequent, un-nested if block on line 106. If the condition on line 99 is false, wte and lm will not be defined, causing a runtime error. The second if statement should be nested inside the first.

Suggested change

if (
    "lm_head.weight" in weights_dict_torch
    and "transformer.wte.weight" in weights_dict_torch
):
    wte = weights_dict_torch["transformer.wte.weight"]
    lm = weights_dict_torch["lm_head.weight"]
    if wte.data_ptr() == lm.data_ptr():
        weights_dict_torch["lm_head.weight"] = lm.clone().contiguous()

num_params = PRESET_MAP[FLAGS.preset][0]
hf_model_name = PRESET_MAP[FLAGS.preset][1]

os.system("pip install requests")

high

Running pip install from within a script is not a recommended practice for managing dependencies. It can lead to an unpredictable environment and conflicts. Project dependencies should be managed declaratively, for example, in a requirements.txt file.
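
One way to apply this advice, as a sketch: declare requests in requirements.txt and fail fast at import time instead of shelling out to pip at runtime. The error message wording below is illustrative:

# Instead of os.system("pip install requests") at runtime, declare the
# dependency (e.g., in requirements.txt) and surface a clear error here.
try:
    import requests  # noqa: F401
except ImportError as e:
    raise ImportError(
        "This script requires `requests`; install it with "
        "`pip install requests` or add it to requirements.txt."
    ) from e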

Comment on lines +76 to +78
import json
import os
import shutil

medium

For better readability and consistency, imports should be at the top of the file. Please move import json, import os, and import shutil to the top-level of the module to follow standard Python conventions.
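
For reference, the conventional PEP 8 layout groups standard-library imports at the top of the module, before third-party imports (module header shown as a sketch; the keras import is assumed from context):

# hf_exporter.py — module header sketch
import json
import os
import shutil

import keras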

Comment on lines +46 to +54

q_w = keras_model.get_layer(
    f"transformer_layer_{i}"
)._self_attention_layer._query_dense.kernel
k_w = keras_model.get_layer(
    f"transformer_layer_{i}"
)._self_attention_layer._key_dense.kernel
v_w = keras_model.get_layer(
    f"transformer_layer_{i}"
)._self_attention_layer._value_dense.kernel

medium

Accessing private layer attributes like _self_attention_layer and its sub-layers makes this code brittle. If the internal structure of TransformerDecoder changes, this export script will break. It would be more robust to expose these weights via a public API on the layer to create a more stable interface.
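
One possible shape for such a public API, sketched below. The property name is hypothetical and not an existing KerasHub interface; the underscored attribute path mirrors the snippet above:

import keras


class TransformerDecoder(keras.layers.Layer):
    # ... existing implementation ...

    # Hypothetical public accessor: exposing the kernel via a property keeps
    # export scripts off private attributes, so internal refactors of the
    # attention layer do not break them.
    @property
    def query_kernel(self):
        return self._self_attention_layer._query_dense.kernel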

@sachinprasadhs
Collaborator

Resolve the Gemini-suggested changes and mark the respective comments as resolved, and make the changes backend-agnostic rather than TF-specific.
Once these comments are addressed, I will go through the files in detail.

@LakshmiKalaKadali
Author

LakshmiKalaKadali commented Nov 26, 2025 via email

LakshmiKalaKadali and others added 5 commits November 26, 2025 09:38
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
