fix: Revert the change in generative_ai/embeddings/batch_example.py #13396
Conversation
Hello @Feiyang1, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello team, gemini-code-assist here to provide a summary of this pull request. This PR, authored by Feiyang1, aims to fix an issue in the generative_ai/embeddings/batch_example.py sample. The core problem is that the gemini-embedding-001 model, introduced in a previous change (PR #13388, as mentioned in the description), does not currently support batch prediction. To resolve this, the PR reverts the model used in the batch prediction example back to textembedding-gecko@003, which does support this functionality.
Highlights
- Fix: Batch Prediction Model: Reverts the model used in the batch prediction example (generative_ai/embeddings/batch_example.py) from gemini-embedding-001 back to textembedding-gecko@003, because gemini-embedding-001 does not currently support batch prediction.
Changelog
- generative_ai/embeddings/batch_example.py
  - Changed the model name on line 42 from gemini-embedding-001 to textembedding-gecko@003 to ensure the batch prediction example uses a supported model.
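In concrete terms, the reverted line might look like the sketch below. The variable name is an assumption for illustration, since the surrounding code of batch_example.py is not shown on this page; only the model-name change itself comes from the PR.

```python
# Sketch of the revert in generative_ai/embeddings/batch_example.py.
# The variable name `model_name` is hypothetical; only the model strings
# come from the PR.
# Before (from PR #13388): model_name = "gemini-embedding-001"
# After (this revert): pin the older model, which supports batch prediction.
model_name = "textembedding-gecko@003"
```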
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
Feature | Command | Description
---|---|---
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help | /gemini help | Displays a list of available commands.
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Model name changed,
Batch predict needs older friend,
Code now runs again.
Footnotes
¹ Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
This PR correctly reverts the model in generative_ai/embeddings/batch_example.py to textembedding-gecko@003. This is a necessary fix, as the PR description clarifies that gemini-embedding-001 currently does not support batch_predict, and the change is crucial for the example's functionality. I have one suggestion to further enhance code clarity and maintainability regarding this specific model choice.
Summary of Findings
- Code Clarity and Maintainability: Suggested adding an inline comment in generative_ai/embeddings/batch_example.py to explain the choice of textembedding-gecko@003 for batch predictions, given the current limitations in gemini-embedding-001's support for batch_predict. Such a comment improves code clarity and helps prevent future regressions or misunderstandings.
Merge Readiness
This pull request correctly addresses a functional bug by reverting to a model compatible with batch_predict. The change is crucial for the example's usability. I've suggested a minor addition of an inline comment to further improve code clarity and maintainability. While I, as an AI, cannot approve pull requests, I recommend considering this suggestion. If the suggested change is incorporated, the PR should be in a good state for merging.
@glasnt fyi
Revert the change in generative_ai/embeddings/batch_example.py from #13388 because gemini-embedding-001 doesn't support batch_predict at the moment.
Description
Fixes #
Note: Before submitting a pull request, please open an issue for discussion if you are not associated with Google.
Checklist
- nox -s py-3.9 passes (see Test Environment Setup)
- nox -s lint passes (see Test Environment Setup)