feat(ai): make generateContent methods return inferenceSource #9315
🦋 Changeset detected. Latest commit: d07920b. The changes in this PR will be included in the next version bump. This PR includes changesets to release 2 packages.
API proposal doc (internal): https://docs.google.com/document/d/1vfBbh5uKS_9k0l9CmzwUt1E-8OQgiYlN2uJAPHU2VI8/edit?resourcekey=0-JczpCefVsiSxNcYBZdZDTw&tab=t.0#heading=h.x94y9zzc13xn
This adds an `inferenceSource` property to the `EnhancedGenerateContentResponse`s returned by `generateContent()` and `generateContentStream()` that indicates whether on-device or in-cloud inference was used. This is only relevant if the developer is using the hybrid inference feature, where some inference may be done on-device in the Chrome browser.