Gemini Nano Chrome Built-in AI Prompt API Demo

A starting point for a client-side LLM in the browser, for online and offline JavaScript use. (Run it once, then download this single-page website for offline use.)

NOT YET FOR MOBILE. DO NOT TRY THIS ON MOBILE YET!

By Jeremy Ellis. Use at your own risk.
GitHub repository for these GitHub Pages: https://github.com/hpssjellis/my-examples-of-web-llm
Main demo index: https://hpssjellis.github.io/my-examples-of-web-llm/public/index.html

Help setting up Chrome flags, if needed

This page demonstrates the core features of the Gemini Nano Prompt API (the `LanguageModel` API) available in Chrome 138+. Ensure you have enabled the necessary flags. Open chrome://flags in your Chrome address bar.
First set #optimization-guide-on-device-model to Enabled BypassPerfRequirement.
Then search for each of the following flags and set them to Enabled (a minimal JavaScript sketch follows the list below):

  1. #prompt-api-for-gemini-nano (set to Enabled)
  2. #prompt-api-for-gemini-nano-multimodal-input (set to Enabled)
  3. #summarization-api-for-gemini-nano (set to Enabled)
  4. #writer-api-for-gemini-nano (set to Enabled)
  5. #rewriter-api-for-gemini-nano (set to Enabled)
  6. #proofreader-api-for-gemini-nano (set to Enabled)
The Gemini Nano model downloads the first time you use it. The download is about 4.0 GB, and the final installed folders need about 20 GB of free storage space.
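
Once the flags are set and the model has downloaded, a prompt session can be created from JavaScript. The sketch below shows feature detection, session creation with a download progress monitor, and a simple prompt. It is a minimal example, not the exact code used by this page, and the `LanguageModel` API surface may differ slightly between Chrome versions.

```javascript
// Minimal sketch: detect, create, and prompt the built-in Gemini Nano model.
// Assumes Chrome 138+ with the flags above enabled; not this page's exact code.
async function runGeminiNano(promptText) {
  if (!('LanguageModel' in self)) {
    console.log('Prompt API not available in this browser.');
    return;
  }

  // Returns "unavailable", "downloadable", "downloading", or "available".
  const availability = await LanguageModel.availability();
  console.log('Model availability:', availability);
  if (availability === 'unavailable') return;

  // Creating a session triggers the ~4 GB model download on first use.
  const session = await LanguageModel.create({
    monitor(m) {
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Downloaded ${Math.round(e.loaded * 100)}%`);
      });
    },
  });

  const result = await session.prompt(promptText);
  console.log(result);
  session.destroy(); // free the session when done
  return result;
}

runGeminiNano('Write a one-sentence greeting.');
```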

Note: Use Ctrl-Shift-I to open the developer console and view the logged comments.



Edit Prompt:


Output:
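
For longer responses, an output area like this one can be filled incrementally with the streaming form of the API. The snippet below is a sketch of that idea; the element IDs `myPrompt` and `myOutput` are illustrative assumptions, not necessarily the IDs used on this page.

```javascript
// Sketch: stream a response into an output element as it is generated.
// Element IDs "myPrompt" and "myOutput" are assumptions for illustration.
async function streamToOutput() {
  const promptText = document.getElementById('myPrompt').value;
  const outputEl = document.getElementById('myOutput');
  outputEl.textContent = '';

  const session = await LanguageModel.create();
  const stream = session.promptStreaming(promptText);

  // Append each chunk as it arrives. (In some Chrome versions each chunk is
  // the full text so far rather than only the newly generated part.)
  for await (const chunk of stream) {
    outputEl.textContent += chunk;
  }
  session.destroy();
}
```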

HTML Code Preview and Viewer
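
When the model is asked to generate HTML, the result can be previewed in a sandboxed iframe so the generated code stays isolated from the page. The sketch below shows one way a viewer like this might work; the function name and details are assumptions, not this page's actual implementation.

```javascript
// Sketch: render model-generated HTML in a sandboxed iframe for preview.
// The function name and sizing are illustrative assumptions.
function previewHtml(generatedHtml) {
  const frame = document.createElement('iframe');
  frame.setAttribute('sandbox', 'allow-scripts'); // keep generated code isolated
  frame.style.width = '100%';
  frame.style.height = '400px';
  frame.srcdoc = generatedHtml; // load the generated markup directly
  document.body.appendChild(frame);
}
```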

Use at your own risk!
By Jeremy Ellis (LinkedIn)
Google reference: https://developer.chrome.com/docs/ai/built-in