Supporting Gemini Canvas #55

@raymondsheh

Description

Gemini Canvas (accessed via https://gemini.google.com by selecting "Canvas" from the tools menu) can generate projects, e.g. HTML with embedded JavaScript, in response to simple prompts. These can then be refined over the course of a session, with the user editing the code alongside Gemini. The system works well for smaller projects but tends to require careful management once a project reaches thousands of lines of code.

Although not as capable as dedicated AI-assisted coding tools, Gemini Canvas is highly accessible, especially to those who are not familiar with software development. This accessibility also means that a greater proportion of those generating code with Gemini Canvas are likely to be unfamiliar with secure coding practices and unable to audit the code themselves. Simple instructions for referencing CodeGuard rules could therefore be quite helpful.

In my experience, the code generated by Gemini Canvas often has a variety of security issues. I can prompt it with a generic "Please check the code for security issues," and it does a reasonable job, but it would be nice to have it specifically reference CodeGuard rules.

Gemini Canvas (at least the version I have access to) can't connect to MCP servers or directly reference skill.md files. An early attempt at simply pointing Gemini Canvas at the .md files of CodeGuard resulted in some sensible output, but on further investigation it turned out that Canvas was giving generic security recommendations rather than specifically referencing CodeGuard. When I asked it to give me specific CodeGuard rules/.md files that were referenced, it "hallucinated" references (that looked more like OWASP references, which I would expect to have appeared in the training data).

I've asked Gemini itself how best to incorporate rules like this into its code generation. While I still need to test this, here is what it advised:

  • It has a hard time pulling these kinds of things in via URLs and, it turns out, it can't actually crawl a file tree.
  • Attaching the files is better, but it claims that attached content isn't given much emphasis.
  • The best option is to "flatten" the skill file (i.e., concatenate the skill.md file with the files it references, separating them with "# file_name.md" headers). This should be quite doable, as only a subset of the rules apply to a given language, e.g., JavaScript. It claims that this gives the rules higher attention.
  • It claims to be able to handle 20,000-50,000 lines of text in a single prompt.
  • It might periodically need to be reminded to follow the rules (e.g., with a prompt "Remember to stick to the CodeGuard rules for this edit") but it claims that it doesn't need to re-read the rules each time.

I'll continue having a play with this and report back if/how this works out.

Labels: enhancement (New feature or request)