Add AI workflow tests and usage docs #52

@bryan-shea

Description

Summary

Add automated coverage for AI-related prompt and command behavior, and document practical best practices for AI-assisted review with Annotative.

Why

AI-related flows are easy to break with small routing or formatting changes, and users get more value when they understand how annotations, exports, and commands fit together in practice.

Scope

In scope:

  • add tests for common prompt-generation paths
  • add tests for key AI-related command behavior and edge cases
  • keep coverage focused on high-value public-core workflows
  • document a few realistic AI-assisted review patterns
  • explain where annotations, exports, and commands fit together
  • keep guidance specific to current public-repo capabilities

Out of scope:

  • external AI service integration tests
  • general AI policy documentation
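As an illustration of the kind of regression coverage in scope, a prompt-generation test could pin down formatting for a known input. This is only a sketch under assumed names (`Annotation`, `buildReviewPrompt` are hypothetical, not Annotative's actual API):

```typescript
// Hypothetical shape of an annotation; Annotative's real types may differ.
interface Annotation {
  file: string;
  line: number;
  comment: string;
}

// Hypothetical prompt builder: packages annotations into a review prompt.
function buildReviewPrompt(annotations: Annotation[]): string {
  const header = "Review the following annotated locations:";
  const body = annotations
    .map((a) => `- ${a.file}:${a.line}: ${a.comment}`)
    .join("\n");
  return `${header}\n${body}`;
}

// Regression check: the packaged prompt keeps a stable, parseable shape,
// so small routing or formatting changes surface as test failures.
const prompt = buildReviewPrompt([
  { file: "src/export.ts", line: 12, comment: "Check null handling" },
]);
console.assert(prompt.startsWith("Review the following"), "header intact");
console.assert(prompt.includes("src/export.ts:12"), "location preserved");
```

Tests like this stay focused on public-core behavior (the prompt text users actually export) rather than internals, which matches the high-value-workflow constraint above.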

Done when

  • AI-related prompt and command behavior has practical regression coverage
  • docs show clear, realistic AI-assisted review workflows for Annotative

Metadata

Assignees

No one assigned

Labels

  • ai: AI-assisted review flows, prompt packaging, and AI-oriented exports.
  • copilot: GitHub Copilot-specific integration and workflow tightening.
  • sub-issue: Child issue tied to a larger feature or implementation track.
  • test: Automated test coverage, regression protection, and test fixture improvements.

Projects

No projects

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests