Apply AI object and context fixes #98
Signed-off-by: Quanyi Ma <eli@patch.sh>
Claude finished @genedna's task in 3m 12s — View job

Code Review Complete ✅

I've completed a comprehensive review of PR #98: "Apply AI object and context fixes". This PR introduces a new `Intent` object type.

📊 Summary

Changes reviewed: 13 files, 352 additions, 14 deletions
🔍 Key Findings

Critical Issues (Must Fix)
High Priority
Medium Priority
Documentation & Design
✨ Positive Aspects
🎯 Recommendations

Before Merging:
Follow-up Work:
📈 Testing & Performance
🔐 Security & Safety
Overall Assessment: This is a solid enhancement that improves the AI object model with better typing and relationship tracking. The changes are well-structured and follow existing patterns. Address the critical issues before merging, and consider the recommendations for follow-up improvements.
```rust
fn get_type(&self) -> ObjectType {
    ObjectType::Intent
}
```
Security Issue: Avoid unwrap_or with zero for size calculation
The get_size() method uses unwrap_or(0) which could hide serialization errors and return an incorrect size. This violates the repository's coding convention to avoid unsafe operations and return proper Result types.
```rust
fn get_size(&self) -> usize {
    serde_json::to_vec(self).map(|v| v.len()).unwrap_or(0)
}
```
Recommended fix:

Since `ObjectTrait::get_size()` returns `usize`, consider one of:

- Changing the trait to return `Result<usize, GitError>` (breaking change)
- Documenting that this is a best-effort size and logging errors
- Using a cached size field updated during serialization

This same issue appears in other object types - consider a holistic fix across the codebase.
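A minimal sketch of the cached-size option is below. All names here (`CachedSizeObject` and its fields) are illustrative assumptions, not this crate's API:

```rust
// Hypothetical sketch of option 3: capture the size at the moment
// serialization actually succeeded, so the accessor stays infallible
// without ever silently reporting 0 on a serialization error.
struct CachedSizeObject {
    serialized: Vec<u8>, // refreshed whenever the object is re-serialized
    cached_size: usize,
}

impl CachedSizeObject {
    fn new(serialized: Vec<u8>) -> Self {
        let cached_size = serialized.len();
        Self { serialized, cached_size }
    }

    /// Infallible size accessor backed by the cached value.
    fn get_size(&self) -> usize {
        self.cached_size
    }
}
```

The trade-off is that the cache must be kept in sync wherever the object is mutated and re-serialized.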
```rust
Snippet,
/// Command or terminal output.
Command,
/// Image or other binary visual content.
```
API Design: Consider using an enum variant instead of Other(String)
The Other(String) variant allows arbitrary strings, which makes it harder to:
- Validate and reason about valid context item kinds
- Maintain backward compatibility when new kinds are added
- Perform exhaustive pattern matching
```rust
/// Image or other binary visual content.
Other(String),
```
Recommendation:

Unless there's a strong need for extensibility, consider either:

- Making this a closed enum (remove `Other`) and adding new variants as needed
- If extensibility is required, documenting the expected format/conventions for `Other` values
Example:
/// Other context item kind with a documented format.
/// Should follow the pattern: "provider:kind" (e.g., "github:issue", "jira:ticket")
Other(String),
src/internal/object/types.rs (Outdated)
```rust
/// // specific fields...
/// }
/// ```
fn default_updated_at() -> DateTime<Utc> {
```
Code Smell: Function defined inside struct definition comment block
The default_updated_at() helper function is defined between the doc comment and the struct definition. While this compiles, it's unconventional and reduces readability.
```rust
fn default_updated_at() -> DateTime<Utc> {
    Utc::now()
}

#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct Header {
```
Recommendation:
Move this function definition before the struct's doc comment (around line 547), or move it to a module-level helpers section. This follows standard Rust conventions where helpers are defined before the types that use them.
```rust
let previous_updated_at = self.updated_at;
match compute_integrity_hash(object) {
    Ok(checksum) => {
        self.checksum = Some(checksum);
```
Potential Bug: seal() updates updated_at even on error
The current implementation updates updated_at to Utc::now() before attempting to compute the checksum. If checksumming fails, we restore the old checksum but keep the new updated_at, which is semantically incorrect.
Current flow:
- Save old checksum and updated_at
- Compute new checksum (may fail)
- If success: update both checksum and updated_at
- If error: restore checksum but keep new updated_at ❌
Recommended fix:

```rust
pub fn seal<T: Serialize>(&mut self, object: &T) -> Result<(), serde_json::Error> {
    let previous_checksum = self.checksum.take();
    match compute_integrity_hash(object) {
        Ok(checksum) => {
            self.checksum = Some(checksum);
            self.updated_at = Utc::now(); // Only update on success
            Ok(())
        }
        Err(err) => {
            self.checksum = previous_checksum; // Restore on error
            Err(err)
        }
    }
}
```

This ensures updated_at is only modified when the seal operation succeeds.
```rust
fn to_data(&self) -> Result<Vec<u8>, GitError>;

fn object_hash(&self) -> Result<ObjectHash, GitError> {
```
Good Addition: Default object_hash() implementation
This is a nice improvement that reduces boilerplate! All types implementing ObjectTrait now get a default hash computation.
Minor suggestion: Consider adding a doc comment explaining when this might need to be overridden:
```rust
/// Computes the object hash from serialized data.
///
/// Default implementation serializes the object and computes hash from the data.
/// Override only if you need custom hash computation or caching.
fn object_hash(&self) -> Result<ObjectHash, GitError> {
    let data = self.to_data()?;
    Ok(ObjectHash::from_type_and_data(self.get_type(), &data))
}
```
```rust
/// Normalized token usage across providers.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
```
Good: Structured token usage
Replacing the untyped serde_json::Value with a strongly-typed TokenUsage struct is a significant improvement for:
- Type safety
- API clarity
- Documentation
Minor enhancement suggestion:

Consider adding validation or helper methods:

```rust
impl TokenUsage {
    /// Validates that total_tokens equals input_tokens + output_tokens
    pub fn is_consistent(&self) -> bool {
        self.total_tokens == self.input_tokens + self.output_tokens
    }

    /// Computes cost per token if cost is available and the token count is nonzero
    pub fn cost_per_token(&self) -> Option<f64> {
        if self.total_tokens == 0 {
            return None;
        }
        self.cost_usd.map(|cost| cost / self.total_tokens as f64)
    }
}
```
```rust
pub fn token_usage(&self) -> Option<&serde_json::Value> {
/// Normalized temperature if available.
```
API Design: Fallback to parameters field is clever but may cause confusion
The temperature() and max_tokens() methods fall back to parsing from the parameters JSON field if the dedicated fields aren't set. While this provides backward compatibility, it creates ambiguity:
- Which field is the "source of truth"?
- What happens if both are set but differ?
- Should setters also update the JSON field?
Recommendation:
Document the precedence and migration strategy:
```rust
/// Returns the temperature parameter.
///
/// Checks the dedicated `temperature` field first, then falls back to
/// parsing from `parameters["temperature"]` for backward compatibility.
///
/// When creating new Provenance objects, prefer setting the dedicated field
/// via `set_temperature()` rather than including it in `parameters`.
pub fn temperature(&self) -> Option<f64> {
    self.temperature.or_else(|| {
        self.parameters
            .as_ref()
            .and_then(|p| p.get("temperature"))
            .and_then(|v| v.as_f64())
    })
}
```

```rust
pub kind: ContextItemKind,
pub path: String,
pub content_id: IntegrityHash,
#[serde(default)]
```
Missing Documentation: What is content_preview for?
The new content_preview field lacks documentation explaining:

- What should be stored here (truncated content? summary?)
- When it should be populated vs left as `None`
- What the expected length/format is
- How it relates to the actual content referenced by `content_id`
```rust
#[serde(default)]
pub content_preview: Option<String>,
```
Recommendation:

```rust
/// Optional preview/summary of the content (e.g., first 200 chars).
/// Used for display purposes without loading the full content via content_id.
/// Should be kept under 500 characters for performance.
#[serde(default)]
pub content_preview: Option<String>,
```

```rust
diff_artifact: Option<ArtifactRef>,
#[serde(default)]
touched_files: Vec<TouchedFile>,
#[serde(default)]
```
Design Question: Should there be constraints on superseded patchsets?
The supersedes_patchset_ids field allows tracking patchset versions, but there's no validation:
- Can a patchset supersede itself?
- Can it supersede patchsets from different tasks/plans?
- Should there be ordering constraints (older supersedes newer)?
Recommendation:

Consider adding validation methods:

```rust
impl PatchSet {
    /// Validates that superseded patchsets don't create cycles
    pub fn validate_supersedes(&self) -> Result<(), GitError> {
        if self.supersedes_patchset_ids.contains(&self.header().object_id()) {
            return Err(GitError::InvalidPatchSetObject(
                "Patchset cannot supersede itself".to_string()
            ));
        }
        Ok(())
    }
}
```

Or document the expected constraints if validation is handled elsewhere.
```diff
@@ -91,6 +93,7 @@
 header: Header::new(ObjectType::Plan, repo_id, created_by)?,
 run_id,
 plan_version: 1,
```
Missed Opportunity: previous_plan_id not set in constructor
The new previous_plan_id field is initialized to None in the constructor, but the next_version() method (line 115) would be the perfect place to set it. This creates a linked list of plan versions.
Current behavior: Creates a new plan version but doesn't link it to the previous one.
Suggested enhancement:

```rust
pub fn next_version(
    &self,
    repo_id: Uuid,
    created_by: ActorRef,
) -> Result<Self, String> {
    let next_version = self.plan_version.checked_add(1)
        .ok_or_else(|| "plan_version overflow".to_string())?;
    let new_plan = Self {
        header: Header::new(ObjectType::Plan, repo_id, created_by)?,
        run_id: self.run_id,
        plan_version: next_version,
        previous_plan_id: Some(self.header.object_id()), // Link to previous version
        steps: Vec::new(),
    };
    Ok(new_plan)
}
```

This would automatically maintain version history.
```rust
blob.id.to_string(),
"5dd01c177f5d7d1be5346a5bc18a569a7410c2ef"
);
let hash_from_trait = blob.object_hash().unwrap();
```
Test Coverage: Good addition but could be more thorough
The new test validates that object_hash() matches the stored id, which is good. However, consider testing:
- Hash computation for different content sizes (empty, small, large)
- Consistency across multiple calls (deterministic hashing)
- Different hash kinds (SHA-1 vs SHA-256)
Enhancement suggestion:

```rust
#[test]
fn test_blob_hash_consistency() {
    set_hash_kind_for_test(HashKind::Sha1);
    let blob1 = Blob::from_content("test data");
    let hash1 = blob1.object_hash().unwrap();
    let hash2 = blob1.object_hash().unwrap();
    assert_eq!(hash1, hash2, "Hash should be deterministic");

    // Test with SHA-256
    set_hash_kind_for_test(HashKind::Sha256);
    let blob2 = Blob::from_content("test data");
    let hash3 = blob2.object_hash().unwrap();
    assert_ne!(hash1.to_string(), hash3.to_string(), "Different algorithms produce different hashes");
}
```

```rust
impl ObjectTrait for Intent {
    fn from_bytes(data: &[u8], _hash: ObjectHash) -> Result<Self, GitError>
    where
        Self: Sized,
```
Error Handling: Wrong error variant used
The from_bytes method uses GitError::InvalidObjectInfo for deserialization errors, but there's a specific GitError::InvalidIntentObject variant added in this PR (line 94 of errors.rs).
```rust
serde_json::from_slice(data).map_err(|e| GitError::InvalidObjectInfo(e.to_string()))
```

Recommended fix:

```rust
serde_json::from_slice(data).map_err(|e| GitError::InvalidIntentObject(e.to_string()))
```

This provides better error specificity and uses the error variant that was specifically added for this type.
```rust
fn get_size(&self) -> usize {
    serde_json::to_vec(self).map(|v| v.len()).unwrap_or(0)
}
```
Consistency Issue: Same error handling as from_bytes
The to_data method also uses InvalidObjectInfo when it should use InvalidIntentObject for consistency.
```rust
serde_json::to_vec(self).map_err(|e| GitError::InvalidObjectInfo(e.to_string()))
```

Recommended fix:

```rust
serde_json::to_vec(self).map_err(|e| GitError::InvalidIntentObject(e.to_string()))
```

```rust
parent_id: Option<Uuid>,
root_id: Option<Uuid>,
task_id: Option<Uuid>,
result_commit_sha: Option<IntegrityHash>,
```
API Design: Consider builder pattern for complex initialization
The Intent struct has many optional fields that would typically be set after creation. The current API requires multiple setter calls:

```rust
let mut intent = Intent::new(repo_id, actor, "content")?;
intent.set_parent_id(Some(parent));
intent.set_task_id(Some(task));
intent.set_status(IntentStatus::Active);
```

Suggestion: Consider a builder pattern for a cleaner API:

```rust
impl Intent {
    pub fn builder(repo_id: Uuid, created_by: ActorRef) -> IntentBuilder {
        IntentBuilder::new(repo_id, created_by)
    }
}

pub struct IntentBuilder { /* ... */ }

impl IntentBuilder {
    pub fn content(mut self, content: impl Into<String>) -> Self { /* ... */ }
    pub fn parent_id(mut self, id: Uuid) -> Self { /* ... */ }
    pub fn task_id(mut self, id: Uuid) -> Self { /* ... */ }
    pub fn build(self) -> Result<Intent, String> { /* ... */ }
}

// Usage:
let intent = Intent::builder(repo_id, actor)
    .content("Refactor login")
    .parent_id(parent_id)
    .task_id(task_id)
    .build()?;
```

This is optional but improves ergonomics for types with many optional fields.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: e8181d1e78
```rust
#[serde(default)]
max_tokens: Option<u64>,
#[serde(default)]
token_usage: Option<TokenUsage>,
```
Keep legacy provenance token_usage deserializable
Changing token_usage from Option<serde_json::Value> to Option<TokenUsage> makes Provenance::from_bytes reject previously valid objects unless they already contain all required normalized fields (input_tokens, output_tokens, total_tokens). Repositories with provenance records written by earlier versions (for example payloads like { "input": 10, "output": 5 }) will now fail to load after upgrade, so this needs a backward-compatible deserializer or fallback parsing path.
src/internal/object/types.rs (Outdated)

```rust
fn default_updated_at() -> DateTime<Utc> {
    Utc::now()
}
```
Use deterministic default for missing `updated_at`

Using Utc::now() as the serde default for updated_at makes deserialization of legacy headers time-dependent: the same stored bytes produce different in-memory values depending on when they are read. That mutates old objects on round-trip serialization and can make hash/checksum-related behavior non-reproducible for data created before this field existed.
Pull request overview
This PR adds the new Intent AI object type and enhances existing AI objects with additional tracking fields and improved metadata handling. The changes extend the AI object model to support user intents, object versioning history, and better provenance tracking while maintaining backward compatibility through serde defaults.
Changes:

- Introduces Intent object type with status tracking and hierarchical relationships
- Adds `updated_at` timestamp to Header that's automatically updated when objects are sealed
- Enhances Provenance with structured TokenUsage and normalized accessor methods for temperature/max_tokens
- Extends ContextItemKind enum with Url, Snippet, Command, Image, and Other variants
- Adds intent_id to Task, previous_plan_id to Plan, and supersedes_patchset_ids to PatchSet for relationship tracking
- Adds object_hash() default method to ObjectTrait for computing Git-compatible object hashes
- Version bump to 0.5.1
Reviewed changes
Copilot reviewed 12 out of 13 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| src/internal/object/intent.rs | New Intent object implementation with status lifecycle and relationship fields |
| src/internal/object/types.rs | Adds Intent to ObjectType enum, implements updated_at timestamp in Header with seal() integration |
| src/internal/object/task.rs | Adds intent_id field to link Tasks to their originating Intent |
| src/internal/object/provenance.rs | Introduces TokenUsage struct and normalized accessors for LLM parameters |
| src/internal/object/plan.rs | Adds previous_plan_id field for tracking plan version history |
| src/internal/object/patchset.rs | Adds supersedes_patchset_ids to track superseded patch sets |
| src/internal/object/context.rs | Extends ContextItemKind with additional content types and content_preview field |
| src/internal/object/mod.rs | Exports intent module and adds object_hash() default method to ObjectTrait |
| src/internal/object/blob.rs | Adds test coverage for new object_hash() trait method |
| src/internal/pack/decode.rs | Adds Intent to list of AI object types that await encode implementation |
| src/errors.rs | Adds InvalidIntentObject error variant |
| Cargo.toml / Cargo.lock | Version bump to 0.5.1 |
Cargo.toml (Outdated)

```diff
 [package]
 name = "git-internal"
-version = "0.5.0"
+version = "0.5.1"
```
This PR adds significant new features (Intent object type, updated_at field, new ContextItemKind variants, object_hash method). According to semantic versioning, this should typically be a minor version bump (0.6.0) rather than a patch version bump (0.5.1), since patch versions are meant for bug fixes only. However, if the project follows a different versioning scheme for 0.x releases, this may be acceptable.
```diff
-version = "0.5.1"
+version = "0.6.0"
```
```rust
pub fn new_next(
    repo_id: Uuid,
    created_by: ActorRef,
    run_id: Uuid,
    previous_version: u32,
) -> Result<Self, String> {
    let next_version = previous_version
        .checked_add(1)
        .ok_or_else(|| "plan_version overflow".to_string())?;
    Ok(Self {
        header: Header::new(ObjectType::Plan, repo_id, created_by)?,
        run_id,
        plan_version: next_version,
        previous_plan_id: None,
        steps: Vec::new(),
    })
}
```
The new_next method creates a new plan version based on a previous_version number but initializes previous_plan_id to None. Consider accepting an optional previous_plan_id: Option<Uuid> parameter to allow linking to the previous plan at construction time, improving API ergonomics. Currently, callers must use set_previous_plan_id() as a separate step, which is less convenient and may be error-prone.
```diff
 pub fn seal<T: Serialize>(&mut self, object: &T) -> Result<(), serde_json::Error> {
-    let previous = self.checksum.take();
+    let previous_checksum = self.checksum.take();
+    let previous_updated_at = self.updated_at;
     match compute_integrity_hash(object) {
         Ok(checksum) => {
             self.checksum = Some(checksum);
+            self.updated_at = Utc::now();
             Ok(())
         }
         Err(err) => {
-            self.checksum = previous;
+            self.checksum = previous_checksum;
+            self.updated_at = previous_updated_at;
             Err(err)
         }
     }
```
The seal() method updates updated_at after computing the checksum. This creates an inconsistency: the checksum is computed with the old updated_at value, but then updated_at is changed, making the checksum no longer match the object's current state. This defeats the purpose of the integrity checksum. Consider one of these solutions: (1) don't include updated_at in checksum calculations (e.g., skip it during serialization for checksumming), (2) don't update updated_at in seal(), or (3) compute the checksum after updating updated_at, but ensure the checksum field itself is excluded from the hash computation.
Signed-off-by: Quanyi Ma <eli@patch.sh>
Claude finished @genedna's task in 3m 14s — View job

Comprehensive Code Review - PR #98: Apply AI Object and Context Fixes

I've completed a thorough analysis of this PR based on the changed files, previous review feedback, and repository coding conventions. Here's my comprehensive assessment:

📋 Summary

This PR introduces significant enhancements to the AI object model:
🚨 Critical Issues (Must Fix Before Merge)

1. Header::seal() Bug - Data Integrity Issue
| Category | Score | Notes |
|---|---|---|
| Code Quality | 7/10 | Good patterns, but critical seal() bug and unwrap_or violations |
| Security | 8/10 | No vulnerabilities, but error suppression is risky |
| Performance | 9/10 | No obvious issues, efficient serialization |
| Testing | 6/10 | Basic coverage present, needs edge cases |
| Documentation | 5/10 | Missing docs for new fields and fallback behavior |
| Type Safety | 9/10 | Excellent improvement with TokenUsage struct |
| Backward Compat | 7/10 | Serde defaults help, but TokenUsage may break |
Overall: 7.3/10 - Solid enhancement with good architecture, but needs critical bug fixes before merge.
🏁 Final Recommendation
Status:
This PR adds valuable functionality to the AI object model with thoughtful design. However, the critical Header::seal() bug and error handling issues must be resolved before merging. The semantic versioning should also be corrected.
Merge Checklist:
- Fix Header::seal() data integrity bug
- Use InvalidIntentObject error variant
- Update version to 0.6.0
- Link previous_plan_id in next_version()
- Add documentation for new fields
- Consider backward compatibility plan for TokenUsage
Once these issues are addressed, this will be an excellent addition to the codebase. The type safety improvements and relationship tracking are exactly what the AI object model needs.
Claude finished @genedna's task in 3m 25s — View job

Comprehensive Code Review - PR #98: Apply AI Object and Context Fixes

I've completed a thorough analysis of this PR across code quality, security, performance, testing, and documentation. Previous automated reviews have identified most issues comprehensively. This review focuses on additional findings and holistic assessment.

📊 Executive Summary

This PR enhances the AI object model with a new `Intent` object type.

Changes: 17 files, 494 additions, 30 deletions

🚨 Critical Issues (MUST FIX)

1. CRITICAL: Header::seal() Checksum Integrity Bug ⛔

Location: `Header::seal()` in src/internal/object/types.rs

The Problem: The checksum is computed BEFORE `updated_at` is set.

Impact: Data integrity violation. Any code that validates checksums will fail because the checksum doesn't match the actual serialized state.

Recommended fix: Don't update `updated_at` inside `seal()`:

```rust
pub fn seal<T: Serialize>(&mut self, object: &T) -> Result<(), serde_json::Error> {
    let previous_checksum = self.checksum.take();
    match compute_integrity_hash(object) {
        Ok(checksum) => {
            self.checksum = Some(checksum);
            // Don't update updated_at here - caller should do it before sealing
            Ok(())
        }
        Err(err) => {
            self.checksum = previous_checksum;
            Err(err)
        }
    }
}
```

2. CRITICAL: Wrong Error Variants in Intent 🐛

Location: src/internal/object/intent.rs

The PR adds a dedicated `GitError::InvalidIntentObject` variant, but Intent still uses the generic `InvalidObjectInfo`.

Impact: Compilation error or incorrect error types.

3. CRITICAL: Semantic Versioning Violation 📦

Location: Cargo.toml

Version bump from 0.5.0 → 0.5.1 is incorrect per SemVer. This PR adds:
Correct version: 0.6.0 (minor bump for new features)

Exception: If the project follows "0.x means unstable, patch versions ok for features" then document this in CLAUDE.md.
| Category | Score | Notes |
|---|---|---|
| Code Quality | 6/10 | Good patterns, but critical seal() bug and inconsistencies |
| Security | 8/10 | No vulnerabilities, minor concerns with unbounded strings |
| Performance | 9/10 | No regressions, efficient implementation |
| Testing | 5/10 | Basic coverage, missing edge cases and critical bug tests |
| Documentation | 4/10 | Missing explanations for new concepts and fallback behavior |
| Type Safety | 9/10 | Excellent improvement with TokenUsage struct |
| Backward Compat | 5/10 | Serde defaults help, but TokenUsage may break old data |
| Architecture | 8/10 | Well-designed relationships and hierarchies |
Overall: 6.5/10 - Good architectural design with critical bugs that must be fixed.
🏁 Final Recommendation
Status:
This PR adds valuable functionality with thoughtful design, but the Header::seal() checksum bug is a critical correctness issue that compromises data integrity. The semantic versioning and error handling issues must also be resolved.
Merge Blockers:
- Fix Header::seal() to not modify updated_at after checksum computation
- Use correct InvalidIntentObject error variant
- Fix semantic versioning (0.6.0) or document exception
Strongly Recommended:
- Add backward-compatible TokenUsage deserializer
- Document Intent object purpose and lifecycle
- Add tests for Header seal/unseal cycle
- Call validate_supersedes() in PatchSet mutators
Once the merge blockers are addressed, this will be an excellent addition to the codebase. The type safety improvements and relationship tracking are exactly what the AI object model needs.
Tasks Completed
- Read repository guidelines (CLAUDE.md)
- Review new Intent object implementation
- Review Header and types.rs changes
- Review Provenance enhancements
- Review Context, Plan, PatchSet modifications
- Analyze against code quality criteria
- Check security implications
- Evaluate performance impact
- Review test coverage
- Verify documentation
- Provide comprehensive feedback
```rust
pub fn validate_supersedes(&self) -> Result<(), GitError> {
    if self
        .supersedes_patchset_ids
        .contains(&self.header.object_id())
    {
        return Err(GitError::InvalidPatchSetObject(
            "PatchSet cannot supersede itself".to_string(),
        ));
    }
    Ok(())
}
```
The validate_supersedes method is provided but not called automatically when adding supersede IDs via add_supersedes_patchset_id or set_supersedes_patchset_ids. This means callers must remember to call validate_supersedes() manually to detect self-reference bugs.
Consider either:
- Calling validate_supersedes() automatically in the add/set methods and returning Result
- Adding documentation to add_supersedes_patchset_id and set_supersedes_patchset_ids explaining that callers should call validate_supersedes() after modification
- Performing validation in the seal() method or to_data() method before serialization
```rust
#[serde(default)]
token_usage: Option<TokenUsage>,
```
The token_usage field type changed from Option<serde_json::Value> to Option<TokenUsage>. This is a breaking change in the serialization format. Old serialized Provenance objects with token_usage as arbitrary JSON (like {"input": 10, "output": 5}) will fail to deserialize because they don't match the TokenUsage struct schema (which expects input_tokens, output_tokens, total_tokens fields).
The existing #[serde(default)] on token_usage only covers a missing field, not a mismatched shape, so it does not help here. Provide a custom deserializer that can handle both the old arbitrary JSON format and the new structured format, or document this as a breaking change if that's intentional.
```rust
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum ContextItemKind {
    /// A regular file in the repository.
    File,
    /// A URL (web page, API endpoint, etc.).
    Url,
    /// A free-form text snippet (e.g. doc fragment, note).
    Snippet,
    /// Command or terminal output.
    Command,
    /// Image or other binary visual content.
    Image,
    Other(String),
}
```
The ContextItemKind::Other(String) variant will serialize differently from the unit variants due to the #[serde(rename_all = "snake_case")] attribute. Unit variants like File, Url, Snippet, Command, and Image will serialize as simple strings ("file", "url", etc.), while Other(String) will serialize as an object like {"other": "custom_type"}. This inconsistency could cause API confusion.
Consider one of these approaches:

- Add #[serde(untagged)] to the Other variant (not the whole enum, which would break the unit variants) so it serializes as just the string value
- Add custom serialization logic to handle Other consistently
- Document this serialization behavior clearly in the enum's doc comments
```rust
pub content_id: IntegrityHash,
/// Optional preview/summary of the content (for example, first 200 characters).
/// Used for display without loading the full content via `content_id`.
/// Should be kept under 500 characters for performance.
```
The documentation recommends keeping content_preview under 500 characters for performance, but there's no validation or enforcement of this limit in the ContextItem::new method or anywhere else. Consider adding validation to enforce this limit, or make it clear that this is just a guideline. If it's a hard requirement for performance, it should be enforced in code.
```rust
/// For performance, it is recommended (but not enforced) to keep this under 500 characters.
```
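If enforcement is desired instead, a small constructor-side helper could truncate at a char boundary. This is a hypothetical helper, not code from the crate; the 500-character limit is taken from the guideline above:

```rust
// Hypothetical enforcement helper: build the preview by truncating,
// so the length guideline holds by construction.
const MAX_PREVIEW_CHARS: usize = 500;

fn make_preview(content: &str) -> Option<String> {
    if content.is_empty() {
        return None;
    }
    // chars() keeps multi-byte characters intact, unlike byte slicing.
    Some(content.chars().take(MAX_PREVIEW_CHARS).collect())
}
```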
```diff
@@ -530,6 +539,10 @@ impl ArtifactRef {
     }
 }
```
The default_updated_at function uses Utc::now() for deserializing objects that lack an updated_at field (for backward compatibility with older serialized objects). This means legacy objects will have their updated_at set to the deserialization time rather than a historical timestamp.
Consider documenting this behavior in a comment explaining that: (1) this is for backward compatibility, (2) legacy objects without updated_at will have it set to deserialization time, and (3) this is acceptable because updated_at represents the last modification/seal time and legacy objects are being "touched" during deserialization.
```rust
/// Default value for `updated_at` during deserialization.
///
/// This is intentionally set to [`Utc::now()`] for **backward compatibility** with
/// legacy serialized objects that predate the `updated_at` field:
/// - When deserializing such legacy objects, `updated_at` will be populated with
///   the **deserialization time**, not a historical timestamp.
/// - This is acceptable because `updated_at` represents the last modification
///   or "seal" time, and these legacy objects are effectively being "touched"
///   when they are deserialized into the current object model.
///
/// Changing this behavior would alter the semantics for existing data, so
/// callers should treat `updated_at` as "last touched by the current system"
/// rather than as an immutable creation time.
```
- Fix Header::seal() checksum integrity bug: remove updated_at mutation after checksum computation so the checksum matches the final object state
- Fix Intent error variants: use InvalidIntentObject instead of generic InvalidObjectInfo in from_bytes() and to_data()
- Fix semantic versioning: bump to 0.6.0 (not 0.5.1) since PR web3infra-foundation#98 added new public types, enum variants, and trait methods

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Quanyi Ma <eli@patch.sh>
Claude finished @genedna's task in 2m 49s — View job

Comprehensive Code Review - PR #98: Apply AI Object and Context Fixes

I've conducted a thorough analysis of this PR based on the extensive review comments from automated reviewers, the changed files metadata, and repository guidelines. This PR introduces the new `Intent` object type.

📋 Executive Summary

Overall Assessment: This PR makes architecturally sound improvements to the AI object model, but contains 3 critical bugs that must be fixed:
Scope: 17 files, 493 additions, 30 deletions

🚨 CRITICAL ISSUES (Must Fix Before Merge)

1. Header::seal() Data Integrity Violation ⛔

Location:

The Bug: The checksum is computed BEFORE `updated_at` is updated, so the sealed checksum never covers the final object state.

Why This is Critical:
Impact Analysis:
Root Cause: The current implementation:

```rust
// Line 705-709: Compute checksum with OLD updated_at
match compute_integrity_hash(object) {
    Ok(checksum) => {
        self.checksum = Some(checksum);
        self.updated_at = Utc::now(); // ← Too late! Checksum already computed
```

Recommended Fix: Remove the `updated_at` mutation:

```rust
pub fn seal<T: Serialize>(&mut self, object: &T) -> Result<(), serde_json::Error> {
    let previous_checksum = self.checksum.take();
    match compute_integrity_hash(object) {
        Ok(checksum) => {
            self.checksum = Some(checksum);
            // Don't modify updated_at here - let callers do it BEFORE sealing
            Ok(())
        }
        Err(err) => {
            self.checksum = previous_checksum;
            Err(err)
        }
    }
}
```

Testing Required: Add integration test that:
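Such a round-trip test can be sketched against a simplified model. Everything below — `Payload`, the `u64` timestamp, and the `DefaultHasher`-based stand-in for `compute_integrity_hash` — is hypothetical scaffolding, not the crate's real API:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Simplified stand-in for a sealed object; the real Header hashes a
// serialized object via compute_integrity_hash, not a struct like this.
#[derive(Hash)]
struct Payload {
    body: String,
    updated_at: u64, // epoch-seconds stand-in for DateTime<Utc>
}

fn integrity_hash(p: &Payload) -> u64 {
    let mut h = DefaultHasher::new();
    p.hash(&mut h);
    h.finish()
}

struct Header {
    checksum: Option<u64>,
}

impl Header {
    // The checksum is computed over the FINAL state: callers touch
    // updated_at first, then seal, so verify() sees the same bytes.
    fn seal(&mut self, payload: &Payload) {
        self.checksum = Some(integrity_hash(payload));
    }

    fn verify(&self, payload: &Payload) -> bool {
        self.checksum == Some(integrity_hash(payload))
    }
}

fn main() {
    let mut payload = Payload { body: "intent".to_string(), updated_at: 0 };
    let mut header = Header { checksum: None };

    payload.updated_at = 1_700_000_000; // touch timestamp BEFORE sealing
    header.seal(&payload);
    assert!(header.verify(&payload));

    // The reported bug class: mutating updated_at AFTER sealing leaves
    // the stored checksum stale, so verification must fail.
    payload.updated_at += 1;
    assert!(!header.verify(&payload));
}
```

The second assertion is exactly the failure mode described above: any mutation after sealing invalidates the stored checksum.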
Fix Header::seal() integrity bug →

2. Wrong Error Variants in Intent Object 🐛

Location:

The Bug: The PR adds `GitError::InvalidIntentObject`, but `Intent::from_bytes()` and `to_data()` still map errors to the generic `InvalidObjectInfo` variant.

Why This Matters:
Fix Required:

```rust
// Line 129 in intent.rs - from_bytes()
serde_json::from_slice(data).map_err(|e| GitError::InvalidIntentObject(e.to_string()))

// Line 141 in intent.rs - to_data()
serde_json::to_vec(self).map_err(|e| GitError::InvalidIntentObject(e.to_string()))
```

3. Semantic Versioning Violation 📦

Location:

The Issue: Version bump from 0.5.0 → 0.5.1 is incorrect per SemVer.

This PR Adds:
SemVer Rule: Patch versions (0.x.y) are for bug fixes only. New features require a minor bump (0.x.0).

Correct Version: 0.6.0

Exception: If this project follows "0.x means unstable, any changes allowed in patches" then document this in CLAUDE.md.
| Category | Score | Notes |
|---|---|---|
| Code Quality | 6/10 | Good patterns, critical seal() bug |
| Security | 8/10 | No vulnerabilities, minor string concerns |
| Performance | 9/10 | No regressions |
| Testing | 5/10 | Basic coverage, critical bug untested |
| Documentation | 4/10 | Missing key explanations |
| Type Safety | 9/10 | Excellent TokenUsage improvement |
| Backward Compat | 5/10 | Serde defaults help, but TokenUsage risky |
| Architecture | 8/10 | Well-designed relationships |
Overall: 6.5/10 - Solid architecture undermined by critical bugs
🏁 FINAL RECOMMENDATION
Status:
This PR makes valuable improvements to the AI object model with thoughtful design, but the Header::seal() data integrity bug is a critical correctness issue that must be fixed before merge. The semantic versioning and error handling issues also need resolution.
Merge Blockers:
- Fix Header::seal() checksum integrity bug
- Use correct InvalidIntentObject error variant
- Fix semantic versioning (0.6.0) or document exception
- Link previous_plan_id in Plan::next_version()
Strongly Recommended:
- Add backward-compatible TokenUsage deserializer
- Add integration test for Header seal/unseal
- Document Intent object and Provenance fallback behavior
Once the merge blockers are addressed, this will be an excellent addition to the codebase. The type safety improvements and relationship tracking enhance the AI object model significantly.
Reject AI object types in pack encode/decode paths

AI extension types (ContextSnapshot, Decision, etc.) use u8 IDs >= 8 which cannot fit in the 3-bit pack header type field (values 1-7). Previously, the decode path silently returned Ok(None) for these types (dead code since from_pack_type_u8 already rejects them), and the parallel_encode path lacked an early check, letting AI objects fail deep in encode_one_object with a generic error.

- Add ObjectType::is_ai_object() helper for classifying AI extensions
- Replace silent Ok(None) in decode with explicit InvalidPackFile error
- Add early AI type rejection in parallel_encode before batching

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Quanyi Ma <eli@patch.sh>
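The 3-bit constraint behind this commit can be illustrated with a self-contained sketch. The enum discriminants and method names below mirror the commit message but are assumptions, not the crate's actual definitions:

```rust
// Illustrative subset of ObjectType: Git base types use IDs 1-7 (they fit
// the 3-bit pack-entry type field); AI extensions start at 8 and do not.
#[derive(Debug, PartialEq, Clone, Copy)]
enum ObjectType {
    Commit = 1,
    Tree = 2,
    Blob = 3,
    Tag = 4,
    ContextSnapshot = 8, // AI extension: cannot appear in a pack file
    Decision = 9,        // AI extension
}

impl ObjectType {
    // Classify AI extension types, as the commit's helper does.
    fn is_ai_object(self) -> bool {
        (self as u8) >= 8
    }

    // Pack headers store the type in 3 bits, so only 1..=7 are encodable;
    // returning None here models the early rejection in parallel_encode.
    fn to_pack_type_u8(self) -> Option<u8> {
        let id = self as u8;
        if (1..=7).contains(&id) { Some(id) } else { None }
    }
}

fn main() {
    assert!(!ObjectType::Commit.is_ai_object());
    assert!(ObjectType::ContextSnapshot.is_ai_object());
    assert_eq!(ObjectType::Blob.to_pack_type_u8(), Some(3));
    assert_eq!(ObjectType::Decision.to_pack_type_u8(), None);
}
```

Returning `Option`/an explicit error instead of silently skipping mirrors the fix: callers must handle AI types before they reach the pack encoder.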
Claude finished @genedna's task in 4m 49s —— View job

Comprehensive Code Review - PR #98: Apply AI Object and Context Fixes

I've completed a thorough analysis of this PR based on the automated reviews, changed files metadata, and repository coding conventions from CLAUDE.md.

📊 Executive Summary

Overall Assessment: This PR makes valuable architectural improvements to the AI object model by introducing a new

Scope: 18 files changed, 522 additions, 42 deletions

🚨 CRITICAL ISSUES (Must Fix Before Merge)

1. Data Integrity Bug in Header::seal() ⛔

Location:

The Problem: The checksum is computed BEFORE `updated_at` is updated, so the sealed checksum never covers the final object state.

Current Flow:
Why This is Critical:
Recommended Fix: Remove the `updated_at` mutation:

```rust
pub fn seal<T: Serialize>(&mut self, object: &T) -> Result<(), serde_json::Error> {
    let previous_checksum = self.checksum.take();
    match compute_integrity_hash(object) {
        Ok(checksum) => {
            self.checksum = Some(checksum);
            // Don't modify updated_at here - let callers do it BEFORE sealing
            Ok(())
        }
        Err(err) => {
            self.checksum = previous_checksum;
            Err(err)
        }
    }
}
```

Fix Header::seal() integrity bug →

2. Wrong Error Variants in Intent Object 🐛

Location:

The Problem: The PR adds `GitError::InvalidIntentObject`, but `Intent::from_bytes()` and `to_data()` still use the generic `InvalidObjectInfo` variant.

Impact:
Required Fix:

```rust
// Line 129 in intent.rs - from_bytes()
serde_json::from_slice(data).map_err(|e| GitError::InvalidIntentObject(e.to_string()))

// Line 141 in intent.rs - to_data()
serde_json::to_vec(self).map_err(|e| GitError::InvalidIntentObject(e.to_string()))
```

3. Semantic Versioning Violation 📦

Location:

The Issue: Version bump from 0.5.0 → 0.5.1 violates SemVer guidelines.

This PR Adds:
SemVer Rule: Patch versions (0.x.y) are for bug fixes only. New features require a minor version bump (0.x.0).

Correct Version: 0.6.0
| Category | Score | Notes |
|---|---|---|
| Code Quality | 6/10 | Good patterns, critical seal() bug |
| Security | 8/10 | No vulnerabilities, minor string concerns |
| Performance | 9/10 | No regressions |
| Testing | 5/10 | Basic coverage, critical bug untested |
| Documentation | 4/10 | Missing key explanations |
| Type Safety | 9/10 | Excellent TokenUsage improvement |
| Backward Compat | 5/10 | Serde defaults help, TokenUsage risky |
| Architecture | 8/10 | Well-designed relationships |
Overall: 6.5/10 - Solid architecture undermined by critical bugs
🏁 FINAL RECOMMENDATION
Status:
This PR makes valuable improvements to the AI object model with thoughtful design, but the Header::seal() data integrity bug is a critical correctness issue that must be fixed before merge. The semantic versioning and error handling issues also need resolution.
Merge Blockers:
- Fix Header::seal() checksum integrity bug
- Use correct InvalidIntentObject error variant
- Fix semantic versioning (0.6.0) or document exception
- Link previous_plan_id in Plan::next_version()
Strongly Recommended:
- Add backward-compatible TokenUsage deserializer
- Add integration test for Header seal/unseal cycle
- Document Intent object and Provenance fallback behavior
Once the merge blockers are addressed, this will be an excellent addition to the codebase. The type safety improvements and relationship tracking enhance the AI object model significantly.
Pull request overview
Copilot reviewed 17 out of 18 changed files in this pull request and generated 5 comments.
Comments suppressed due to low confidence (1)
src/internal/object/types.rs:737
- The seal() method should update the updated_at timestamp to the current time when sealing an object, but it currently doesn't do this. The test at lines 934-946 expects updated_at to be greater than or equal to the value before seal() was called, which suggests the intention is for seal() to update this field. According to repository memory, Header.seal() should update both checksum and updated_at timestamp.
To fix this, add self.updated_at = Utc::now(); before setting the checksum in the Ok branch of the match statement.
```rust
pub fn seal<T: Serialize>(&mut self, object: &T) -> Result<(), serde_json::Error> {
    let previous_checksum = self.checksum.take();
    match compute_integrity_hash(object) {
        Ok(checksum) => {
            self.checksum = Some(checksum);
            Ok(())
        }
        Err(err) => {
            self.checksum = previous_checksum;
            Err(err)
        }
    }
}
```
```rust
ObjectType::Task => TASK_OBJECT_TYPE,
ObjectType::Intent => INTENT_OBJECT_TYPE,
ObjectType::ToolInvocation => TOOL_INVOCATION_OBJECT_TYPE,
_ => panic!("can put compute the delta hash value"),
```
The panic message contains a typo: "can put compute" should be "cannot compute". The wildcard pattern on line 164 should be unreachable since all ObjectType variants are matched, but if it's hit, the error message should be grammatically correct.
```diff
- _ => panic!("can put compute the delta hash value"),
+ _ => panic!("cannot compute the delta hash value"),
```
```rust
assert_eq!(usage.input_tokens, 10);
assert_eq!(usage.output_tokens, 5);
assert_eq!(usage.total_tokens, 15);
assert_eq!(usage.cost_usd, Some(0.001));
```
The test should verify that the TokenUsage instance has consistent values by calling is_consistent(). Since the test creates a TokenUsage with input_tokens=10, output_tokens=5, and total_tokens=15, it should assert that usage.is_consistent() returns true to validate the intended invariant.
```diff
  assert_eq!(usage.cost_usd, Some(0.001));
+ assert!(usage.is_consistent());
```
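A minimal sketch of the invariant this suggestion relies on; the field layout matches the values used in the test, but this `is_consistent()` implementation is an assumption, not the PR's actual code:

```rust
// Illustrative TokenUsage: total_tokens must equal input + output.
#[allow(dead_code)]
struct TokenUsage {
    input_tokens: u64,
    output_tokens: u64,
    total_tokens: u64,
    cost_usd: Option<f64>,
}

impl TokenUsage {
    // The invariant the review asks the test to assert.
    fn is_consistent(&self) -> bool {
        self.input_tokens + self.output_tokens == self.total_tokens
    }
}

fn main() {
    let usage = TokenUsage {
        input_tokens: 10,
        output_tokens: 5,
        total_tokens: 15,
        cost_usd: Some(0.001),
    };
    assert!(usage.is_consistent());

    // A mismatched total is exactly what the extra assertion would catch.
    let bad = TokenUsage {
        input_tokens: 10,
        output_tokens: 5,
        total_tokens: 16,
        cost_usd: None,
    };
    assert!(!bad.is_consistent());
}
```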
```rust
pub fn add_supersedes_patchset_id(&mut self, patchset_id: Uuid) {
    self.supersedes_patchset_ids.push(patchset_id);
}

pub fn set_supersedes_patchset_ids(&mut self, patchset_ids: Vec<Uuid>) {
    self.supersedes_patchset_ids = patchset_ids;
```
The add_supersedes_patchset_id method adds a patchset ID without validation. Consider whether it should call validate_supersedes() after adding the ID to immediately reject self-references, or document that callers must explicitly call validate_supersedes(). The current API allows building an invalid state that's only caught when validate_supersedes() is explicitly called.
```diff
- pub fn add_supersedes_patchset_id(&mut self, patchset_id: Uuid) {
-     self.supersedes_patchset_ids.push(patchset_id);
- }
- pub fn set_supersedes_patchset_ids(&mut self, patchset_ids: Vec<Uuid>) {
-     self.supersedes_patchset_ids = patchset_ids;
+ pub fn add_supersedes_patchset_id(
+     &mut self,
+     patchset_id: Uuid,
+ ) -> Result<(), GitError> {
+     self.supersedes_patchset_ids.push(patchset_id);
+     if let Err(err) = self.validate_supersedes() {
+         // Roll back the push to preserve invariants on failure.
+         self.supersedes_patchset_ids.pop();
+         return Err(err);
+     }
+     Ok(())
+ }
+ pub fn set_supersedes_patchset_ids(
+     &mut self,
+     patchset_ids: Vec<Uuid>,
+ ) -> Result<(), GitError> {
+     let previous = std::mem::replace(&mut self.supersedes_patchset_ids, patchset_ids);
+     if let Err(err) = self.validate_supersedes() {
+         // Restore previous state if the new set of IDs is invalid.
+         self.supersedes_patchset_ids = previous;
+         return Err(err);
+     }
+     Ok(())
```
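The validate-then-rollback shape of this suggestion generalizes. Below is a self-contained sketch with a simplified `PatchSet` and a self-reference check standing in for the real `validate_supersedes()` (all names here are illustrative):

```rust
// Error for a patchset that claims to supersede itself.
#[derive(Debug, PartialEq)]
struct SelfReference;

struct PatchSet {
    id: u64,
    supersedes: Vec<u64>,
}

impl PatchSet {
    // Stand-in for validate_supersedes(): reject self-references.
    fn validate_supersedes(&self) -> Result<(), SelfReference> {
        if self.supersedes.contains(&self.id) {
            Err(SelfReference)
        } else {
            Ok(())
        }
    }

    // Mutate first, validate, and roll back on failure so the struct
    // never observably enters an invalid state.
    fn add_supersedes(&mut self, other: u64) -> Result<(), SelfReference> {
        self.supersedes.push(other);
        if let Err(err) = self.validate_supersedes() {
            self.supersedes.pop(); // undo the push
            return Err(err);
        }
        Ok(())
    }
}

fn main() {
    let mut ps = PatchSet { id: 1, supersedes: vec![] };
    assert!(ps.add_supersedes(2).is_ok());
    assert!(ps.add_supersedes(1).is_err()); // self-reference rejected
    assert_eq!(ps.supersedes, vec![2]);     // failed add left no trace
}
```

The key property is the last assertion: a rejected mutation leaves the collection exactly as it was, so callers never see a half-applied state.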
```rust
// `from_pack_type_u8` already rejects them, but guard explicitly here.
other => Err(GitError::InvalidPackFile(format!(
    "AI object type `{other}` cannot appear in a pack file"
))),
```
The comment states that from_pack_type_u8 already rejects AI object types, but the wildcard pattern here is technically unreachable because from_pack_type_u8 only returns Commit, Tree, Blob, Tag, OffsetZstdelta, OffsetDelta, or HashDelta (values 1-7). Consider using an unreachable!() macro with a similar error message instead of returning an error, or document that this is defensive programming for safety.
```diff
- // `from_pack_type_u8` already rejects them, but guard explicitly here.
- other => Err(GitError::InvalidPackFile(format!(
-     "AI object type `{other}` cannot appear in a pack file"
- ))),
+ // `from_pack_type_u8` already rejects them, so reaching this arm indicates
+ // a logic error in the pack decoder or object type mapping.
+ other => unreachable!(
+     "AI object type `{other}` cannot appear in a pack file; from_pack_type_u8 should have rejected it"
+ ),
```
```rust
#[serde(default = "default_updated_at")]
updated_at: DateTime<Utc>,
```
The default_updated_at function returns Utc::now(), which means when deserializing old Header objects without an updated_at field, they will get the current timestamp instead of their original creation time. Consider using a default that matches created_at for backward compatibility, or document this as an intentional breaking change in deserialization behavior. This could cause issues when loading existing objects from storage.
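The deterministic fallback this comment asks for can be modeled without serde. The `RawHeader`/`Header` split below is illustrative only — the real fix uses a custom `Deserialize` impl over chrono timestamps — but the fallback rule is the same:

```rust
// Wire-format view: legacy serialized objects have no updated_at field.
struct RawHeader {
    created_at: u64,
    updated_at: Option<u64>,
}

// In-memory view: updated_at is always present.
struct Header {
    created_at: u64,
    updated_at: u64,
}

impl From<RawHeader> for Header {
    fn from(raw: RawHeader) -> Self {
        Header {
            created_at: raw.created_at,
            // Deterministic fallback: a legacy object keeps its creation
            // time instead of picking up "now" at deserialization time.
            updated_at: raw.updated_at.unwrap_or(raw.created_at),
        }
    }
}

fn main() {
    let legacy = Header::from(RawHeader { created_at: 100, updated_at: None });
    assert_eq!(legacy.updated_at, 100); // not the current wall-clock time

    let modern = Header::from(RawHeader { created_at: 100, updated_at: Some(200) });
    assert_eq!(modern.updated_at, 200);
}
```

Falling back to `created_at` keeps round-tripping stable: loading the same stored object twice yields identical metadata, which a `Utc::now()` default cannot guarantee.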
…y enforcement (#99)

* Apply AI object and context fixes

  Signed-off-by: Quanyi Ma <eli@patch.sh>

* ai-objects: apply review feedback and tighten invariants

  Signed-off-by: Quanyi Ma <eli@patch.sh>

* Fix fmt

  Signed-off-by: Quanyi Ma <eli@patch.sh>

* Fix critical issues from PR #98 review

  - Fix Header::seal() checksum integrity bug: remove updated_at mutation after checksum computation so the checksum matches the final object state
  - Fix Intent error variants: use InvalidIntentObject instead of generic InvalidObjectInfo in from_bytes() and to_data()
  - Fix semantic versioning: bump to 0.6.0 (not 0.5.1) since PR #98 added new public types, enum variants, and trait methods

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  Signed-off-by: Quanyi Ma <eli@patch.sh>

* Reject AI object types in pack encode/decode paths

  AI extension types (ContextSnapshot, Decision, etc.) use u8 IDs >= 8 which cannot fit in the 3-bit pack header type field (values 1-7). Previously, the decode path silently returned Ok(None) for these types (dead code since from_pack_type_u8 already rejects them), and the parallel_encode path lacked an early check, letting AI objects fail deep in encode_one_object with a generic error.

  - Add ObjectType::is_ai_object() helper for classifying AI extensions
  - Replace silent Ok(None) in decode with explicit InvalidPackFile error
  - Add early AI type rejection in parallel_encode before batching

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  Signed-off-by: Quanyi Ma <eli@patch.sh>

* ai-objects: update project metadata and documentation

  Update Cargo.toml description/keywords/categories, README.md overview and AI Object Model section, and CLAUDE.md with comprehensive AI object model guidance including end-to-end flow, file mapping, key patterns, and documentation links.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  Signed-off-by: Quanyi Ma <eli@patch.sh>

* Fix backward compatibility for PlanStep.statuses and Header.updated_at

  - PlanStep.statuses: add #[serde(default)] with a Pending-entry default so legacy plans with the old single `status` field can still deserialize.
  - Header.updated_at: replace Utc::now() default with a custom Deserialize impl that falls back to created_at, avoiding nondeterministic metadata when loading legacy objects.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  Signed-off-by: Quanyi Ma <eli@patch.sh>

* Address code review: fix panics, frame stability, seal semantics, is_base

  - ObjectType::to_bytes() returns Option instead of panicking on delta types; callers updated (pack/utils.rs, zlib/inflate.rs)
  - Intent::status() and PlanStep::status() return Option instead of panicking via expect() on potentially empty statuses Vec
  - ObjectType::is_base() returns false for AI types (they cannot participate in pack delta window selection)
  - ContextFrame gains stable frame_id assigned by ContextPipeline's monotonic counter; PlanStep iframes/oframes changed from Vec<u32> indices to Vec<u64> frame IDs that survive eviction
  - Header::seal() now updates updated_at timestamp
  - Updated docs/ai.md with frame ID semantics and usage examples

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  Signed-off-by: Quanyi Ma <eli@patch.sh>

---------

Signed-off-by: Quanyi Ma <eli@patch.sh>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
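The stable frame-ID scheme from the last commit above can be sketched as follows; the struct layouts are assumptions based on the commit message, with a monotonic counter guaranteeing that IDs referenced by plan steps survive frame eviction:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// A context frame identified by a stable ID, not by its Vec position.
#[allow(dead_code)]
struct ContextFrame {
    frame_id: u64,
    content: String,
}

struct ContextPipeline {
    next_frame_id: AtomicU64,
    frames: Vec<ContextFrame>,
}

impl ContextPipeline {
    fn new() -> Self {
        ContextPipeline {
            next_frame_id: AtomicU64::new(0),
            frames: Vec::new(),
        }
    }

    // Assign a monotonically increasing ID; unlike a Vec index, this ID
    // never changes when earlier frames are evicted.
    fn push(&mut self, content: String) -> u64 {
        let id = self.next_frame_id.fetch_add(1, Ordering::Relaxed);
        self.frames.push(ContextFrame { frame_id: id, content });
        id
    }

    fn evict_oldest(&mut self) {
        if !self.frames.is_empty() {
            self.frames.remove(0);
        }
    }
}

fn main() {
    let mut pipeline = ContextPipeline::new();
    let a = pipeline.push("first".to_string());
    let b = pipeline.push("second".to_string());
    assert_eq!((a, b), (0, 1));

    pipeline.evict_oldest();
    // The ID stored in a PlanStep (here, `b`) still identifies the
    // surviving frame even though its Vec index shifted from 1 to 0.
    assert_eq!(pipeline.frames[0].frame_id, b);
}
```

This is why the commit moves PlanStep's `iframes`/`oframes` from `Vec<u32>` indices to `Vec<u64>` frame IDs: an index is invalidated by eviction, while a handed-out ID is not.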
No description provided.