Conversation
Signed-off-by: Quanyi Ma <eli@patch.sh>
@codex review
Pull request overview
Updates the crate to v0.7.0 while reshaping the AI-workflow object schemas (Intent/Plan/Header/etc.) and refreshing documentation to match the new lifecycle model and context tracking approach.
Changes:
- Bump crate version to 0.7.0 (plus packaging `exclude` tweaks).
- Evolve AI object schemas: `Intent` → `spec` + `parents` + `context_pipeline`, `Plan` `fwindow` → `iframes`, and `Header` fields removed/simplified; broad addition of `serde(deny_unknown_fields)`.
- Add/expand docs for agent workflow and Git protocol/object references.
Reviewed changes
Copilot reviewed 16 out of 20 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| src/internal/object/types.rs | Simplifies Header fields and deserialization; refines comments and integrity helpers. |
| src/internal/object/tool.rs | Updates lifecycle docs; adds strict serde deserialization on structs. |
| src/internal/object/task.rs | Updates lifecycle docs; adds strict serde deserialization on Task; adjusts tests for new header shape. |
| src/internal/object/run.rs | Updates lifecycle docs; adds strict serde deserialization on Environment/Run. |
| src/internal/object/provenance.rs | Updates lifecycle docs; adds strict serde deserialization on provenance structs. |
| src/internal/object/plan.rs | Replaces fwindow with iframes; tightens serde rules; updates tests/docs accordingly. |
| src/internal/object/pipeline.rs | Updates pipeline lifecycle narrative; adds strict serde deserialization on pipeline structs. |
| src/internal/object/patchset.rs | Adds strict serde deserialization; removes legacy serde aliases; updates docs. |
| src/internal/object/mod.rs | Updates top-level docs to reflect iframes and parents terminology. |
| src/internal/object/intent.rs | Major Intent schema/API update: spec, parents, eager ContextPipeline creation; expanded docs/tests. |
| src/internal/object/evidence.rs | Updates lifecycle docs; adds strict serde deserialization on Evidence. |
| src/internal/object/decision.rs | Updates lifecycle docs; adds strict serde deserialization on Decision. |
| src/internal/object/context.rs | Updates lifecycle docs; adds strict serde deserialization on context snapshot structs. |
| docs/git-protocol-guide.md | Adds a protocol architecture/design guide document. |
| docs/git-object.md | Adds an object-format reference document. |
| docs/agent.md | Updates agent docs for spec/parents/iframes and Thread draft; includes examples. |
| docs/agent-workflow.md | Adds a comprehensive agent workflow document and Thread draft model. |
| Cargo.toml | Bumps version to 0.7.0; expands exclude list. |
| Cargo.lock | Updates lockfile version for the crate to 0.7.0. |
Comments suppressed due to low confidence (1)
docs/agent.md:218
- The code example still treats `Intent::new(...)` as returning a single `Intent`, but it now returns `(Intent, ContextPipeline)`. As written, this snippet will not compile and will confuse users following the docs. Update the example to destructure the tuple (and optionally show how to persist/link the returned `ContextPipeline`).
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 7dac30e8a6
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Signed-off-by: Quanyi Ma <eli@patch.sh>
genedna left a comment
Comprehensive Code Review — PR #109: Bump to 0.7.0
This PR introduces a major architectural refactoring of the AI object model, moving from mutable, append-only objects to a three-layer design: immutable snapshots, append-only events, and Libra runtime projections. The version bump from 0.6.1 to 0.7.0 is appropriate given the scope of breaking changes.
Overall Assessment
The architectural direction is sound — separating immutable definitions from runtime execution events is a well-established event-sourcing pattern that improves auditability, concurrency safety, and data integrity. The refactoring is thorough and consistent across all AI object types.
1. Code Quality
Strengths:
- Clean separation of snapshot vs event objects with consistent naming conventions (`*Event` suffix for all event types)
- DAG-based parent references (`parents: Vec<Uuid>`) replacing singly-linked lists is a meaningful improvement enabling merge-style history
- Self-referential loop prevention in `add_parent()` methods
- Consistent doc-comment structure across all new objects ("How to use", "How it works with other objects", "How Libra should call it")
- Good use of `#[serde(deny_unknown_fields)]` for strict deserialization
Concerns:
- The `ParentLike` trait in `intent.rs` is generic infrastructure for what appears to be a single use case. The `#[allow(private_bounds)]` on `new_revision_chain` is a code smell; consider making the trait public or using a simpler approach (accept `&[Uuid]` directly)
- Several event objects (`IntentEvent`, `TaskEvent`, `RunEvent`, `PlanStepEvent`) use `GitError::InvalidObjectInfo` as a catch-all deserialization error instead of having their own dedicated error variants. This makes error diagnosis harder
- The `object_hash()` implementation in `mod.rs` duplicates hash computation logic. Consider extracting the header-construction pattern into a shared helper
2. Security
No critical security issues found. The changes are data-model refactoring with no new I/O, network, or authentication surface. The use of deny_unknown_fields is a positive security hygiene improvement for deserialization strictness.
3. Performance
Concerns:
- `add_parent()` uses `Vec::contains()` for deduplication, which is O(n). For typical use (1-3 parents) this is fine, but the API doesn't enforce an upper bound. Consider documenting the expected cardinality or adding a guard
- The new `object_hash()` implementation in `mod.rs` calls `to_bytes()` on `ObjectType`, which returns `Option<&[u8]>`. For AI object types that may not have a byte representation, this now returns an error instead of silently computing a hash; this is a behavioral change that could surface latent bugs in callers
4. Testing
Strengths:
- Each new event object has inline unit tests covering field access, optional field mutation, and key invariants
- Plan tests cover single-parent, multi-parent, deduplication, self-link rejection, and mixed-intent rejection
Gaps:
- No serialization roundtrip tests for the new event objects (`IntentEvent`, `TaskEvent`, `RunEvent`, `PlanStepEvent`, `RunUsage`, `ContextFrame`). The old `pipeline.rs` had thorough roundtrip tests that were deleted
- No test for `object_hash()` with AI object types after the `mod.rs` refactoring
- No test verifying that `deny_unknown_fields` correctly rejects extra fields, which is important for the backward-compatibility implications
- The `ContextFrame` test doesn't verify the `ObjectTrait` implementation (`from_bytes`/`to_data` roundtrip)
5. Documentation
Strengths:
- The new `docs/agent.md` and `docs/agent-workflow.md` are well-structured and clearly articulate the three-layer model
- Module-level docs are thorough, with ASCII diagrams and relationship tables
- Every public method now has a doc comment
Concerns:
- `CLAUDE.md` references `docs/ai.md`, `pipeline.rs`, and several removed types (`ContextPipeline`, `IntentStatus`, `ApplyStatus`, `StepStatus`, `StatusEntry`, `StepStatusEntry`, `TokenUsage`). It needs to be updated to reflect the new architecture
- The `docs/ARCHITECTURE.md` reference in `CLAUDE.md` may also need updating
- Several `#[serde(alias = ...)]` attributes were removed (e.g., `diff_format`, `diff_artifact`, `touched_files` in `PatchSet`). This is a silent backward-compatibility break: existing serialized data using the old field names will fail to deserialize
6. Breaking Changes Summary
This is a major breaking release with the following API-incompatible changes:
| Component | Breaking Change |
|---|---|
| Intent | `parent` → `parents` (`Vec`); removed `content`, `commit`, `plan`, `statuses`, `IntentStatus`, `StatusEntry` |
| Plan | `previous` → `parents`; removed `pipeline`, `fwindow`; `new()` now requires an `intent` param |
| PlanStep | removed `statuses`, `iframes`, `oframes`, `task`, `outputs`; added `step_id` |
| Task | removed `runs`, `status`; added `origin_step_id` |
| Run | removed `status`, `patchsets`, `metrics`, `error` |
| PatchSet | removed `apply_status`, `ApplyStatus`; added `sequence` |
| Provenance | removed `token_usage`, `TokenUsage` |
| ContextPipeline | deleted entirely (replaced by the `ContextFrame` event) |
| ObjectType | added 6 new variants for event types |
| Serde | `deny_unknown_fields` + removed `alias` attributes = deserialization breaks |
Recommendation: Consider adding a migration guide section in the docs or release notes.
````rust
//! ```text
//! ① User input (natural-language request)
//! │
//! ▼
````
Design concern: ParentLike trait with #[allow(private_bounds)]
The ParentLike trait is defined as a private trait but is used as a bound on the public new_revision_chain method, requiring #[allow(private_bounds)]. This is a code smell — callers outside this module cannot implement ParentLike for their own types.
Consider one of:
- Make the trait `pub` if external implementations are desired
- Accept `&[Uuid]` directly and let callers extract IDs
- Use `impl Into<Uuid>` bounds instead
```rust
// Option 2: simpler API
pub fn new_revision_chain(
    created_by: ActorRef,
    prompt: impl Into<String>,
    parent_ids: &[Uuid],
) -> Result<Self, String> { ... }
```

```rust
    result_commit: Option<IntegrityHash>,
}

impl IntentEvent {
```
Inconsistent error type: IntentEvent uses GitError::InvalidObjectInfo for deserialization errors, while ContextFrame uses the dedicated GitError::InvalidContextFrameObject. Consider adding a dedicated GitError::InvalidIntentEventObject variant (and similarly for TaskEvent, RunEvent, PlanStepEvent, RunUsage) for consistent error diagnostics.
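A dedicated-variant layout could look like the sketch below. The enum and variant names mirror what the comment proposes, but this is a hypothetical shape, not the crate's actual `GitError` definition.

```rust
use std::fmt;

// Hypothetical sketch of per-object deserialization error variants,
// mirroring the naming the review proposes; not the crate's real enum.
#[derive(Debug)]
enum GitError {
    InvalidContextFrameObject(String),
    InvalidIntentEventObject(String),
    InvalidTaskEventObject(String),
}

impl fmt::Display for GitError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            GitError::InvalidContextFrameObject(m) => write!(f, "invalid context frame object: {m}"),
            GitError::InvalidIntentEventObject(m) => write!(f, "invalid intent event object: {m}"),
            GitError::InvalidTaskEventObject(m) => write!(f, "invalid task event object: {m}"),
        }
    }
}

fn main() {
    // Each object type now reports a distinguishable error, so callers can
    // tell which object kind failed without parsing message text.
    let err = GitError::InvalidIntentEventObject("missing field `kind`".into());
    assert_eq!(err.to_string(), "invalid intent event object: missing field `kind`");
    println!("{err}");
}
```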
```rust
//! AI workflow objects are split into three layers:
//!
//! - **Snapshot objects** in `git-internal` answer "what was the stored
//!   fact at this revision?"
```
Duplicated hash computation logic: The SHA-1 and SHA-256 branches have identical structure (update type bytes, space, length, null, data). Consider extracting into a helper:
```rust
fn compute_object_hash(type_bytes: &[u8], data: &[u8]) -> ObjectHash {
    match get_hash_kind() {
        HashKind::Sha1 => { /* ... */ }
        HashKind::Sha256 => { /* ... */ }
    }
}
```

This would also be consistent with the existing `ObjectHash::from_type_and_data` pattern that was replaced.
```rust
//! - `Intent` is the canonical owner via `Plan.intent`.
//! - `Task.origin_step_id` points back to the logical step that spawned
//!   delegated work.
//! - `PlanStepEvent` records runtime step status, produced context,
```
Good: Intent consistency validation on multi-parent merge
The check that all parents belong to the same intent is a good invariant enforcement. One minor suggestion: the error message could include the mismatched intent IDs for easier debugging:
```rust
return Err(format!(
    "plan parents must belong to the same intent: expected {}, got {}",
    first_parent.intent, parent.intent
));
```

```rust
//! and the object type enumeration used across pack/object modules.

use std::{
    collections::HashMap,
```
Note on ObjectType growth: With 6 new event type variants (IntentEvent, TaskEvent, RunEvent, PlanStepEvent, RunUsage, ContextFrame), the ObjectType enum is growing significantly. The numeric IDs and string mappings need to be kept in sync across to_u8, from_u8, to_bytes, Display, etc. Consider adding a compile-time or test-time assertion that all variants are covered in all conversion methods.
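One way to get the suggested assertion is an `ALL` constant whose fixed length is tied to the enum, plus a round-trip loop over it. The variant names and numeric IDs below are illustrative stand-ins, not the crate's real mapping.

```rust
// Sketch of a test-time exhaustiveness check: round-trip every variant
// through to_u8/from_u8 so a newly added variant that is missing from a
// mapping fails the test. Variant list and numeric IDs are illustrative.
#[derive(Debug, Clone, Copy, PartialEq)]
enum ObjectType {
    Commit,
    Tree,
    IntentEvent,
    RunUsage,
}

impl ObjectType {
    // Adding a variant without extending ALL (array length is checked by
    // the compiler) forces this list to be updated too.
    const ALL: [ObjectType; 4] = [
        ObjectType::Commit,
        ObjectType::Tree,
        ObjectType::IntentEvent,
        ObjectType::RunUsage,
    ];

    fn to_u8(self) -> u8 {
        match self {
            ObjectType::Commit => 1,
            ObjectType::Tree => 2,
            ObjectType::IntentEvent => 10,
            ObjectType::RunUsage => 11,
        }
    }

    fn from_u8(v: u8) -> Option<ObjectType> {
        match v {
            1 => Some(ObjectType::Commit),
            2 => Some(ObjectType::Tree),
            10 => Some(ObjectType::IntentEvent),
            11 => Some(ObjectType::RunUsage),
            _ => None,
        }
    }
}

fn main() {
    // A variant missing from either mapping makes this loop fail.
    for ty in ObjectType::ALL {
        assert_eq!(ObjectType::from_u8(ty.to_u8()), Some(ty));
    }
    println!("all {} variants round-trip", ObjectType::ALL.len());
}
```

The same loop extends naturally to `to_bytes`, `from_string`, and `Display` once written against the real enum.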
src/internal/object/mod.rs (outdated)

```rust
let object_type = self.get_type();
let type_bytes = object_type.to_bytes().ok_or_else(|| {
    GitError::InvalidObjectType(format!(
        "object type `{}` cannot be hashed as a loose object",
        object_type
    ))
})?;

Ok(match get_hash_kind() {
    crate::hash::HashKind::Sha1 => {
        let mut hash = sha1::Sha1::new();
        hash.update(type_bytes);
        hash.update(b" ");
        hash.update(data.len().to_string());
        hash.update(b"\0");
        hash.update(&data);
        let result: [u8; 20] = hash.finalize().into();
        ObjectHash::Sha1(result)
    }
    crate::hash::HashKind::Sha256 => {
        let mut hash = sha2::Sha256::new();
        hash.update(type_bytes);
        hash.update(b" ");
        hash.update(data.len().to_string());
        hash.update(b"\0");
        hash.update(&data);
        let result: [u8; 32] = hash.finalize().into();
        ObjectHash::Sha256(result)
    }
})
```
ObjectTrait::object_hash reimplements the loose-object hashing logic that already exists as ObjectHash::from_type_and_data. Duplicating the algorithm in multiple places increases the risk of future divergence (e.g., header formatting, hash-kind selection). Consider delegating to ObjectHash::from_type_and_data(self.get_type(), &data) (or a shared helper) instead of open-coding the digest update sequence here.
Suggested change: replace the open-coded digest sequence above with

```rust
ObjectHash::from_type_and_data(self.get_type(), &data)
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 0109846072
- Remove clone-on-copy in intent_event.rs
- Delegate ObjectTrait::object_hash to ObjectHash::from_type_and_data
- Add ObjectType round-trip tests for variant coverage
- Include mismatched intent IDs in Plan error message
- Replace ParentLike trait with direct &[Uuid] parameter

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Quanyi Ma <eli@patch.sh>
Signed-off-by: Quanyi Ma <eli@patch.sh>
```rust
    ErrorRecovery,
    Checkpoint,
    ToolCall,
    Other(String),
}
```
FrameKind::Other(String) is missing the #[serde(untagged)] pattern used elsewhere (e.g., EvidenceKind, DecisionType, IntentEventKind). As written, Other("custom") will serialize as an object form (e.g. { "other": "custom" }) instead of a plain string, and deserializing an unknown string kind (e.g. "my_kind") will fail. Consider adding #[serde(untagged)] to Other(String) (and ideally a roundtrip test) so custom/forward-compatible kinds use the same string representation as the built-in variants.
```rust
    Ci,
    Style,
    /// Catch-all for goal categories not covered by the predefined
    /// variants. The inner string is the custom category name.
    Other(String),
}
```
GoalType::Other(String) currently lacks #[serde(untagged)], which means custom goal types will serialize using Serde’s tagged representation (not a plain string). That contradicts the doc comment on as_str() (“canonical snake_case storage/display form”) and is inconsistent with enums like EvidenceKind/DecisionType that do use #[serde(untagged)] Other(String). Consider adding #[serde(untagged)] here (or adjusting the docs/as_str() contract) to make storage format consistent.
```rust
/// Replace the parent set for this in-memory plan revision.
pub fn set_parents(&mut self, parents: Vec<Uuid>) {
    self.parents = parents;
```
set_parents() bypasses the invariants enforced by add_parent() (no self-link, no duplicates). As a result, callers can accidentally introduce duplicate parent IDs or a self-cycle, which contradicts the parent-DAG intent. Consider normalizing/validating inside set_parents() (e.g., clear + add_parent each id, or filter/dedupe and drop self) so the struct can’t enter an invalid state via the public API.
Suggested change:

```rust
/// Replace the parent set for this in-memory plan revision, enforcing
/// the same invariants as `add_parent` (no self-link, no duplicates).
pub fn set_parents(&mut self, parents: Vec<Uuid>) {
    self.parents.clear();
    for parent_id in parents {
        self.add_parent(parent_id);
    }
}
```
```rust
pub fn set_parents(&mut self, parents: Vec<Uuid>) {
    self.parents = parents;
```
set_parents() directly assigns the provided Vec and can introduce duplicate parent IDs or a self-link, even though add_parent() explicitly prevents those states. Consider having set_parents() enforce the same invariants (dedupe + drop self.header.object_id()), or make it a private helper to avoid creating invalid intent DAGs via the public API.
Suggested change:

```rust
///
/// This applies the same invariants as `add_parent`:
/// - drops any self-links (`self.header.object_id()`)
/// - de-duplicates parent IDs while preserving order of first occurrence.
pub fn set_parents(&mut self, parents: Vec<Uuid>) {
    self.parents.clear();
    for parent_id in parents {
        self.add_parent(parent_id);
    }
}
```
```rust
    input_tokens,
    output_tokens,
    total_tokens: input_tokens + output_tokens,
    cost_usd,
})
```
total_tokens: input_tokens + output_tokens can overflow u64 (panics in debug, wraps in release), silently producing an invalid RunUsage. Consider using checked_add and returning an error if the sum overflows (or using saturating_add if overflow should clamp), and apply the same approach anywhere totals are recomputed.
```rust
/// Validate that the stored total matches input plus output.
pub fn is_consistent(&self) -> bool {
    self.total_tokens == self.input_tokens + self.output_tokens
```
is_consistent() recomputes self.input_tokens + self.output_tokens, which has the same u64 overflow risk as construction (panic in debug / wrap in release). Consider using checked_add here too (and returning false on overflow) so the consistency check itself can’t overflow.
Suggested change:

```rust
match self.input_tokens.checked_add(self.output_tokens) {
    Some(sum) => self.total_tokens == sum,
    None => false,
}
```
```toml
version = "0.7.0"
edition = "2024"
authors = ["Eli Ma <genedna@gmail.com>"]
description = "High-performance Rust library for Git internal objects, Pack files, and AI-assisted development objects (Intent, Plan, Task, Run, Evidence, Decision) with delta compression, streaming I/O, and smart protocol support."
```
PR title is "Bump to 0.7.0", but this change set also makes a large AI object-model redesign (snapshot/event split, removes ContextPipeline, adds multiple new object types and docs). To reduce review/merge risk and help downstream consumers, consider updating the PR title/description to reflect the broader scope (or splitting version bump vs schema refactor).
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 7134ed9935
```rust
/// commit baseline". It does not itself accumulate logs or status
/// transitions after persistence.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
```
Allow legacy Run fields during deserialization
Run is now marked with #[serde(deny_unknown_fields)] while pre-0.7 run objects were serialized with fields such as status, patchsets, metrics, and error; Run::from_bytes still performs direct serde_json::from_slice, so those historical payloads now fail to parse with unknown-field errors. This breaks loading existing run history in upgraded repositories unless legacy keys are tolerated or migrated.
```rust
/// current runtime status?". Current status is reconstructed from event
/// history in Libra.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(deny_unknown_fields)]
```
Accept pre-0.7 Task payloads during deserialization
The new Task schema enables deny_unknown_fields and removes previously persisted fields (runs and status), but Task::from_bytes still deserializes directly from stored JSON. Older task objects therefore error out on unknown fields when read, which makes historical task records unreadable after upgrading unless backward-compatible deserialization is restored.
```rust
"run_event" => Ok(ObjectType::RunEvent),
"plan_step_event" => Ok(ObjectType::PlanStepEvent),
"run_usage" => Ok(ObjectType::RunUsage),
_ => Err(GitError::InvalidObjectType(s.to_string())),
```
Preserve `pipeline` object-type parsing compatibility

`ObjectType::from_string` no longer accepts the legacy type name `pipeline`, even though previous versions serialized `ContextPipeline` objects with that label. Repositories containing historical pipeline objects now fail type parsing with `InvalidObjectType` before any migration logic can run, so backward-compatible aliasing is needed to keep old data readable.
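A legacy alias can be a single extra match arm. The sketch below is a reduced stand-in: the enum is trimmed to two variants, and mapping the old `pipeline` label to `ContextFrame` is an assumption about the intended migration target, not confirmed by the crate.

```rust
// Sketch of accepting a legacy type name in from_string. The enum,
// labels, and the pipeline -> ContextFrame mapping are illustrative.
#[derive(Debug, PartialEq)]
enum ObjectType {
    ContextFrame,
    RunUsage,
}

#[derive(Debug, PartialEq)]
struct InvalidObjectType(String);

fn from_string(s: &str) -> Result<ObjectType, InvalidObjectType> {
    match s {
        "context_frame" => Ok(ObjectType::ContextFrame),
        // Legacy alias: pre-0.7 repositories stored ContextPipeline
        // objects under "pipeline"; keep them parseable for migration.
        "pipeline" => Ok(ObjectType::ContextFrame),
        "run_usage" => Ok(ObjectType::RunUsage),
        other => Err(InvalidObjectType(other.to_string())),
    }
}

fn main() {
    assert_eq!(from_string("pipeline"), Ok(ObjectType::ContextFrame));
    assert_eq!(from_string("context_frame"), Ok(ObjectType::ContextFrame));
    assert!(from_string("bogus").is_err());
    println!("legacy alias accepted");
}
```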
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Quanyi Ma <eli@patch.sh>
Signed-off-by: Quanyi Ma <eli@patch.sh>
```yaml
- name: Publish to crates.io
  run: cargo publish
```
The publish job runs cargo publish without building/testing the tag and without --locked. Since this workflow triggers directly on tag push, a broken tag could be published. Consider adding a cargo test --all (or reusing the existing CI workflow as a required status check) and using cargo publish --locked to prevent dependency resolution drift at publish time.
Suggested change:

```yaml
- name: Run tests
  run: cargo test --all --locked
- name: Publish to crates.io
  run: cargo publish --locked
```
```rust
#[serde(rename_all = "snake_case")]
pub enum FrameKind {
    IntentAnalysis,
    StepSummary,
    CodeChange,
    SystemState,
    ErrorRecovery,
    Checkpoint,
```
FrameKind::Other(String) is currently serialized as an externally-tagged enum (e.g. { "other": "foo" }). That breaks the intended "string-like" representation and prevents forward-compatible parsing of unknown kinds emitted as plain strings (e.g. "waiting_for_human_review"). Consider matching the existing pattern used by EvidenceKind/DecisionType by adding #[serde(untagged)] Other(String) so unknown kinds round-trip as strings.
Suggested change:

```rust
#[serde(untagged)]
pub enum FrameKind {
    #[serde(rename = "intent_analysis")]
    IntentAnalysis,
    #[serde(rename = "step_summary")]
    StepSummary,
    #[serde(rename = "code_change")]
    CodeChange,
    #[serde(rename = "system_state")]
    SystemState,
    #[serde(rename = "error_recovery")]
    ErrorRecovery,
    #[serde(rename = "checkpoint")]
    Checkpoint,
    #[serde(rename = "tool_call")]
```
```rust
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(deny_unknown_fields)]
pub struct PlanStep {
    /// Human-readable description of what this step should accomplish.
    /// Stable logical step identity across Plan revisions.
    ///
    /// Set once at creation. The `alias = "intent"` supports legacy
    /// serialized data where this field was named `intent`.
    #[serde(alias = "intent")]
    /// `step_id` is the cross-revision identity for the logical step.
    /// One concrete stored step snapshot is identified by the pair
    /// `(Plan.header.object_id(), step_id)`.
    step_id: Uuid,
    /// Human-readable description of what this step is supposed to do.
```
PlanStep now requires a step_id field and uses #[serde(deny_unknown_fields)]. This will make deserializing previously persisted plan-step JSON (which likely lacked step_id and had additional runtime fields in older schemas) fail hard in from_bytes (which currently just calls serde_json::from_slice). If older object history needs to remain readable, add a migration path (e.g., defaulting/deriving step_id, aliases for renamed fields, or a custom Deserialize/versioned from_bytes).
```rust
//! Decision --run_id / chosen_patchset_id?> Run / PatchSet
//! ContextFrame --run_id? / plan_id? / step_id?> Run / Plan / PlanStep
```
In the relationship diagram, chosen_patchset_id?> and step_id?> look like typos (extra >). This makes the docs harder to read/search; consider changing to chosen_patchset_id? and step_id? (or otherwise standardizing the optional-field notation).
Suggested change:

```rust
//! Decision --run_id / chosen_patchset_id?-> Run / PatchSet
//! ContextFrame --run_id? / plan_id? / step_id?-> Run / Plan / PlanStep
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: cb12e1b69c
```rust
#[serde(deny_unknown_fields)]
pub struct ActorRef {
```
Accept legacy `auth_context` in ActorRef deserialization
Pre-0.7 objects serialized created_by.auth_context (as null by default), but ActorRef now drops that field and enables deny_unknown_fields, so deserializing any historical AI object can fail on the nested created_by payload even if the top-level header compatibility is restored. Because Header is flattened into every AI object, this blocks reading existing repositories after upgrade.
```rust
#[serde(deny_unknown_fields)]
pub struct ArtifactRef {
    /// Store type: local_fs/s3
    /// External store identifier, such as `s3` or `local`.
```
Preserve legacy ArtifactRef fields when reading old objects
Older ArtifactRef payloads included metadata keys like content_type, size_bytes, hash, and expires_at (typically serialized as null), but the new struct keeps only store/key and adds deny_unknown_fields; this makes persisted objects with artifact references fail to deserialize (for example PatchSet.artifact, Evidence.report_artifacts, and ToolInvocation.artifacts). This is a backward-compatibility regression for existing run history that stores artifacts.
* chore: align intent pipeline lifecycle and comments
* docs: add agent and protocol documentation set
* refactor: replace plan fwindow with iframes and update docs
* docs: expand agent object documentation
* feat: add agent event objects
* refactor: address review feedback on object model
  - Remove clone-on-copy in intent_event.rs
  - Delegate ObjectTrait::object_hash to ObjectHash::from_type_and_data
  - Add ObjectType round-trip tests for variant coverage
  - Include mismatched intent IDs in Plan error message
  - Replace ParentLike trait with direct &[Uuid] parameter
* Document and refine intent workflow model
* ci: add crates.io publish workflow on tag push
* ci: add least-privilege permissions to publish workflow

Signed-off-by: Quanyi Ma <eli@patch.sh>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
No description provided.